Short-circuit reads are a considerable performance improvement. They are a further application of the idea of "moving the computation to the data".
Outline
1. What it is
2. How it works
3. Setup to run
4. One more thought
1. What it is

About ten years ago I often heard an advertising slogan, perhaps from Intel, about "the technology of moving computation". Hadoop, of course, uses the same idea as one of the key features to speed up HDFS reads: read on the local node. You will see counters like "Launched local maps" in the results of a completed job.

You may also have hit a similar scenario, such as rebuilding indexes from HBase with MapReduce, where you want to write the index directly to the local file system without going through the Solr web server, provided the mappers are co-located on the same node as Solr. If you limit the map slots to one per node this runs fine; otherwise it will cause multi-thread contention.

The so-called "short-circuit read" runs in the opposite direction of the data flow in the case above. It is issued by MR mappers, and the datanode on the same host has almost nothing to do: it may only pass file descriptors and the like, while most of the actual file reading is carried out by the client itself. You could describe it as "reading data without going through the datanode", so costs such as datanode threads and TCP sockets are eliminated from the read path, which increases performance.
2. How it works

Comparing the three modes of reading data gives a clear impression:
| mode | target version | data flow | cached item | other features | safe protection of local files |
|---|---|---|---|---|---|
| plain TCP read | before 1.0 or before 0.23.1 | local datanode -> client | - | built into HDFS before 1.x | yes |
| HDFS-2246 (short-circuit reads) | 1.x, 0.23.1+ | read local files directly, bypassing the datanode | block path | complex to set up: requires changing user attributes and data-dir permissions; the datanode passes file offset, length etc. to the client, which then reads the file using them | no, but can be fixed by setting some properties |
| HDFS-347 (secure short-circuit reads) | 2.1.0-beta | read local files directly, bypassing the datanode | file descriptor (more efficient) | simple to set up; uses a Unix domain socket to pass the fd [1] | yes, only the fd is passed, no other files are exposed |
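The fd-passing trick in the last row can be sketched outside Hadoop. Below is a minimal Python illustration (assumption: not Hadoop code, just the same Unix mechanism, `SCM_RIGHTS` over a domain socket) of one process handing an open file descriptor to another, which then reads the file without the sender streaming any bytes. It needs Python 3.9+ on a Unix system.

```python
import os
import socket
import tempfile

# A file standing in for a block file owned by the "datanode" side.
path = tempfile.mktemp()
with open(path, "w") as f:
    f.write("block data")

# socketpair() gives two connected AF_UNIX sockets in one process;
# a real setup would use a named socket like dfs.domain.socket.path.
dn_sock, client_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# "Datanode" side: open the file and send only the descriptor.
fd = os.open(path, os.O_RDONLY)
socket.send_fds(dn_sock, [b"ok"], [fd])

# "Client" side: receive the descriptor and read the file directly.
msg, fds, flags, addr = socket.recv_fds(client_sock, 1024, 1)
with os.fdopen(fds[0], "r") as f:
    received = f.read()  # the data never travelled through dn_sock

print(received)

# Cleanup.
os.close(fd)
dn_sock.close()
client_sock.close()
os.remove(path)
```

Note that only the two-byte `b"ok"` message crosses the socket; the file contents are read by the receiver through the duplicated descriptor, which is exactly why the datanode's threads and TCP buffers drop out of the picture.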
3. Setup to run

Here are the properties to set in hdfs-site.xml for an HDFS-347 setup (see the references for the HDFS-2246 configuration):
<!-- short circuit read -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/usr/local/hadoop/data-2.5.1/dfs_dn_socket</value>
</property>
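Before restarting the cluster it is worth sanity-checking that both properties made it into the file, since a missing socket path silently disables the feature. A small sketch (assumption: plain `xml.etree` parsing of the fragment above, not a Hadoop API) of such a check:

```python
import xml.etree.ElementTree as ET

# The hdfs-site.xml fragment from above, wrapped in <configuration>.
fragment = """
<configuration>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/usr/local/hadoop/data-2.5.1/dfs_dn_socket</value>
  </property>
</configuration>
"""

# Collect name -> value pairs and verify both short-circuit settings.
props = {p.findtext("name"): p.findtext("value")
         for p in ET.fromstring(fragment).iter("property")}

assert props.get("dfs.client.read.shortcircuit") == "true"
assert props.get("dfs.domain.socket.path", "").startswith("/")
print("short-circuit config looks complete")
```

In practice you would read the real `hdfs-site.xml` from disk instead of the inline string.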
So the later HDFS-347 is the more effective and safe option. From my test, this is the result of reading a 1 GB file from HDFS:

time hadoop fs -get /user/tmp.dd
tcp: 33s
hdfs-347: 31s
Yes, the difference is not dramatic, because the bottleneck of this test is a suboptimal hard disk. Still, the gap between them is in the expected direction.
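The numbers above can be turned into throughput for a quick back-of-envelope check (assuming 1 GB = 1024 MB):

```python
# Throughput of a 1 GB read at 33 s (TCP) vs 31 s (HDFS-347),
# and the relative improvement between the two runs.
size_mb = 1024
tcp_s, scr_s = 33, 31

tcp_tput = size_mb / tcp_s   # roughly 31 MB/s
scr_tput = size_mb / scr_s   # roughly 33 MB/s
improvement = (tcp_s - scr_s) / tcp_s * 100

print(f"tcp: {tcp_tput:.0f} MB/s, hdfs-347: {scr_tput:.0f} MB/s, "
      f"improvement: {improvement:.1f}%")
```

Both rates sit around 30 MB/s, which is consistent with a slow disk being the bottleneck rather than the read path, so the roughly 6% gap is all the feature can show here.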
4. One more thought

With today's common hard disks (not SSDs), if short-circuit reads are enabled, might the I/O balance be broken when other services are deployed on the same cluster, since this feature can swallow nearly all the I/O resources? Maybe.
Ref:
[1] http://troydhanson.github.io/misc/Unix_domain_sockets.html
[2] HDFS Short-Circuit Local Reads
[3] How Improved Short-Circuit Local Reads Bring Better Performance and Security to Hadoop