
hadoop - Why do I need to keep the hbase/lib folder in HDFS?

I have a main cluster which has some data in HBase, and I want to replicate it. I have already created a backup cluster and a snapshot of the table I want to replicate. I am trying to export the snapshot from the source cluster to the destination, but I am getting some errors. I am executing

./hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot mySnap -copy-to hdfs://198.58.88.11:9000/hbase

and as a result of the execution I got:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/vagrant/hbase/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/vagrant/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-03-05 10:58:43,155 INFO  [main] snapshot.ExportSnapshot: Copy Snapshot Manifest
2015-03-05 10:58:43,596 INFO  [main] Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2015-03-05 10:58:43,597 INFO  [main] jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
2015-03-05 10:58:43,890 INFO  [main] mapreduce.JobSubmitter: Cleaning up the staging area file:/home/vagrant/hadoop/hadoop-datastore/mapred/staging/vagrant1489762780/.staging/job_local1489762780_0001
2015-03-05 10:58:43,892 ERROR [main] snapshot.ExportSnapshot: Snapshot export failed
java.io.FileNotFoundException: File does not exist: hdfs://namenode:9000/home/vagrant/hbase/lib/hbase-client-1.0.0.jar
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
    at org.apache.hadoop.hbase.snapshot.ExportSnapshot.runCopyJob(ExportSnapshot.java:775)
    at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:934)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:1008)
    at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:1012)

So, as I understand it, it tries to find hbase-client-1.0.0.jar, but it is looking in hdfs://namenode:9000/home/vagrant/hbase/lib/hbase-client-1.0.0.jar rather than in local storage. Any ideas why this is happening?
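
For reference, the client-side settings that control this can be inspected as sketched below. This is only a sketch, assuming a standard $HADOOP_HOME/etc/hadoop layout, so the paths may differ on your installation. mapreduce.framework.name decides whether the job goes to YARN or to the local runner, and fs.defaultFS is the filesystem against which scheme-less paths (such as the hbase/lib jars) are resolved.

# Sketch: check which framework and default filesystem the client picks up.
# The $HADOOP_HOME/etc/hadoop paths are an assumption; adjust for your setup.
grep -A1 'mapreduce.framework.name' "$HADOOP_HOME/etc/hadoop/mapred-site.xml"
grep -A1 'fs.defaultFS' "$HADOOP_HOME/etc/hadoop/core-site.xml"

# The effective default filesystem can also be printed directly:
hdfs getconf -confKey fs.defaultFS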


1 Reply


In my case the cause of the problem was a misconfiguration of YARN and MapReduce. After configuring them correctly, I was able to export the snapshot without problems.

Make your mapred-site.xml look like this:

<configuration>
   <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>
   <property>
      <name>mapreduce.jobtracker.address</name>
      <value>cluster2.master:8021</value>
   </property>
</configuration>

And yarn-site.xml

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>cluster2.master</value>
  <description>The hostname of the RM.</description>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>shuffle service that needs to be set for Map Reduce to run </description>
</property>

cluster2.master should be replaced with the hostname used in your own setup.
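
After updating the two files, the new settings only take effect once YARN has been restarted. The sketch below shows one way to restart it and re-run the export; the $HADOOP_HOME path is an assumption about a typical installation, and the export command is the one from the question.

# Restart YARN so the new configuration is picked up
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh

# Confirm that the ResourceManager on cluster2.master is reachable
yarn node -list

# Re-run the snapshot export against the destination cluster
./hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot mySnap -copy-to hdfs://198.58.88.11:9000/hbase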

