
dataframe - Running Spark on cluster: Initial job has not accepted any resources

  1. I have a remote Ubuntu server on linode.com with 4 cores and 8 GB RAM.
  2. I have a Spark 2 cluster consisting of 1 master and 1 slave on my remote Ubuntu server.
  3. I started the PySpark shell locally on my MacBook and connected it to the master node on the remote server with:

    $ PYSPARK_PYTHON=python3 /vagrant/spark-2.0.0-bin-hadoop2.7/bin/pyspark --master spark://[server-ip]:7077
    
  4. I tried executing a simple Spark example from the website:

    from pyspark.sql import SparkSession

    # Line continuations are needed when the builder chain is split across lines
    spark = SparkSession \
        .builder \
        .appName("Python Spark SQL basic example") \
        .config("spark.some.config.option", "some-value") \
        .getOrCreate()
    df = spark.read.json("/path/to/spark-2.0.0-bin-hadoop2.7/examples/src/main/resources/people.json")
    
  5. I got the error:

    Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

  6. I have enough memory on my server and on my local machine, but I keep getting this error again and again. My Spark cluster has 6 GB available, and my script requests only 4 cores with 1 GB of memory per node.

    [Spark admin screenshot]

  7. I have Googled this error, tried setting up different memory configs (one way to set such limits explicitly is sketched after this list), and also disabled the firewall on both machines, but nothing helped. I have no idea how to fix it.

  8. Has anyone faced the same problem? Any ideas?
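
For reference, such resource limits can also be set from the script itself rather than on the command line. The following is only a minimal sketch: the master URL matches the one above, but the memory and core values are placeholders, not the actual settings from this question.

    from pyspark.sql import SparkSession

    # Cap what this application asks the standalone cluster for.
    # The memory and core values below are placeholders.
    spark = SparkSession \
        .builder \
        .master("spark://[server-ip]:7077") \
        .appName("resource-limited example") \
        .config("spark.executor.memory", "1g") \
        .config("spark.cores.max", "2") \
        .getOrCreate()

    print(spark.sparkContext.defaultParallelism)
    spark.stop()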



1 Reply


You are submitting the application in client mode, which means that the driver process is started on your local machine.

When executing Spark applications, all machines have to be able to communicate with each other. Most likely your driver process is not reachable from the executors (for example, it is using a private IP or is hidden behind a firewall). If that is the case, you can confirm it by checking the executor logs: go to the application in the cluster UI, select one of the workers with the status EXITED, and check its stderr. You "should" see that the executor is failing due to org.apache.spark.rpc.RpcTimeoutException.

There are two possible solutions:

  • Submit the application from a machine which can be reached from your cluster (a related sketch follows below).
  • Submit the application in cluster mode. This will use cluster resources to start the driver process, so you have to account for that.
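
As a rough sketch of the first option, the driver can also advertise an address the workers can actually reach and pin its ports so they can be opened in the firewall. The address and port numbers below are placeholders, not values taken from this question.

    from pyspark.sql import SparkSession

    # Advertise a routable driver address and fix the driver ports so the
    # executors can connect back. Replace the placeholder address and ports.
    spark = SparkSession \
        .builder \
        .master("spark://[server-ip]:7077") \
        .appName("driver reachability example") \
        .config("spark.driver.host", "203.0.113.10") \
        .config("spark.driver.port", "40000") \
        .config("spark.blockManager.port", "40001") \
        .getOrCreate()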
