ERROR SparkContext: Error initializing SparkContext

#1

I am using STS and launched a Scala prompt via the "Create Scala Interpreter in demo-spark-scala-app" option. I then executed the following commands line by line (as shown below), but when I ran "val sc = new SparkContext(conf)" to get the sc object, I got the error "16/12/08 16:08:08 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: System memory 449314816 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.":

scala> import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import com.typesafe.config.ConfigFactory
import org.apache.hadoop.fs._

scala> val conf = new SparkConf().setAppName("Average Revenue -Daily").setMaster("local")
conf: org.apache.spark.SparkConf = org.apache.spark.SparkConf@7fac631b

scala> val sc = new SparkContext(conf)
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/12/08 16:08:04 INFO SparkContext: Running Spark version 2.0.2
16/12/08 16:08:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/12/08 16:08:05 WARN Utils: Your hostname, localhost.localdomain resolves to a loopback address: 127.0.0.1; using 192.168.44.133 instead (on interface ens33)
16/12/08 16:08:05 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/12/08 16:08:06 INFO SecurityManager: Changing view acls to: fedora
16/12/08 16:08:06 INFO SecurityManager: Changing modify acls to: fedora
16/12/08 16:08:06 INFO SecurityManager: Changing view acls groups to:
16/12/08 16:08:06 INFO SecurityManager: Changing modify acls groups to:
16/12/08 16:08:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(fedora); groups with view permissions: Set(); users with modify permissions: Set(fedora); groups with modify permissions: Set()
16/12/08 16:08:07 INFO Utils: Successfully started service 'sparkDriver' on port 36177.
16/12/08 16:08:07 INFO SparkEnv: Registering MapOutputTracker
16/12/08 16:08:08 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: System memory 449314816 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.
at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:212)
at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:194)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:308)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:165)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:256)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:420)
at $line5.$read$$iw$$iw$$iw$$iw$.<init>(<console>:18)
at $line5.$read$$iw$$iw$$iw$$iw$.<clinit>(<console>)
at $line5.$eval$.$print$lzycompute(<console>:7)
at $line5.$eval$.$print(<console>:6)
at $line5.$eval.$print()
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:415)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:923)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
at scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:74)
at scala.tools.nsc.MainGenericRunner.run$1(MainGenericRunner.scala:87)
at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:98)
at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:103)
at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)
16/12/08 16:08:08 INFO SparkContext: Successfully stopped SparkContext
java.lang.IllegalArgumentException: System memory 449314816 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.
at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:212)
at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:194)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:308)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:165)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:256)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:420)
... 32 elided
scala>
I have also made some changes after seeing suggestions on Google, such as edits to the STS.ini and spark-defaults.conf files, but nothing changed. I have uploaded the screenshots here as well…
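
For reference, this is roughly what I was trying (a sketch only; the "1g" value is just an example, and I am not sure whether spark.driver.memory set from SparkConf even takes effect once the interpreter JVM has already started, which is why I also tried raising the heap via STS.ini):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: "1g" is an arbitrary example value. Spark checks the JVM's
// maximum heap against a roughly 450 MB minimum, so a spark.driver.memory
// set here may be ignored if the interpreter JVM was already launched with a
// smaller -Xmx.
val conf = new SparkConf()
  .setAppName("Average Revenue -Daily")
  .setMaster("local")
  .set("spark.driver.memory", "1g")
val sc = new SparkContext(conf)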

Please help…

1 Like

#2

If you are using spark-shell, the SparkContext object (sc) is created for you.
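
For example (a minimal sketch; the input path is just a placeholder):

scala> sc.textFile("/path/to/some/file.txt").count()

Here sc is the SparkContext that spark-shell created on startup.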

0 Likes

#3

What is the configuration of your laptop?

@RaviShankarOnInet it is very important to tag these issues properly. This issue is not related to bigdata-labs; it is an issue with STS on your laptop while running Spark jobs. Hence "Apache Spark" is the more relevant category.

@pramodvspk, @venkatreddy-amalla, @Vinay, @gnanaprakasam - please correct the categories if users do not choose the right one when responding to their questions.

@RakeshTdSharma, can you have a look at this?

1 Like

#4

Hi Guys,

In REPL mode, sc is already created and there is no need to create another SparkContext.
So when you launch spark-shell, pass properties such as the app name and the master on the command line (e.g. --name and --master).
Run it again and share your observations.
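
For example, something like this (a sketch; the app name and memory value are just placeholders):

spark-shell --master local --name "Average Revenue -Daily" --driver-memory 1g

scala> sc   // already created by spark-shell, no need for new SparkContext(conf)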

1 Like

#5

What is REPL mode? Can you elaborate a little bit?

0 Likes

#6

A read-eval-print loop (REPL) is a simple, interactive programming environment that takes single user inputs (i.e. single expressions), evaluates them, and returns the result to the user; a program written in a REPL environment is executed piecewise.

Basically, with Spark you can work in REPL mode or spark-submit mode. In REPL mode, sc is created automatically when you start spark-shell.
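
For instance, each line below is evaluated as soon as you enter it in spark-shell (a trivial sketch using the pre-created sc):

scala> val nums = sc.parallelize(1 to 10)
scala> nums.reduce(_ + _)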

0 Likes

#7

Guys, you are going in the wrong direction; what I have described and the responses I am getting are different. I am not talking about REPL mode. I clearly mentioned that I am using STS and launched a Scala prompt via the "Create Scala Interpreter in demo-spark-scala-app" option, which becomes available after successfully creating the project with SBT and importing it into STS. To use a SparkContext object there, I need to create it manually by executing "val sc = new SparkContext(conf)" after first running the following:
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import com.typesafe.config.ConfigFactory
import org.apache.hadoop.fs._
and
val conf = new SparkConf().setAppName("Average Revenue -Daily").setMaster("local")

I have already attached a screenshot which consists of 6 parts and contains the following:

  1. Terminal info (when I launch STS in my Fedora VM, it comes up automatically)
  2. Screenshot of the error I get when I manually execute "val sc = new SparkContext(conf)" after successfully running the other necessary imports/commands.
  3. A larger screenshot of the issue.
  4. Screenshot of the spark-defaults.conf file
  5. Screenshot of the STS.ini file
  6. Screenshot of my .bashrc file, where I made the entries for the Spark configuration.

I hope this makes things clear to all and that I will get a more appropriate response.

Regards,
Ravi Shankar

1 Like

#8

Yes Ravi, I understand the issue. But I am using a Mac at the moment.

@RakeshTdSharma, the issue is with the Scala interpreter in STS.

@pramodvspk, @venkatreddy-amalla - you guys also can have a look.

0 Likes