Spark word count program compiles but fails to run in IntelliJ IDEA on Windows 7


#1

My build.sbt:

name := "NewCode"
version := "0.1"
scalaVersion := "2.10.5"
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.6.3"
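
As a side note, with % the _2.10 suffix has to be kept in sync with scalaVersion by hand. A sketch of the equivalent declaration using %%, which makes sbt append the Scala binary version automatically:

// Equivalent dependency: %% appends the Scala binary version
// (_2.10, taken from scalaVersion above) automatically.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.3"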

My Scala-Spark Code:

import org.apache.spark.{SparkConf, SparkContext}

object testing {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("testing").setMaster("local")
    val sc = new SparkContext(conf)
    // Backslashes in Windows paths must be doubled in Scala string literals.
    val inputPath = "C:\\Users\\hp\\Desktop\\Idea-Spark-Input-Files\\scala-word-count.txt"
    val outputPath = "C:\\Users\\hp\\Desktop\\Idea-Spark-Input-Files\\abcd"
    sc.textFile(inputPath).
      flatMap(_.split(" ")).              // split each line into words
      map((_, 1)).                        // pair each word with a count of 1
      reduceByKey(_ + _).                 // sum the counts per word
      map(rec => rec._1 + "\t" + rec._2). // format as "word<TAB>count"
      saveAsTextFile(outputPath)
  }
}

Error log:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/11/22 22:09:37 INFO SparkContext: Running Spark version 1.6.3
17/11/22 22:09:37 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/22 22:09:37 INFO SecurityManager: Changing view acls to: hp
17/11/22 22:09:37 INFO SecurityManager: Changing modify acls to: hp
17/11/22 22:09:37 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hp); users with modify permissions: Set(hp)
17/11/22 22:09:38 INFO Utils: Successfully started service 'sparkDriver' on port 50189.
17/11/22 22:09:39 INFO Slf4jLogger: Slf4jLogger started
17/11/22 22:09:39 INFO Remoting: Starting remoting
17/11/22 22:09:39 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.101.1:50202]
17/11/22 22:09:39 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 50202.
17/11/22 22:09:39 INFO SparkEnv: Registering MapOutputTracker
17/11/22 22:09:39 INFO SparkEnv: Registering BlockManagerMaster
17/11/22 22:09:39 INFO DiskBlockManager: Created local directory at C:\Users\hp\AppData\Local\Temp\blockmgr-f0c60baf-1155-415f-989f-ff5775ea7bd9
17/11/22 22:09:39 INFO MemoryStore: MemoryStore started with capacity 1117.9 MB
17/11/22 22:09:39 INFO SparkEnv: Registering OutputCommitCoordinator
17/11/22 22:09:39 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/11/22 22:09:39 INFO SparkUI: Started SparkUI at http://192.168.101.1:4040
17/11/22 22:09:39 INFO Executor: Starting executor ID driver on host localhost
17/11/22 22:09:39 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50209.
17/11/22 22:09:39 INFO NettyBlockTransferService: Server created on 50209
17/11/22 22:09:39 INFO BlockManagerMaster: Trying to register BlockManager
17/11/22 22:09:39 INFO BlockManagerMasterEndpoint: Registering block manager localhost:50209 with 1117.9 MB RAM, BlockManagerId(driver, localhost, 50209)
17/11/22 22:09:39 INFO BlockManagerMaster: Registered BlockManager
17/11/22 22:09:40 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 107.7 KB, free 1117.8 MB)
17/11/22 22:09:40 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 9.8 KB, free 1117.8 MB)
17/11/22 22:09:40 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:50209 (size: 9.8 KB, free: 1117.9 MB)
17/11/22 22:09:40 INFO SparkContext: Created broadcast 0 from textFile at testing.scala:12
17/11/22 22:09:40 INFO FileInputFormat: Total input paths to process : 1
17/11/22 22:09:41 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
17/11/22 22:09:41 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
17/11/22 22:09:41 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
17/11/22 22:09:41 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
17/11/22 22:09:41 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
17/11/22 22:09:41 INFO SparkContext: Starting job: saveAsTextFile at testing.scala:17
17/11/22 22:09:41 INFO DAGScheduler: Registering RDD 3 (map at testing.scala:14)
17/11/22 22:09:41 INFO DAGScheduler: Got job 0 (saveAsTextFile at testing.scala:17) with 1 output partitions
17/11/22 22:09:41 INFO DAGScheduler: Final stage: ResultStage 1 (saveAsTextFile at testing.scala:17)
17/11/22 22:09:41 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
17/11/22 22:09:41 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
17/11/22 22:09:41 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at map at testing.scala:14), which has no missing parents
17/11/22 22:09:41 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.1 KB, free 1117.8 MB)
17/11/22 22:09:41 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.3 KB, free 1117.8 MB)
17/11/22 22:09:41 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:50209 (size: 2.3 KB, free: 1117.9 MB)
17/11/22 22:09:41 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
17/11/22 22:09:41 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at map at testing.scala:14)
17/11/22 22:09:41 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/11/22 22:09:41 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2160 bytes)
17/11/22 22:09:41 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
17/11/22 22:09:41 INFO HadoopRDD: Input split: file:/C:/Users/hp/Desktop/Idea-Spark-Input-Files/scala-word-count.txt:0+352
17/11/22 22:09:41 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2253 bytes result sent to driver
17/11/22 22:09:41 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 147 ms on localhost (1/1)
17/11/22 22:09:41 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/11/22 22:09:41 INFO DAGScheduler: ShuffleMapStage 0 (map at testing.scala:14) finished in 0.166 s
17/11/22 22:09:41 INFO DAGScheduler: looking for newly runnable stages
17/11/22 22:09:41 INFO DAGScheduler: running: Set()
17/11/22 22:09:41 INFO DAGScheduler: waiting: Set(ResultStage 1)
17/11/22 22:09:41 INFO DAGScheduler: failed: Set()
17/11/22 22:09:41 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[6] at saveAsTextFile at testing.scala:17), which has no missing parents
17/11/22 22:09:41 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 48.9 KB, free 1117.7 MB)
17/11/22 22:09:41 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 17.1 KB, free 1117.7 MB)
17/11/22 22:09:41 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:50209 (size: 17.1 KB, free: 1117.8 MB)
17/11/22 22:09:41 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1006
17/11/22 22:09:41 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[6] at saveAsTextFile at testing.scala:17)
17/11/22 22:09:41 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/11/22 22:09:41 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, partition 0,NODE_LOCAL, 1894 bytes)
17/11/22 22:09:41 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
17/11/22 22:09:41 INFO deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
17/11/22 22:09:41 INFO deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
17/11/22 22:09:41 INFO deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
17/11/22 22:09:41 INFO deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
17/11/22 22:09:41 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
17/11/22 22:09:41 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 7 ms
17/11/22 22:09:41 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
at org.apache.hadoop.util.Shell.run(Shell.java:379)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:468)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:905)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:91)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1191)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1183)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
17/11/22 22:09:41 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
at org.apache.hadoop.util.Shell.run(Shell.java:379)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:468)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:905)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:91)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1191)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1183)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

17/11/22 22:09:41 ERROR TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
17/11/22 22:09:41 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/11/22 22:09:41 INFO TaskSchedulerImpl: Cancelling stage 1
17/11/22 22:09:41 INFO DAGScheduler: ResultStage 1 (saveAsTextFile at testing.scala:17) failed in 0.155 s
17/11/22 22:09:41 INFO DAGScheduler: Job 0 failed: saveAsTextFile at testing.scala:17, took 0.478177 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
at org.apache.hadoop.util.Shell.run(Shell.java:379)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:468)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:905)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:91)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1191)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1183)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1922)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1209)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1154)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1154)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1154)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1060)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1026)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1026)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1026)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:952)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:952)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:952)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:951)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1457)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1436)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1436)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1436)
at testing$.main(testing.scala:17)
at testing.main(testing.scala)
Caused by: org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
at org.apache.hadoop.util.Shell.run(Shell.java:379)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:468)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:905)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:91)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1191)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1183)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
17/11/22 22:09:41 INFO SparkContext: Invoking stop() from shutdown hook
17/11/22 22:09:41 INFO SparkUI: Stopped Spark web UI at http://192.168.101.1:4040
17/11/22 22:09:41 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/11/22 22:09:41 INFO MemoryStore: MemoryStore cleared
17/11/22 22:09:41 INFO BlockManager: BlockManager stopped
17/11/22 22:09:41 INFO BlockManagerMaster: BlockManagerMaster stopped
17/11/22 22:09:41 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/11/22 22:09:41 INFO SparkContext: Successfully stopped SparkContext
17/11/22 22:09:41 INFO ShutdownHookManager: Shutdown hook called
17/11/22 22:09:41 INFO ShutdownHookManager: Deleting directory C:\Users\hp\AppData\Local\Temp\spark-7050891b-a75d-4012-8214-39091c45005d
17/11/22 22:09:41 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/11/22 22:09:41 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.

Process finished with exit code 1


#2

While debugging, I found that the error occurred while the following command was being executed:

winutils.exe chmod 0644 retail\output\_temporary\0\_temporary\attempt_201712311401_0011_r_000000_0\part-r-00000

When I executed this command directly in the command prompt, I got an error about a missing msvcr100.dll.

Installing the Microsoft Visual C++ 2010 Redistributable (which provides msvcr100.dll) solved it for me.
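
For completeness, a related Windows pitfall (an assumption about the environment, not something the log above confirms): Hadoop locates winutils.exe via the hadoop.home.dir system property or the HADOOP_HOME environment variable, and expects it under a bin subdirectory. A minimal sketch, assuming winutils.exe lives at C:\hadoop\bin\winutils.exe:

import org.apache.spark.{SparkConf, SparkContext}

object WordCountWithWinutils {
  def main(args: Array[String]): Unit = {
    // Hypothetical path: the directory must contain bin\winutils.exe.
    // Hadoop's Shell reads hadoop.home.dir (falling back to HADOOP_HOME)
    // to find it, so set the property before any Hadoop filesystem call.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")
    val conf = new SparkConf().setAppName("testing").setMaster("local")
    val sc = new SparkContext(conf)
    // ... same word-count pipeline as in the question ...
    sc.stop()
  }
}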