@dgadiraju could you help me out? I’ve been asked this question twice.
Since Spark does in-memory computation, let’s say we have a maximum of 1 GB of memory but 1 TB of data that needs to be processed. How would in-memory processing work in that case? Would it have any effect on performance?
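To illustrate the general idea behind the question: Spark does not load a whole dataset into memory at once; it splits it into partitions, processes them a chunk at a time, and spills to disk when a partition does not fit in memory (which does cost performance). Below is a minimal plain-Python sketch of that chunked-processing idea, not Spark itself; `chunked_sum`, the chunk size, and the generator dataset are all illustrative assumptions.

```python
# Sketch: aggregating a dataset far larger than memory by streaming
# fixed-size chunks -- conceptually similar to how Spark processes
# one partition at a time rather than materializing everything.

def chunked_sum(values, chunk_size):
    """Sum an arbitrarily large iterable while holding only one
    chunk in memory at a time."""
    total = 0
    chunk = []
    for v in values:
        chunk.append(v)
        if len(chunk) == chunk_size:
            total += sum(chunk)  # process the chunk, then discard it
            chunk = []
    if chunk:                    # leftover partial chunk
        total += sum(chunk)
    return total

# A generator stands in for a huge dataset: nothing is materialized.
big_dataset = (i for i in range(1_000_000))
print(chunked_sum(big_dataset, chunk_size=10_000))  # 499999500000
```

Only one chunk is ever resident in memory, so peak memory use depends on `chunk_size`, not on the total dataset size; the trade-off is extra passes and I/O, which is why Spark jobs that spill to disk run slower than fully in-memory ones.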