About the bigdata-labs category


#1

This category is for discussing issues related to the Big Data labs by ITVersity.

Going forward, only staff members will be able to create topics here. Other group members will be able to reply, and everyone else can view the topics and their responses.

  • Website: labs.itversity.com
  • Functionality issues with the Big Data labs
  • Requests for new functionality or enhancing existing functionality
  • Other unforeseen issues

Also, please use appropriate tags for your questions. For missing requirements, please tag with requirements.


#3

Could you please let us know when real-time datasets/logs other than the retail_dba dataset will be available for practice?


#4

Hi,
I registered and paid for the lab (6 months), but now I cannot access it. It comes up with a "site cannot be reached" error. I am using the URL above.

Thanks,
Naveen


#5

Can you share the screenshot?


#6

I currently have a laptop with an i3 processor and 8 GB RAM. Will it suffice for the Big Data labs?
I'm planning to subscribe to the same.


#7

Hi,

I am trying to load a CSV file to create a DataFrame, but I am getting the error below:
scala> val data = spark.read.csv("flight")
java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Permission denied
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.spark.sql.hive.client.HiveClientImpl.(HiveClientImpl.scala:171)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
at org.apache.spark.sql.hive.HiveSharedState.externalCatalog$lzycompute(HiveSharedState.scala:46)
at org.apache.spark.sql.hive.HiveSharedState.externalCatalog(HiveSharedState.scala:45)
at org.apache.spark.sql.hive.HiveSessionState.catalog$lzycompute(HiveSessionState.scala:50)
at org.apache.spark.sql.hive.HiveSessionState.catalog(HiveSessionState.scala:48)
at org.apache.spark.sql.hive.HiveSessionState$$anon$1.(HiveSessionState.scala:63)
at org.apache.spark.sql.hive.HiveSessionState.analyzer$lzycompute(HiveSessionState.scala:63)
at org.apache.spark.sql.hive.HiveSessionState.analyzer(HiveSessionState.scala:62)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:143)
at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:401)
at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:342)
… 48 elided
Caused by: java.lang.RuntimeException: java.io.IOException: Permission denied
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:515)
… 71 more
Caused by: java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createTempFile(File.java:2024)
at org.apache.hadoop.hive.ql.session.SessionState.createTempFile(SessionState.java:818)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:513)
… 71 more

I am not able to figure out where the permissions are required.
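Judging by the bottom of the stack trace, the "Permission denied" comes from Hive's SessionState trying to create a local temp file, not from reading the CSV itself. As a rough sketch (the paths are assumptions; the actual scratch directory depends on the cluster's hive-site.xml), one can check whether the usual local temp locations are writable:

```shell
# Check that the local temp locations Hive typically uses are writable.
# /tmp/hive is an assumption; the real scratch dir is set in hive-site.xml.
ls -ld /tmp
test -w /tmp && echo "/tmp is writable"
ls -ld /tmp/hive 2>/dev/null || echo "/tmp/hive not present (may be fine)"
```

If a location turns up non-writable for your user, that would match the `SessionState.createTempFile` failure in the trace.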


#8

Try now. There were some temporary files that were causing the issue.

Sorry for the delay in response.


#9

Thanks.
Both Spark 2 and Spark 1.6 are working now.


#10

Hi Admin,

I hope this is the right category to raise this issue. I have just subscribed to the Big Data labs, and while trying to come up with a username, it keeps giving the error that the username is too short. I have tried many different combinations and complex names, still with no result.
I would request your help ASAP.

Thanks,
Rohit


#11

Hello Rohit,

Is the issue resolved?
This was overlooked.

Regards,
Durga Gadiraju


#12

Hi Admin,

I am getting the following "insufficient memory for the Java Runtime" error even when executing a simple hadoop fs -ls command. Every time I log in, it keeps appearing after a few commands. Attaching the screenshot below.


#13

Hi Sir,

How can I practice Scala applications in Eclipse? Does ITVersity provide access to Eclipse as well?


#14

Hi,

How can I access the cluster through PuTTY? I tried to connect to gw01.itversity.com through PuTTY using SSH, but I am getting a "host not found" error. Please help.

Regards
Dhanya
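For reference, the equivalent from a command-line SSH client would be something like the following (the username is a placeholder; in PuTTY, enter the same host name with port 22 and connection type SSH):

```shell
# Placeholder username; replace with your own lab username.
# Connects to the gateway host over SSH on the default port 22.
ssh yourusername@gw01.itversity.com
```

If this also fails to resolve the host, the problem is likely local DNS or a typo in the host name rather than PuTTY itself.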


#15

hadoop fs -ls /user/sundharittce/
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000787a21000000, 351272960, 0) failed; error='Cannot allocate memory' (errno=12)

There is insufficient memory for the Java Runtime Environment to continue.

Native memory allocation (mmap) failed to map 351272960 bytes for committing reserved memory.

An error report file with more information is saved as:

/home/sundharittce/hs_err_pid23177.log

[sundharittce@gw01 ~]$


#16

I am having issues launching Spark because of insufficient memory. Please fix this; it is happening quite frequently these days. Here's the error message:

[debjanis@gw01 ~]$ spark-shell --conf spark.ui.port=22322
Multiple versions of Spark are installed but SPARK_MAJOR_VERSION is not set
Spark1 will be picked by default
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x000069e121000000, 716177408, 0) failed; error='Cannot allocate memory' (errno=12)

There is insufficient memory for the Java Runtime Environment to continue.

Native memory allocation (mmap) failed to map 716177408 bytes for committing reserved memory.

An error report file with more information is saved as:

/home/debjanis/hs_err_pid28448.log
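These failures happen when the client JVM on the gateway node asks for more heap than the machine can spare at that moment. As a hedged workaround (an assumption on my part, not an official fix from the lab staff), the client-side heap can be capped before running the commands:

```shell
# Sketch: cap the Hadoop client JVM heap so `hadoop fs` commands
# request less memory on a busy gateway node (512m is an assumed value).
export HADOOP_CLIENT_OPTS="-Xmx512m"
hadoop fs -ls /user/yourusername/

# Similarly, ask spark-shell for a smaller driver heap.
spark-shell --conf spark.ui.port=22322 --driver-memory 512m
```

`HADOOP_CLIENT_OPTS` is the standard hook for passing JVM options to Hadoop client commands, and `--driver-memory` is the standard spark-shell flag; the specific sizes here are guesses that may need tuning.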


#17

The issue is fixed. We face it from time to time; after multiple tries, it worked.


#18

Hi,

If I were to purchase a lab, would it be equipped with sbt integrated with the Scala IDE for Eclipse? I want to use Scala the same way Durga uses it in the videos, with the IntelliJ-style hints that pop up while you are coding.

Kindest Regards,
David


#19

Hi, I have some general questions about the Big Data labs.
If I enroll for 182 days, what is the maximum number of nodes I can use? And what will be the maximum configuration (RAM, HDD, processor) on each node? I am sure this will be a question for all new people planning to enroll. Can you please highlight this information somewhere on the information page?

Thanks


#20

Hi Durga, I just subscribed to the labs and am facing the same issue as Rohit. Please help me sort out creating a username.
Thanks,
Anand


#21

Please use at least 8 characters in the username, and try a combination of letters and numbers.
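As a sketch, the rule above (assumed here to mean at least 8 characters, letters and digits only) can be checked locally before submitting the form:

```shell
# Hypothetical check mirroring the stated rule:
# at least 8 characters, letters and digits only.
username="anand123"
if echo "$username" | grep -Eq '^[A-Za-z0-9]{8,}$'; then
  echo "looks valid"
else
  echo "too short or has invalid characters"
fi
```

This is only a guess at the signup form's validation; the actual rules may differ.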