Reading a sequence file in SparkContext failed


#1

I am practicing reading various formats of files in SparkContext.

I see from Arun’s blog:
sparkContext.sequenceFile(<path>, classOf[<keyClass>], classOf[<valueClass>])
// read the header of the sequence file to understand which two class names need to be used here
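
As I understand it, sequenceFile takes the path plus two classes: one for the key type and one for the value type. A minimal sketch of the full call shape, with LongWritable/Text chosen purely for illustration:

import org.apache.hadoop.io.{LongWritable, Text}

// sequenceFile[K, V](path, keyClass, valueClass) returns an RDD[(K, V)].
// The two classes are the key and value class names recorded in the file's header.
val rdd = sc.sequenceFile("some/path", classOf[LongWritable], classOf[Text])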

So I inspected the header of my sequence file, which is:
[paslechoix@gw03 ~]$ hdfs dfs -cat orders0312seq/part-m-00000 |head
SEQ!org.apache.hadoop.io.LongWritableordersE▒Ӗ▒LҐ▒▒@▒▒-OCLOSED@▒▒PENDING_PAYMENT@▒▒/COMPLETE@▒▒"{CLOSED@▒▒,COMPLETE@▒COMPLETE@▒▒COMPLET@▒▒
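
For reference, I believe the header can also be read with Hadoop's SequenceFile.Reader instead of eyeballing raw bytes (a sketch, assuming the Hadoop client classes are on the spark-shell classpath):

import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.SequenceFile

// Print the key and value class names stored in the file's header.
val reader = new SequenceFile.Reader(sc.hadoopConfiguration,
  SequenceFile.Reader.file(new Path("orders0312seq/part-m-00000")))
println(reader.getKeyClassName)   // org.apache.hadoop.io.LongWritable, per the dump above
println(reader.getValueClassName) // the second class name, hard to read in the binary noise
reader.close()

From the cat output, the key class is clearly org.apache.hadoop.io.LongWritable; the value class name right after it is buried in the binary noise (it looks like it may be a Sqoop-generated "orders" class).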

So I wrote my command as:
sc.sequenceFile("orders0312seq", classOf[org.apache.hadoop.io.LongWritable]).take(10)

I got the following error:
scala> sc.sequenceFile("orders0312seq", classOf[org.apache.hadoop.io.LongWritable]).take(10)
<console>:28: error: type mismatch;
 found   : Class[org.apache.hadoop.io.LongWritable]
 required: Int
       sc.sequenceFile("orders0312seq", classOf[org.apache.hadoop.io.LongWritable]).take(10)
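
If I read the error right, with only one class argument the call resolves to the sequenceFile(path, minPartitions: Int) overload, which is why the compiler expects an Int in the second position. The SparkContext overloads, as I understand them (simplified):

def sequenceFile[K, V](path: String, keyClass: Class[K], valueClass: Class[V]): RDD[(K, V)]
def sequenceFile[K, V](path: String, keyClass: Class[K], valueClass: Class[V], minPartitions: Int): RDD[(K, V)]
def sequenceFile[K, V](path: String, minPartitions: Int): RDD[(K, V)] // relies on implicit WritableConverters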

Well, I tried:
sc.sequenceFile("orders0312seq", classOf[org.apache.hadoop.io.Int]).take(10)
sc.sequenceFile("orders0312seq", classOf[org.apache.hadoop.io.int]).take(10)
sc.sequenceFile("orders0312seq", classOf[org.apache.hadoop.io.Integer]).take(10)
sc.sequenceFile("orders0312seq", classOf[org.apache.hadoop.io.integer]).take(10)
sc.sequenceFile("orders0312seq", classOf[org.apache.hadoop.io.intWritable]).take(10)

All failed.
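
My guess now is that the second argument should not be a different key class at all, but the value class. Something shaped like this looks closer, though I am not sure Text is right for my file (the header above seems to name a Sqoop-generated orders class as the value type):

import org.apache.hadoop.io.{LongWritable, Text}

// Supplying both classes selects the (path, keyClass, valueClass) overload.
// Text here is only a placeholder guess for the value class.
sc.sequenceFile("orders0312seq", classOf[LongWritable], classOf[Text])
  .map { case (k, v) => (k.get, v.toString) } // convert Writables early; Hadoop reuses the objects
  .take(10)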

What would be the right class to put here?

Thank you.