```scala
val aggregrate = revenueMap.aggregateByKey((0.0,0),(x,y)=>(x._1+y,x._2+1),(x,y)=>(x._1+y._1,x._2+y._2))
```

```
<console>:43: error: too many arguments for method aggregateByKey:
(zeroValue: ((Double, Int), (_, _) => _, (_, _) => _))(seqOp: (((Double, Int), (_, _) => _, (_, _) => _), Float) => ((Double, Int), (_, _) => _, (_, _) => _), combOp: (((Double, Int), (_, _) => _, (_, _) => _), ((Double, Int), (_, _) => _, (_, _) => _)) => ((Double, Int), (_, _) => _, (_, _) => _))(implicit evidence$3: scala.reflect.ClassTag[((Double, Int), (_, _) => _, (_, _) => _)])org.apache.spark.rdd.RDD[(String, ((Double, Int), (_, _) => _, (_, _) => _))]
```

You need to use the underscore notation: it should be `x._1`, not `x.1`. Replace all `.1` and `.2` with `._1` and `._2`.

Also, the initialisation (zero value) should be in its own set of parentheses, i.e. `aggregateByKey((0.0,0))(combine logic, merge logic)`.
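Assuming `revenueMap` is an `RDD[(String, Double)]`, the corrected call would look like `revenueMap.aggregateByKey((0.0, 0))(seqOp, combOp)`. Here is a plain-Scala sketch (no Spark required, with made-up order IDs) of what that seqOp/combOp pair computes per key:

```scala
// Plain-Scala simulation of aggregateByKey semantics.
// Hypothetical sample data standing in for revenueMap's contents:
val revenue = Seq(("o1", 100.0), ("o1", 50.0), ("o2", 200.0))

val zero = (0.0, 0)                                      // (runningSum, count)
val seqOp  = (acc: (Double, Int), v: Double) => (acc._1 + v, acc._2 + 1)
val combOp = (a: (Double, Int), b: (Double, Int)) => (a._1 + b._1, a._2 + b._2)

// Group by key and fold each key's values with seqOp, starting from zero
// (combOp would merge partial results across partitions in real Spark).
val agg = revenue.groupBy(_._1).map { case (k, vs) =>
  (k, vs.map(_._2).foldLeft(zero)(seqOp))
}
// agg: Map(o1 -> (150.0, 2), o2 -> (200.0, 1))
```

Note that the zero value sits in its own parameter list, which is exactly what the compiler error above was complaining about.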

Thanks @N_Chakote

I have an RDD as below:

```
scala> map2.collect().foreach(println)
(cat,(2,7))
(cat,(2,3))
(mouse,(4,8))
(cat,(2,7))
(dog,(1,5))
(mouse,(4,4))
```

My outcome should look like the below, just summing the two tuple values per key:

```
(cat,(6,17))
(mouse,(8,12))
(dog,(1,5))
```

Tried with:

```scala
val agg2 = map2.aggregateByKey(0,0)((x,y)=>(x._1+y._1,x._2+y._2),(x,y)=>(x._1+y._1,x._2+y._2))
```

But it didn't work. Can you help me?

I think you can use `reduceByKey`, since your input and output data types are the same (a tuple) and the combine and reduce logic are both the same additive operation.

(The underscore is not coming through after the '.', so make the necessary changes.)

```scala
val agg2 = map2.reduceByKey((x, y) => (x._1 + y._1, x._2 + y._2))
```
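As a sanity check, here is a self-contained plain-Scala sketch (no Spark session needed) that applies the same reduce logic to the sample data from the question:

```scala
// Sample data from the question, as plain Scala pairs
val data = Seq(
  ("cat", (2, 7)), ("cat", (2, 3)), ("mouse", (4, 8)),
  ("cat", (2, 7)), ("dog", (1, 5)), ("mouse", (4, 4))
)

// Local-collection equivalent of reduceByKey:
// group by key, then reduce each key's values pairwise
val reduced = data.groupBy(_._1).map { case (k, vs) =>
  (k, vs.map(_._2).reduce((x, y) => (x._1 + y._1, x._2 + y._2)))
}
// reduced: Map(cat -> (6,17), mouse -> (8,12), dog -> (1,5))
```

This matches the expected outcome above, so the same lambda should work as-is inside `reduceByKey`.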

reduceByKey worked. Thanks!

I'm pasting with '_', but in this text box it gets removed automatically. Next time I'll make sure it doesn't happen while pasting.

Probably you need to format the code.