Issue with Scala code for DB query

I am new to Spark, and I have written Scala code in Eclipse to query an existing SQL Server table. Below is the code:

    import org.apache.spark.SparkContext
    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SQLContext

    object SQLServerTbleCreate {
      def main(args: Array[String]) {
        val conf = new SparkConf()
          .setAppName("test SQL")
        val sc = new SparkContext(conf)
        // SQLContext lives in org.apache.spark.sql, not org.apache.spark
        val sqlContext = new SQLContext(sc)

        val jdbcSqlConnStr = "jdbc:sqlserver://;databaseName=xxx;user=xxx;password=xxx;"
        // the table to read; was referenced but never defined
        val jdbcDbTable = "xxxxx"

        val jdbcDF = sqlContext.read.format("jdbc")
          .options(Map("url" -> jdbcSqlConnStr, "dbtable" -> jdbcDbTable))
          .load()

        // register the DataFrame as a temp table so sqlContext.sql can see it
        jdbcDF.registerTempTable("xxxxx")
        val test = sqlContext.sql("SELECT xxxx, xxxx FROM xxxxx")
      }
    }



I have Spark 1.6.1 and Scala 2.10, so I have used the POM dependencies below.




    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-compiler</artifactId>
        <version>2.10.4</version>
    </dependency>
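I believe the POM also needs the Spark artifacts themselves and the SQL Server JDBC driver. A sketch of what I am assuming (group/artifact names and versions may need adjusting; Microsoft's driver is often not in Maven Central and may have to be installed locally with `mvn install:install-file`):

```xml
<!-- Spark core and Spark SQL, built for Scala 2.10 -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.6.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.6.1</version>
</dependency>
<!-- SQL Server JDBC driver; assumed coordinates, may require a local install -->
<dependency>
    <groupId>com.microsoft.sqlserver</groupId>
    <artifactId>sqljdbc4</artifactId>
    <version>4.0</version>
</dependency>
```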


Please let me know whether my code and imports are correct. I am using spark-submit to execute the code as below:

    spark-submit --class --master yarn --deploy-mode cluster

When I execute the above, I get either "no suitable driver" or a class-not-found error on the classpath. When I give the .jar at the end of the command, I get an application-jar closed-with-failure error.
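My understanding is that "no suitable driver" means the SQL Server JDBC jar is not on the driver/executor classpath, so it has to be passed to spark-submit explicitly. A sketch of the command I think is intended, where the jar paths are placeholders for my actual files:

```shell
# Sketch only: jar paths below are placeholders, not my real values.
# --jars ships the SQL Server JDBC driver to the driver and executors so
# java.sql.DriverManager can find a driver for jdbc:sqlserver:// URLs.
spark-submit \
  --class SQLServerTbleCreate \
  --master yarn \
  --deploy-mode cluster \
  --jars /path/to/sqljdbc4.jar \
  /path/to/my-application.jar
```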

Can somebody please point out the mistake I am making? It is very important for me.