Issue with Scala code for DB query

Hello,
I am new to Spark, and I have written Scala code in Eclipse to query an existing SQL Server table. Below is the code:

_______________________________
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.sql.SQLContext

object SQLServerTbleCreate {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("test SQL")
      .set("spark.executor.memory", "1g")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // JDBC connection string and the table to read
    val jdbcSqlConnStr = "jdbc:sqlserver://xxx.xxx;databaseName=xxx;user=xxx;password=xxx;"
    val jdbcDbTable = "xxxxx"

    // Load the SQL Server table through the JDBC data source
    val jdbcDF = sqlContext.read
      .format("jdbc")
      .options(Map("url" -> jdbcSqlConnStr, "dbtable" -> jdbcDbTable))
      .load()

    // Register the DataFrame as a temporary table so it can be queried with SQL
    jdbcDF.registerTempTable("xxxxx")

    val test = sqlContext.sql("SELECT xxxx, xxxx FROM xxxxx")
    test.show(10)
  }
}


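From what I have read, the JDBC data source also accepts a "driver" option to name the driver class explicitly, which apparently matters when the driver does not auto-register. A minimal sketch of that variant, assuming the standard Microsoft driver class com.microsoft.sqlserver.jdbc.SQLServerDriver and that its jar is on the classpath (the column and table names are just placeholders):

    // Minimal sketch, assuming the Microsoft JDBC driver jar is on the classpath.
    // Adjust the driver class if a different driver (e.g. jTDS) is used.
    val jdbcDF = sqlContext.read
      .format("jdbc")
      .option("url", jdbcSqlConnStr)
      .option("dbtable", "xxxxx")
      .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
      .load()

    // The DataFrame API can also be used directly instead of a temp table:
    jdbcDF.select("xxxx", "xxxx").show(10)

Would this be the right way to do it?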
I have Spark 1.6.1 and Scala 2.10, so I have used the POM dependencies below.

=============================

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-yarn_2.10</artifactId>
  <version>1.6.1</version>
</dependency>

<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>3.8.1</version>
  <scope>test</scope>
</dependency>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.10</artifactId>
  <version>1.6.1</version>
</dependency>

<dependency>
  <groupId>org.scala-lang</groupId>
  <artifactId>scala-compiler</artifactId>
  <version>2.10.4</version>
</dependency>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.6.1</version>
</dependency>
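I have not added the SQL Server JDBC driver itself to the POM. If it were available as a Maven artifact (I understand it is often installed into a local repository manually), I assume the dependency would look roughly like this; the coordinates below are a guess based on the sqljdbc4 jar and would need to match whatever driver version is actually available:

<!-- Hypothetical coordinates; use the groupId/artifactId/version that your
     SQL Server JDBC driver (e.g. sqljdbc4 or mssql-jdbc) is published under. -->
<dependency>
  <groupId>com.microsoft.sqlserver</groupId>
  <artifactId>sqljdbc4</artifactId>
  <version>4.0</version>
</dependency>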

Please let me know whether my code and imports are correct. I am executing the code with spark-submit as below:

spark-submit --class com.run.Main --master yarn --deploy-mode cluster

When I execute the above, I get either a “no suitable driver” error or a class-not-found error. When I give the .jar at the end of the command, I get an “application jar closed with failure” error.
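Should the command instead ship the JDBC driver jar with --jars and end with the application jar, something like the following? The jar names and paths here are only placeholders for whatever my build actually produces:

spark-submit --class com.run.Main \
  --master yarn --deploy-mode cluster \
  --jars /path/to/sqljdbc4.jar \
  /path/to/my-spark-app.jar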

Can somebody please point out the mistake I am making? It is very important for me.