How to execute a Spark program that loads a Hive table

I am new to Spark. I learnt how to load a Hive table from spark-shell. I tried to do the same from Eclipse, and here is the program I have written.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.SaveMode

object SuperSpark {
  case class partclass(id:Int, name:String, salary:Int, dept:String, location:String)
  def main(argds: Array[String]) {
    val warehouseLocation = "file:${system:user.dir}/spark-warehouse"
    val sparkSession = SparkSession.builder
                        .master("local[2]")
                        .appName("Saving data into HiveTable using Spark")
                        .config("hive.exec.dynamic.partition", "true")
                        .config("hive.exec.dynamic.partition.mode", "nonstrict")
                        .config("hive.metastore.warehouse.dir", "/user/hive/warehouse")
                        .config("spark.sql.warehouse.dir", warehouseLocation)
                        .enableHiveSupport() // required so the session can talk to Hive
                        .getOrCreate()       // the builder must be materialized into a session
    import sparkSession.implicits._

    // read the text file; each line is expected to be "id,name,salary,dept,location"
    val partfile = sparkSession.read.textFile("partfile")
    val partdata = => p.split(","))
    val partRDD  = => partclass(line(0).toInt, line(1), line(2).toInt, line(3), line(4)))
    val partDF   = partRDD.toDF()

    // save the DataFrame as a Hive table (the table name here is just a placeholder)
    partDF.write.mode(SaveMode.Overwrite).saveAsTable("partition_table")
  }
}

What I don't understand now is how to execute this program. I'm stuck at these points:

1. How do I add the connection and database details of the Hive tables in the program? Could anyone tell me how to add those details programmatically?
2. Should I use 'spark-submit', or just do 'Run As > Scala Application' from Eclipse to run the above program?
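For context, the spark-submit route I am asking about would look something like the sketch below. The jar path, Scala version, and master setting are just placeholders from my guesswork, not a working command:

```shell
# Package the project into a jar first (e.g. with 'sbt package'), then:
# (jar path and class name below are placeholders for my own build output)
spark-submit \
  --class SuperSpark \
  --master local[2] \
  target/scala-2.11/superspark_2.11-0.1.jar
```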