Loading files from S3 to Redshift

hive
apache-spark
hadoop
scala

#1

Hi All,

I have multiple flat files (with different file names) in an S3 location and want to load them into Redshift using Spark (Scala/Python).
I can read and load each file, but the code is different for each file because the files themselves are different.

I want to parameterize the code so that I do not have to write a separate job for each load.
Can anyone suggest how to parameterize the file name in Spark (Scala)?
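
To make the question concrete, here is a minimal sketch of the kind of parameterized load I have in mind, assuming the spark-redshift connector is on the classpath and taking the S3 path and target table as spark-submit arguments (the bucket names, table name, and connection details are placeholders):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object S3ToRedshiftLoad {
  def main(args: Array[String]): Unit = {
    // Expect the S3 path and target Redshift table as arguments, e.g.
    //   spark-submit --class S3ToRedshiftLoad myjob.jar s3a://my-bucket/input/file1.csv public.file1_table
    // (paths, table, and connection settings below are placeholders)
    val Array(s3Path, redshiftTable) = args

    val spark = SparkSession.builder()
      .appName("S3ToRedshiftLoad")
      .getOrCreate()

    // Read the flat file; the format and options would vary per file type
    val df = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv(s3Path)

    // Write to Redshift via the spark-redshift connector
    df.write
      .format("com.databricks.spark.redshift")
      .option("url", "jdbc:redshift://<cluster>:5439/<db>?user=<user>&password=<pass>")
      .option("dbtable", redshiftTable)
      .option("tempdir", "s3a://my-bucket/tmp/")
      .option("forward_spark_s3_credentials", "true")
      .mode(SaveMode.Append)
      .save()

    spark.stop()
  }
}
```

The idea would be to submit the same jar once per file with a different path and table argument, but I am not sure this is the right approach when the files have different schemas.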