PySpark - save to Impala

Hello,
I want to save data to a new Impala table "myTable" in a new database called "myDB", in Parquet format with Snappy compression. Is this the right way?

dataframe.write.saveAsTable('myDB.myTable', format='hive', fileFormat='parquet', compression='snappy', mode='overwrite')

Or do I have to create the database first? How?
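For context, here is the fuller snippet I am experimenting with. The CREATE DATABASE line is only my guess at how to create the database first, and the session setup and the dataframe shown here are just placeholders from my test script:

from pyspark.sql import SparkSession

# Session created with Hive support so saveAsTable goes to the shared metastore
spark = (
    SparkSession.builder
    .appName("save-to-impala")   # placeholder app name
    .enableHiveSupport()
    .getOrCreate()
)

# Placeholder for the DataFrame I actually want to save
dataframe = spark.range(10)

# My guess: create the database first if it does not already exist
spark.sql("CREATE DATABASE IF NOT EXISTS myDB")

# Write the table as Parquet with Snappy compression
(
    dataframe.write
    .mode("overwrite")
    .format("parquet")
    .option("compression", "snappy")
    .saveAsTable("myDB.myTable")
)

# Do I still need to run INVALIDATE METADATA myDB.myTable on the Impala side afterwards?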
Thank you.