Spark save dataframe on S3 with SSE enabled?

Has anyone used Spark's df.write().format(...).save() to write a DataFrame to S3 with server-side encryption (SSE) enabled?

Here are two resources you can check out.

  1. https://docs.databricks.com/spark/latest/data-sources/amazon-s3.html

This should cover most cases; on Databricks you can use the following commands to mount the bucket with SSE-S3 and write to it:

dbutils.fs.mount(s"s3a://$AccessKey:$SecretKey@$AwsBucketName", s"/mnt/$MountName", "sse-s3")
dbutils.fs.put(s"/mnt/$MountName", "file content")

  2. https://github.com/knoldus/spark-s3
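If you are not on Databricks, a plain-Spark alternative is to request SSE-S3 through the Hadoop S3A connector's `fs.s3a.server-side-encryption-algorithm` setting and then write the DataFrame directly. This is a minimal sketch assuming Spark with the `hadoop-aws` library on the classpath; the input path and bucket name are placeholders:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("sse-write-example")
  .getOrCreate()

// Ask S3A to request server-side encryption (AES256 = SSE-S3) on every PUT.
spark.sparkContext.hadoopConfiguration
  .set("fs.s3a.server-side-encryption-algorithm", "AES256")

val df = spark.read.json("/path/to/input")  // placeholder input path

df.write
  .format("parquet")
  .save("s3a://your-bucket/output/")        // placeholder bucket
```

Because the setting lives in the Hadoop configuration, every S3A write in the session (including `df.write().format(...).save()`) is encrypted, with no change to the write call itself.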

Can you please try them and post the solution? Let us know if you are still having issues.