Saving a Spark DataFrame to S3 with SSE enabled?

Has anyone used Spark's df.write() to write to S3 with SSE (server-side encryption) enabled?

There are two libraries below; you can check them out.


This should mostly cover it. On Databricks, you can mount the bucket with SSE-S3 encryption and then write through the mount:

dbutils.fs.mount(s"s3a://$AccessKey:$SecretKey@$AwsBucketName", s"/mnt/$MountName", "sse-s3")
dbutils.fs.put(s"/mnt/$MountName", "file content")
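Outside of Databricks mounts, an alternative is to set the S3A server-side-encryption property on the Hadoop configuration before calling df.write(). A minimal sketch, assuming SSE-S3 (AES256); the bucket name and output path are placeholders:

// Enable SSE-S3 (AES256) for all S3A writes in this Spark session
spark.sparkContext.hadoopConfiguration
  .set("fs.s3a.server-side-encryption-algorithm", "AES256")

// "my-bucket" and the path below are hypothetical examples
df.write
  .mode("overwrite")
  .parquet("s3a://my-bucket/path/to/output")

For SSE-KMS instead of SSE-S3, the same property takes the value "SSE-KMS", with the key ARN supplied via "fs.s3a.server-side-encryption.key".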


Can you please try them and post the solution? Let us know if you're still having issues.