To add to Matthias' excellent answer: if you are running Spark just within a Docker image, you are not really taking advantage of Spark, right? Where is your Spark cluster running? Usually you should be able to just run a `spark-submit` command, e.g. via a ShellTask, to submit the job to your cluster.
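For example, here is a minimal sketch assuming Prefect 1.x's `ShellTask` and a standalone cluster; the master URL (`spark://spark-master:7077`) and the job path are just placeholders for your own setup:

```python
from prefect import Flow
from prefect.tasks.shell import ShellTask

# Shell task that runs an arbitrary command on the machine executing the flow
spark_submit = ShellTask(name="spark-submit")

with Flow("submit-spark-job") as flow:
    # Submit the job to an existing Spark cluster instead of
    # running Spark itself inside the Docker image
    spark_submit(
        command=(
            "spark-submit "
            "--master spark://spark-master:7077 "  # hypothetical master URL
            "--deploy-mode cluster "
            "/opt/jobs/my_job.py"  # hypothetical path to your Spark job
        )
    )

flow.run()
```

That way the Docker image only needs the `spark-submit` client (and Prefect), while the heavy lifting happens on the cluster.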
Also, we've already had this conversation about Spark and packaging it into a Docker image; let me check if I can find it.