This is not a bug; Server is working as designed given how you've configured it and how you've set up your script/tool for publishing.
First, you're correct on the data copy point: if you've disabled data copying to the server, you can't publish something that needs to upload data as part of the service.
What's going wrong is how you've set up your scratch directory, combined with your assumptions about what registering the folder does and how the service behaves.
You have a few ways to change your publishing configuration to be more successful.
1) Set the scratch workspace inside ArcMap before running the tool and publishing. If it's writing to C:\Users\MyUserName\Documents\ArcGIS\scratch.gdb, I can tell you haven't set it and are letting it fall back to the default location. A better practice is to create a scratch directory inside your current working directory, point ArcMap at that, and register the root folder in your data store. Now it won't copy the data when you publish. See the second screenshot here: A quick tour of authoring and sharing geoprocessing services—Documentation | ArcGIS Enterprise. There, the tool has been built in a directory with a ToolData and a Scratch folder, so everything is nicely contained. Reference the folder the model lives in.
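As a rough sketch of option 1 (the paths here are hypothetical; substitute your own project layout, and run this from ArcMap's Python window or at the top of your script):

```python
import os
import arcpy

# Hypothetical project layout: this is the folder you register in the data store.
project_dir = r"C:\GISProjects\MyTool"
scratch_dir = os.path.join(project_dir, "Scratch")

# Point the scratch workspace at a folder INSIDE the registered directory,
# instead of the per-user default under ...\Documents\ArcGIS\scratch.gdb.
arcpy.env.scratchWorkspace = scratch_dir

# arcpy derives scratchGDB and scratchFolder from scratchWorkspace, so
# anything written via %scratchGDB% now stays under project_dir.
print(arcpy.env.scratchGDB)
```

Because everything now lives under the registered folder, the publisher can reference the data in place rather than copying it.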
2) Change your intermediate outputs to in_memory. Unless the tool absolutely needs to write to disk, writing to in_memory is generally faster (I say generally because there are some caveats). This is probably the easiest and best option.
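A minimal sketch of option 2, assuming a buffer step as the intermediate operation (the input dataset and distance are hypothetical):

```python
import arcpy

# Hypothetical input -- substitute your own dataset and parameters.
roads = r"C:\GISProjects\MyTool\ToolData\roads.shp"

# Write the intermediate result to the in_memory workspace instead of disk;
# it never touches the scratch folder, so there's nothing to copy on publish.
buffered = arcpy.Buffer_analysis(roads, "in_memory/roads_buffer", "100 Meters")

# ...use "in_memory/roads_buffer" in subsequent steps, then clean up.
arcpy.Delete_management("in_memory")
```

In ModelBuilder the equivalent is simply pointing each intermediate output at an `in_memory\name` path.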
3) Allow data copying to the server during publishing.
If you go with option 1) above, you need to understand that registering the folder in the data store does not mean the service will write results to that directory. A geoprocessing service using %scratchGDB% or arcpy.env.scratchGDB writes to a job-specific folder inside the arcgisserver directory structure.
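In script terms, that behavior looks roughly like this (the input feature class name is hypothetical, and the exact jobs path varies by install):

```python
import os
import arcpy

# On the server, scratchGDB resolves at run time to a job-specific
# geodatabase under the arcgisserver jobs directories -- not to the
# Scratch folder you registered in the data store.
out_fc = os.path.join(arcpy.env.scratchGDB, "result")

# Write the service's output here so each job gets its own workspace.
arcpy.CopyFeatures_management("in_memory/roads_buffer", out_fc)
```

The same script works on the desktop (where scratchGDB resolves to your local scratch workspace) and on the server, which is exactly why writing to %scratchGDB% rather than a hard-coded path is the recommended pattern.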