Hi there, this is a feature of tremendous importance: being able to mount remote file stores in a Docker container. It should be straightforward to implement, and you could charge a premium for it, because right now no one offers this and every workaround is extremely painful.

The request: allow your Docker offerings to specify an S3 or NFS datastore which then appears as a disk mount in the container. Your backend would use the fantastic rclone tool to mount the remote, then bind it as a volume at the host level, at which point it becomes available to the container, similar to how disks become available at /var/data. I'd gladly pay an extra $15 a month per instance for this, and you wouldn't have to allocate any disk for it yourself on your end. You'd also get to bill the network bandwidth to the user, and we'd all pay it, because every other approach involves tremendous pain.

Here's an example of how rclone can be used to mount a backend, in this case an S3 backend (with --vfs-cache-mode set to writes):

rclone --config rclone_mount.conf mount dst:/Transcriptions/Media/ /var/mnt --vfs-cache-mode writes

If you provided this, I would move my entire AI pipeline to your Docker instances. Right now I have to rent a VPS from Digital Ocean, and that has a lot of problems. I also see a lot of other nearly identical requests, like allowing an SFTP remote store to be mounted; all of these can be solved with the rclone tool as described above. This seems like very easy low-hanging fruit: cheap to implement, with high-impact revenue implications for Render.com.
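For reference, here is a minimal sketch of what the host-side setup could look like, assuming a hypothetical S3 bucket and placeholder credentials; the remote name "dst", the paths, and the container image name are illustrative only, not part of any existing Render feature.

# rclone_mount.conf -- hypothetical S3 remote named "dst"
[dst]
type = s3
provider = AWS
access_key_id = PLACEHOLDER_ACCESS_KEY
secret_access_key = PLACEHOLDER_SECRET_KEY
region = us-east-1

# On the host: create the mount point, mount the remote in the background,
# then bind it into the container as a volume.
mkdir -p /var/mnt
rclone --config rclone_mount.conf mount dst:/Transcriptions/Media/ /var/mnt --vfs-cache-mode writes --daemon
docker run -v /var/mnt:/var/data my-ai-pipeline:latest

The same pattern works for any backend rclone supports (S3, SFTP, Google Drive, and many others); only the [dst] section of the config changes.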