GPU instances
Naveed Fida
Would really love this. My specific need is hosting a pre-existing model for an inference workload. I know going to AWS directly is always an option, but since I'm hosting my other services here, having my GPU service here too would be very convenient.
Mehdi Mehni
We develop Generative AI applications and it would be very helpful to have GPU instances on Render. We would prefer to stay with Render, but if we can't access the necessary resources we may be forced to find another provider that meets our requirements.
Daniel Angell
Would significantly simplify our app development
Mihail Stojanni
could def use this
Dan Croak
I have a Python background worker running on Render that uses OpenAI Whisper to summarize MP4 videos into a few paragraphs of text. It's working, but I see a warning: "/whisper/transcribe.py:114: UserWarning: FP16 is not supported on CPU; using FP32 instead".
My understanding of this warning is that if I want to optimize this program for speed and memory efficiency, I should consider switching to a GPU that supports FP16 operations.
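For context, that warning is triggered by the `fp16` flag of Whisper's `transcribe()`, which defaults to True even on CPU. A minimal sketch of a CPU/GPU-aware call (the helper name `fp16_supported` and the "base"/"video.mp4" choices are illustrative, not from the comment above):

```python
def fp16_supported(cuda_available: bool) -> bool:
    """Whisper falls back to FP32 on CPU, so only request FP16 on a CUDA GPU."""
    return cuda_available

# Illustrative usage (requires the torch and openai-whisper packages):
#   import torch, whisper
#   model = whisper.load_model("base")
#   result = model.transcribe(
#       "video.mp4",
#       fp16=fp16_supported(torch.cuda.is_available()),  # avoids the FP32 warning on CPU
#   )
#   print(result["text"])
```

Passing `fp16=False` on a CPU-only instance silences the warning without changing the result, since Whisper was computing in FP32 anyway; the actual speedup only comes with a GPU.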
Mathieu Cesbron
waiting for this
alexander.eric.3
I agree; having tensorflow-gpu makes my models train and predict much faster.
jasona
Agreed, particularly for ML inference serving. Either standalone GPU instance types or the ability to attach an elastic GPU would allow us to host ML inference servers on Render.
jeremy
Any update on this? Would love GPU support!
Peter Schröder
Running a service that uses TensorFlow or another AI library is becoming the industry standard. Using them without GPU support is very slow, and having to run them on another provider like gcloud is suboptimal as well. I would love to have that option here too.