GPU instances
adihanifsdr
2024, waiting for this
kavin
Agree, this is helpful for any LLM app, because otherwise if you want to use prompt compression you have to send data outside of Render and then back.
Frank Faubert
Would also love GPU instances and avoid having to go to AWS directly, as we host other services on Render.
Naveed Fida
Would really love this. My specific need is hosting a pre-existing model for inference workload. I know going to AWS directly is always an option but since I'm hosting my other services here, having my GPU service here would be very convenient.
Mehdi Mehni
We develop Generative AI applications and it would be very helpful to have GPU instances on Render. We would prefer to stay with Render, but if we can't access the necessary resources we may be forced to find another provider that meets our requirements.
Daniel Angell
Would significantly simplify our app development
Mihail Stojanni
could def use this
Dan Croak
I have a Python background worker running on Render that uses OpenAI Whisper to summarize MP4 videos into a few paragraphs of text. It works, but I see a warning: "/whisper/transcribe.py:114: UserWarning: FP16 is not supported on CPU; using FP32 instead".
My understanding of this warning is that if I want to optimize this program for speed and memory efficiency, I might want to consider switching to a GPU that supports FP16 operations.
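That understanding matches what the warning implies. A minimal sketch of the decision behind it (the `pick_precision` helper is hypothetical, for illustration only; the real check lives inside Whisper's `transcribe.py`):

```python
# Illustrative sketch of the device/precision logic behind the Whisper
# warning above. NOTE: pick_precision is a made-up helper, not Whisper's
# actual API -- Whisper performs an equivalent check internally.

def pick_precision(device: str, requested_fp16: bool = True) -> str:
    """Return the floating-point precision to use for inference.

    FP16 (half precision) halves memory use and is much faster on GPUs
    with FP16 support, but CPUs lack fast FP16 kernels, so Whisper
    silently downgrades to FP32 there -- producing the UserWarning.
    """
    if requested_fp16 and device.startswith("cuda"):
        return "fp16"   # GPU available: half precision is worthwhile
    return "fp32"       # CPU (or FP16 disabled): full precision

print(pick_precision("cpu"))     # fp32 -> the warning's fallback path
print(pick_precision("cuda:0"))  # fp16 on a supported GPU
```

On a CPU-only instance, the warning itself is harmless; if it is just noise, passing `fp16=False` to Whisper's `transcribe()` should suppress it. The real speed and memory win would come from running on a GPU with FP16 support, which is what this feature request would enable.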
Mathieu Cesbron
waiting for this
alexander.eric.3
I agree. Having tensorflow-gpu helps my models train and predict much faster.