GPU instances
Alex Casteel
Please add GPU instances to your services. NN models are a fairly baseline approach today.
John
I would pay $1 million for this feature
James Brotchie
Even small GPUs to speed up inference on local encoders would go a long way
Nigel Dove
I would use this
Nishanth Merwin
Having GPU instances would be very helpful for serving NN models and inference to our customers.
leahtucker
2024: +1 for GPU workload support for NLP pipelines on Render.
adihanifsdr
2024: still waiting for this.
kavin
Agreed, this would be helpful for any LLM app: otherwise, if you want to use prompt compression, you have to send data outside of Render and then back.
Frank Faubert
Would also love GPU instances so we can avoid going to AWS directly, since we host our other services on Render.
Naveed Fida
Would really love this. My specific need is hosting a pre-existing model for inference workloads. I know going to AWS directly is always an option, but since I'm hosting my other services here, having my GPU service here too would be very convenient.