Tutorial: Bokeh
closed
Lloyd Chang
When your time permits, please create a tutorial for Bokeh. Thank you.
Meanwhile, I created "Bokeh Hello World Example for Render" at https://github.com/lloydchang/render-examples-bokeh-hello-world
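For context, a Bokeh server app like the hello-world example is typically started with the `bokeh serve` command. The sketch below is an assumption about a reasonable Render start command, not the exact one used in the repo; `myapp.py`, the port fallback, and the websocket origin are illustrative:

```shell
# Sketch of a start command for a Bokeh app on Render (assumptions noted above).
# Render supplies the port via $PORT; Bokeh must also be told which external
# host is allowed to open websocket connections to the app.
bokeh serve myapp.py \
  --address 0.0.0.0 \
  --port "${PORT:-10000}" \
  --allow-websocket-origin "render-examples-bokeh-hello-world.onrender.com"
```

Without `--allow-websocket-origin`, the page loads but the plots stay blank, because the browser's websocket connection back to the Bokeh server is rejected.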
References:
- https://community.render.com/t/slow-performance-with-python-bokeh/6894
- https://community.render.com/t/re-slow-performance-with-python-bokeh/8818
- https://discourse.bokeh.org/t/deploying-bokeh-server-apps-to-aws-google-cloud-azure/7432
- https://discourse.bokeh.org/t/bokeh-server-app-on-aws/2427
- https://github.com/bokeh/demo.bokeh.org
- https://github.com/bokeh/demo.bokeh.org/issues/20
- https://cloud.google.com/architecture/bokeh-and-bigquery-dashboards
Anurag Goel
We don't plan to create the tutorial just yet, but we'll keep it in mind. Thanks for posting it to community.render.com!
Lloyd Chang
Continuation of the earlier thread:
----
Hi @Francesco Turci
I empathize with you; I ran into a similar performance issue with Bokeh.
Summary: when your Bokeh server's cache and subsystems are cold (for example, after the instance has been idle or restarted), they must be warmed up before the first request can be served, hence a cold start of upwards of 2 1/2 minutes.
I hope the following explanation helps.
Thank you for your time.
Like you, I also have a simple Bokeh server at https://render-examples-bokeh-hello-world.onrender.com/myapp
Since I had just deployed it via Render, it loads in about 1 second immediately afterwards, as measured via
time curl -v https://render-examples-bokeh-hello-world.onrender.com/myapp
real 0m1.427s
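As an aside, curl can report this timing itself, without wrapping it in `time`. A minimal sketch of that measuring technique (the helper name is mine; the URL in the usage comment is the example app above):

```shell
#!/bin/sh
# Print the total response time, in seconds, for a URL using curl's
# built-in write-out timer instead of the shell's `time`.
measure() {
  curl -s -o /dev/null -w '%{time_total}\n' "$1"
}

# Usage:
#   measure https://render-examples-bokeh-hello-world.onrender.com/myapp
```

Running `measure` once right after a deploy and once after the service has been idle makes the cold-start gap easy to see in a single number.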
When I tried the same measuring technique with your Bokeh server:
time curl -v https://kinetic-gas.onrender.com/main
> GET /main HTTP/1.1
> Host: kinetic-gas.onrender.com
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
The result is:
real 2m25.599s
which is nearly 2 minutes and 26 seconds, because the instance's cache and subsystems were most likely cold and needed to be warmed up.
There is a wait of upwards of 2 minutes between the
> Accept: */*
output and the
< HTTP/1.1 200 OK
output.
When I immediately load your Bokeh server a second time:
time curl -v https://kinetic-gas.onrender.com/main
The result is faster:
real 0m1.115s
which is about 1 second.
I believe the symptom of a 2 1/2 minute cold start in Render is similar to cold starts in competing products like AWS Lambda.
As for how other companies approach a solution, competing products offer paid warm-keeping features:
• AWS Lambda offers “Provisioned Concurrency”
• Google Cloud Functions offers "Minimum Instances"
• Azure Functions offers "Pre-Warmed Instances"
To solve this User Experience (UX) issue, those companies' customers pay extra for a feature that keeps a warmed instance (cache and subsystems) online 24x7x365, around the clock.
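A common do-it-yourself workaround, absent such a feature, is to ping the service periodically so the instance never goes cold. A minimal sketch (the ping interval is an assumption, and the URL in the cron comment is the example app from earlier; whether this is appropriate on a given hosting plan is up to the provider's terms):

```shell
#!/bin/sh
# One warm-up ping; prints the HTTP status code so a scheduler can log it.
ping_once() {
  curl -s -o /dev/null -w '%{http_code}\n' "$1"
}

# Example cron entry to ping every 10 minutes:
#   */10 * * * * curl -s -o /dev/null https://render-examples-bokeh-hello-world.onrender.com/myapp
```

This trades a small amount of constant background traffic for never paying the multi-minute cold-start cost on a real visitor's first request.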
If you want to read more about cold starts, please see https://en.wikipedia.org/wiki/Cold_start_(computing):
"Cold start in computing refers to a problem where a system or its part was created or restarted and is not working at its normal operation. The problem can be related to initialising internal objects or populating cache or starting up subsystems."
I don't know if Render will eventually offer "Provisioned Concurrency" or a similar paid feature.
Thank you for your time!
Lloyd Chang