

This page contains information and guidelines for improving the reliability, flexibility and stability of your JupyterHub deployment. Many of the settings described are only purposeful for a better autoscaling experience.

To summarize, for a good autoscaling experience, we recommend that you:

- Enable the continuous image puller, to prepare added nodes for arriving users.
- Enable pod priority and add user placeholders, to scale up nodes ahead of real users' arrival.
- Enable the user scheduler, to pack users tight on some nodes and let other nodes become empty and be scaled down.
- Set up an autoscaling node pool and dedicate it to user pods by tainting the nodes and requiring user pods, which tolerate the nodes' taint, to schedule on these nodes. This way, only user pods can then block scale down.
- Set appropriate user resource requests and limits, to allow a reasonable number of users to share a node.

A reasonable final configuration for efficient autoscaling could look something like this:

```yaml
scheduling:
  userScheduler:
    enabled: true
  podPriority:
    enabled: true
  userPlaceholder:
    enabled: true
    replicas: 4
  userPods:
    nodeAffinity:
      matchNodePurpose: require

cull:
  enabled: true
  timeout: 3600
  every: 300

# The resources requested are very important to consider in
# relation to your machine type. If you have a n1-highmem-4 node
# on Google Cloud for example, you get 4 cores and 26 GB of
# memory. With the configuration below you would be able to have
# at most about 50 users per node. This can be reasonable, but it
# may not be; it will depend on your users. Are they mostly
# writing and reading, or are they mostly executing code?
singleuser:
  cpu:
    limit: 4
    guarantee: 0.05
  memory:
    limit: 4G
    guarantee: 512M
```

## Pulling images before users arrive

If a user pod is scheduled on a node requesting a Docker image that isn't already pulled onto that node, the user will have to wait for it. If the image is large, the wait can be 5 to 10 minutes. This commonly occurs in two situations:

1. A new single-user image is introduced (`helm upgrade`)

With the hook-image-puller enabled (the default), the user images being introduced will be pulled to the nodes before the hub pod is updated to utilize the new image. The name hook-image-puller is a technical name referring to how a Helm hook is used to accomplish this; a more informative name would have been pre-upgrade-image-puller. The hook-image-puller is enabled by default.
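If you would rather have fast `helm upgrade` runs and accept that users may wait for image pulls right after an upgrade, the puller can be turned off. A minimal sketch, assuming the chart exposes this puller under a `prePuller.hook.enabled` flag:

```yaml
prePuller:
  hook:
    # Assumed flag: set to false to skip pre-upgrade image pulling.
    # Trade-off: upgrades finish faster, but users landing on a node
    # without the new image will wait for the pull instead.
    enabled: false
```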
NOTE: With this enabled, your `helm upgrade` will take a long time if you introduce a new image, as it will wait for the pulling to complete. We recommend that you add `--timeout 10m0s` or similar to your `helm upgrade` command.
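For concreteness, a sketch of such an upgrade command; the release name `jhub`, the `jupyterhub/jupyterhub` chart reference, and the `config.yaml` file name are illustrative assumptions, not taken from this page:

```bash
# Allow up to 10 minutes for the hook-image-puller to finish
# pulling newly introduced images before the upgrade times out.
helm upgrade jhub jupyterhub/jupyterhub \
  --values config.yaml \
  --timeout 10m0s
```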

2. A node is added (Cluster Autoscaler)

The continuous-image-puller serves the same purpose for nodes that join the cluster later, for example through a Cluster Autoscaler: it pulls the user images onto new nodes ahead of time, so users scheduled there don't need to wait for the pull. It is also enabled by default, and can be disabled like this:

```yaml
prePuller:
  continuous:
    # NOTE: if used with a Cluster Autoscaler, also add user-placeholders
    enabled: false
```
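To check that the continuous puller is actually covering every node, you can inspect its daemonset; the daemonset name `continuous-image-puller` and the namespace `jhub` are assumptions about a typical deployment, not something this page specifies:

```bash
# One puller pod is scheduled per node, so DESIRED and READY
# should match your current node count.
kubectl get daemonset continuous-image-puller --namespace jhub
```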

It is important to realize that the continuous-image-puller together with a Cluster Autoscaler (CA) won't guarantee a reduced wait time for users. It only helps if the CA scales up before real users arrive, but the CA will generally fail to do so. This is because it will only add a node if one or more pods won't fit on the current nodes but would fit if a node is added, and at that point users are already waiting.

### Relevant image sources

The hook-image-puller and the continuous-image-puller have various sources influencing what images they will pull, as they do this in order to prepare nodes ahead of time that may need the images. These sources are all found in the values provided with the Helm chart (that can be overridden with config.yaml) under the following paths:

- `singleuser.image`
- `singleuser.profileList[].kubespawner_override.image`
- `prePuller.extraImages.*`

For example, with the following configuration, three images would be pulled by the image pullers in order to prepare the nodes that may end up using these images:

```yaml
singleuser:
  image:
    name: jupyter/minimal-notebook
    tag: 2343e33dec46
  profileList:
    - display_name: "Minimal environment"
      description: "To avoid too much bells and whistles: Python."
      default: true
    - display_name: "Datascience environment"
      description: "If you want the additional bells and whistles: Python, R, and Julia."
      kubespawner_override:
        image: jupyter/datascience-notebook:2343e33dec46

prePuller:
  extraImages:
    myOtherImageIWantPulled:
      name: jupyter/all-spark-notebook
      tag: 2343e33dec46
```

## Efficient Cluster Autoscaling

A Cluster Autoscaler (CA) will help you add and remove nodes from the cluster. But the CA needs some help to do its job well: without help, it will both fail to scale up before users arrive and fail to scale down nodes aggressively enough without disrupting users.

### Scaling up in time (user placeholders)

A Cluster Autoscaler (CA) will add nodes when pods don't fit on available nodes but would fit if another node is added. This can cause a long waiting time for the pod, and as a pod can represent a user, it can lead to a long waiting time for a user.

With Kubernetes 1.11+ (that requires Helm 2.11+), Pod Priority and Preemption was introduced. This allows pods with higher priority to preempt / evict pods with lower priority if that would help the higher priority pod fit on a node.

This priority mechanism allows us to add dummy users, or user placeholders, with low priority that can take up resources until a real user (with higher priority) requires them. When that happens, a placeholder pod is preempted to make room for the real user's pod, and the evicted placeholder, no longer fitting on any node, prompts the CA to scale up ahead of further demand.
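The user-placeholder settings in the summary configuration at the top map directly onto this mechanism. Shown here in isolation as a minimal sketch; the replica count of 4 simply mirrors the earlier example and is not a sizing recommendation:

```yaml
scheduling:
  podPriority:
    enabled: true
  userPlaceholder:
    enabled: true
    # Assuming each placeholder requests resources comparable to a
    # real user pod, 4 replicas keep headroom for about 4 arrivals.
    replicas: 4
```

The replica count is effectively the buffer size: each arriving real user evicts one placeholder immediately, and the CA then adds capacity so the evicted placeholder can reschedule and restore the buffer.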
