Blog: Scaling dedicated game servers with Kubernetes – Part 3

<div style="margin: 5px 5% 10px 5%;"><img src="https://www.sickgamedev.win/wp-content/uploads/2017/10/blog-scaling-dedicated-game-servers-with-kubernetes-part-3.png" width="1225" height="592" title="" alt="" /></div><div><p><strong><em><small>The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.<br />The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.</small></em></strong></p>
<hr />
<p>Originally posted on <a href="http://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-1-containerising-and-deploying/">compoundtheory.com</a>.</p>
<p><em>This is part three of a</em> <a href="http://www.compoundtheory.com/tag/scaling-dedicated-game-servers-with-kubernetes/"><em>five-part series</em></a> <em>on scaling game servers with Kubernetes.</em></p>
<p>In the previous two posts we looked at <a href="https://www.gamasutra.com/blogs/MarkMandel/20170502/297222/Scaling_Dedicated_Game_Servers_with_Kubernetes_Part_1__Containerising_and_Deploying.php">hosting dedicated game servers on Kubernetes</a> and <a href="https://www.gamasutra.com/blogs/MarkMandel/20170713/301596/Scaling_Dedicated_Game_Servers_with_Kubernetes_Part_2__Managing_CPU_and_Memory.php">measuring and limiting their memory and CPU resources</a>. In this instalment we look at how we can use the CPU information from the previous post to determine when we need to scale up our Kubernetes cluster because we’ve run out of room for more game servers as our player base increases.</p>
<h3>Separating Apps and Game Servers</h3>
<p>The first step to take, before writing code to increase the size of the Kubernetes cluster, is to separate our applications — such as the matchmaker, the game server controller, and the soon-to-be-written node scaler — onto different nodes in the cluster from the ones the game servers will run on. This has several benefits:</p>
<ol>
<li>The resource usage of our applications is now going to have no effect on the game servers, as they are on different machines. This means that if the matchmaker has a CPU spike for some reason, there is an extra barrier to ensure there is no way it could unduly affect a dedicated game server in play.</li>
<li>It makes scaling up and down capacity for dedicated game servers easier – as we only need to look at game server usage across a specific set of nodes, rather than all potential containers across the entire cluster.</li>
<li>We can use bigger machines with more CPU cores and memory for the game server nodes, and smaller machines with fewer cores and less memory for the controller applications, since they need fewer resources. We are essentially able to pick the right size of machine for the job at hand, which gives us great flexibility while still being cost effective.</li>
</ol>
<p>Kubernetes makes setting up a heterogeneous cluster relatively straightforward and gives us the tools to specify where Pods are scheduled within the cluster – via the power of <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector">Node Selectors</a> on our Pods. It’s worth noting that there is also a more sophisticated <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature">Node Affinity</a> feature in beta, but we don’t need it for this example, so we’ll ignore its extra complexity for now. To get started, we need to assign <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/">labels (a set of key-value pairs)</a> to the nodes in our cluster. This is exactly the same as you will have seen if you’ve ever created <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose-intro/">Pods with Deployments and exposed them with Services</a>, but applied to nodes instead. I’m using Google Cloud Platform’s Container Engine, which uses <a href="https://cloud.google.com/container-engine/docs/node-pools">Node Pools</a> to apply labels to nodes in the cluster as they are created, making it easy to set up heterogeneous clusters – but you can also do similar things on other cloud providers, as well as directly through the <a href="https://kubernetes.io/docs/api-reference/v1.8/#node-v1-core">Kubernetes API</a> or the <a href="https://kubernetes.io/docs/user-guide/kubectl/v1.8/#label">command line client</a>. In this example, I added the labels <code>role:apps</code> and <code>role:game-server</code> to the appropriate nodes in my cluster. We can then add a <code>nodeSelector</code> option to our Kubernetes configurations to control which nodes in the cluster Pods are scheduled onto.</p>
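<p>For illustration, labelling a node can be done from the command line with <code>kubectl label nodes NODE_NAME role=apps</code>, or programmatically through the Go client, as in the minimal sketch below. This is not code from the article’s repository; the in-cluster configuration and node name handling are assumptions, and newer client-go releases also take a <code>context.Context</code> in these calls.</p>
<pre class="lang:go decode:true" title="illustrative sketch – labelling a node">
package main

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// labelNode adds (or overwrites) a single label on a node, so that the
// nodeSelector examples below can match it. Illustrative sketch only.
func labelNode(name, key, value string) error {
    config, err := rest.InClusterConfig() // assumes this runs inside the cluster
    if err != nil {
        return err
    }
    cs, err := kubernetes.NewForConfig(config)
    if err != nil {
        return err
    }
    node, err := cs.CoreV1().Nodes().Get(name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    if node.Labels == nil {
        node.Labels = map[string]string{}
    }
    node.Labels[key] = value // e.g. key "role", value "apps" or "game-server"
    _, err = cs.CoreV1().Nodes().Update(node)
    return err
}
</pre>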
<p>For example, here is the <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/matchmaker/deployment.yaml">configuration for the matchmaker application</a>, where you can see the <code>nodeSelector</code> set to <code>role:apps</code> to ensure it has container instances created only on the application nodes (those tagged with the “apps” role).</p>
<pre class="lang:yaml decode:true" title="deployment.yaml">
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: matchmaker
spec:
  replicas: 5
  template:
    metadata:
      labels:
        role: matchmaker-server
    spec:
      nodeSelector:
        role: apps # here is the node selector
      containers:
        - name: matchmaker
          image: gcr.io/soccer/matchmaker
          ports:
            - containerPort: 8080
</pre>
<p>By the same token, we can adjust the configuration from the previous article to make all the dedicated game server Pods schedule just on the machines we specifically designated for them, i.e. those tagged with <code>role: game-server</code>:</p>
<pre class="lang:yaml decode:true" title="pod.yaml">
apiVersion: v1
kind: Pod
metadata:
  generateName: "game-"
spec:
  hostNetwork: true
  restartPolicy: Never
  nodeSelector:
    role: game-server # here is the node selector
  containers:
    - name: soccer-server
      image: gcr.io/soccer/soccer-server:0.1
      env:
        - name: SESSION_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      resources:
        limits:
          cpu: "0.1"
</pre>
<p>Note that <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/sessions/kubernetes.go#L69">in my sample code</a>, I use the Kubernetes API to provide a configuration identical to the one above, but the yaml version is easier to understand, and it is the format we’ve been using throughout this series.</p>
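<p>For reference, here is a rough sketch of what the equivalent Go client call might look like for the Pod above. It is illustrative rather than the repository’s exact code: the namespace is an assumption, and newer client-go releases also take a <code>context.Context</code> and options in <code>Create</code>.</p>
<pre class="lang:go decode:true" title="illustrative sketch – pod.yaml via the Go client">
package main

import (
    "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createGamePod builds the same Pod as the pod.yaml above, but as a Go struct,
// and creates it through the API. The "default" namespace is an assumption.
func createGamePod(cs kubernetes.Interface) error {
    pod := &amp;v1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "game-"},
        Spec: v1.PodSpec{
            HostNetwork:   true,
            RestartPolicy: v1.RestartPolicyNever,
            NodeSelector:  map[string]string{"role": "game-server"},
            Containers: []v1.Container{{
                Name:  "soccer-server",
                Image: "gcr.io/soccer/soccer-server:0.1",
                Env: []v1.EnvVar{{
                    Name: "SESSION_NAME",
                    ValueFrom: &amp;v1.EnvVarSource{
                        FieldRef: &amp;v1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    },
                }},
                Resources: v1.ResourceRequirements{
                    Limits: v1.ResourceList{v1.ResourceCPU: resource.MustParse("0.1")},
                },
            }},
        },
    }
    _, err := cs.CoreV1().Pods("default").Create(pod)
    return err
}
</pre>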
<h3>A Strategy for Scaling Up</h3>
<p>Kubernetes on cloud providers tends to come with automated scaling capabilities, such as the <a href="https://cloud.google.com/container-engine/docs/cluster-autoscaler">Google Cloud Platform Cluster Autoscaler</a>, but since they are generally built for stateless applications, and our dedicated game servers store the game simulation in memory, they won’t work in this case. However, with the tools that Kubernetes gives us, it’s not particularly difficult to build our own custom Kubernetes cluster autoscaler!</p>
<p>Scaling the nodes of a Kubernetes cluster up and down makes the most sense in a cloud environment, since we only want to pay for the resources that we need and use. If we were running on our own premises, it may make less sense to change the size of our Kubernetes cluster; we could just run one or more large clusters across all the machines we own and leave them at a static size, since adding and removing physical machines is far more onerous than in the cloud, and wouldn’t necessarily save us money, given that we own or lease the machines for much longer periods.</p>
<p>There are multiple potential strategies for determining when to scale up the number of nodes in a cluster, but for this example we’ll keep things relatively simple:</p>
<ul>
<li>Define a minimum and maximum number of nodes for game servers, and make sure we are within that limit.</li>
<li>Use CPU resource capacity and usage as our metric to track how many dedicated game servers we can fit on a node in our cluster (in this example we’re going to assume we always have enough memory).</li>
<li>Define a buffer of CPU capacity for a set number of game servers at all times in the cluster. That is, add more nodes whenever we could not start <em>n</em> more game servers without running out of CPU resources in the cluster.</li>
<li>Whenever a new dedicated game server is started, calculate whether we need to add a new node to the cluster because the spare CPU capacity across the nodes has dropped below the buffer amount (a rough sketch of this calculation follows this list).</li>
<li>As a fail-safe, every n seconds, also calculate whether we need to add a new node to the cluster because the measured spare CPU capacity is under the buffer.</li>
</ul>
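<p>To make the buffer rule concrete, here is a rough sketch of the arithmetic involved. The function and parameter names are illustrative rather than taken from the actual node scaler, all CPU values are expressed in millicores, and it assumes every game server node has the same CPU capacity.</p>
<pre class="lang:go decode:true" title="illustrative sketch – buffer arithmetic">
// nodesToAdd works out how many nodes (if any) are needed so that bufferCount
// more game servers could still be scheduled. Illustrative sketch only.
func nodesToAdd(capacityPerNode, nodeCount, usedCPU, cpuPerServer, bufferCount, maxNodes int64) int64 {
    available := capacityPerNode*nodeCount - usedCPU // spare CPU across the game server nodes
    needed := cpuPerServer * bufferCount             // CPU required to keep the buffer free
    if available >= needed {
        return 0 // the buffer is intact, nothing to do
    }
    shortfall := needed - available
    add := (shortfall + capacityPerNode - 1) / capacityPerNode // round up to whole nodes
    if nodeCount+add > maxNodes {
        add = maxNodes - nodeCount // never exceed the configured maximum
    }
    if add &lt; 0 {
        add = 0
    }
    return add
}
</pre>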
<p><img alt="" class="aligncenter wp-image-959" src="https://www.sickgamedev.win/wp-content/uploads/2017/10/blog-scaling-dedicated-game-servers-with-kubernetes-part-3.gif" /></p>
<h3>Creating a Node Scaler</h3>
<p>The node scaler essentially runs an event loop to carry out the strategy outlined above. Using <a href="https://golang.org/">Go</a> in combination with the native <a href="https://github.com/kubernetes/client-go">Kubernetes Go client library</a> makes this relatively straightforward to implement, as you can see below in the <code>Start()</code> function of my node scaler. Note that I’ve removed most of the error handling and other boilerplate to make the event loop clearer, but the <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/server.go#L135-L186">original code is here</a> if you are interested.</p>
<pre class="lang:go decode:true" title="server.go">
// Start the HTTP server on the given port
func (s *Server) Start() error {
    // Access Kubernetes and return a client
    s.cs, _ = kube.ClientSet()

    // ... there be more code here ...

    // Use the K8s client's watcher channels to see game server events
    gw, _ := s.newGameWatcher()
    gw.start()

    // async loop around either the tick, or the event stream
    // and then scaleNodes() if either occur.
    go func() {
        log.Print("[Info][Start] Starting node scaling...")
        tick := time.Tick(s.tick)

        // ^^^ MAIN EVENT LOOP HERE ^^^
        for {
            select {
            case &lt;-gw.events:
                log.Print("[Info][Scaling] Received Event, Scaling...")
                s.scaleNodes()
            case &lt;-tick:
                log.Printf("[Info][Scaling] Tick of %#v, Scaling...", tick)
                s.scaleNodes()
            }
        }
    }()

    // Start the HTTP server
    return errors.Wrap(s.srv.ListenAndServe(), "Error starting server")
}
</pre>
<p>For those of you who aren’t as familiar with Go, let’s break this down a little bit:</p>
<ol>
<li><code>kube.ClientSet()</code> – we have a <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/pkg/kube/kubernetes.go">small piece of utility code</a>, which returns to us a Kubernetes <a href="https://godoc.org/k8s.io/client-go/kubernetes#Clientset">ClientSet</a> that gives us access to the Kubernetes API of the cluster that we are running on.</li>
<li><code>gw, _ := s.newGameWatcher()</code> – Kubernetes has APIs that allow you to watch for changes across the cluster. In this particular case, the code returns a data structure containing a <a href="https://tour.golang.org/concurrency/2">Go Channel</a> (essentially a blocking queue), specifically <code>gw.events</code>, that will return a value whenever a Pod for a game is added or deleted in the cluster. Look here for the <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/kubernetes.go#L136">full source for the gameWatcher</a>; a simplified sketch of this watch pattern also appears after this list.</li>
<li><code>tick := time.Tick(s.tick)</code> – this creates another Go Channel that delivers a value at each tick of the given interval – in this case every 10 seconds. If you would like to look at it, here is the <a href="https://golang.org/pkg/time/#Tick">reference for time.Tick</a>.</li>
<li>The main event loop is under the “// ^^^ MAIN EVENT LOOP HERE ^^^” comment. Within this code block is a <code><a href="https://tour.golang.org/concurrency/5">select statement</a></code>. This essentially declares that the system will block until <em>either</em> the <code>gw.events</code> channel or the <code>tick</code> channel (firing every 10s) returns a value, and then execute <code>s.scaleNodes()</code>. This means that a <code>scaleNodes</code> command will fire whenever a game server is added/removed <em>or</em> every 10 seconds.</li>
<li><code>s.scaleNodes()</code> – run the scale node strategy as outlined above.</li>
</ol>
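<p>To illustrate the watch pattern from point 2, here is a simplified sketch that uses the client-go watch API to push a signal onto a channel whenever a Pod is added or deleted. It is not the repository’s gameWatcher: the namespace is an assumption, the real code is more selective about which Pods it reacts to, and newer client-go releases also take a <code>context.Context</code>.</p>
<pre class="lang:go decode:true" title="illustrative sketch – watching game Pods">
package main

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
)

// watchGamePods pushes a signal onto the events channel whenever a Pod is
// added to or deleted from the (assumed) "default" namespace.
func watchGamePods(cs kubernetes.Interface, events chan&lt;- struct{}) error {
    w, err := cs.CoreV1().Pods("default").Watch(metav1.ListOptions{})
    if err != nil {
        return err
    }
    go func() {
        for e := range w.ResultChan() {
            if e.Type == watch.Added || e.Type == watch.Deleted {
                events &lt;- struct{}{} // tell the scaler that something changed
            }
        }
    }()
    return nil
}
</pre>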
<p>Within <code>s.scaleNodes()</code> we query the CPU limits that we set on each Pod, as well as the total CPU available on each Kubernetes node within the cluster, through the Kubernetes API. We can see the configured CPU limits in the Pod specification via the <a href="https://kubernetes.io/docs/api-reference/v1.8/#podspec-v1-core">REST API</a> and <a href="https://godoc.org/k8s.io/api/core/v1#PodSpec">Go Client</a>, which gives us the ability to track how much CPU each of our game servers is taking up, as well as any of the Kubernetes management Pods that may also exist on the node. Through the <a href="https://kubernetes.io/docs/api-reference/v1.8/#nodespec-v1-core">Node specification</a>, the <a href="https://godoc.org/k8s.io/api/core/v1#NodeSpec">Go client</a> can also track the amount of CPU capacity available in each node. From here it is a case of summing up the amount of CPU used by Pods, subtracting it from the capacity of each node, and then determining if one or more nodes need to be added to the cluster, such that we can <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/scaler.go#L40">maintain that buffer</a> space for new game servers to be created in.</p>
<p>If you dig into the code in this example, you’ll see that we are using the APIs on Google Cloud Platform to <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/gce/gce.go#L78">add new nodes to the cluster</a>. The APIs provided for Google Compute Engine <a href="https://cloud.google.com/compute/docs/instance-groups/">Managed Instance Groups</a> allow us to add (and remove) instances from the Node Pool in the Kubernetes cluster. That being said, any cloud provider will have similar APIs to let you do the same thing, and <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/scaler.go#L29">here</a> you can see the interface we’ve defined to abstract this implementation detail in such a way that it could be easily modified to work with another provider.</p>
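<p>As a rough illustration of the capacity-and-usage side of that calculation, the sketch below lists the game server nodes and the Pods scheduled onto them, and tallies node CPU capacity against the CPU limits those Pods claim. It is not the repository’s code: the namespace handling is simplified, capacity is read from each Node’s status, and the non-context method signatures match the client-go releases of that era (newer versions also take a <code>context.Context</code>).</p>
<pre class="lang:go decode:true" title="illustrative sketch – CPU capacity vs. usage">
package main

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// cpuCapacityAndUsage sums the CPU capacity (in millicores) of the nodes
// labelled role=game-server, and the CPU limits of all Pods scheduled onto them.
func cpuCapacityAndUsage(cs kubernetes.Interface) (capacity, used int64, err error) {
    nodes, err := cs.CoreV1().Nodes().List(metav1.ListOptions{LabelSelector: "role=game-server"})
    if err != nil {
        return 0, 0, err
    }
    gameNodes := map[string]bool{}
    for _, n := range nodes.Items {
        gameNodes[n.Name] = true
        capacity += n.Status.Capacity.Cpu().MilliValue()
    }

    // list Pods in all namespaces, and only count those on the game server nodes
    pods, err := cs.CoreV1().Pods("").List(metav1.ListOptions{})
    if err != nil {
        return 0, 0, err
    }
    for _, p := range pods.Items {
        if !gameNodes[p.Spec.NodeName] {
            continue
        }
        for _, c := range p.Spec.Containers {
            used += c.Resources.Limits.Cpu().MilliValue()
        }
    }
    return capacity, used, nil
}
</pre>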
<h3>Deploying the Node Scaler</h3>
<p>Below you can see the deployment YAML for the node scaler. As you can see, environment variables are used to set all the configuration options, including:</p>
<ul>
<li>Which nodes in the cluster should be managed</li>
<li>How much CPU each dedicated game server needs</li>
<li>The minimum and maximum number of nodes</li>
<li>How much buffer should exist at all times</li>
</ul>
<pre class="lang:yaml decode:true" title="deployment.yaml">
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodescaler
spec:
  replicas: 1 # only want one, to avoid race conditions
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        role: nodescaler-server
    spec:
      nodeSelector:
        role: apps
      containers:
        - name: nodescaler
          image: gcr.io/soccer/nodescaler
          env:
            - name: NODE_SELECTOR # the nodes to be managed
              value: "role=game-server"
            - name: CPU_REQUEST # how much CPU each server needs
              value: "0.1"
            - name: BUFFER_COUNT # how many servers do we need buffer for
              value: "30"
            - name: TICK # how often to tick over and recheck everything
              value: "10s"
            - name: MIN_NODE # minimum number of nodes for game servers
              value: "1"
            - name: MAX_NODE # maximum number of nodes for game servers
              value: "15"
</pre>
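<p>The node scaler reads this configuration from its environment at startup. Below is a minimal sketch of how those variables could be parsed in Go; the struct and helper are illustrative, not the repository’s configuration code.</p>
<pre class="lang:go decode:true" title="illustrative sketch – reading the configuration">
package main

import (
    "os"
    "strconv"
    "time"
)

// Config mirrors the environment variables set in the deployment above.
type Config struct {
    NodeSelector string
    CPURequest   string
    BufferCount  int64
    Tick         time.Duration
    MinNode      int64
    MaxNode      int64
}

func configFromEnv() (Config, error) {
    c := Config{
        NodeSelector: os.Getenv("NODE_SELECTOR"),
        CPURequest:   os.Getenv("CPU_REQUEST"),
    }
    var err error
    if c.BufferCount, err = strconv.ParseInt(os.Getenv("BUFFER_COUNT"), 10, 64); err != nil {
        return c, err
    }
    if c.Tick, err = time.ParseDuration(os.Getenv("TICK")); err != nil {
        return c, err
    }
    if c.MinNode, err = strconv.ParseInt(os.Getenv("MIN_NODE"), 10, 64); err != nil {
        return c, err
    }
    if c.MaxNode, err = strconv.ParseInt(os.Getenv("MAX_NODE"), 10, 64); err != nil {
        return c, err
    }
    return c, nil
}
</pre>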
<p>You may have noticed that we set the deployment to have <code>replicas: 1</code>. We did this because we only ever want one instance of the node scaler active in our Kubernetes cluster at any given point in time. This ensures that we never have more than one process attempting to scale up (and, eventually, scale down) the nodes within the cluster, which could definitely lead to race conditions and likely cause all kinds of weirdness. Similarly, to ensure that the node scaler is properly shut down before a new instance is created when we update it, we also configure <code><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment">strategy.type: Recreate</a></code>, so that Kubernetes destroys the currently running node scaler Pod before recreating the newer version, again avoiding any potential race conditions.</p>
<h3>See it in Action</h3>
<p>Once we have deployed our node scaler, let’s tail the logs and see it in action. In the video below, the logs show that when we have one node in the cluster assigned to game servers, we have capacity to start up to forty dedicated game servers, and that we have configured a buffer requirement of 30 dedicated game servers. As we fill the available CPU capacity with running dedicated game servers via the matchmaker, watch how the number of game servers that can be created in the remaining space drops and, eventually, a new node is added to maintain the buffer!</p>
<p>[embedded content]</p>
<h3>Next Steps</h3>
<p>The fact that we can do this without having to build so much of the foundation is one of the things that gets me so excited about Kubernetes. While we touched on the Kubernetes client in the <a href="http://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-1-containerising-and-deploying/">first</a> post in this series, in this post we’ve really started to take advantage of it. This is what I feel the true power of Kubernetes really is – an integrated set of tools for running software over a large cluster, that you <em>have a huge amount of control over</em>. In this instance, we haven’t had to write code to spin up and spin down dedicated game servers in very specific ways – we could just leverage Pods. When we want to take control and react to events within the Kubernetes cluster itself, we have the Watch APIs that enable us to do just that! It’s quite amazing how much core utility Kubernetes gives you out of the box – things many of us have been building ourselves for years.</p>
<p>That all being said, scaling up nodes and game servers in our cluster is the comparatively easy part; scaling down is a trickier proposition. We’ll need to make sure nodes don’t have game servers on them before shutting them down, while also ensuring that game servers don’t end up widely fragmented across the cluster. In the next post in this series we’ll look at how Kubernetes can help in these areas as well!</p>
<p>In the meantime, as with the previous posts, I welcome questions and comments here, or you can <a href="https://twitter.com/neurotic">reach out to me via Twitter</a>. You can see my <a href="http://www.gdcvault.com/play/1024328/">presentation at GDC</a> this year, as well as check out the code <a href="https://github.com/markmandel/paddle-soccer">on GitHub</a>, which is still being actively worked on!</p>
<p>All posts in this series:</p>
<ol>
<li><a href="https://www.gamasutra.com/blogs/MarkMandel/20170502/297222/Scaling_Dedicated_Game_Servers_with_Kubernetes_Part_1__Containerising_and_Deploying.php">Containerising and Deploying</a></li>
<li><a href="http://www.gamasutra.com/blogs/MarkMandel/20170713/301596/Scaling_Dedicated_Game_Servers_with_Kubernetes_Part_2__Managing_CPU_and_Memory.php">Managing CPU and Memory</a></li>
<li><strong>Scaling Up Nodes</strong></li>
<li>Scaling Down Nodes (upcoming)</li>
<li>Running Globally (upcoming)</li>
</ol>
</div>