Kubernetes? Why not?
| Sujith Subbiah |
A few days after I joined Influx Worldwide, I was working on resolving a bug when I was drawn into a conversation. The discussion was about providing server cost estimates for a client’s ticketing website — they were looking to migrate to one of the popular cloud platforms. After poring over several spreadsheets, reading numerous blogs and meeting with some high-profile people, we presented the client with the numbers.
These numbers were for a classic web server architecture (a few high-spec computers somewhere inside the data centre of a cloud platform) and I believed they were ideal, considering the number of screens involved and volume of transactions handled per month.
The story continues #
A few weeks passed and I gained a little more understanding of the cinema domain. I felt there was a disconnect between how ticketing backend systems were set up and what cinemas actually needed. The major question was: how could a cinema with fewer screens, or one operating out of a single location or very few locations, afford such a high-spec server?
Every cinema, big or small, has to be able to sell tickets quickly the moment a movie is released, which means its system has to be capable of delivering when it is needed. The other side of the same problem is that when not many people are looking at the ticketing website, the infrastructure cannot be allowed to eat up the revenue the cinema has made.
Fundamentally, a ticketing website needs to be on high-spec mode when there is demand and on low-spec mode during off-peak periods.
No, wait… this blog is not about auto-scaling. So why is this important to note?
Auto-scaling is a feature by which additional instances of similar specs are added so that a system behaves like a high-spec backend during peak hours, and those extra instances are discarded when they are no longer required.
This is an extremely important concept and a valuable feature that a lot of platforms provide to enable systems to have additional resources at times when they are needed.
Why didn’t we stop here? #
Actually, I did stop at this point and started implementing auto-scaling for our upcoming product suite. As an organisation, however, we had one major problem with this approach — each cloud platform implements auto-scaling differently. This can be a huge problem for our clients as well; they might get locked into one cloud platform, or face multiple challenges while migrating across cloud platforms.
So this meant we couldn’t stop here!
The Problem #
How do we achieve high performance when required, pay less during off-peak hours, and remain easily portable across cloud platforms?
After a lot of investigation, I decided to try out Kubernetes. After a few days of learning at https://kubernetes.io/, I started to get a feel that this should solve our problem.
The major benefits were:
- We could deploy in very low specs/basic configuration and hardware
- We could scale (up and down) automatically
- We could set up a system that can easily grow with the client
- We could easily port our code across multiple cloud platforms, and even onto an on-premises server
So what is Kubernetes? #
“Kubernetes, at its basic level, is a system for running and coordinating containerized applications across a cluster of machines. It is a platform designed to completely manage the life cycle of containerized applications and services using methods that provide predictability, scalability, and high availability…” says one of the cloud platforms.
Kubernetes, or K8s, is like a conductor in an orchestra. Just as the conductor determines the ensemble and coordinates the performance, Kubernetes decides how many containers are required at a particular point in time to serve the website.
Oh… and what are these “containers”? A container is a small, self-contained package of software built to do a specific job. It is like a trumpet in the wind section of the orchestra. The conductor knows how many trumpets are needed and when they need to play during a performance.
For example #
Let’s look at how Kubernetes can orchestrate the ticketing process and make it more efficient.
Imagine you have set up a container to handle showtimes listings, capable of serving 500 users. When the number of users rapidly increases to 1,600, the Kubernetes engine senses the demand for additional resources and adds, say, three additional containers to respond to the spike. When the user count drops again, the K8s engine releases the idle containers, thereby saving cost.
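The scaling decision above can be sketched with the rule Kubernetes’ Horizontal Pod Autoscaler uses: desired replicas = ceil(current replicas × current metric ÷ target metric). Here is a minimal sketch in Python, assuming each showtimes container is sized for 500 users (the function and container names are illustrative, not Kubernetes APIs):

```python
import math

def desired_replicas(current: int, avg_metric_per_pod: float, target_per_pod: float) -> int:
    # Horizontal Pod Autoscaler scaling rule:
    # desired = ceil(current * currentMetric / targetMetric)
    return math.ceil(current * avg_metric_per_pod / target_per_pod)

# One showtimes container sized for 500 users; demand jumps to 1,600.
print(desired_replicas(1, 1600, 500))  # -> 4, i.e. three extra containers

# Later, four containers are averaging only 75 users each.
print(desired_replicas(4, 75, 500))    # -> 1, the idle containers are released
```

The same rule handles both directions — scaling out under load and scaling back in when demand drops — which is exactly the high-spec/low-spec behaviour a ticketing site needs.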
Kubernetes also helps with automatic system recovery, in case one or a few containers go down for some reason. Ideally, every program would run flawlessly forever, but let’s talk practicality. Kubernetes is capable of bringing the system back up without requiring someone to watch over it.
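This self-healing behaviour is essentially a reconciliation loop: Kubernetes keeps comparing the containers it actually observes against the count you declared, and starts replacements for any that have died. A toy sketch of that idea (none of these names are real Kubernetes APIs; this only mirrors the concept):

```python
import itertools

_ids = itertools.count(1)  # supplies unique names for replacement containers

def reconcile(desired: int, running: list) -> list:
    """Bring the observed state back to the declared state:
    if containers have died, start replacements until counts match."""
    running = list(running)
    while len(running) < desired:
        running.append(f"showtimes-{next(_ids)}")
    return running

# We declared 3 showtimes containers, but one crashed overnight:
state = ["showtimes-a", "showtimes-b"]
state = reconcile(3, state)
print(len(state))  # -> 3, restored without anyone being paged at 2 a.m.
```

The real system is far more sophisticated (health probes, schedulers, restart policies), but the declarative loop — “keep reality matching what I asked for” — is the core of why recovery needs no human in the loop.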
Sounds great, doesn’t it?
So, this is where we are at the moment.
Our research in performance, stability and efficiency continues towards Making Cinemas Great Again! Stay tuned for the follow-up to this blog for updates on how Kubernetes is impacting our performance and development.
We’re Influx #
Influx is a digital technology company that specialises in working with cinemas. We build websites, mobile apps, kiosks and other modules on top of your existing ticketing platform. We originated in Chennai (with offices in Dubai and Dallas) and have worked with more than 30 cinema chains globally, including PVR and SPI Cinemas. We’re constantly thinking about how to help cinemas sell more tickets and improve the guest experience.
Our motto: Making Cinemas Great Again :).
Sujith Subbiah, our Software Research Engineer, has a bachelor’s degree in computer science and over nine years’ experience building client consultancy solutions. He heads the team working on our cinema middleware application and our cinema-specific Content Management System. So, what Sujith and his team basically do is give shape and power to the bright ideas that come out of our brilliant minds.