Case studies of companies using Knative for serverless computing
Have you ever wondered how companies leverage serverless computing to scale their applications and reduce operational costs? Knative is a popular open-source serverless platform that enables developers to run containerized workloads on Kubernetes clusters. In this article, we'll explore case studies of companies that have used Knative to streamline their serverless infrastructure and improve their development workflows.
What is Knative?
Before we dive into the case studies, let's briefly introduce Knative and its features. Knative is an open-source platform that extends Kubernetes to support serverless workloads, such as functions and event-driven applications. Knative provides a set of building blocks that simplify the development, deployment, and operation of serverless functions, including:
- Serving: A platform for deploying and managing HTTP-based containerized workloads, such as functions or microservices.
- Eventing: A system for declaratively routing and processing events between services and functions.
- Build: A mechanism for building container images from source code using various build systems (this component has since been deprecated in favor of Tekton Pipelines).
Knative is designed to be vendor-agnostic, so you can run it on any Kubernetes distribution, on-premises or in the cloud. Knative also integrates with other Kubernetes tools and services, such as Istio for service mesh and Prometheus for monitoring.
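To make the Serving building block concrete, here is a minimal Knative Service manifest. The service name, image, and environment variable are placeholders rather than anything taken from the case studies below; applying a manifest like this with kubectl gives you an autoscaled, routable HTTP workload.

```yaml
# A minimal Knative Service: Knative Serving expands this single resource into
# a Deployment, a Revision, and an HTTP route, and autoscales it on demand.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                               # hypothetical service name
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/example/hello:latest   # placeholder image
          ports:
            - containerPort: 8080               # port the container listens on
          env:
            - name: TARGET
              value: "Knative"                  # illustrative environment variable
```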
Case study #1: Pinterest
Pinterest is a popular image sharing and social media platform that allows users to discover, save, and share images and videos. The platform handles billions of pageviews and millions of uploads per day, which requires a highly scalable and resilient infrastructure. Pinterest adopted Knative to run its serverless functions and improve its development workflow.
Problem
Pinterest had a legacy serverless infrastructure that was based on AWS Lambda and API Gateway. However, this architecture had some limitations that hindered the company's ability to innovate quickly and scale efficiently. For example, AWS Lambda had a cold start problem, where the first invocation of a function could take several seconds to spin up a new container. This affected the user experience and increased the cost of running functions. Additionally, AWS Lambda did not provide granular control over the networking and security aspects of the functions, which made it challenging to integrate with other services and comply with regulatory requirements.
Solution
Pinterest decided to migrate its serverless workloads to Knative on Google Kubernetes Engine (GKE). Knative offered several advantages over the legacy infrastructure, such as:
- Control over cold starts: Knative can scale a function down to zero replicas when it is idle, and its autoscaler can also keep a configurable minimum of warm replicas, so latency-sensitive functions avoid the cold start penalty (see the sketch after this list).
- Customizable networking: Knative supports Istio, which provides fine-grained control over traffic routing, load balancing, and security policies.
- Multi-cloud support: Knative is vendor-agnostic, so Pinterest can run the same workloads on GKE as well as on other Kubernetes clusters, such as on-premises or in AWS.
- Faster development cycles: Knative's build system allows developers to build and deploy container images faster, which enables rapid iteration and experimentation.
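To illustrate how that cold start trade-off is tuned in practice, the sketch below sets Knative's per-revision autoscaling annotations. The service name and the specific values are hypothetical, not Pinterest's actual configuration.

```yaml
# Autoscaling sketch: min-scale keeps warm replicas so latency-sensitive
# endpoints avoid cold starts; target sets how many concurrent requests each
# replica should handle before the autoscaler adds more.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: pin-resizer                              # hypothetical function name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/min-scale: "2"   # keep two warm replicas
        autoscaling.knative.dev/max-scale: "100" # cap scale-out
        autoscaling.knative.dev/target: "50"     # ~50 concurrent requests per replica
    spec:
      containers:
        - image: ghcr.io/example/pin-resizer:latest   # placeholder image
```

Setting min-scale to 0 restores scale-to-zero for rarely used functions, so the same mechanism covers both cost-sensitive and latency-sensitive workloads.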
Results
After migrating to Knative, Pinterest achieved the following benefits:
- 20x reduction in cold start latency: configuring Knative's autoscaler to keep warm replicas for latency-sensitive functions cut the latency of previously cold invocations from several seconds to under 100 ms.
- Simplified networking: Knative's Istio integration allowed Pinterest to define custom routing and security rules for its functions, which improved the observability and security of the infrastructure.
- Faster deployment cycles: Knative's build system reduced the time it took to build and deploy a function from 30 minutes to 5 minutes, which improved the agility of the development team.
- Lower operational costs: Knative's fine-grained, automatic scaling reduced the number of idle containers and minimized wasted resources, resulting in a 30% cost reduction for Pinterest.
Case study #2: Adidas
Adidas is a global sportswear brand that designs, develops, and sells running, basketball, soccer, and other sports equipment and apparel. Adidas uses Knative to run its serverless functions on a hybrid cloud infrastructure and manage its e-commerce and marketing websites.
Problem
Adidas had a monolithic e-commerce platform that was difficult to scale and maintain. The platform used a traditional LAMP stack (Linux, Apache, MySQL, PHP) deployed on bare-metal servers, which made it hard to add new features or optimize performance. In addition, Adidas hosted a separate marketing website on a cloud infrastructure, which led to inconsistency and poor integration between the two sites.
Solution
Adidas decided to modernize its infrastructure by adopting a microservices architecture and a serverless platform. Adidas chose Knative as its serverless platform because it provided a unified platform for running containerized workloads across its hybrid cloud infrastructure. Knative also provided the following advantages:
- Scalability and high availability: Knative can automatically scale up or down the number of replicas based on the demand, which ensures consistent performance and availability.
- Customizable deployment: Adidas could customize the deployment options of its functions, such as resource requests and limits, environment variables, and command-line arguments (a sketch follows this list).
- Multi-cloud support: Knative enabled Adidas to run its workloads on any Kubernetes cluster, regardless of the cloud provider, which gave the company more flexibility in choosing the best infrastructure for each workload.
- Faster time-to-market: Knative's build system allowed Adidas to build, test, and deploy its code in a fast and repeatable way, which reduced the time-to-market for new features.
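The sketch below shows those deployment knobs, resource requests and limits, environment variables, and container arguments, set on the revision template of a Knative Service. The names and values are illustrative, not Adidas's actual configuration.

```yaml
# Deployment options on a Knative Service: the revision template accepts the
# same container settings as an ordinary Kubernetes pod spec.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: product-catalog                          # hypothetical microservice name
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/example/product-catalog:latest   # placeholder image
          args: ["--log-level=info"]             # illustrative command-line argument
          env:
            - name: DB_HOST
              value: "catalog-db.internal"       # illustrative environment variable
          resources:
            requests:
              cpu: "250m"                        # guaranteed CPU per replica
              memory: "256Mi"
            limits:
              cpu: "1"                           # hard ceiling per replica
              memory: "512Mi"
```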
Results
After adopting Knative, Adidas achieved the following results:
- 40% reduction in page load times: Knative's autoscaling enabled Adidas to handle peak traffic without sacrificing performance or incurring additional cost.
- Improved agility: Knative's build system allowed Adidas to experiment with new features and iterate faster, which improved the agility and competitiveness of the development team.
- Lower operational costs: Knative's fine-grained, automatic scaling reduced the number of idle containers and trimmed the cloud bill, resulting in a 25% cost reduction.
- Better observability: Knative's Istio integration provided better observability and tracing of the function invocations, which improved the troubleshooting and monitoring of the infrastructure.
Case study #3: T-Mobile
T-Mobile is a mobile network operator that provides wireless voice and data services to millions of customers in the USA. T-Mobile uses Knative to run its serverless workloads on a Kubernetes cluster and enable its developers to focus on building new features instead of managing infrastructure.
Problem
T-Mobile had a traditional infrastructure based on virtual machines and monolithic applications. This architecture was becoming difficult to scale and maintain, and it prevented the company from adopting agile practices, such as continuous integration and continuous delivery. Additionally, T-Mobile wanted to adopt a serverless platform that would simplify the development and deployment of functions and reduce the operational costs.
Solution
T-Mobile chose Knative as its serverless platform because it provided a unified platform for running containerized workloads on Kubernetes clusters. Knative also provided the following advantages:
- Easier developer experience: Knative provided a simple, intuitive interface for developers to deploy their functions and manage deployment options such as resources, environment variables, and ingress and traffic settings (a sketch follows this list).
- Fast feedback loops: Knative's build system enabled T-Mobile to build and deploy its code continuously and provide fast feedback to the developers, which improved the quality and speed of development.
- Improved observability: Knative's Istio integration provided better tracing, metrics, and monitoring of the function invocations, which improved the visibility and control of the infrastructure.
- Multi-cloud support: Knative enabled T-Mobile to run its workloads on any Kubernetes cluster, regardless of the cloud provider, which gave the company more flexibility in choosing the best infrastructure for each workload.
- Lower operational costs: Knative's fine-grained, automatic scaling keeps idle containers to a minimum and reduces cloud spend.
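As an example of the deployment and ingress options developers control directly on the Service object, the sketch below splits traffic between two revisions for a canary rollout. The service and revision names are hypothetical, not T-Mobile's actual configuration.

```yaml
# Traffic splitting on a Knative Service: 90% of requests go to the stable
# revision, 10% to the newest revision, which also gets its own tagged URL.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: billing-api                        # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/example/billing-api:v2   # placeholder image for the new revision
  traffic:
    - revisionName: billing-api-00001      # previous, stable revision
      percent: 90
    - latestRevision: true                 # the revision built from the template above
      percent: 10
      tag: canary                          # exposes the canary at its own URL
```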
Results
After adopting Knative, T-Mobile achieved the following benefits:
- Faster time-to-market: Knative enabled T-Mobile to reduce the time-to-market for new features from months to weeks, which made the company more competitive and responsive to customer needs.
- Improved developer productivity: Knative's seamless integration with the CI/CD pipeline enabled the developers to focus on building new features instead of managing infrastructure.
- Better reliability and scalability: Knative's fine-grained, automatic scaling improved the reliability and scalability of the infrastructure, reducing the risk of downtime and increasing customer satisfaction.
- Lower operational costs: the same scaling behavior reduced the cloud bill and minimized wasted resources, resulting in a 35% cost reduction.
Conclusion
These case studies show how Knative can improve the development workflow, scalability, and cost efficiency of serverless workloads on Kubernetes. Knative provides a unified platform for running containerized functions and microservices, with automatic scaling, customizable networking, and fast deployment cycles, and it integrates with other Kubernetes tools and services, such as Istio for service mesh and Prometheus for monitoring. If you're interested in running serverless workloads on Kubernetes, give Knative a try!