Week 4 Issue #11
We’ve scaled Kubernetes clusters to 7,500 nodes, producing a scalable infrastructure for large models like GPT-3, CLIP, and DALL·E, but also for rapid small-scale iterative research such as Scaling Laws for Neural Language Models. Scaling a single Kubernetes cluster to this size is rarely done and requires some special care, but the upside is a simple infrastructure that allows our machine learning research teams to move faster and scale up without changing their code.
WordPress is the most popular CMS on the market, period. Docker, on the other hand, is one of the trendiest technologies since the first iPhone. In this guide we're going to show you how to run your WordPress site in a Docker container and what the benefits are for any WP developer.
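The kind of setup the guide describes can be sketched as a docker-compose file (a sketch only: the service names are assumptions, and the credentials are placeholders you would replace):

```yaml
version: "3"

services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: example        # placeholder; use a real secret in practice
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db_data:/var/lib/mysql       # persist the database across container restarts

  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    ports:
      - "8080:80"                    # site available at http://localhost:8080
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: example
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_data:/var/www/html        # persist themes, plugins, and uploads

volumes:
  db_data:
  wp_data:
```

With a file like this, `docker compose up -d` brings up both containers, and the whole environment can be torn down and recreated without touching the host machine.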
Please consider supporting the Weekly DevOps / SRE Report. Subscribe to the phpops Newsletter on our website!
GitLab is phasing out the Bronze and Starter tiers and moving to a three-tier subscription model. Existing customers on Bronze and Starter tiers can choose to remain on the same tier until the end of their subscription period, and may renew at the current price for one additional year or upgrade to Premium at a significant discount. ...
Welcome to The 7th Annual StackShare Awards! 🎉
Learn about the pros and cons of using mono repositories and multi repositories along with the most logical use case for each.
Observability has gained a lot of popularity in recent years. Modern DevOps paradigms encourage building robust applications by incorporating automation, Infrastructure as Code, and agile development. To assess the health and “robustness” of IT systems, engineering teams typically use logs, metrics, and traces, which are used by various developer tools to facilitate observability. But what is observability exactly, and how does it differ from monitoring?
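The three signals the blurb mentions can be illustrated in a few lines. This is a hypothetical sketch, not any particular tool's API: `emit_log` and `handle_request` are invented names, showing one request carrying a trace ID (trace), structured log events (logs), and a latency measurement (metric) that a backend could aggregate.

```python
import json
import time
import uuid

def emit_log(event, **fields):
    """Emit a structured (JSON) log line that observability tools can index and query."""
    record = {"ts": time.time(), "event": event, **fields}
    print(json.dumps(record))
    return record

def handle_request(path):
    # Trace: one ID correlates every signal produced for this request.
    trace_id = str(uuid.uuid4())
    start = time.monotonic()
    emit_log("request.start", trace_id=trace_id, path=path)
    # ... real request handling would happen here ...
    latency_ms = (time.monotonic() - start) * 1000.0
    # Metric: a numeric measurement you can aggregate (rates, p50/p99, ...).
    emit_log("request.finish", trace_id=trace_id, path=path,
             latency_ms=round(latency_ms, 3))
    return trace_id
```

The difference from plain monitoring is that these events are structured and correlated, so you can ask new questions of them after the fact instead of only watching predefined dashboards.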
This cheat sheet is a comprehensive one, but it will not make much sense unless you already have an idea of what the actual AWS resources are. I highly suggest you finish a course for the DevOps Professional certification, or have 2–3 years of working experience with AWS, and then come back here.
CI/CD (Continuous Integration and Continuous Delivery/Deployment) is part of the fabric of our lives. We go to conferences about CI/CD, we write articles and blog posts about CI/CD, we list CI/CD on our LinkedIn pages. Nobody even debates whether it's the right thing to do. We are all on the same page. Right?
If you frequently set up AWS environments and resources that need to come up and down on demand, and wish you had some way to automate that process (bringing up AWS resources like EC2 instances, databases, etc., and tearing them down when you don't need them), then this blog post might be for you.
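The core of such automation is a scheduling decision: should this resource be running right now? Here is a minimal sketch of that logic, assuming resources carry a hypothetical `uptime-window` tag; the tag name and data shapes are inventions for illustration, and a real setup would wire the resulting actions to the AWS APIs (e.g. via boto3 in a scheduled Lambda).

```python
from datetime import datetime, timezone

def should_be_running(tags, now):
    """Return True if `now` falls inside the resource's tagged uptime window."""
    window = tags.get("uptime-window")        # e.g. "08:00-20:00"
    if window is None:
        return True                           # untagged resources are left alone
    start_s, end_s = window.split("-")
    start = datetime.strptime(start_s, "%H:%M").time()
    end = datetime.strptime(end_s, "%H:%M").time()
    return start <= now.time() < end

def plan_actions(resources, now):
    """Given (resource_id, tags, is_running) triples, return (id, action) pairs."""
    actions = []
    for rid, tags, running in resources:
        want = should_be_running(tags, now)
        if want and not running:
            actions.append((rid, "start"))
        elif not want and running:
            actions.append((rid, "stop"))
    return actions
```

Running `plan_actions` on a cron schedule and executing the returned start/stop actions gives you dev environments that exist only during working hours, which is where most of the cost savings come from.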
All software teams have technical debt — parts of the code that weren’t created with today’s challenges in mind, or were written poorly, or were expedient hacks that are now problematic. Having tech debt isn’t necessarily a bad thing; if people spent all their time making code perfect, nothing would ever get done. But too much accumulated debt makes it slower to deliver new features and is a source of bugs and quality issues.
So far in this series, we have explored the various questions one might have when starting out with Kubernetes and its ecosystem, and did our best to answer them. Now that those clouded thoughts have hopefully been cleared, let us dive into the next important step in our journey with Kubernetes and the infrastructure as a whole.