The Global Container Runtime: Six Regions to Deploy Apps Anywhere and Everywhere
We're excited to announce not one but… 4 new regions in public preview! Everyone with Starter access or more can now deploy simultaneously across our 6 regions in the EU, US, and APAC in literally 1 minute. 🇺🇸 🇩🇪 🇸🇬 🇫🇷 🇯🇵 🌐
You can now deploy across multiple continents with one CLI command, one API call, or one click: we build your application and deploy it on high-performance hardware with built-in load-balancing and private networking across 6 locations.
While the world has been swept away with generative AI, we've stayed focused on our core mission: allowing you to push applications to production with your team, globally, and in minutes. If you're into AI or LLMs, we didn't forget you: you can deploy inference workloads globally on our high-performance hardware.
Here is how you can deploy a Golang application across continents in one CLI command:
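The snippet below is an illustrative sketch rather than the original command: the app name and repository are placeholders, and the exact flag names may differ between CLI versions (check `koyeb app init --help`):

```shell
# Deploy a Go app straight from a Git repository to two regions.
# App name and repository URL are placeholders for this example.
koyeb app init my-go-app \
  --git github.com/<your-org>/<your-repo> \
  --git-branch main \
  --ports 3000:http \
  --routes /:3000 \
  --regions fra \
  --regions was
```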
That's it! In less than 5 minutes, your app will be deployed in 6 regions and, thanks to our global networking layer, your users' traffic will be automatically routed to the nearest location.
You can also do this with the control panel:
For those new to Koyeb, here's what we're all about: we're building a platform that allows you to deploy and run full-stack apps and APIs written with your favorite framework and language without having to deal with infrastructure. No server, networking, or Kubernetes management necessary to get your apps running globally. You simply push your code to your Git repository and we handle the build process and deploy inside of Firecracker microVMs with a ready-to-use network stack.
The public preview adoption has been massive: 50,000 developers have already joined us to build and deploy faster. But we still see teams everywhere struggling to build, run, and scale apps, and these new locations will help teams all around the world do exactly that.
New regions follow the same pricing, starting with the nano size at $2.7/mo, billed per second. We're also excited to share that we will make getting started on the platform even more accessible with a new Hobby plan that includes a free machine. Managed PostgreSQL databases are also coming shortly.
Let's dive into why we are building a platform to simplify global deployments, what this requires, how it works, and what is left to do.
- Deploying globally to improve performance: anywhere or everywhere
- Simplifying global: abstracting deployments, networking, providers
- Deploying everywhere in minutes
- More locations and more details to come
Deploy globally to improve performance: anywhere or everywhere?
The key driver for global deployments is performance: you don't want your users to wait several seconds on each click while requests cross oceans and continents.
With many solutions, global means one of two things:
- anywhere: you can deploy in different locations one-by-one. You can easily deploy in a single region to improve performance for users in that region, but you still need to manage the complexity of deploying across multiple regions. This can include global load-balancing, edge acceleration, continuous deployment, and network security.
- everywhere: you can deploy in all of your provider's locations. You can't select precisely where your code will be running, even if it hurts performance, costs, or compliance.
We provide both: anywhere and everywhere.
You can easily deploy to one region, all regions, or anything in between:
If you deploy in 4 regions, users will be routed automatically to the nearest location via our global edge network, which has 255 locations.
Our goal is to simplify global deployment while letting you maintain control over:
- performance: deploying everywhere can improve or degrade performance, depending, for instance, on the location of your database or the location of other services and APIs you rely on.
- compliance: some applications need to run in specific locations, and only in those locations, to comply with regulations.
Simplifying going from local to global: abstracting deployments, networking, and providers
Our goal is to provide an outstanding developer experience and a serverless feel. You should be able to deploy and run code globally with the same ops-less feeling you get when running locally.
Deploy a GitHub repo or a container
We made the choice to support the usual full-stack apps, microservices, and monolith frameworks because, as of today, it's still the standard way of building applications.
You can simply git push projects built with your favorite framework to a repo and we'll build your code into a container and deploy it for you. To do this, we build with buildpacks or using a Dockerfile in your repository.
We believe that functions are still not standard or easy to test and maintain. We don't want you to have to learn a non-standard way of writing apps and APIs to deploy to production.
If you already have a Docker or OCI image built with a CI pipeline you like, that's fine by us: you can deploy just as easily from a pre-built container.
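As a sketch of the container path (the image name is a placeholder and flags may vary by CLI version), deploying a pre-built image can look like:

```shell
# Deploy a pre-built container image instead of building from Git.
# The image name below is a placeholder for this example.
koyeb app init my-api \
  --docker docker.io/<your-org>/<your-image> \
  --ports 3000:http \
  --routes /:3000 \
  --regions fra
```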
Managing the global complexity
Once you've built a container, how do you deploy it on hundreds of servers across several locations and make the app accessible? Deploying globally means dealing with several orchestration and networking components:
- Continuous deployment: The first step is to get all of the servers to execute your code. To do this, our control plane performs calls on per-location Nomad clusters. Nomad will, in turn, ask its agents to spawn Firecracker microVMs with your code and the needed network configuration on bare metal servers. On your end, nothing needs to be done: the new deployment is only promoted to production once health checks detect it as functional.
- Global load-balancing: Having microVMs running your code is a good first step, but they need to be accessible. For this, we provision a global load-balancing layer which will receive public traffic, route the traffic to the nearest region, and load-balance between the microVMs supporting your app in that region.
- Edge acceleration and caching: To accelerate your app, we terminate the initial TLS connection at the edge and provide built-in content caching. The edge network is also in charge of stopping DDoS attacks. The traffic is fully encrypted as it flows between the edge and core locations.
- Private networking and service mesh: If your app is composed of several microservices, our built-in service mesh works across regions. If you're not familiar with service meshes, you can think of one as a combination of automatic private networking, DNS, load-balancing, and TLS. You could view this as a VPC on steroids. No need to deal with internal IPs, DNS, load-balancing, or encryption: it's all provided by default and across regions.
- End-to-end encryption: All external and internal communications should be end-to-end encrypted. We do it for you using mTLS everywhere.
- End-to-end support for gRPC, HTTP/2 and WebSockets: This might be obvious, but this is far from automatic. Our global networking supports gRPC, HTTP/2, and WebSockets. If you're curious, we recently wrote about how we implemented end-to-end gRPC and HTTP/2.
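To make the load-balancing step above concrete, here is a toy sketch, not Koyeb's actual implementation, of the two routing decisions involved: pick the lowest-latency region from the edge, then round-robin between the instances inside that region. Region names, latencies, and instance names are made up for the example:

```shell
#!/bin/sh
# Toy model of global load-balancing: nearest region first,
# then round-robin between instances within that region.

# pick_nearest: reads "region latency_ms" lines on stdin,
# prints the region with the lowest latency.
pick_nearest() {
  sort -k2 -n | head -n 1 | cut -d' ' -f1
}

# round_robin: prints the instance serving request number $1,
# chosen among the instance names passed as remaining arguments.
round_robin() {
  req="$1"; shift
  # request N goes to instance (N mod instance-count), 1-indexed for $@
  eval "echo \${$(( req % $# + 1 ))}"
}

pick_nearest <<EOF
fra 12
was 95
sin 180
EOF
# → fra

round_robin 0 vm-a vm-b vm-c   # → vm-a
round_robin 4 vm-a vm-b vm-c   # → vm-b
```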
Deploying everywhere in minutes
Now let's dive into how to deploy everywhere or anywhere using the Koyeb demo app.
One CLI command to deploy globally
Using the Koyeb CLI, we can deploy the application to the Frankfurt, Washington, Tokyo, Singapore, Paris, and San Francisco regions with the following command:
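The original snippet isn't reproduced here, but it would look something like the sketch below (the app name and repository are placeholders, and the region slugs and flag names may differ from your CLI version):

```shell
# Deploy the demo app to all six regions at once.
# App name and repository are placeholders; region slugs may vary.
koyeb app init multi-region-demo \
  --git github.com/<your-org>/<your-repo> \
  --git-branch main \
  --ports 3000:http \
  --routes /:3000 \
  --regions fra --regions was --regions tok \
  --regions sin --regions par --regions sfo
```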
This automatically triggers the deployment of the app with traffic routed to port 3000. The CLI returns the domain of my app, in this case multi-region-yann.koyeb.app.
We made a lot of tweaks recently to accelerate network propagation. In about 3 minutes, your app should be live in the US, Europe, and Asia.
After about 2 minutes, my app is running in multiple locations.
On the left side of the image, I'm connecting to the Paris edge location which routes me to the Paris location, the nearest one to me at that time. Latencies are high as I'm on a train with poor connectivity. On the other side, I'm located in Vancouver, I go through the Vancouver edge location and I'm routed to San Francisco, the nearest location in this case.
This works with one or dozens of containers per location. The traffic will be automatically routed to the region closest to the request and load-balanced inside of each location.
Automate your deployments
If you need to automatically create new apps and services, all of this is doable through our Terraform Provider, Pulumi provider, Go SDK, or API.
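As an example of API-driven automation, listing your apps over the REST API looks roughly like this. The endpoint path is an assumption on my part, so double-check it against the API reference, and the token is a placeholder:

```shell
# List apps via the Koyeb REST API.
# KOYEB_TOKEN is a placeholder; the /v1/apps endpoint is assumed
# from the public API documentation.
curl -s \
  -H "Authorization: Bearer $KOYEB_TOKEN" \
  https://app.koyeb.com/v1/apps
```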
Coming next: engineering blog post about the internals and managed Postgres
We're super excited to see how you will leverage these new regions! With just a single CLI command, API call, or a simple click, you can now seamlessly deploy your applications worldwide in the regions that matter most to you. Our advanced networking capabilities and built-in multi-region deployments let you scale and optimize your apps like never before.
The best part? Our pricing remains consistent across all these new regions, starting at just $2.7 per month for a nano instance. Check out our pricing page to learn more.
Stay tuned for our upcoming engineering post, where we'll delve deeper into how we built our multi-region engine and how you can leverage our advanced networking capabilities.
If you're eager to explore the platform, don't hesitate to try deploying one of our example applications. You can also deploy any of your projects using git or from any container registry.
Finally, you can keep up with all the latest updates by joining our vibrant and friendly serverless community or by following us on Twitter at @gokoyeb.