Deploy and scale high-performance background jobs with Koyeb Workers
Today, we are thrilled to announce that workers are generally available on Koyeb! You can now easily deploy high-performance workers to process background jobs in all of our locations.
It's now simple to deploy workers from a GitHub repository using our built-in CI/CD engine: connect your repository and we build, deploy, and scale your workers on high-performance servers all around the world. We provide up to 64 vCPU and 64GB of RAM per worker backed by high-end CPUs so that you never lack processing power.
You can use any of your favorite frameworks to process background jobs: Celery in Python, Sidekiq in Ruby, Bull in Node.js, Temporal in Go, Faktory with your favorite language, or any kind of worker! Workers are ideal for handling background and asynchronous tasks and offloading long-running tasks from web services and APIs to improve your applications' performance.
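Whichever framework you pick, a worker's run command is simply whatever starts your job processor. For example (the module and file names below are placeholders, not part of any specific setup):

# Typical worker start commands; module and file names are placeholders
celery -A tasks worker --loglevel=info   # Celery (Python)
bundle exec sidekiq -r ./worker.rb       # Sidekiq (Ruby)
node worker.js                           # Bull (Node.js)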
Workers were one of the most requested features on the platform, and now, you’ll benefit from the same seamless experience you've come to expect for all of your APIs and full-stack apps. No more hacks to get them running on the platform — it's now fully native.
We just announced the availability of 4 new locations in early access, which means you can now run workers in 6 locations across 3 continents with a simple git push! All of this without compromising on performance, with prices starting at $0.000001/s ($2.7/mo).
Speaking of performance, in our tests, Koyeb Workers deliver up to 100x more performance per dollar spent than alternative solutions. We're working on detailed benchmarks to prove and illustrate this — stay tuned!
Deploying workers on Koyeb works like deploying any API or full-stack app on the platform: you can use the control panel, the CLI, or the Koyeb API to create your worker, then git push to deploy your code to production. For a full run-through, we will show you how to deploy with the CLI.
Deploy workers using the CLI
To highlight the performance of workers, we decided to deploy GitHub runners on Koyeb to accelerate integration and non-regression tests. Once these self-hosted runners are registered, GitHub Actions jobs configured to target them will run on Koyeb instead of the default GitHub-hosted runners.
In our tests, we cut the test duration from 10 minutes to 3 minutes for the Koyeb control panel source repository. This was on a Koyeb X-Large instance with 8 cores and 8GB of RAM, compared to a large GitHub-hosted runner with 16 cores and 64GB of RAM, resulting in roughly 100x more performance per dollar.
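To put rough numbers on that claim: assuming GitHub's published rate at the time of writing of $0.064 per minute for a 16-core Linux larger runner, a 10-minute run costs about $0.64. A Koyeb X-Large at $86/mo works out to roughly $0.002 per minute, so a 3-minute run costs under $0.01, or around 100x more work per dollar.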
Deploying high-performance GitHub runners
To get started, you will need:
- A repository with GitHub actions configured: we will configure new self-hosted runners for this repository.
- A GitHub runner token: full instructions on how to get one are available in the GitHub self-hosted runner documentation, or use the command shown after this list as a shortcut.
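If you use the GitHub CLI, one shortcut is to request a registration token through the GitHub REST API. This assumes gh is installed and authenticated with admin rights on the repository:

# Request a self-hosted runner registration token via the GitHub REST API
gh api -X POST repos/<YOUR_ORG>/<YOUR_REPO>/actions/runners/registration-token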
To simplify the configuration, we built a repository that automatically deploys and configures self-hosted runners using a Dockerfile: example-github-self-hosted-runner.
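For context, the container's entrypoint boils down to registering the runner with your repository and starting it. Here is a minimal sketch of that logic, assuming GitHub's official actions-runner scripts (the actual script in the repository may differ):

#!/bin/sh
# Sketch: register this runner, then start listening for jobs.
# config.sh and run.sh ship with GitHub's actions-runner distribution.
./config.sh --url "${GITHUB_REPOSITORY}" --token "${GITHUB_RUNNER_TOKEN}" --unattended
exec ./run.sh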
With this repository and a single command, you can deploy and run your first worker:
koyeb app init gh-runner --region was --type worker \
--git github.com/koyeb/example-github-self-hosted-runner --git-branch main \
--env GITHUB_REPOSITORY=https://github.com/<YOUR_ORG>/<YOUR_REPO> --env GITHUB_RUNNER_TOKEN=<YOUR_GITHUB_RUNNER_TOKEN> \
--git-builder docker --instance-type=xlarge
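Once the build finishes, you can confirm the runner registered and is listening for jobs. Assuming the Koyeb CLI is installed and logged in:

# Tail the worker's runtime logs to confirm the runner is waiting for jobs
koyeb service logs gh-runner/gh-runner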
Et voilà! Your first worker is deployed in was (aka Washington, D.C. / US-East) to accelerate all of your GitHub builds.
Scaling your worker
If we need more workers, we can increase the number of deployed runner instances. This helps improve performance, process more jobs concurrently, and increase availability.
Here is how to scale to four workers:
koyeb service update gh-runner/gh-runner --scale 4
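Once the deployment completes, you can check that all four instances are up. Assuming the Koyeb CLI again:

# Show the service status, including the number of running instances
koyeb service describe gh-runner/gh-runner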
Behind the scenes, a new Deployment is created and four Firecracker microVMs running on bare metal servers are launched to support the workers.
Distributing workers across multiple locations to enhance redundancy
Up until this point, our workers have been running in a single location in the Washington, D.C. area. If one of the servers running our workers were to crash, redundancy within that location would be managed automatically.
But what would happen if Washington, D.C. were to suddenly go offline? While this is a very unlikely scenario, we might want to run in a second location just in case.
We can update the configuration to deploy two workers in each of two different locations by typing:
koyeb service update gh-runner/gh-runner --region was --region sfo --scale 2
That's it, two workers per region (four in total) are now running in Washington, D.C. and San Francisco!
Deploy globally and process data faster
If the data you want to process is spread across multiple regions, you will probably want to deploy worker groups in each of your regions to reduce data transfers and accelerate your jobs. This is easily doable thanks to our new regions.
We will deploy two GitHub runner services: one for Europe and one for Asia.
# Create the gh-runner-eu service in Frankfurt, Germany and Paris, France
koyeb service create gh-runner-eu --app gh-runner --region fra --region par --type worker \
--git github.com/koyeb/example-github-self-hosted-runner --git-branch main \
--env GITHUB_REPOSITORY=https://github.com/<YOUR_ORG>/<YOUR_REPO> --env GITHUB_RUNNER_TOKEN=<YOUR_GITHUB_RUNNER_TOKEN> \
--git-builder docker --instance-type=xlarge
# Create the gh-runner-asia service in Singapore and Tokyo, Japan
koyeb service create gh-runner-asia --app gh-runner --region sin --region tyo --type worker \
--git github.com/koyeb/example-github-self-hosted-runner --git-branch main \
--env GITHUB_REPOSITORY=https://github.com/<YOUR_ORG>/<YOUR_REPO> --env GITHUB_RUNNER_TOKEN=<YOUR_GITHUB_RUNNER_TOKEN> \
--git-builder docker --instance-type=xlarge
Keep in mind the physical reality: Singapore / Tokyo are over 3,000 miles apart and Washington, D.C. / San Francisco are nearly 2,500 miles apart, so you might want to create even more localized deployments.
You now know how to deploy and run workers on Koyeb, configure horizontal scaling, and scale them globally across multiple regions worldwide.
Pay only for what you use
Koyeb Workers are available for everyone on the Starter plan and above.
With the Starter plan, you can run two Nano workers for free using the $5.50 of credit granted every month. Any usage above the monthly $5.50 credit is charged per second.
Prices vary depending on the instance size. They are displayed per month for readability but are accounted by the second: you only pay for what you use.
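For example, a Nano instance at $2.7 per month works out to roughly $0.000001 per second, so a worker that runs for one hour costs about $0.0037.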
| Instance Size | CPU | RAM | Disk | Price per month (charged per second) |
|---|---|---|---|---|
| Nano | 1 | 256MB | 2.5GB | $2.7/mo |
| Micro | 1 | 512MB | 5GB | $5.4/mo |
| Small | 1 | 1GB | 10GB | $10.7/mo |
| Medium | 2 | 2GB | 20GB | $21.4/mo |
| Large | 4 | 4GB | 40GB | $42.9/mo |
| X-Large | 8 | 8GB | 80GB | $86/mo |
| XX-Large | 16 | 16GB | 160GB | $172/mo |
| XXX-Large | 32 | 32GB | 320GB | $343/mo |
Larger instance sizes with up to 64 vCPU and 64GB of RAM per service are available for organizations on the Startup plan. If you need more resources than that, contact us and we'll get you set up. Check our pricing page for more details.
Get started deploying workers on Koyeb today!
We are excited to support deploying and running workers on the Koyeb platform. Our job is to make it easy for you to run and scale your web services and workers as your traffic increases.
You can dive deeper by reading our real-world tutorials about How to Use Node-Schedule to Build and Deploy a Hacker News Aggregator and Deploy a Python Celery Worker with RabbitMQ.
With Koyeb's built-in continuous deployment, multi-region deployments across continents, global load balancing across our edge network of 250+ locations, and zero-configuration private networking, deploying apps in production takes only a few minutes.
Sign up today and start deploying for free with $5.50 of credit granted to your account every month. The future of efficient and performant processing awaits(). 😉
Not ready to say goodbye? You're welcome to join the friendliest serverless community or tweet us at @gokoyeb. We'd love to hear from you!