Scale to Zero: Optimize GPU and CPU Workloads
Starting today, your workloads running on GPU and CPU scale down to zero when idle, wake automatically on request, and scale out horizontally based on multiple scaling criteria.
Backup your high-performance Volumes, simplify data management, and enable reproducibility!
Introducing the fastest way to deploy and scale your apps on AWS infrastructure. Today, we are announcing the public preview of AWS Regions on Koyeb for businesses.
We modernized our control panel to transform your infrastructure’s deployment experience. Under the hood of the control panel is high-performance infrastructure, advanced networking, and powerful features.
We are back for Koyeb's second launch week! Get the replay on all our exciting announcements here.
Startups build on Koyeb to bring their ideas to market faster and scale with ease. Apply for the Koyeb Startup Program today to power your applications with the best infrastructure for your business.
Today, we are releasing the AWS ecosystem and regions on Koyeb for businesses: the fastest way to deploy and scale your apps on AWS infrastructure.
Today, we are announcing the public preview of our Serverless GPUs. Perfect for inference, fine-tuning, and all your AI workloads, our Serverless GPUs offer blazing-fast deployments and exceptional performance for your GPU-backed workloads.
We are throwing Koyeb's very first launch week! Catch up on all our exciting announcements here!
Today, we're excited to share that GPU Instances designed to support AI inference workloads are available in private preview. These GPUs provide up to 48GB of vRAM, 733 TFLOPS, and 900GB/s of memory bandwidth to support large models, including LLMs and text-to-image models.
Autoscaling is available in public preview to all users starting today. Easily handle unpredictable spikes and varying workloads to respond to demand dynamically.
Serverless Postgres is in public preview: a fully managed, fault-tolerant, and scalable serverless Postgres database service available directly inside Koyeb.
We're excited to announce our $7M seed round led by Serena with the participation of ISAI, Samsung Next, MongoDB, and incredible angels. In this post, we dive into our mission, why we are uniquely positioned, and what's next.
Deploy Node.js, Go, Python, Java, and anything with a Dockerfile in 6 regions across 3 continents on high-performance microVMs.
Today, we are excited to announce support for Dockerfile-based deployments in general availability. Building and deploying with Dockerfiles gives you more flexibility and control over your applications' build process, letting you deploy any kind of application, framework, or runtime.
Today, we are thrilled to announce workers are generally available on Koyeb! You can now easily deploy workers to process any background jobs with high-end performance in all of our locations.
Today, we are happy to introduce you to the Koyeb Terraform Provider! The Koyeb Terraform provider is a Hashicorp recognized partner provider.
Pulumi is a modern infrastructure as code platform that allows you to define, deploy, and manage cloud infrastructure on any cloud using your favorite programming languages. Learn how to deploy a simple Golang application on Koyeb using Pulumi by writing infrastructure code in TypeScript, Golang, and Python.
Today, we are super excited to share that the Koyeb platform is available for everyone in public preview. Koyeb is the developer platform to build, deploy, and scale full-stack applications where your users are. We've been working on the platform since early 2021. The private preview was intense, with over 10,000 developers joining the community and now over 3,000 applications running on the platform.
We recently wrote about how the Koyeb Serverless Engine runs microVMs to host your Services, but we skipped a big subject: Global Networking. This is a deep dive into the life of an end user's request to a service hosted on Koyeb. We explore the technology and components that make up our internal architecture by following the journey of a request from an end user, through Koyeb's Global Edge Network, to the application running in one of our Core locations.
Caching is an effective technique for improving performance and reducing latency for requests to your web services and apps. CDNs bring your content even closer to end users. Learn about cache control: what it is, how to configure it, and when to use it.
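As a quick taste of what configuring cache control looks like in practice, here is a minimal Python sketch. The handler, paths, and header values are illustrative assumptions, not recommendations from the post: fingerprinted static assets get long-lived caching, API responses are never stored, and everything else must revalidate.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def cache_control_for(path: str) -> str:
    """Pick an illustrative Cache-Control policy based on the request path."""
    if path.startswith("/static/"):
        # Fingerprinted static assets can be cached aggressively by
        # browsers and CDNs alike: they never change under the same name.
        return "public, max-age=31536000, immutable"
    if path.startswith("/api/"):
        # Dynamic API responses should not be stored by any cache.
        return "no-store"
    # Default: caching is allowed, but every reuse must be revalidated.
    return "no-cache"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Cache-Control", cache_control_for(self.path))
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    # Serve on localhost for local experimentation.
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```

With a policy function like this, the same rules apply whether responses are cached by the browser or by a CDN edge in front of your service.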
FaaS and CaaS are two popular deployment strategies with their own unique advantages and ideal use cases. Knowing what distinguishes them can help when deciding how to build and deploy your web apps.
Virtualization technology is evolving. Firecracker is an emerging solution that combines the security and isolation of bare metal instances with the density and performance of containers.
The serverless computing era is here. Learn about the implications of going serverless as well as the benefits and existing challenges to implementing this emerging technology.
The serverless computing era is here. Learn about the history and evolution of cloud computing to see why developers and businesses are excited about serverless technology.
The Koyeb CLI is now available and ready to let you manage all your Koyeb resources directly from your shell! The Koyeb CLI is a critical piece to improve the deployment experience and provide a fast way to interact with Koyeb when you develop your projects.
Today, we are excited to share more about the technology we are building to help you with your cloud journey and to deploy all your platforms in the 2020s.