GPU tutorials
Discover how to build, deploy, and run GPU applications in production on Koyeb, the fastest way to deploy applications globally.
Use FLUX, PyTorch, and Streamlit to Build an AI Image Generation App
Learn how to use the FLUX model to generate images from prompts in real time.
Use Stable Diffusion and PyTorch to Build an Image Inpainting Service
Learn how to create an image inpainting service using Stable Diffusion and Gradio and deploy it on Koyeb.
Fine-Tune MistralAI and Evaluate the Fine-Tuned Model on Koyeb Serverless GPUs
A step-by-step guide to learn how to fine-tune MistralAI on domain-specific knowledge and evaluate the fine-tuned model's performance.
Fine-Tune Llama 3.1 8B using QLoRA on Koyeb Serverless GPUs
Learn how to fine-tune Llama 3.1 8B using QLoRA on your favorite project's documentation. This will enable the model to answer questions with (hopefully) correct, up-to-date information.
Using ComfyUI, ComfyUI Manager, and Flux to Generate High-Quality Images on Koyeb
Learn how to deploy ComfyUI, an image creation workflow manager, to Koyeb to generate images with Flux, an advanced image generation AI model.
Using YOLO for Real-Time Object Detection with Koyeb GPUs
Understand how the YOLO algorithm works and use it to identify and manipulate images through computer vision.
Using OpenAI Whisper to Transcribe Podcasts on Koyeb
Learn how to use OpenAI Whisper to build an app that generates transcriptions of podcast audio files in real time.
Use Continue, Ollama, Codestral, and Koyeb GPUs to Build a Custom AI Code Assistant
This guide shows how to use Continue with Ollama, a self-hosted AI solution, to run the Mistral Codestral model on Koyeb GPUs.
Deploy Fooocus and Generate AI Images on Koyeb GPUs
Learn how to deploy and configure Fooocus, an AI-based image generation tool, on Koyeb.
Deploy the vLLM Inference Engine to Run Large Language Models (LLMs) on Koyeb
Learn how to set up a vLLM instance to run inference workloads and host your own OpenAI-compatible API on Koyeb.