Hugging Face tutorials

Discover how to build, deploy, and run Hugging Face applications in production on Koyeb, the fastest way to deploy applications globally.

Use Stable Diffusion and PyTorch to Build an Image Inpainting Service
Haziqa Sajid

Learn how to create an image inpainting service using Stable Diffusion and Gradio, and deploy it on Koyeb. A minimal sketch of the core inpainting call follows below.

Oct 10, 2024
11 min read
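For a feel of the core operation this tutorial covers, here is a minimal sketch using Hugging Face's diffusers inpainting pipeline; the model ID and file paths are illustrative placeholders, not necessarily those used in the tutorial.

```python
# Minimal Stable Diffusion inpainting sketch with diffusers (model ID and paths are placeholders).
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # illustrative inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("photo.png").convert("RGB")  # source image
mask = Image.open("mask.png").convert("RGB")    # white pixels mark the region to repaint

result = pipe(
    prompt="a wooden bench in a park",
    image=image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```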
Deploy the vLLM Inference Engine to Run Large Language Models (LLM) on Koyeb
Justin Ellingwood

Learn how to set up a vLLM instance to run inference workloads and host your own OpenAI-compatible API on Koyeb. A sketch of a client call against such an API follows below.

Jun 12, 2024
12 min read
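Since vLLM exposes an OpenAI-compatible API, a deployed instance can be queried with the standard openai client. In the sketch below, the base URL and model name are placeholders for your own Koyeb service and whichever model it serves.

```python
# Query a vLLM server through its OpenAI-compatible endpoint (URL and model are placeholders).
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-koyeb-app>.koyeb.app/v1",  # your vLLM deployment on Koyeb
    api_key="EMPTY",  # vLLM accepts any key unless one is configured
)

response = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # the model your vLLM instance serves
    messages=[{"role": "user", "content": "Explain what vLLM does in one sentence."}],
)
print(response.choices[0].message.content)
```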
Use pgvector and Hugging Face to Build an Optimized FAQ Search with Sentence Similarity
Chuks Opia

In this tutorial, we showcase how to deploy a FAQ search service built with Hugging Face's Inference API, pgvector, and Koyeb's Managed Postgres. The optimized FAQ search uses sentence-similarity search to return the most relevant results for a user's search terms. A rough sketch of the similarity lookup follows below.

Nov 27, 2023
25 min read
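The sketch below illustrates the kind of sentence-similarity lookup described above: it embeds the user's search terms and asks Postgres (via the pgvector extension) for the nearest stored questions. The table, column names, and connection string are hypothetical.

```python
# Hypothetical FAQ lookup: embed the query, then rank rows by pgvector cosine distance.
import psycopg2
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def search_faq(conn, query: str, limit: int = 3):
    """Return the FAQ questions closest to the user's search terms."""
    embedding = model.encode(query).tolist()
    with conn.cursor() as cur:
        cur.execute(
            # `<=>` is pgvector's cosine-distance operator; a `faqs(question, embedding)` table is assumed.
            "SELECT question FROM faqs ORDER BY embedding <=> %s::vector LIMIT %s",
            (str(embedding), limit),
        )
        return [row[0] for row in cur.fetchall()]

conn = psycopg2.connect("postgresql://user:password@host:5432/db")  # placeholder DSN
print(search_faq(conn, "How do I reset my password?"))
```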
