Using LangServe to build REST APIs for LangChain Applications
Introduction
LangChain is a powerful framework for building applications with AI language models. It simplifies the process of interfacing with local or remote LLMs by making it easy to template prompts, configure query contexts, and chain discrete processes together to form complex pipelines.
LangServe is a LangChain project that helps you build and deliver these applications over a REST API. Under the hood, it uses FastAPI to construct routes and build web services, and leverages Pydantic to handle data validation.
In this guide, we will demonstrate how to build an application with LangChain and LangServe and deploy it to Koyeb. The application will serve a REST API where users can submit queries. It will pass these, with contextual information, to OpenAI to generate responses.
You can deploy and preview the example application from this guide by clicking the Deploy to Koyeb button below:
Be sure to set the `OPENAI_API_KEY` environment variable during configuration. You can consult the repository on GitHub to find out more about the example application that this guide uses.
Requirements
To successfully follow and complete this guide, you need:
- Python installed on your local computer.
- A GitHub account to host your LangServe application.
- A Koyeb account to deploy and run the preview environments for each pull request.
- An OpenAI API key so that our application can send queries to OpenAI.
Steps
To complete this guide and deploy a LangServe application, you'll need to follow these steps:
- Set up the project directory
- Create the app directory structure and install dependencies
- Create the LangServe application
- Test the application
- Adjust the Dockerfile
- Publish the repository to GitHub
- Deploy to Koyeb
Set up the project directory
To get started, create and then move into a project directory that will hold the application and assets we will be creating:
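For example, using `example-langserve` as the directory name (any name will work):

```shell
mkdir example-langserve
cd example-langserve
```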
Inside, create a file called `.env` in your text editor. Within it, define the `OPENAI_API_KEY` environment variable by setting it to your OpenAI API key:
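The file consists of a single line (replace the placeholder with your actual key):

```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
```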
Our application will read the API key from this file to authenticate its requests to OpenAI services.
Next, create and activate a new Python virtual environment for the project. This will isolate our project's dependencies from system packages to avoid conflicts and offer better reproducibility:
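One way to do this, assuming a Unix-like shell and a `python` executable on your `PATH`:

```shell
python -m venv venv
source venv/bin/activate
```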
Your virtual environment should now be activated.
Create the app directory structure and install dependencies
Now that we are working within a virtual environment, we can begin to install the packages our application will use and set up the project directory.
Standard Python installations include the `pip` package manager by default. However, the LangServe project uses the `poetry` package manager by default. Because of this, we'll install our dependencies in two stages.

First, install the `langchain-cli` package to get access to the `langchain` command line tool. We'll also take this opportunity to install `poetry` itself and make sure `pip` is up-to-date:
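This can be done in a single `pip` invocation:

```shell
pip install -U pip langchain-cli poetry
```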
Next, with the newly installed `langchain` command, initialize a LangChain project in the current directory:
Note: Be sure to include the trailing dot to target the current directory.
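Assuming the `langchain` CLI installed in the previous step, the command looks like:

```shell
langchain app new .
```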
You will be asked whether you wish to install any packages. Despite the wording, this prompt actually refers to LangChain templates and not Python packages. Press ENTER to continue without adding any templates.
With the new project files, your directory should now look similar to this:
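A rough sketch of the generated layout (the exact files may vary with your `langchain-cli` version, and your `.env` file and `venv` directory will also be present):

```
.
├── app
│   ├── __init__.py
│   └── server.py
├── packages
├── Dockerfile
├── pyproject.toml
└── README.md
```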
The `pyproject.toml` file is the primary file that the `langchain` command and `poetry` both use to record dependency information and configure project metadata. Because this now makes the directory a valid `poetry` project, we can use `poetry` to install the remaining dependencies:
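With the virtual environment active, the packages can be added in one command:

```shell
poetry add "langserve[all]" langchain-openai python-decouple
```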
- `langserve[all]`: The server and client components of the LangServe library.
- `langchain-openai`: The package containing OpenAI integration for LangChain.
- `python-decouple`: A package that can be used to read environment variables and `.env` files.
Our project directory now has all of the dependencies and project files necessary for us to begin working.
Create the LangServe application
To create a basic LangServe application, open the `app/server.py` file in your text editor. Inside, replace the existing contents with the following code:
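One way to write this server, assuming the dependencies installed above (the exact prompt wording is illustrative):

```python
from decouple import config
from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langserve import add_routes

# Primary application object for the web service
app = FastAPI()

# Chat model authenticated with the key from the .env file
model = ChatOpenAI(openai_api_key=config("OPENAI_API_KEY"))

# Prompt template asking for a summary of the given topic
prompt = ChatPromptTemplate.from_template(
    "Give me a summary about {topic} in a paragraph or less."
)

# Chain the prompt and model together
chain = prompt | model

# Serve the chain at /openai
add_routes(app, chain, path="/openai")

if __name__ == "__main__":
    import uvicorn

    # Listen on all interfaces on port 8000
    uvicorn.run(app, host="0.0.0.0", port=8000)
```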
Let's take a moment to go over what this application does.
The code begins by importing all of the necessary classes, functions, and other materials from the packages we've installed. Afterwards, it initializes a `FastAPI()` instance that will serve as the primary application object for the application.

Next, we initialize instances of the `ChatOpenAI` and `ChatPromptTemplate` classes and assign them to the `model` and `prompt` variables, respectively. For the `ChatOpenAI` instance, we pass in the OpenAI API key from our `.env` file using the `config` object from `python-decouple`. For the `ChatPromptTemplate`, we set the prompt to ask for a summary of the given topic. We then chain these two together in a `chain` variable.

We add a route to serve the new chain at `/openai`. Afterwards, we use `uvicorn` to serve the application on all interfaces using port 8000.
Test the application
We can test that the application works as expected by typing the following inside the main project directory:
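With the virtual environment active and the dependencies installed, you can run the server module directly:

```shell
python app/server.py
```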
This will start up the application server. Navigate to `127.0.0.1:8000/openai/playground` in your web browser to view the prompt page. You can test that everything is working correctly by entering a question or topic.
When you are finished, press CTRL-C to stop the server.
Adjust the Dockerfile
When we initialized a new LangChain project, it copied a `Dockerfile` to our project directory. Koyeb will build a Docker image from this file when we deploy later on. The generated `Dockerfile` works as expected, but we will adjust it slightly to allow Koyeb to pass in the port to run on during the build process.

Open the `Dockerfile` in your text editor and replace the final two lines (`EXPOSE` and `CMD`) with the following:
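A sketch of the replacement lines (the `CMD` should otherwise match what your generated `Dockerfile` already uses; `${PORT:-8000}` keeps 8000 as the fallback value):

```dockerfile
ARG PORT
EXPOSE ${PORT:-8000}
CMD exec uvicorn app.server:app --host 0.0.0.0 --port ${PORT:-8000}
```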
The `ARG` line marks `PORT` as a build argument. Afterwards, we modify the hardcoded instances of port 8000 with a reference to that build argument, retaining 8000 as the fallback value if no build value is provided.
Publish the repository to GitHub
The application is almost ready to deploy. We just need to commit the changes to Git and push the repository to GitHub.
In the project directory, initialize a new Git repository by running the following command:
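```shell
git init
```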
The project creation process includes a very basic `.gitignore` file. Replace it with a more complete one for Python projects by typing:
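One common approach, assuming `curl` is available, is to download GitHub's standard Python `.gitignore` template:

```shell
curl -L https://raw.githubusercontent.com/github/gitignore/main/Python.gitignore -o .gitignore
```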
Next, add the project files to the staging area and commit them. If you don't have an existing GitHub repository to push the code to, create a new one and run the following commands to commit and push changes to your GitHub repository:
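The commands look something like the following (the remote URL shown uses SSH; adjust it if you clone over HTTPS):

```shell
git add :/
git commit -m "Initial commit"
git remote add origin git@github.com:<YOUR_GITHUB_USERNAME>/<YOUR_REPOSITORY_NAME>.git
git branch -M main
git push -u origin main
```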
Note: Make sure to replace `<YOUR_GITHUB_USERNAME>/<YOUR_REPOSITORY_NAME>` with your GitHub username and repository name.
Deploy to Koyeb
Once the repository is pushed to GitHub, you can deploy the LangServe application to Koyeb. Any changes in the deployed branch of your codebase will automatically trigger a redeploy on Koyeb, ensuring that your application is always up-to-date.
To get started, open the Koyeb control panel and complete the following steps:
- In the Overview tab, click Create Web Service.
- Select GitHub as the deployment option.
- Choose the repository containing your application code. Alternatively, you can enter our public LangServe example repository into the Public GitHub repository field at the bottom of the page: `https://github.com/koyeb/example-langserve`.
- In the Builder section, choose Dockerfile.
- In the Environment variables section, click Add Variable to configure a new environment variable. Create a variable called `OPENAI_API_KEY`. Select the Secret type and choose Create secret in the value field. In the form that appears, create a new secret containing your OpenAI API key.
- Name the App and Service, for example `example-langserve`.
- Click the Deploy button.
Koyeb will clone the GitHub repository and use the `Dockerfile` to build a new container image for the project. Once the build is complete, a container will be started from the image to run your application.
Once the deployment is healthy, visit your Koyeb Service's subdomain (you can find this on your Service's detail page) with `/openai/playground` appended to the end. It will have the following format:
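The exact subdomain depends on your App and organization names, but it should look something like:

```
https://<YOUR_APP_NAME>-<YOUR_KOYEB_ORG>.koyeb.app/openai/playground
```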
You should see your LangServe application's prompt, allowing you to ask questions and get responses from the OpenAI API.
Conclusion
In this tutorial, we demonstrated how to use LangChain, LangServe, and the OpenAI API to build a basic page with an AI prompt. LangChain provides an ergonomic interface for working with OpenAI and other models, while LangServe makes it easy to create and serve sites exposing these tools.
This tutorial is only a basic guide to get you started with these projects. To learn more about how to build robust services using more advanced features of these frameworks, take a look at the LangChain documentation and try out some of their project templates.