Using Autonomous AI Agents with SerpApi and AutoGPT to Build a Trip Planner
Introduction
Autonomous AI agents have emerged as indispensable tools for leveraging artificial intelligence to perform tasks and make decisions without direct human intervention. These agents are capable of analyzing data, learning from patterns, and executing actions to achieve specific goals, all autonomously.
In this tutorial, you will be creating and deploying a trip planner using an autonomous AI agent in a Next.js application on Koyeb. You will learn how to use an autonomous AI agent to continuously communicate its actions and the reasoning it is using, allowing your Next.js application to display real-time updates to users based on their trip preferences.
By the end of this tutorial, you will have a functioning Next.js application that prepares a trip itinerary based on a given destination and trip length. The output will include all of the steps taken as well as the reasoning behind specific choices.
Find out more details about the application you will build by visiting the project repository. You can deploy the Next.js application as configured in this guide using the Deploy to Koyeb button below:
Note: You will need to replace the values of the environment variables in the configuration with your own OPENAI_API_KEY and SERPAPI_API_KEY.
Requirements
To successfully follow this tutorial, you will need the following:
- Node.js and npm installed. The demo app in this tutorial uses version 18 of Node.js.
- Git installed.
- An OpenAI account.
- A SerpApi account.
- A Koyeb account to deploy the application.
Steps
To complete this guide and deploy the Next.js trip planner application, you will need to follow these steps:
- Generate an OpenAI API key
- Generate a SerpApi API key
- Create a new Next.js application
- Configure shadcn/ui components
- Build the trip planner API endpoint
- Build the trip planner user interface
- Deploy the Next.js application to Koyeb
- Conclusion
Generate an OpenAI API key
HTTP requests to the OpenAI API require an authorization token. To generate this token, while logged into your OpenAI account, navigate to the API keys page and click Create new secret key after entering a name for your token. Copy and securely store this token for later use as the OPENAI_API_KEY environment variable.
Locally, set and export the OPENAI_API_KEY environment variable by executing the following command:
export OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"
Generate a SerpApi API key
HTTP requests to SerpApi require an authorization token. To generate this token, while logged into your SerpApi account, navigate to the dashboard, scroll down to the Your Private API Key section, and click the clipboard icon. Copy and securely store this token for later use as the SERPAPI_API_KEY environment variable.
Locally, set and export the SERPAPI_API_KEY environment variable by executing the following command:
export SERPAPI_API_KEY="<YOUR_SERPAPI_API_KEY>"
Create a new Next.js application
Let's get started by creating a new Next.js project. Open your terminal and run the following command:
npx create-next-app@latest my-app
When prompted, choose:
- Yes when prompted to use TypeScript.
- No when prompted to use ESLint.
- Yes when prompted to use Tailwind CSS.
- No when prompted to use the src/ directory.
- Yes when prompted to use App Router.
- No when prompted to customize the default import alias (@/*).
Once that is done, move into the project directory and start the app in development mode by executing the following command:
cd my-app
npm run dev
The app should be running on localhost:3000. Stop the development server by pressing CTRL-C so we can move on to installing LangChain in the application.
Next, in your terminal window, run the command below to install the necessary libraries and packages for building the application:
npm install serpapi langchain @langchain/community @langchain/openai
The above command installs the following packages:
- serpapi: A library to scrape and parse search engine results using SerpApi.
- langchain: A framework for developing applications powered by language models.
- @langchain/community: A collection of third-party integrations for plug-and-play use with LangChain core.
- @langchain/openai: A LangChain package to interface with the OpenAI series of models.
Configure shadcn/ui components
To quickly prototype the user interface, you will set up the shadcn/ui components with Next.js. Specifically, from shadcn/ui, you will be able to use accessible, pre-built select and button components. In your terminal window, run the command below to begin configuring shadcn/ui for the project:
npx shadcn-ui@latest init
You will be asked a few questions to configure a components.json file. Answer with the following:
- Which style would you like to use? Default
- Which color would you like to use as base color? Slate
- Would you like to use CSS variables for colors? yes
With the above, you've set up a CLI that allows you to easily add React components to your Next.js application.
In your terminal window, run the commands below to add the select and button components:
npx shadcn-ui@latest add select
npx shadcn-ui@latest add button
After running the above commands, you should now see a ui directory inside the components directory containing button.tsx and select.tsx.
Open the app/layout.tsx file and make the following changes:
import type { Metadata } from "next";
import { cn } from "@/lib/utils"; // [!code ++]
import { Inter } from "next/font/google"; // [!code --]
import { Inter as FontSans } from "next/font/google"; // [!code ++]
import "./globals.css";
const inter = Inter({ subsets: ["latin"] }); // [!code --]
export const metadata: Metadata = {
title: "Create Next App",
description: "Generated by create next app",
};
const fontSans = FontSans({ // [!code ++]
subsets: ["latin"], // [!code ++]
variable: "--font-sans", // [!code ++]
}); // [!code ++]
export default function RootLayout({
children,
}: Readonly<{
children: React.ReactNode;
}>) {
return (
<html lang="en">
<body className={inter.className}>{children}</body> // [!code --]
<body className={cn( // [!code ++]
"min-h-screen bg-background font-sans antialiased flex flex-col items-center", // [!code ++]
fontSans.variable // [!code ++]
)}> // [!code ++]
<div className="mt-3 flex flex-col w-full max-w-3xl px-10"> // [!code ++]
{children} // [!code ++]
</div> // [!code ++]
</body> // [!code ++]
</html>
);
}
In the code above, you have imported the Inter font (from next/font) and made sure that it's present on each route in your Next.js application. Let's update the tailwind.config.ts file to create a CSS class for the font:
import type { Config } from "tailwindcss";
import { fontFamily } from "tailwindcss/defaultTheme" // [!code ++]
const config = {
. . .
theme: {
. . .
extend: {
fontFamily: { // [!code ++]
sans: ["var(--font-sans)", ...fontFamily.sans], // [!code ++]
}, // [!code ++]
colors: {
. . .
In the changes above, you have imported fontFamily from Tailwind CSS in order to define the set of fallback fonts. You have also added a CSS class for the sans font family by using the extend.fontFamily configuration.
Build the trip planner API endpoint
In this section, you will learn how to create a dynamic API endpoint in Next.js, use LangChain tools with your autonomous AI agent, understand its responses and reasoning, and stream the intermediary responses to the user.
In Next.js, creating a route.(ts|js) or page.(tsx|jsx) file in the app directory maps it to a route in your application. The name of the directory containing the file maps to the route's URL pathname.
The structure below is what our app directory will eventually look like:
app/
├── api/
│ └── route.ts
├── favicon.ico
├── globals.css
├── layout.tsx
└── page.tsx
The files involved in routing are:
- page.tsx will serve as the homepage, i.e. localhost:3000.
- api/route.ts will serve responses to localhost:3000/api.
This corresponds to the following chart:
| URL | Matched Route |
| --- | --- |
| / | app/page.tsx |
| /api | app/api/route.ts |
Begin by creating an api directory inside the app directory by executing the following command in your terminal window:
mkdir app/api
Create a dynamic Next.js route handler
Create a file named route.ts in the app/api directory with the following code. It streams the reasoning and steps taken by the autonomous AI agent while it generates the trip itinerary:
// File: app/api/route.ts
import type { NextRequest } from 'next/server'
// Exporting a constant to declare the endpoint as not a static one
export const dynamic = 'force-dynamic'
// Handling GET request
export async function GET(request: NextRequest) {
// Extracting place and days from request query parameters
const searchParams = request.nextUrl.searchParams
const place = searchParams.get('place')
const days = searchParams.get('days')
// If place or days are missing, return 500 status
if (!place || !days) return new Response(null, { status: 500 })
}
The code above does the following:
- Imports the NextRequest helper, which extends the Web Request API.
- Forces the endpoint to always be dynamic by exporting the dynamic constant as force-dynamic.
- Exports a GET HTTP handler which responds to incoming GET requests on /api.
- Destructures and validates the place and days query parameters from the incoming request URL.
Integrate LangChain tools and the SerpApi search API
LangChain provides a set of tools and integrations to facilitate various tasks, such as reading from or writing to files on disk and interacting with external services via APIs.
Using ReadFileTool and WriteFileTool, the agent can store and access data persistently, ensuring that valuable information is retained across sessions.
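To get a feel for how these tools behave on their own, here is a minimal, standalone sketch (not part of the application code) that writes and then reads a file through a NodeFileStore. The invocation shape, with file_path and text inputs, is an assumption based on how the write_file command later surfaces its args:

// Standalone sketch: exercising the file tools outside of AutoGPT.
// Assumption: the tools accept `file_path` (and `text` for writes) as structured input.
import { NodeFileStore } from 'langchain/stores/file/node'
import { ReadFileTool, WriteFileTool } from 'langchain/tools'
import { tmpdir } from 'os'

async function demoFileTools() {
  // Back both tools with a store rooted in the system's temporary directory
  const store = new NodeFileStore(tmpdir())
  const writeTool = new WriteFileTool({ store })
  const readTool = new ReadFileTool({ store })

  await writeTool.invoke({ file_path: 'itinerary.txt', text: 'Day 1: arrive and explore the city.' })
  const contents = await readTool.invoke({ file_path: 'itinerary.txt' })
  console.log(contents) // "Day 1: arrive and explore the city."
}

demoFileTools()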
With the SerpAPI LangChain tool, the agent can execute real-time searches to gather up-to-date information relevant to trip planning. When a user requests a trip itinerary, LangChain's integration with SerpApi allows it to dynamically fetch data such as weather forecasts, local events, transportation options, and points of interest for the specified destination.
For example, if a user plans a trip to Bangalore for a specific number of days, the SerpApi tool can retrieve events, available transportation services, and popular tourist attractions in Bangalore. This real-time data ensures that the generated itinerary is tailored to the user's preferences and reflects the latest information available.
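Under the hood, the SerpAPI tool issues queries much like calling the serpapi package you installed directly. As a rough, illustrative sketch (not part of the route handler), you could fetch Google results for attractions in Bangalore with the getJson helper:

// Illustrative only: querying SerpApi directly with the `serpapi` package.
import { getJson } from 'serpapi'

async function searchAttractions() {
  const results = await getJson({
    engine: 'google',
    q: 'top tourist attractions in Bangalore',
    location: 'Bangalore, India',
    hl: 'en',
    gl: 'in',
    api_key: process.env.SERPAPI_API_KEY,
  })
  // Organic results contain titles, links, and snippets the agent can reason over
  console.log(results.organic_results?.slice(0, 3))
}

searchAttractions()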
Make the following changes to the route.ts file to add the file system and LangChain-compatible SerpApi tools to your autonomous AI agent:
// File: app/api/route.ts
// [!code ++]
// Import SerpAPI LangChain Tool
import { SerpAPI } from '@langchain/community/tools/serpapi'
// Import FileStore, Read and Write Tools
import { NodeFileStore } from 'langchain/stores/file/node'
// [!code ++]
import { ReadFileTool, WriteFileTool } from 'langchain/tools'
// [!code ++]
import type { NextRequest } from 'next/server'
import { tmpdir } from 'os'
// [!code ++]
// Exporting a constant to declare the endpoint as not a static one
export const dynamic = 'force-dynamic'
// Handling GET request
export async function GET(request: NextRequest) {
// Extracting place and days from request query parameters
const searchParams = request.nextUrl.searchParams
const place = searchParams.get('place')
const days = searchParams.get('days')
// If place or days are missing, return 500 status
if (!place || !days) return new Response(null, { status: 500 })
// Creating a NodeFileStore with temporary directory
const store = new NodeFileStore(tmpdir()) // [!code ++]
// Initializing tools array with necessary tools
const tools = [
// [!code ++]
new ReadFileTool({ store }), // [!code ++]
new WriteFileTool({ store }), // [!code ++]
new SerpAPI(process.env.SERPAPI_API_KEY, {
// [!code ++]
location: 'Bangalore, India', // [!code ++]
hl: 'en', // [!code ++]
gl: 'in', // [!code ++]
}), // [!code ++]
] // [!code ++]
}
The above additions to the code do the following:
- Imports the tmpdir utility from the os package to obtain the location of the system's temporary directory.
- Imports NodeFileStore from LangChain's file system stores to read and write the generated itinerary from and to a file.
- Imports LangChain's ReadFileTool to use as a tool with the file store to read from a given file.
- Imports LangChain's WriteFileTool to use as a tool with the file store to write to a given file.
- Creates a new file store instance using the temporary directory as the base path.
- Creates a new, in-place instance of the SerpAPI class with a given location, language, and country.
Create an in-memory vector store with OpenAI
Vector stores are crucial for autonomous AI agents because they allow the agent to represent and understand data in a continuous vector space, enabling tasks like information retrieval. With a vector store, the AI agent can efficiently store and retrieve vector representations of data, supporting more effective decision-making and problem-solving.
Moreover, using an in-memory vector store can significantly enhance the speed and responsiveness of the AI agent, crucial for real-time applications.
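As a quick, self-contained illustration of what the vector store provides (separate from the route handler you are building), you can embed a few text snippets with OpenAI and retrieve the most relevant one by similarity search:

// Illustrative example: storing and retrieving text with an in-memory vector store.
import { OpenAIEmbeddings } from '@langchain/openai'
import { MemoryVectorStore } from 'langchain/vectorstores/memory'

async function demoVectorStore() {
  // Embed a few travel-related snippets into the in-memory store
  const vectorStore = await MemoryVectorStore.fromTexts(
    [
      'Lalbagh Botanical Garden is a popular attraction in Bangalore.',
      'The metro is a convenient way to get around the city.',
    ],
    [{ id: 1 }, { id: 2 }],
    new OpenAIEmbeddings()
  )

  // Retrieve the snippet most similar to the query
  const results = await vectorStore.similaritySearch('places to visit', 1)
  console.log(results[0].pageContent)
}

demoVectorStore()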
Make the following additions to the code in the route.ts file to use OpenAI's embedding generation to populate and manage the vectors within the in-memory store:
// File: app/api/route.ts
// Import SerpAPI LangChain Tool
import { SerpAPI } from '@langchain/community/tools/serpapi'
// Import OpenAI LangChain Embeddings Generator
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai'
// Import FileStore, Read and Write Tools
import { NodeFileStore } from 'langchain/stores/file/node'
import { ReadFileTool, WriteFileTool } from 'langchain/tools'
// [!code ++]
// Import In-Memory Vector Store
import { MemoryVectorStore } from 'langchain/vectorstores/memory'
import type { NextRequest } from 'next/server'
import { tmpdir } from 'os'
// [!code ++]
// Exporting a constant to declare the endpoint as not a static one
export const dynamic = 'force-dynamic'
// Handling GET request
export async function GET(request: NextRequest) {
// Extracting place and days from request query parameters
const searchParams = request.nextUrl.searchParams
const place = searchParams.get('place')
const days = searchParams.get('days')
// If place or days are missing, return 500 status
if (!place || !days) return new Response(null, { status: 500 })
// Creating a NodeFileStore with temporary directory
const store = new NodeFileStore(tmpdir())
// Initializing tools array with necessary tools
const tools = [
new ReadFileTool({ store }),
new WriteFileTool({ store }),
new SerpAPI(process.env.SERPAPI_API_KEY, {
location: 'Bangalore, India',
hl: 'en',
gl: 'in',
}),
]
// Initializing vectorStore with OpenAIEmbeddings
const vectorStore = new MemoryVectorStore(new OpenAIEmbeddings()) // [!code ++]
}
The additions in the code above do the following:
- Imports both ChatOpenAI and OpenAIEmbeddings from LangChain's OpenAI integration. These are used to define a language model and to generate the internal vector embeddings, respectively.
- Imports the MemoryVectorStore class from LangChain.
- Creates a new instance of an in-memory vector store, using OpenAI to generate the embeddings.
Integrate AutoGPT as the autonomous AI agent
AutoGPT is an autonomous AI agent designed to automate a wide range of tasks through natural language understanding and generation.
AutoGPT leverages advanced language models, accessed here through LangChain's ChatOpenAI class, to understand user queries and generate responses or perform actions accordingly. In this case, AutoGPT is used to generate trip itineraries based on user input regarding the destination and duration of the trip. By interpreting and processing natural language commands, AutoGPT autonomously plans and organizes travel details, enhancing user experience and convenience.
Each autonomous task executed by AutoGPT is represented by a generations matrix containing intermediary results or insights generated during the agent's decision-making process. Each element in this matrix may represent a command executed by the agent (such as writing the itinerary to a file) or a thought process (such as the need to search for popular tourist attractions) that led to a specific action or recommendation.
The structure of an item in the generations matrix, when parsed as JSON, has the following form:
{
"thoughts": {
"text": "thought",
"reasoning": "reasoning",
"plan": "- short bulleted\n- list that conveys\n- long-term plan",
"criticism": "constructive self-criticism",
"speak": "thoughts summary to say to user"
},
"command": {
"name": "command name",
"args": {
"arg name": "value"
}
}
}
- command: The action that will be performed by the agent. Depending on the tools used, this can be write_file, read_file, or search, representing the usage of WriteFileTool, ReadFileTool, and SerpAPI's search tool respectively. Our usage focuses on the args received in the write_file command, as it is used to write the generated itinerary to disk.
- thoughts: The "thinking" of the agent in terms of what it is planning to do next, the reasoning behind its actions, and what it could do to improve itself.
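For reference, the parsed structure above can be described with a small TypeScript type. These type names are illustrative only (the route handler treats the parsed output as untyped JSON), but they document which fields the UI will read later:

// Illustrative types describing an item from the generations matrix.
// The application does not define these; they only document the parsed shape.
interface AgentThoughts {
  text: string
  reasoning: string
  plan: string | string[] // bulleted plan; rendered as "Next Steps" in the UI
  criticism: string
  speak: string // summary shown to the user as the "Action"
}

interface AgentCommand {
  name: 'write_file' | 'read_file' | 'search' | string
  args: Record<string, string> // e.g. `text` holds the itinerary for write_file
}

interface AgentGeneration {
  thoughts: AgentThoughts
  command: AgentCommand
}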
To implement this, make the following code additions in the route.ts file to use AutoGPT as your autonomous AI agent:
// File: app/api/route.ts
. . .
// Import AutoGPT
import { AutoGPT } from "langchain/experimental/autogpt"; // [!code ++]
// Exporting a constant to declare the endpoint as not a static one
export const dynamic = "force-dynamic";
// Handling GET request
export async function GET(request: NextRequest) {
. . .
// Initializing assistant object with configuration
const assistant = { // [!code ++]
maxIterations: 2, // [!code ++]
aiRole: "Assistant", // [!code ++]
aiName: "Rishi Raj Jain", // [!code ++]
memory: vectorStore.asRetriever(), // [!code ++]
}; // [!code ++]
// Initializing TextEncoder
const encoder = new TextEncoder(); // [!code ++]
// Creating a custom ReadableStream
const customReadable = new ReadableStream({ // [!code ++]
async start(controller) { // [!code ++]
// Initializing ChatOpenAI instance
const llm = new ChatOpenAI({ // [!code ++]
temperature: 0, // [!code ++]
streaming: true, // [!code ++]
callbacks: [ // [!code ++]
{ // [!code ++]
// Handling LLM end event
handleLLMEnd(output) { // [!code ++]
const generations = JSON.parse(output.generations[0][0].text); // [!code ++]
// Enqueueing data to the controller
controller.enqueue( // [!code ++]
encoder.encode(`data: ${JSON.stringify(generations)}\n\n`) // [!code ++]
); // [!code ++]
}, // [!code ++]
}, // [!code ++]
], // [!code ++]
}); // [!code ++]
// Initializing AutoGPT instance with llm, tools, and assistant
const autogpt = AutoGPT.fromLLMAndTools(llm, tools, assistant); // [!code ++]
// Running AutoGPT with a specific command
await autogpt.run([ // [!code ++]
`write an itinerary for a trip to ${place} for ${days} days`, // [!code ++]
]); // [!code ++]
// Closing the controller
controller.close(); // [!code ++]
}, // [!code ++]
}); // [!code ++]
// Returning a Response with the custom ReadableStream and necessary headers
return new Response(customReadable, { // [!code ++]
headers: { // [!code ++]
Connection: "keep-alive", // [!code ++]
"Content-Encoding": "none", // [!code ++]
"Cache-Control": "no-cache, no-transform", // [!code ++]
"Content-Type": "text/event-stream; charset=utf-8", // [!code ++]
}, // [!code ++]
}); // [!code ++]
}
The additions above do the following:
- Imports the AutoGPT autonomous AI agent from LangChain's experimental module.
- Creates a streaming response by instantiating the ReadableStream class.
- Returns the stream handler as a standard web response with server-sent event headers.
- Inside the start callback of the stream handler, creates a new instance of the ChatOpenAI class (as llm) with the following callback:
  - handleLLMEnd: Called when the chat model finishes processing a request and has generated its output. Because each task the autonomous AI agent works on is an individual request in itself, this callback is invoked multiple times during the run. Inside it, you enqueue the generated (intermediary) result as a message in the stream response.
- Uses the llm defined above with the set of tools defined earlier to create an AutoGPT instance.
- Finally, invokes AutoGPT to start planning the trip itinerary for the place and number of days obtained from the incoming GET request.
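Before wiring up the user interface, you can verify that the endpoint streams server-sent events by running the development server (with your API keys exported) and querying it directly. The exact output depends on the agent's run, but each message should arrive as a data: line containing the JSON structure described earlier:

npm run dev
# In another terminal, stream events from the endpoint (-N disables curl's output buffering)
curl -N "http://localhost:3000/api?place=Bangalore&days=3"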
Build the trip planner user interface
To start building the application's user interface, open the app/page.tsx file and replace the existing code with the following:
"use client";
// Importing useState hook from React
import { useState } from "react";
// Importing components from the UI library
import {
Select,
SelectContent,
SelectItem,
SelectTrigger,
SelectValue,
} from "@/components/ui/select";
import { Button } from "@/components/ui/button";
// Default function component
export default function () {
// State variables for selected days and place
const [days, setDays] = useState<string>();
const [place, setPlace] = useState<string>();
// Rendered JSX
return (
<>
{/* Title */}
<h1 className="text-xl font-semibold">Trip Planner</h1>
{/* Select inputs for choosing place and days */}
<div className="mt-3 flex flex-row gap-x-3 items-center">
{/* Select for choosing place */}
<Select onValueChange={setPlace}>
<SelectTrigger className="w-[180px]">
<SelectValue placeholder="Place" />
</SelectTrigger>
<SelectContent>
<SelectItem value="Delhi">Delhi</SelectItem>
<SelectItem value="Bangalore">Bangalore</SelectItem>
<SelectItem value="Hyderabad">Hyderabad</SelectItem>
</SelectContent>
</Select>
{/* Select for choosing days */}
<Select onValueChange={setDays}>
<SelectTrigger className="w-[180px]">
<SelectValue placeholder="Days" />
</SelectTrigger>
<SelectContent>
<SelectItem value="2">2</SelectItem>
<SelectItem value="3">3</SelectItem>
<SelectItem value="4">4</SelectItem>
<SelectItem value="5">5</SelectItem>
<SelectItem value="6">6</SelectItem>
</SelectContent>
</Select>
</div>
{/* Button to trigger planning */}
<Button className="mt-3 max-w-max">
Plan →
</Button>
</>
);
}
The code above does the following:
- Uses the use client directive to enable client-side interactivity on the homepage.
- Imports the useState hook from React.
- Imports the Select and Button components (and related utilities) created using shadcn/ui.
- Exports an unnamed React component.
- Defines days and place as state variables which are updated to reflect the user's choices in the respective drop-downs (created using the Select component).
Let's move on to streaming the response from the API endpoint that you created earlier and rendering a user interface from it. Update the homepage code in the app/page.tsx file with the following:
"use client";
// Importing useState hook from React
import { useState } from "react";
// Importing components from the UI library
import {
Select,
SelectContent,
SelectItem,
SelectTrigger,
SelectValue,
} from "@/components/ui/select";
import { Button } from "@/components/ui/button";
// Default function component
export default function () {
// State variables for selected days and place, and messages received from the server
const [days, setDays] = useState<string>();
const [place, setPlace] = useState<string>();
const [messages, setMessages] = useState<any[]>([]); // [!code ++]
// Function to connect to the server and receive messages
const connectToStream = (days: string, place: string) => { // [!code ++]
// Creating a new EventSource for server-sent events (SSE)
const eventSource = new EventSource(`/api?place=${place}&days=${days}`); // [!code ++]
// Event listener for receiving messages
eventSource.addEventListener("message", (event) => { // [!code ++]
// Parsing the JSON data received from the server
const tmp = JSON.parse(event.data); // [!code ++]
// Updating the messages state with the new message
setMessages((messages) => [...messages, tmp]); // [!code ++]
// If it outputs the trip, close the stream.
if (tmp.command.name === "write_file") eventSource.close(); // [!code ++]
}); // [!code ++]
}; // [!code ++]
// Function to trigger planning based on selected days and place
const plan = () => days && place && connectToStream(days, place); // [!code ++]
// Rendered JSX
return (
<>
{/* Title */}
<h1 className="text-xl font-semibold">Trip Planner</h1>
{/* Select inputs for choosing place and days */}
<div className="mt-3 flex flex-row gap-x-3 items-center">
{/* Select for choosing place */}
<Select onValueChange={setPlace}>
<SelectTrigger className="w-[180px]">
<SelectValue placeholder="Place" />
</SelectTrigger>
<SelectContent>
<SelectItem value="Delhi">Delhi</SelectItem>
<SelectItem value="Bangalore">Bangalore</SelectItem>
<SelectItem value="Hyderabad">Hyderabad</SelectItem>
</SelectContent>
</Select>
{/* Select for choosing days */}
<Select onValueChange={setDays}>
<SelectTrigger className="w-[180px]">
<SelectValue placeholder="Days" />
</SelectTrigger>
<SelectContent>
<SelectItem value="2">2</SelectItem>
<SelectItem value="3">3</SelectItem>
<SelectItem value="4">4</SelectItem>
<SelectItem value="5">5</SelectItem>
<SelectItem value="6">6</SelectItem>
</SelectContent>
</Select>
</div>
{/* Button to trigger planning */}
<Button className="mt-3 max-w-max" > // [!code --]
<Button // [!code ++]
className="mt-3 max-w-max" // [!code ++]
onClick={plan} // [!code ++]
> // [!code ++]
Plan →
</Button>
{/* Displaying messages received from the server */} // [!code ++]
{messages.map((i, _) => ( // [!code ++]
<div // [!code ++]
key={_} // [!code ++]
className="mt-3 border-t border-black flex flex-col items-start text-left" // [!code ++]
> // [!code ++]
{/* Displaying action */} // [!code ++]
<span className="border-b w-full mt-3 font-medium text-blue-600"> // [!code ++]
Action // [!code ++]
</span> // [!code ++]
<span className="mt-3">{i.thoughts.speak}</span> // [!code ++]
{/* Displaying reasoning */} // [!code ++]
<span className="border-b w-full mt-3 font-medium text-orange-600"> // [!code ++]
Why? // [!code ++]
</span> // [!code ++]
<span className="mt-3">{i.thoughts.reasoning}</span> // [!code ++]
{/* Displaying next steps */} // [!code ++]
<span className="border-b w-full mt-3 font-medium text-purple-600"> // [!code ++]
Next Steps // [!code ++]
</span> // [!code ++]
<div className="w-full h-[1px] mt-3" /> // [!code ++]
{/* Displaying each next step */} // [!code ++]
{i.thoughts.plan.map((j: string) => ( // [!code ++]
<div key={j}>- {j}</div> // [!code ++]
))} // [!code ++]
{/* Displaying trip details if command is to write a file */} // [!code ++]
{i.command.name === "write_file" && ( // [!code ++]
<> // [!code ++]
<span className="border-b w-full mt-3 font-medium text-green-600"> // [!code ++]
Your Trip // [!code ++]
</span> // [!code ++]
<span className="mt-3 whitespace-pre">{i.command.args.text}</span> // [!code ++]
</> // [!code ++]
)} // [!code ++]
</div> // [!code ++]
))} // [!code ++]
</>
);
}
The code additions above do the following:
- Defines messages as a state variable that is updated as new messages are received from the API endpoint.
- Creates a connectToStream function which initiates a request to the /api endpoint with the user-selected place and days as query parameters.
- Listens for new messages from the stream, parses them as JSON, and appends them to the list of messages.
- Closes the connection when the latest message obtained from the API has write_file as the command. This limits the endpoint to generating a single trip itinerary, since the autonomous AI agent could otherwise perform multiple rounds of generation and spike your OpenAI API costs.
- Attaches an onClick listener to the "Plan" button so that if days and place are selected, the connectToStream function is invoked.
- Loops over the reactive messages array to create an informative interface describing the reasoning, actions, and steps taken to generate the trip itinerary in real time.
Great! You have successfully created a trip planner that uses an autonomous AI agent (AutoGPT) to generate a trip itinerary. It communicates its thought process and reasoning throughout the process of planning the trip.
To test the application locally, double-check that you've exported your OpenAI and SerpApi keys and run the development server with the following:
export OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"
export SERPAPI_API_KEY="<YOUR_SERPAPI_API_KEY>"
npm run dev
If you visit http://localhost:3000 in your web browser, you will be able to test the application locally. Once you make your selections and hit "Plan", you should see the autonomous AI agent plan a trip over the next few moments. Press CTRL-C to stop the development server when you are finished.
In the upcoming section, you will proceed to deploy the application online on the Koyeb platform.
Deploy the Next.js application to Koyeb
Koyeb is a developer-friendly serverless platform for deploying apps globally. It requires no ops, servers, or infrastructure management, and it supports a variety of tech stacks such as Rust, Go, Python, PHP, Node.js, Ruby, and Docker.
With the app now complete, the final step is to deploy it online on Koyeb.
We will use git-driven deployment to deploy on Koyeb. To do this, we need to create a new GitHub repository from the GitHub web interface or by using the GitHub CLI with the following command:
gh repo create <YOUR_GITHUB_REPOSITORY> --private
Next.js automatically initialized a git repository when we created a new project. Add a new remote pointing to your GitHub repository by typing:
git remote add origin git@github.com:<YOUR_GITHUB_USERNAME>/<YOUR_GITHUB_REPOSITORY>.git
git branch -M main
Next, add all the files in your project directory to the git repository and push them to GitHub:
git add .
git commit -m "Initial commit"
git push -u origin main
To deploy the code on the GitHub repository, visit the Koyeb control panel, and while on the Overview tab, click the Create Web Service button to start the deployment process:
- Select GitHub as the deployment method.
- Choose the repository for your code from the repository drop-down menu.
- In the Environment variables section, click Add variable to include additional environment variables. Add the OPENAI_API_KEY and SERPAPI_API_KEY variables by selecting the Secret type and selecting Create new secret in the value field. Specify the secret name and its corresponding value and click Create.
- Choose a name for your App and Service, for example ai-trip-planner, and click Deploy.
During the deployment on Koyeb, the process identifies the build and start scripts outlined in the package.json file and uses them to build and launch the application. The deployment progress can be tracked through the presented logs. Upon completion of the deployment and the successful execution of the health checks, your application will be operational.
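For reference, these are the default scripts that create-next-app generates in package.json (your file may contain additional entries):

"scripts": {
  "dev": "next dev",
  "build": "next build",
  "start": "next start"
}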
If you would like to look at the code for the demo application, you can find it in the project repository.
Conclusion
In this tutorial, you used AutoGPT, an autonomous AI agent, through LangChain to create a trip planner. The application continuously communicates the steps it takes and the reasoning behind them in order to generate an itinerary.
Autonomous AI agents can reduce human workload by making independent decisions in real time, an ability well-suited to personalized use cases. Their capacity for self-reasoning and their ability to execute real-world actions make autonomous AI agents uniquely powerful and helpful.
Given that the application was deployed with the git deployment method, subsequent code pushes to the deployed branch will automatically initiate a new build of your application. Changes to your application will become live once the deployment is successful. In the event of a failed deployment, Koyeb retains the last operational production deployment, ensuring the uninterrupted operation of your application.