Get Chatty with Your Own AI Assistant: A Guide to Building with Node.js, Express, and OpenAI
Developing a conversational AI assistant using Node.js, Express JS, and OpenAI’s text completion API.
Hey there! I’m excited to share some great news with you: we’ve published an update on building with OpenAI’s new, lightning-fast and roughly 10x more cost-effective model, GPT-3.5 Turbo. Check it out here!
Building a conversational AI assistant is like having your own personal Jarvis from “Iron Man” but with a more limited range of abilities and less snarky comments. That being said, it’s still pretty cool! And building one is now easier than ever before, thanks to the cloud-based natural language processing (NLP) APIs like OpenAI’s text completion API.
In this article, we’ll explore how to build a simple conversational AI assistant using Node.js, Express JS, and OpenAI’s text completion API. But before we dive into the implementation, let’s quickly go over what these technologies are and why we’re using them.
Node.js is an open-source, cross-platform JavaScript runtime environment that allows us to run server-side applications. Think of it as a superpower for your computer to handle all the heavy lifting. Express JS, on the other hand, is a web framework for Node.js that makes building web applications as easy as eating a piece of cake. Well, maybe not that easy, but it’s still pretty darn convenient! And last but not least, OpenAI’s text completion API is the cherry on top of our cake. It’s a cloud-based API that provides an easy way to complete text inputs using advanced machine learning models.
With these technologies in mind, let’s take a look at the code to see how we can build our own personal Jarvis.
First, we create a new folder (the name doesn’t matter), then we initialize the Node.js project using
npm init
After following the prompts and creating the Node.js project, we install the dependencies (the ones listed above; you read it, didn’t you? :^)
npm install express openai body-parser dotenv
Create a .env file to store your environment variables.
Your project directory should now look like this:
node_modules/
.env
package.json
package-lock.json
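Inside the .env file, the only variable we need is the API key. The value below is a placeholder; substitute your own key:

```
OPENAI_API_KEY=your-api-key-here
```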
Now let’s dive into index.js. If there’s no index.js in your project, go ahead and create one.
We start by importing the required dependencies and setting them up:
const express = require('express');
// Importing the dotenv module to access environment variables
require('dotenv').config();
// Importing the body-parser module to parse incoming request bodies
const bp = require('body-parser');
// Creating a new Express app
const app = express();
// Using body-parser middleware to parse incoming request bodies as JSON
app.use(bp.json());
// Using body-parser middleware to parse incoming request bodies as URL encoded data
app.use(bp.urlencoded({ extended: true }));
Then we set up the OpenAI API client using our API key stored as an environment variable in our .env file.
You can get your own API key by making an account at openai.com and then visiting https://platform.openai.com/account/api-keys to manage your keys.
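As a small optional guard (not part of the original code), you can check for the key at startup and fail fast with a clear message instead of getting a cryptic error on the first API call. The hasApiKey helper below is a hypothetical addition:

```javascript
// Hypothetical helper: returns true only when an API key is configured
function hasApiKey(env) {
  return typeof env.OPENAI_API_KEY === "string" && env.OPENAI_API_KEY.length > 0;
}

if (!hasApiKey(process.env)) {
  console.error("Missing OPENAI_API_KEY - add it to your .env file.");
  // In a real server you might call process.exit(1) here
}
```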
// Importing and setting up the OpenAI API client
const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
Next, we define a conversation context prompt which provides some context for the conversation with the AI assistant. You can get creative here and design the personality of your assistant.
Just remember to edit the conversationContextPrompt string if you change the assistant’s personality.
The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.
Human: Hello, who are you?
AI: I am an AI created by OpenAI. How can I help you today?
Human:
const conversationContextPrompt = "The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How can I help you today?\nHuman: ";
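To make the mechanics concrete, here’s a small standalone sketch (no API call involved) showing how the context string and the user’s message combine into the full prompt the model receives:

```javascript
// The same context string used above
const conversationContextPrompt =
  "The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How can I help you today?\nHuman: ";

// Appending the user's message produces the full prompt sent to the model
function buildPrompt(userMessage) {
  return conversationContextPrompt + userMessage;
}

console.log(buildPrompt("What is Node.js?"));
```

The model then continues the text from where the prompt ends, which is why the completion naturally begins with the assistant’s reply.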
Next, we define an endpoint to handle incoming requests. When a request is received, the code extracts the user’s message from the request body, adds it to the context we just defined and then calls the OpenAI API to complete the message. The response data is then sent back to the client.
// Defining an endpoint to handle incoming requests
app.post('/converse', (req, res) => {
// Extracting the user's message from the request body
const message = req.body.message;
// Calling the OpenAI API to complete the message
openai.createCompletion({
model: "text-davinci-003",
// Adding the conversation context to the message being sent
prompt: conversationContextPrompt + message,
temperature: 0.9,
max_tokens: 150,
top_p: 1,
frequency_penalty: 0,
presence_penalty: 0.6,
stop: [" Human:", " AI:"],
})
.then((response) => {
// Sending the response data back to the client
res.send(response.data.choices);
})
.catch((error) => {
// Logging the error and returning a 500 if the API call fails
console.error(error.message);
res.status(500).send('Something went wrong');
});
});
Finally, we start the Express app and listen on port 3000 for incoming requests.
// Starting the Express app and listening on port 3000
app.listen(3000, () => {
console.log('Conversational AI assistant listening on port 3000!');
});
The final index.js should look like this:
const express = require("express");
// Importing the dotenv module to access environment variables
require("dotenv").config();
// Importing the body-parser module to parse incoming request bodies
const bp = require("body-parser");
// Creating a new Express app
const app = express();
// Using body-parser middleware to parse incoming request bodies as JSON
app.use(bp.json());
// Using body-parser middleware to parse incoming request bodies as URL encoded data
app.use(bp.urlencoded({ extended: true }));
// Importing and setting up the OpenAI API client
const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
// Defining a conversation context prompt
const conversationContextPrompt =
"The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How can I help you today?\nHuman: ";
// Defining an endpoint to handle incoming requests
app.post("/converse", (req, res) => {
// Extracting the user's message from the request body
const message = req.body.message;
// Calling the OpenAI API to complete the message
openai
.createCompletion({
model: "text-davinci-003",
// Adding the conversation context to the message being sent
prompt: conversationContextPrompt + message,
temperature: 0.9,
max_tokens: 150,
top_p: 1,
frequency_penalty: 0,
presence_penalty: 0.6,
stop: [" Human:", " AI:"],
})
.then((response) => {
// Sending the response data back to the client
res.send(response.data.choices);
})
.catch((error) => {
// Logging the error and returning a 500 if the API call fails
console.error(error.message);
res.status(500).send("Something went wrong");
});
});
// Starting the Express app and listening on port 3000
app.listen(3000, () => {
console.log("Conversational AI assistant listening on port 3000!");
});
Testing
For this tutorial, we will be using Postman to test the API endpoint locally.
Here we send a POST request to http://localhost:3000/converse with the JSON body:
{
"message": "The Not so Great Gatsby, who wrote it?"
}
We got the response:
[
{
"text": "\nAI: The Great Gatsby was written by F. Scott Fitzgerald and published in 1925.",
"index": 0,
"logprobs": null,
"finish_reason": "stop"
}
]
Note that the response body is an array! So when working with the output, you’ll need to index into it (e.g. response[0].text) to get just the text.
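Here’s a sketch of pulling the reply text out of that array shape, using a hard-coded copy of the response above rather than a live API call:

```javascript
// A hard-coded copy of the choices array returned above
const choices = [
  {
    text: "\nAI: The Great Gatsby was written by F. Scott Fitzgerald and published in 1925.",
    index: 0,
    logprobs: null,
    finish_reason: "stop",
  },
];

// Take the first choice and strip the leading "AI:" label and whitespace
const reply = choices[0].text.replace(/^\s*AI:\s*/, "").trim();
console.log(reply); // The Great Gatsby was written by F. Scott Fitzgerald and published in 1925.
```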
The codebase provided in the above example serves as a solid foundation for readers who want to develop their own conversational AI assistant. With this codebase, readers can build upon the existing functionality to create a more sophisticated AI assistant that can remember context, understand user intent, and respond appropriately.
For example, the chat context can be concatenated and stored in a database so that the AI assistant can remember the conversation history and use it to provide a more natural and conversational experience. Additionally, the code can be extended to connect with a front-end using a web socket, which will enable the AI assistant to interact with users in real-time.
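As a hypothetical sketch of the first idea, the context can simply be grown as the conversation proceeds. A real app would key the history by user or session and persist it in a database, but the core mechanic is just appending each exchange before the next request:

```javascript
// Hypothetical in-memory history; a real app would persist this per user/session
let history =
  "The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n";

// Append one human/AI exchange to the running context
function recordTurn(humanMessage, aiReply) {
  history += `\nHuman: ${humanMessage}\nAI: ${aiReply}`;
  return history;
}

recordTurn("Hello, who are you?", "I am an AI created by OpenAI.");
const prompt =
  recordTurn("What can you do?", "I can answer questions.") + "\nHuman: ";
console.log(prompt);
```

Each new completion request would then use this growing prompt, so the model can refer back to earlier turns.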
Readers can also experiment with different OpenAI models to see which one works best for their use case, or fine-tune OpenAI’s GPT-3 base models on their own data.
In summary, the codebase serves as a starting point for readers to create their own AI assistant and to extend its functionality to meet their specific requirements. The possibilities are endless, and with some creativity and technical know-how, readers can build a truly remarkable AI assistant that is tailored to their specific needs.
Conclusion
Building a conversational AI assistant with Node.js, Express, and OpenAI’s text completion API is a fun and approachable project that can result in a powerful conversational AI. The flexibility and variety of models and APIs provided by OpenAI make it simple to create a conversational AI that meets specific needs. Whether you’re a seasoned developer or a beginner, building a conversational AI with Node.js and OpenAI is a valuable use of time. Give it a try and see where your imagination leads!