Harness AI Potential: Create a Robust AI Chatbot with GPT-3.5 Turbo Model Utilizing NodeJS
Introduction to the GPT-3.5-Turbo Model and its Capabilities
OpenAI’s latest and most advanced language model is GPT-3.5-Turbo, which powers the widely used ChatGPT. This model is not only easier to use than the previous Davinci model, but it is also ten times cheaper.
With GPT-3.5-Turbo, anyone can build a chatbot that is just as powerful as ChatGPT. The chat model takes a series of messages as input and returns a response message as output, so holding a multi-turn dialog is as simple as maintaining an array of messages and responses.
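For example, a short conversation in this format is just an array of role/content objects; the assistant reply shown below is purely illustrative:
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello there! I'm Joshua" },
  { role: "assistant", content: "Hi Joshua! How can I help you today?" }
];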
Link to the full code and installation instructions on GitHub
Setting Up the Environment
Before starting, we need to install the necessary modules:
- express: A renowned web application framework for Node.js.
- openai: A module offering access to the OpenAI API.
- dotenv: A module that loads environment variables from a .env file.
- body-parser: A module for parsing incoming request bodies.
To install these modules, open your terminal, navigate to the project directory, and execute the following command:
npm install express openai dotenv body-parser
Remember to obtain your API key from OpenAI’s website.
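Store the key in a .env file at the root of your project under the OPENAI_API_KEY variable name, which the code below expects; the value here is just a placeholder:
# .env
OPENAI_API_KEY=sk-your-api-key-here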
Developing the Chatbot
We will now build the chatbot. If you don’t already have an index.js file in your project folder, go ahead and create one.
Import the express and body-parser modules. (The openai module is imported and configured in a later step.)
const express = require("express");
// Importing the body-parser module to parse incoming request bodies
const bp = require("body-parser");
Load the environment variable OPENAI_API_KEY from the .env file using the dotenv module.
// Importing the dotenv module to access environment variables
require("dotenv").config();
Initialize a chatArray variable as a storage container for conversation history. Start the array with a single object representing a system message to customize the AI assistant’s behavior.
// chatArray acts as storage for the conversation history
const chatArray = [{ role: "system", content: "You are a helpful assistant." }];
Set up the express app and configure the body-parser middleware to parse incoming request bodies as JSON and URL-encoded data.
// Creating a new Express app
const app = express();
// Using body-parser middleware to parse incoming request bodies as JSON
app.use(bp.json());
// Using body-parser middleware to parse incoming request bodies as URL encoded data
app.use(bp.urlencoded({ extended: true }));
Configure the OpenAI API client. The Configuration object supplies the API key from the environment variables, while the OpenAIApi object is used to make API requests. (The Configuration and OpenAIApi classes come from version 3 of the openai package; if npm installed a newer major version, pin it with npm install openai@3.)
// Importing and setting up the OpenAI API client
const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
Define an endpoint at /converse to manage incoming POST requests.
Extract the user’s message from the request body and pass it to the OpenAI API’s createChatCompletion() method along with the conversation history stored in chatArray.
Append the user’s message and the AI’s response to chatArray, then return the entire conversation history as a JSON response. If the API call fails, log the error and respond with a 500 status instead.
app.post("/converse", (req, res) => {
// Extracting the user's message from the request body
const message = req.body.message;
// Calling the OpenAI API to complete the message
openai.createChatCompletion({
model: "gpt-3.5-turbo",
messages: chatArray.concat([{ role: "user", content: message }])
}).then((response) => {
// Save the user's message and the AI's response to the chatArray
chatArray.push({ role: "user", content: message });
chatArray.push({ role: "assistant", content: response.data.choices[0].message.content });
// Return the chatArray as a JSON response
res.json(chatArray);
}).catch((error) => {
// Log the error and return a 500 response if the API call fails
console.error(error);
res.status(500).json({ error: "Failed to get a response from the OpenAI API." });
});
});
Start the Express app.
// Starting the Express app and listening on port 3000
app.listen(3000, () => {
console.log("Conversational AI assistant listening on port 3000!");
});
Your index.js file should now look like this:
const express = require("express");
// Importing the body-parser module to parse incoming request bodies
const bp = require("body-parser");
// Importing the dotenv module to access environment variables
require("dotenv").config();
// chatArray acts as storage for the conversation history
const chatArray = [{ role: "system", content: "You are a helpful assistant." }];
// Creating a new Express app
const app = express();
// Using body-parser middleware to parse incoming request bodies as JSON
app.use(bp.json());
// Using body-parser middleware to parse incoming request bodies as URL encoded data
app.use(bp.urlencoded({ extended: true }));
// Importing and setting up the OpenAI API client
const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
// Defining an endpoint to handle incoming requests
app.post("/converse", (req, res) => {
// Extracting the user's message from the request body
const message = req.body.message;
// Calling the OpenAI API to complete the message
openai.createChatCompletion({
model: "gpt-3.5-turbo",
messages: chatArray.concat([{ role: "user", content: message }])
}).then((response) => {
// Save the user's message and the AI's response to the chatArray
chatArray.push({ role: "user", content: message });
chatArray.push({ role: "assistant", content: response.data.choices[0].message.content });
// Return the chatArray as a JSON response
res.json(chatArray);
}).catch((error) => {
// Log the error and return a 500 response if the API call fails
console.error(error);
res.status(500).json({ error: "Failed to get a response from the OpenAI API." });
});
});
// Starting the Express app and listening on port 3000
app.listen(3000, () => {
console.log("Conversational AI assistant listening on port 3000!");
});
Testing the Endpoint
To test the /converse endpoint locally, you can use the Postman desktop app.
While in the project directory, run the following command in your terminal:
node index.js
You should see the message "Conversational AI assistant listening on port 3000!" in your terminal. Now you are ready to proceed.
Open Postman and send a POST request to http://localhost:3000/converse with the JSON body: { "message": "Hello there! I'm Joshua" }
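If you prefer to test from the command line instead of Postman, a small Node.js script like the sketch below can send the same request. It assumes Node 18 or later (where fetch is built in), and the file name test.js is just an example:
// test.js - sends a message to the local /converse endpoint and prints the conversation
fetch("http://localhost:3000/converse", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Hello there! I'm Joshua" })
})
  .then((res) => res.json())
  .then((conversation) => console.log(conversation))
  .catch((err) => console.error(err));
Run it in a second terminal with node test.js while index.js is still running.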
Example Response:
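The assistant’s exact wording will vary from run to run, but the endpoint returns the full conversation history as a JSON array, along these lines (the assistant’s reply is illustrative):
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "Hello there! I'm Joshua" },
  { "role": "assistant", "content": "Hello Joshua! How can I assist you today?" }
]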
Building a chatbot with the GPT-3.5-Turbo model using NodeJS and Express is an excellent starting point. With this foundation, you can enhance your chatbot, creating a powerful AI assistant capable of handling a broad array of tasks and interactions.
Apply this knowledge to various applications and continue advancing on your AI journey. Good luck!