How to Build Your Own Custom ChatGPT Using Python & OpenAI

Urvashi
5 min read · May 29, 2023

If you want to build a chat bot like ChatGPT or Bing Chat, then you’re in the right place!

Thanks to OpenAI, you now have easy access to the powerful APIs behind these chat bots, which let you integrate AI capabilities into your own applications.

In this post, you’ll learn how to build your own AI-powered chat bot in Python using the openai package.

Set Up Your OpenAI API Key

To access the OpenAI API, you need to obtain an API key from the OpenAI platform:

1. Visit the OpenAI API platform at https://platform.openai.com/.

2. You may be prompted to create an account or log in if you already have one.

3. To use the OpenAI API, you need to set up a paid account at https://platform.openai.com/account/billing/overview. New accounts come with a few free credits.

4. Once you’re done, navigate to https://platform.openai.com/account/api-keys.

5. Click on the “Create new secret key” button to generate your unique API key.

6. Copy the API key to use later and make sure to keep it secure. Treat it like a password, as it provides access to your OpenAI API resources.

Remember, the API key is sensitive information and should not be shared publicly or with unauthorised individuals!

Let’s build!

Installing the openai package

Before you start, make sure you have the OpenAI Python client installed. You can install it with pip:

pip install openai
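
Note: the code in this post uses the openai.ChatCompletion interface from the pre-1.0 versions of the openai package. If you have a newer major version installed and the examples below don’t work, you can pin an older release:

pip install "openai<1.0"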

Setting the API Key as an Environment Variable

Obtain the API key and store it as an environment variable so that you can use it in the program later:

export OPENAI_API_KEY='your-api-key-here'

Make sure to replace “your-api-key-here” with your actual OpenAI API key in the above command.
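
The export command above is for macOS/Linux shells. If you’re on Windows, the equivalent commands in Command Prompt and PowerShell, respectively, are:

set OPENAI_API_KEY=your-api-key-here
$env:OPENAI_API_KEY = 'your-api-key-here'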

Onto the Code

Create a new file called chat.py and import the following packages in it:

import openai
import os

The openai package will be used to access the OpenAI API whereas the os package will be used to load environment variables.

Now, load the API key and set the value of openai.api_key:

openai.api_key = os.getenv("OPENAI_API_KEY")
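
If the environment variable isn’t set, os.getenv simply returns None and the error only surfaces once you make a request, so you may want to fail fast with a clearer message. Here’s a minimal, optional sketch:

import os

import openai

# os.getenv returns None if the variable isn't set
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("Please set the OPENAI_API_KEY environment variable.")

openai.api_key = api_key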

Let’s create a chatbot function that will be invoked when you run your script:

import openai
import os

# Load your API key from an environment variable or secret management service
openai.api_key = os.getenv("OPENAI_API_KEY")

def chatbot():
    # Keep repeating the following
    while True:
        # Prompt user for input
        message = input("User: ")

        # Exit program if user inputs "quit"
        if message.lower() == "quit":
            break

if __name__ == "__main__":
    print("Start chatting with the bot (type 'quit' to stop)!")
    chatbot()

Right now, the script creates an infinite while loop that accepts a user message each time and quits when the user enters the message “quit”.

But at this point, your chatbot is not really responding to the user messages.

To do that, we’ll use OpenAI’s Chat Completions endpoint, which uses language models like gpt-3.5-turbo and gpt-4 to deliver intelligent responses to the user’s messages.

Since we want our chat bot to remember the user’s previous messages, we’ll maintain the conversation history in a list and send the entire list to the API with each new user message, so the model always has the full context.

def chatbot():
    # Create a list to store all the messages for context
    messages = []

    # Keep repeating the following
    while True:
        # Prompt user for input
        message = input("User: ")

        # Exit program if user inputs "quit"
        if message.lower() == "quit":
            break

        # Add each new message to the list
        messages.append({"role": "user", "content": message})

        # Request gpt-3.5-turbo for chat completion
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages
        )

        # Print the response and add it to the messages list
        chat_message = response['choices'][0]['message']['content']
        print(f"Bot: {chat_message}")
        messages.append({"role": "assistant", "content": chat_message})

Notice how each message is represented as a dictionary with role and content keys.

role is the role of the author of the message and can be one of the following values: “system”, “user”, or “assistant”; content is the actual content of the message.

A “user” message is a message input by the user, and an “assistant” message is a message returned by the AI model.

But what is a “system” message then?

A conversation may begin with a “system” message to gently instruct the AI assistant. This message sets the stage and instructs the assistant on how to respond.

For example, in the case of ChatGPT, a system message may look like this:

You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible. Knowledge cutoff: {knowledge_cutoff} Current date: {current_date}

You can set the “system” message while creating the messages list like so:

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
]

You can even add multiple system messages, but note that the gpt-3.5-turbo-0301 model doesn’t pay strong attention to the system message. So if there are important instructions or guidance you want to give, it’s often better to include them in a “user” message. That way, the assistant can fully understand and respond to your specific requests.
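
For example, one simple way to do that (a sketch, using a hypothetical instruction string) is to send your important guidance as the first “user” message rather than relying on the system message alone:

# Hypothetical instructions we want the model to follow closely
instructions = "Answer in at most two sentences and always include an example."

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    # Important guidance placed in a "user" message, which
    # gpt-3.5-turbo-0301 tends to follow more closely
    {"role": "user", "content": instructions},
]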

Putting it all together!

import openai
import os

# Load your API key from an environment variable or secret management service
openai.api_key = os.getenv("OPENAI_API_KEY")

def chatbot():
    # Create a list to store all the messages for context
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
    ]

    # Keep repeating the following
    while True:
        # Prompt user for input
        message = input("User: ")

        # Exit program if user inputs "quit"
        if message.lower() == "quit":
            break

        # Add each new message to the list
        messages.append({"role": "user", "content": message})

        # Request gpt-3.5-turbo for chat completion
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages
        )

        # Print the response and add it to the messages list
        chat_message = response['choices'][0]['message']['content']
        print(f"Bot: {chat_message}")
        messages.append({"role": "assistant", "content": chat_message})

if __name__ == "__main__":
    print("Start chatting with the bot (type 'quit' to stop)!")
    chatbot()
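
One optional refinement: a network hiccup or rate limit will currently crash the loop mid-conversation. Here’s a minimal sketch of how you could guard the API call, assuming the pre-1.0 openai package where exceptions live in openai.error:

import openai

def safe_chat_completion(messages):
    # Wrap the request so a transient API failure doesn't end the chat
    try:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages
        )
        return response['choices'][0]['message']['content']
    except openai.error.OpenAIError as e:
        # openai.error.OpenAIError is the base class for API errors in openai<1.0
        return f"Sorry, something went wrong ({e}). Please try again."

You could then call safe_chat_completion(messages) inside chatbot() in place of the openai.ChatCompletion.create block.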

Now you’re ready to start chatting with your AI chat bot!

Run the script using:

export OPENAI_API_KEY='your-api-key-here'
python3 chat.py
[Screenshot: chatting with the AI chatbot]

Extra: Playing around with system messages

Sarcastic Assistant

Use the following system message and see how the responses of the bot change!

messages = [
    {"role": "system", "content": "You are a sarcastic assistant."},
]
[Screenshot: the sarcastic AI chatbot in action]

French Assistant

How about a French assistant?

messages = [
    {"role": "system", "content": "You are a French assistant. You only speak and respond in French."},
]
[Screenshot: the French AI chatbot in action]

I hope you had fun building your own custom chat bot!

You can learn more in the official OpenAI API documentation.
