🤖 Your First AI Endpoint
Now you’re going to create a simple API route that connects to OpenAI and sends back AI responses.
OpenAI gives you three main ways to interact with its models:
🔹 Chat Completions API
The classic format, and still a solid choice for many use cases. Many other providers, including Grok, Gemini, and Claude, offer OpenAI-compatible Chat Completions endpoints, so one SDK can talk to all of them. However, it doesn't give you access to some of OpenAI's newest built-in tools and capabilities.
```js
const completion = await client.chat.completions.create({
  model: "gpt-4.1",
  messages: [
    {
      role: "user",
      content: "Write a one-sentence bedtime story about a unicorn.",
    },
  ],
});

console.log(completion.choices[0].message.content);
```
🔹 Agents SDK
Built specifically for developing AI agents. Great for advanced workflows, multi-tool environments, and memory-based agents — but not the best starting point.
```js
import { Agent, run } from '@openai/agents';

const agent = new Agent({
  name: 'History Tutor',
  instructions:
    'You provide assistance with historical queries. Explain important events and context clearly.',
});

const result = await run(agent, 'When did sharks first appear?');
console.log(result.finalOutput);
```
🔹 Responses API (NEW)
The future of OpenAI interaction. It simplifies how you talk to models, unlocks the latest built-in tools, and gives you a more flexible foundation to build on.
```js
const response = await client.responses.create({
  model: "gpt-4.1",
  input: "Write a one-sentence bedtime story about a unicorn.",
});

console.log(response.output_text);
```
✅ For this course you're going to start with the Responses API to learn all its features, then move to the Agents SDK to build AI agents.
🔍 Understanding the Responses API
Before you build the endpoint, let's break down how the Responses API works:
Creating the OpenAI Client
```js
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```
What this does:
- ✅ Initializes the OpenAI SDK — Sets up the client so you can interact with OpenAI’s API using simple function calls
- 🔐 Uses your API key — Pulls your secret key from the .env file to authenticate your requests
- 🌐 Connects to OpenAI’s servers — Handles all the HTTP communication and security behind the scenes
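One ordering detail matters here: config() from dotenv has to run before the client is created, otherwise process.env.OPENAI_API_KEY is still undefined. A minimal sketch of the correct order:

```js
import OpenAI from "openai";
import { config } from "dotenv";

// Load .env first so the key is available below
config();

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // would be undefined if config() had not run yet
});
```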
Making the API Call
```js
const response = await client.responses.create({
  model: "gpt-4.1",
  input: message,
});
```
Breaking it down:
client.responses.create()
- Calls the Responses API, the new, simplified way to interact with OpenAI's models

model: "gpt-4.1"
- Specifies which model to use. You can swap in another model you have access to, such as "gpt-4o" or "gpt-4o-mini", depending on your use case (see the sketch below)

input: message
- The prompt you're sending to the model, usually the user's message or question from your frontend
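For example, switching to a smaller model is a one-line change. A sketch, assuming your account has access to gpt-4o-mini:

```js
// Same call as above, just a different model
const response = await client.responses.create({
  model: "gpt-4o-mini", // assumed to be available on your account
  input: message,
});
```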
Getting the Response
After making the API call, you get a response object. To access the model's reply:

```js
response.output_text;
```
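Putting those three pieces together, you can sanity-check your key and the Responses API with a tiny standalone script before touching Express. This is just a sketch: the file name is made up, and it assumes your project uses ES modules (as the import syntax in this course implies) and Node 18 or newer.

```js
// check-openai.js (hypothetical file name) - quick end-to-end test
import OpenAI from "openai";
import { config } from "dotenv";

config(); // loads OPENAI_API_KEY from .env

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await client.responses.create({
  model: "gpt-4.1",
  input: "Write a one-sentence bedtime story about a unicorn.",
});

console.log(response.output_text);
```

If running it prints a short story, your key and setup are working.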
🛠️ Create Your AI Endpoint
Let's build this step by step so you understand exactly what you're doing and why.
Step 1: Import the OpenAI SDK
First, import the OpenAI library at the top of your index.js file. Add this to your existing imports:
```js
import express from "express";
import { config } from "dotenv";
import cors from "cors";
import OpenAI from "openai"; // Add this line

config();
```
Step 2: Create the OpenAI Client (Outside Routes)
Add this right after you create your Express app, but before your routes:
```js
const app = express();
const port = process.env.PORT || 8000;

// Create OpenAI client once - reuse for all requests
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

app.use(cors());
app.use(express.json());
```
Why you create it here: Placing the client outside your route handlers ensures it's created once when the server starts, not every time a user sends a request. This improves performance and avoids unnecessary overhead.
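Because the client is created at startup, that's also a convenient place to fail fast if the key never loaded, instead of finding out on the first request. An optional sketch, placed right before the new OpenAI(...) line:

```js
// Optional: exit early with a clear message if the key is missing
if (!process.env.OPENAI_API_KEY) {
  console.error("Missing OPENAI_API_KEY - check your .env file");
  process.exit(1); // stop the server before it starts handling requests
}
```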
Step 3: Create the Chat Endpoint
Now add the actual endpoint that handles chat requests:
```js
// AI Chat endpoint
app.post("/api/chat", async (req, res) => {
  try {
    // Extract the message from the request body
    const { message } = req.body;

    // Validate that we received a message
    if (!message) {
      return res.status(400).json({ error: "Message is required" });
    }

    // Call OpenAI Responses API using our pre-created client
    const response = await openai.responses.create({
      model: "gpt-4.1",
      input: message,
    });

    // Send back the AI response to the frontend
    res.json({
      response: response.output_text,
      success: true,
    });
  } catch (error) {
    console.error("OpenAI API Error:", error);
    res.status(500).json({
      error: "Failed to get AI response",
      success: false,
    });
  }
});
```
Breaking Down Each Part:
Route Definition:
app.post("/api/chat")
- Creates a POST endpoint (you use POST because you’re sending data)async (req, res)
- Makes the function async so you can useawait
for the OpenAI call
Input Handling:
const { message } = req.body
- Extracts the message from the JSON sent by the frontend
- Input validation ensures you have something to send to OpenAI (a stricter version is sketched below)
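If you want to guard against more than a missing field, a slightly stricter check could look like this. A sketch only: the 2,000-character cap is an arbitrary example, not an OpenAI limit.

```js
const { message } = req.body;

// Require a non-empty string
if (typeof message !== "string" || message.trim().length === 0) {
  return res.status(400).json({ error: "Message must be a non-empty string" });
}

// Cap the length so users can't send huge prompts (limit chosen arbitrarily)
if (message.length > 2000) {
  return res.status(400).json({ error: "Message is too long" });
}
```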
Error Handling:
try/catch
- Catches any errors (network issues, API key problems, etc.)
- Returns proper HTTP status codes so your frontend knows what happened
- Logs errors to help you debug issues (the sketch below shows how to return more specific status codes)
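If you want the frontend to tell these cases apart, you can branch on the status the OpenAI SDK attaches to API errors. A sketch of a more specific catch block for the same /api/chat handler, assuming the thrown error exposes a numeric status property (the official Node SDK's API errors generally do):

```js
try {
  const response = await openai.responses.create({
    model: "gpt-4.1",
    input: message,
  });
  res.json({ response: response.output_text, success: true });
} catch (error) {
  console.error("OpenAI API Error:", error);

  // error.status carries the HTTP status from OpenAI (e.g. 401, 429) when present
  if (error.status === 401) {
    return res.status(500).json({ error: "Server has an invalid OpenAI API key", success: false });
  }
  if (error.status === 429) {
    return res.status(429).json({ error: "Rate limited, try again shortly", success: false });
  }

  res.status(500).json({ error: "Failed to get AI response", success: false });
}
```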
📝 Update Your Complete index.js
Here's your complete index.js file with the AI endpoint added:
```js
import express from "express";
import { config } from "dotenv";
import cors from "cors";
import OpenAI from "openai"; // 🆕 NEW ADDITION: Import OpenAI SDK

config();

const app = express();
const port = process.env.PORT || 8000;

// 🆕 NEW ADDITION: Create OpenAI client once for better performance
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

app.use(cors());
app.use(express.json());

// Test route
app.get("/", (req, res) => {
  res.send("Backend is running successfully.");
});

// 🆕 NEW ADDITION: AI Chat endpoint that connects to OpenAI
app.post("/api/chat", async (req, res) => {
  try {
    const { message } = req.body;

    if (!message) {
      return res.status(400).json({ error: "Message is required" });
    }

    // Call OpenAI Responses API using our pre-created client
    const response = await openai.responses.create({
      model: "gpt-4.1",
      input: message,
    });

    res.json({
      response: response.output_text,
      success: true,
    });
  } catch (error) {
    console.error("OpenAI API Error:", error);
    res.status(500).json({
      error: "Failed to get AI response",
      success: false,
    });
  }
});

app.listen(port, () => {
  console.log(`🚀 Server running on http://localhost:${port}`);
});
```
🧪 Test Your AI Endpoint
Method 1: Using curl (Terminal)

```bash
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Write a one-sentence bedtime story about a unicorn."}'
```
Method 2: Using a REST Client
If you have Postman or Insomnia:
- Method: POST
- URL: http://localhost:8000/api/chat
- Headers: Content-Type: application/json
- Body (JSON):

```json
{
  "message": "Write a one-sentence bedtime story about a unicorn."
}
```
You should get a response like:
{ "response": "Once upon a time, a gentle unicorn named Luna sprinkled stardust dreams across the sleepy meadow, helping all the woodland creatures drift into the most peaceful slumber.", "success": true}
🔧 Alternative: Chat Completions API
If you prefer the traditional Chat Completions format, here's the alternative endpoint:
```js
// Alternative using the Chat Completions API
app.post("/api/chat-completion", async (req, res) => {
  try {
    const { message } = req.body;

    if (!message) {
      return res.status(400).json({ error: "Message is required" });
    }

    // Reuse the openai client created at startup (see Step 2)
    const completion = await openai.chat.completions.create({
      model: "gpt-4.1",
      messages: [
        {
          role: "user",
          content: message,
        },
      ],
    });

    res.json({
      response: completion.choices[0].message.content,
      success: true,
    });
  } catch (error) {
    console.error("OpenAI API Error:", error);
    res.status(500).json({
      error: "Failed to get AI response",
      success: false,
    });
  }
});
```
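One advantage of the messages array is that you can add a system message to steer the model's tone. A small sketch of the same call with a system entry added (the system prompt wording is just an example):

```js
const completion = await openai.chat.completions.create({
  model: "gpt-4.1",
  messages: [
    // System message: sets the assistant's overall behavior
    {
      role: "system",
      content: "You are a friendly assistant that answers concisely.",
    },
    // User message: the actual question from the frontend
    {
      role: "user",
      content: message,
    },
  ],
});

console.log(completion.choices[0].message.content);
```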
🔧 Troubleshooting
| Issue | Fix |
| --- | --- |
| 401 Unauthorized | Check your OPENAI_API_KEY in .env |
| 400 Bad Request | Make sure you're sending JSON with a message field |
| insufficient_quota | Add more credits to your OpenAI account |
| rate_limit_exceeded | Wait a moment and try again |
| Server crashes | Check the console for error details |
✅ What’s Next
Next, you'll build a React frontend to create a proper chat interface. Time to make it look good! 🎨