
🤖 Your First AI Endpoint

Now you’re going to create a simple API route that connects to OpenAI and sends back AI responses.

OpenAI has three different API endpoints you can use:

🔹 Chat Completion API
The classic format, and still useful for many use cases. Many other providers (Grok, Gemini, Claude) offer OpenAI-compatible Chat Completion endpoints, so you can use one SDK for all of them. But it lacks access to OpenAI’s newest tools and models.

const completion = await client.chat.completions.create({
  model: "gpt-4.1",
  messages: [
    {
      role: "user",
      content: "Write a one-sentence bedtime story about a unicorn.",
    },
  ],
});
console.log(completion.choices[0].message.content);

🔹 Agent SDK
Built specifically for developing AI agents. Great for advanced workflows, multi-tool environments, and memory-based agents — but not the best starting point.

import { Agent, run } from '@openai/agents';

const agent = new Agent({
  name: 'History Tutor',
  instructions: 'You provide assistance with historical queries. Explain important events and context clearly.',
});
const result = await run(agent, 'When did sharks first appear?');
console.log(result.finalOutput);

🔹 Responses API (NEW)
The future of OpenAI interaction. It simplifies how you talk to models, unlocks the latest tools, and offers a more flexible foundation.

const response = await client.responses.create({
  model: "gpt-4.1",
  input: "Write a one-sentence bedtime story about a unicorn.",
});
console.log(response.output_text);

For this course you’re going to start with the Responses API to learn all its features, then move to the Agent SDK to build AI agents.


Before you build the endpoint, let’s break down how the Responses API works:

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

What this does:

  • ✅ Initializes the OpenAI SDK — Sets up the client so you can interact with OpenAI’s API using simple function calls
  • 🔐 Uses your API key — Pulls your secret key from the .env file to authenticate your requests
  • 🌐 Connects to OpenAI’s servers — Handles all the HTTP communication and security behind the scenes
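As a reminder, the .env file that supplies this key lives in your project root. It might look like this (the key value is a placeholder; use your own secret key, and never commit this file):

```shell
# .env (keep this file out of version control; the key below is a placeholder)
OPENAI_API_KEY=sk-your-key-here
PORT=8000
```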

const response = await client.responses.create({
  model: "gpt-4.1",
  input: message,
});

Breaking it down:

  • client.responses.create() - Calls the Responses API, the new, simplified way to interact with OpenAI’s models
  • model: "gpt-4.1" - Specifies which model to use. You can swap in another model such as "gpt-4o" or "gpt-4o-mini" depending on your use case and access level
  • input: message - This is the prompt you’re sending to the model — usually the user’s message or question from your frontend

After making the API call, you get a response object. To access the model’s reply:

response.output_text;

Let’s build this step by step so you understand exactly what you’re doing and why.

First, import the OpenAI library at the top of your index.js file. Add this to your existing imports:

import express from "express";
import { config } from "dotenv";
import cors from "cors";
import OpenAI from "openai"; // Add this line
config();

Step 2: Create the OpenAI Client (Outside Routes)


Add this right after you create your Express app, but before your routes:

const app = express();
const port = process.env.PORT || 8000;
// Create OpenAI client once - reuse for all requests
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
app.use(cors());
app.use(express.json());

Why you create it here: Placing the client outside your route handlers ensures it’s created once when the server starts, not every time a user sends a request. This improves performance and avoids unnecessary overhead.

Now add the actual endpoint that handles chat requests:

// AI Chat endpoint
app.post("/api/chat", async (req, res) => {
  try {
    // Extract the message from the request body
    const { message } = req.body;
    // Validate that we received a message
    if (!message) {
      return res.status(400).json({ error: "Message is required" });
    }
    // Call the OpenAI Responses API using our pre-created client
    const response = await openai.responses.create({
      model: "gpt-4.1",
      input: message,
    });
    // Send back the AI response to the frontend
    res.json({
      response: response.output_text,
      success: true,
    });
  } catch (error) {
    console.error("OpenAI API Error:", error);
    res.status(500).json({
      error: "Failed to get AI response",
      success: false,
    });
  }
});

Route Definition:

  • app.post("/api/chat") - Creates a POST endpoint (you use POST because you’re sending data)
  • async (req, res) - Makes the function async so you can use await for the OpenAI call

Input Handling:

  • const { message } = req.body - Extracts the message from the JSON sent by the frontend
  • Input validation ensures you have something to send to OpenAI
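If you want the validation logic to be testable on its own, you could factor it into a small helper. This is a sketch, not part of the course code: validateMessage is a hypothetical name.

```javascript
// Hypothetical helper: returns an error string for invalid input, or null if OK.
// Rejects a missing body, a non-string message, and whitespace-only messages.
function validateMessage(body) {
  if (!body || typeof body.message !== "string" || body.message.trim() === "") {
    return "Message is required";
  }
  return null; // input is valid
}

// Usage inside the route handler:
// const error = validateMessage(req.body);
// if (error) return res.status(400).json({ error });
```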

Error Handling:

  • try/catch catches any errors (network issues, API key problems, etc.)
  • Returns proper HTTP status codes so your frontend knows what happened
  • Logs errors to help you debug issues
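If you later want more specific status codes than a blanket 500, the OpenAI SDK’s API errors expose the HTTP status they received (worth verifying against your SDK version). Here is a sketch; statusForError is a hypothetical helper, and the choice of which statuses to forward is an assumption:

```javascript
// Hypothetical helper: map an error caught in the route to the HTTP status
// we send back. Rate limits (429) are surfaced so the frontend can retry;
// everything else (auth, quota, network failures) stays a server-side 500.
function statusForError(error) {
  if (error && error.status === 429) return 429;
  return 500;
}

// Usage in the catch block:
// res.status(statusForError(error)).json({ error: "Failed to get AI response", success: false });
```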

Here’s your complete index.js file with the AI endpoint added:

import express from "express";
import { config } from "dotenv";
import cors from "cors";
import OpenAI from "openai"; // 🆕 NEW ADDITION: Import OpenAI SDK

config();

const app = express();
const port = process.env.PORT || 8000;

// 🆕 NEW ADDITION: Create OpenAI client once for better performance
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

app.use(cors());
app.use(express.json());

// Test route
app.get("/", (req, res) => {
  res.send("Backend is running successfully.");
});

// 🆕 NEW ADDITION: AI Chat endpoint that connects to OpenAI
app.post("/api/chat", async (req, res) => {
  try {
    const { message } = req.body;
    if (!message) {
      return res.status(400).json({ error: "Message is required" });
    }
    // Call the OpenAI Responses API using our pre-created client
    const response = await openai.responses.create({
      model: "gpt-4.1",
      input: message,
    });
    res.json({
      response: response.output_text,
      success: true,
    });
  } catch (error) {
    console.error("OpenAI API Error:", error);
    res.status(500).json({
      error: "Failed to get AI response",
      success: false,
    });
  }
});

app.listen(port, () => {
  console.log(`🚀 Server running on http://localhost:${port}`);
});

Test the endpoint from your terminal:

curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Write a one-sentence bedtime story about a unicorn."}'

If you have Postman or Insomnia:

  1. Method: POST
  2. URL: http://localhost:8000/api/chat
  3. Headers: Content-Type: application/json
  4. Body (JSON):
{
  "message": "Write a one-sentence bedtime story about a unicorn."
}

You should get a response like:

{
  "response": "Once upon a time, a gentle unicorn named Luna sprinkled stardust dreams across the sleepy meadow, helping all the woodland creatures drift into the most peaceful slumber.",
  "success": true
}

If you prefer the traditional Chat Completion format, here’s the alternative endpoint:

// Alternative using the Chat Completion API
app.post("/api/chat-completion", async (req, res) => {
  try {
    const { message } = req.body;
    if (!message) {
      return res.status(400).json({ error: "Message is required" });
    }
    // Reuse the shared client created at startup (no per-request client)
    const completion = await openai.chat.completions.create({
      model: "gpt-4.1",
      messages: [
        {
          role: "user",
          content: message,
        },
      ],
    });
    res.json({
      response: completion.choices[0].message.content,
      success: true,
    });
  } catch (error) {
    console.error("OpenAI API Error:", error);
    res.status(500).json({
      error: "Failed to get AI response",
      success: false,
    });
  }
});

Common issues and their fixes:

  • 401 Unauthorized - Check your OPENAI_API_KEY in .env
  • 400 Bad Request - Make sure you’re sending JSON with a message field
  • insufficient_quota - Add more credits to your OpenAI account
  • rate_limit_exceeded - Wait a moment and try again
  • Server crashes - Check the console for error details
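For rate_limit_exceeded in particular, “wait a moment and try again” can be automated with exponential backoff. Here is a sketch; backoffDelay is a hypothetical helper, and the base delay, cap, and retry count are illustrative choices:

```javascript
// Hypothetical helper: delay (in ms) before retry attempt n (0-based),
// doubling each time and capped at 8 seconds.
function backoffDelay(attempt, baseMs = 500, capMs = 8000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Usage sketch: retry the OpenAI call a few times on a 429
// for (let attempt = 0; attempt < 4; attempt++) {
//   try {
//     return await openai.responses.create({ model: "gpt-4.1", input: message });
//   } catch (error) {
//     if (error.status !== 429) throw error;
//     await new Promise((r) => setTimeout(r, backoffDelay(attempt)));
//   }
// }
```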

Next, you’ll build a React frontend to create a proper chat interface. Time to make it look good! 🎨