🤖 Your First AI Endpoint
Time for the exciting part - making your server intelligent! 🧠
You’re about to create an API endpoint that sends messages to OpenAI and gets smart responses back. This is the foundation of every AI application.
🔍 OpenAI API Options (Quick Overview)
OpenAI gives you three ways to interact with their models:
- 💬 Chat Completions API - The classic way (still great for many apps)
- 🤖 Agents SDK - For advanced AI agents (we’ll use this later)
- ⚡ Responses API - The newest, simplest approach (we’re starting here)
Why the Responses API? It’s cleaner, more flexible, and gives you access to the latest features. Perfect for learning.
```js
// This is all you need for AI responses!
const response = await client.responses.create({
  model: "gpt-4o-mini",
  input: "Your message here",
});
```
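For comparison, the same prompt through the classic Chat Completions API looks roughly like this (a sketch reusing the same `client` variable; we won’t use this style in the rest of the guide):

```js
// Rough Chat Completions equivalent, shown only for comparison
const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Your message here" }],
});
console.log(completion.choices[0].message.content);
```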
🛠️ Step 1: Add OpenAI to Your Server
Open your `index.js` file and add the OpenAI import:
```js
import express from "express";
import { config } from "dotenv";
import cors from "cors";
import OpenAI from "openai"; // 👈 Add this line

config();
```
Why at the top? ES module imports can only appear at the top level of your file, and grouping them at the beginning keeps things readable.
🔧 Step 2: Create the OpenAI Client
Add this after your Express app setup, but before your routes:
```js
const app = express();
const PORT = process.env.PORT || 8000;

// Create OpenAI client (do this once!)
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

app.use(cors());
app.use(express.json());
```
Important: Create the client once when your server starts, not on every request. This is much more efficient.
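To make that concrete, here’s a hypothetical sketch of the per-request anti-pattern to avoid (the `perRequestClient` name is made up for illustration):

```js
// ❌ Anti-pattern: constructing a new client inside every request handler
app.post("/api/chat", async (req, res) => {
  const perRequestClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY }); // wasteful
  // ...handle the request...
});

// ✅ What we do instead: reuse the single module-level `openai` client created above
```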
🤖 Step 3: Create Your AI Chat Endpoint
Now for the main event - add this endpoint:
```js
// AI Chat endpoint
app.post("/api/chat", async (req, res) => {
  try {
    // Get the message from the request
    const { message } = req.body;

    // Make sure we have a message
    if (!message) {
      return res.status(400).json({ error: "Message is required" });
    }

    // Call OpenAI and get a response
    const response = await openai.responses.create({
      model: "gpt-4o-mini", // Start with the cheaper model
      input: message,
    });

    // Send the AI's response back
    res.json({
      response: response.output_text,
      model: "gpt-4o-mini",
      success: true,
    });
  } catch (error) {
    console.error("OpenAI Error:", error);
    res.status(500).json({
      error: "Failed to get AI response",
      success: false,
    });
  }
});
```
What each part does:
- `POST /api/chat` - Your frontend will send messages here (see the fetch sketch after this list)
- `{ message }` - Extract the user’s message from the request
- `openai.responses.create()` - Send the message to OpenAI, get the AI response
- `response.output_text` - The actual AI response text
- `try/catch` - Handle errors gracefully (API issues, network problems, etc.)
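To see how those pieces line up from the client side, here’s a minimal sketch of a fetch call to the endpoint; the `askAI` helper name is just for illustration, and it assumes your server is running on port 8000:

```js
// Hypothetical client-side helper for calling the endpoint above
async function askAI(message) {
  const res = await fetch("http://localhost:8000/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }), // arrives as req.body.message on the server
  });
  const data = await res.json();
  return data.response; // the AI's reply text (response.output_text on the server)
}

// Example usage:
askAI("Hello!").then(console.log);
```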
📝 Your Complete index.js File
Here’s how your complete `index.js` should look:
```js
import express from "express";
import { config } from "dotenv";
import cors from "cors";
import OpenAI from "openai";

config();

const app = express();
const PORT = process.env.PORT || 8000;

// Create OpenAI client once
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Middleware
app.use(cors());
app.use(express.json());

// Test route
app.get("/", (req, res) => {
  res.json({ message: "🤖 OpenAI Backend is running!", status: "ready" });
});

// AI Chat endpoint
app.post("/api/chat", async (req, res) => {
  try {
    const { message } = req.body;

    if (!message) {
      return res.status(400).json({ error: "Message is required" });
    }

    const response = await openai.responses.create({
      model: "gpt-4o-mini",
      input: message,
    });

    res.json({
      response: response.output_text,
      model: "gpt-4o-mini",
      success: true,
    });
  } catch (error) {
    console.error("OpenAI Error:", error);
    res.status(500).json({
      error: "Failed to get AI response",
      success: false,
    });
  }
});

// Start server
app.listen(PORT, () => {
  console.log(`🚀 Server running on http://localhost:${PORT}`);
  console.log(`🤖 AI endpoint ready at /api/chat`);
});
```
🧪 Test Your AI Endpoint
Start your server:
```bash
npm run dev
```
Test with curl:
```bash
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello! Tell me a fun fact about space."}'
```
Success looks like:
{ "response": "Did you know that one day on Venus (243 Earth days) is longer than one Venus year (225 Earth days)? This means a day on Venus lasts longer than its year!", "model": "gpt-4o-mini", "success": true}
Using Postman/Insomnia:
- Method: POST
- URL: `http://localhost:8000/api/chat`
- Headers: `Content-Type: application/json`
- Body: `{"message": "Your question here"}`
🔧 Common Issues & Solutions
❌ “401 Unauthorized”
- Check your API key in `.env`
- Make sure there are no extra spaces around the key

❌ “insufficient_quota”
- Add more credits to your OpenAI account
- Remember, you need at least $5 to start

❌ “Cannot POST /api/chat”
- Make sure your server is running (`npm run dev`)
- Check you’re using the correct URL and method (POST)

❌ Server crashes
- Check the console for error details
- Make sure your `.env` file exists and has the API key (an optional startup check is sketched below)
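One optional safeguard you could add (not part of the code above, just a suggestion): fail fast at startup if the key is missing, so a broken `.env` shows up immediately instead of as a 401 on the first request:

```js
// Optional: place right after config() in index.js
if (!process.env.OPENAI_API_KEY) {
  console.error("❌ OPENAI_API_KEY is missing - check your .env file");
  process.exit(1);
}
```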
Congratulations! 🎉 Your server can now talk to AI!
This is huge - you’ve just created the foundation for any AI application. Your backend can now:
- Receive messages from users
- Send them to OpenAI
- Return intelligent responses
- Handle errors gracefully
👉 Next: Building the Chat Interface - Let’s create a beautiful frontend!