OpenAI Responses API for TypeScript Developers
- api
- gpt
- javascript
- openai
- responses
- typescript
Overview
OpenAI released a new Responses API in March 2025. The Responses API is the successor to the Chat Completions API and is now the standard way to interact with OpenAI’s LLMs. Check out OpenAI’s comparison of the two APIs to understand the main differences.
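To give a rough feel for the difference, here's a minimal sketch of the same request in both APIs. This is just an illustration; we'll set up a real client later in this post.

import { OpenAI } from "openai";

// reads OPENAI_API_KEY from the environment by default
const client = new OpenAI();

// Chat Completions: messages in, choices out
const completion = await client.chat.completions.create({
  model: "gpt-4.1-nano",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);

// Responses: input in, output_text out
const response = await client.responses.create({
  model: "gpt-4.1-nano",
  input: "Hello!",
});
console.log(response.output_text);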
Here’s how you can utilize the new API in a TypeScript server application.
Setup
I’m going to use the domco Vite plugin for this project, but you can use any popular JS server framework.
npm create domco@latest
Install the openai package. I'm going to use dotenv to manage environment variables and zod to validate form inputs. If you are using a different framework, environment variable setup might be handled for you, so be sure to review the framework's documentation.
npm i openai dotenv zod
Create an API key and add it to a .env file in your root directory.
OPENAI_API_KEY="your-api-key"
Be sure .env is included in your .gitignore file as well, so you do not commit secret information to your repository.
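For example, a minimal .gitignore might look like this (dist is Vite's default build output):

node_modules
dist
.env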
Frontend
Let's add a <form> to our HTML page to submit a message to our API, and a <div> element to hold the response from the assistant. We'll also add a <script> tag pointing to /client/main.ts to add some client-side JavaScript that will handle our form submission.
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <link rel="icon" type="image/svg+xml" href="/circle.svg" />
    <link rel="stylesheet" href="/client/style.css" />
    <title>domco-openai</title>
    <script type="module" src="/client/main.ts"></script>
  </head>
  <body class="prose">
    <header><h1>OpenAI Responses API</h1></header>
    <main>
      <form action="/chat" method="POST">
        <label for="message">Message</label>
        <textarea name="message" id="message"></textarea>
        <button>Send</button>
      </form>
      <h2>Assistant</h2>
      <div id="assistant">
        <!-- response goes here -->
      </div>
    </main>
  </body>
</html>
In main.ts, add an event listener to the form that will execute on the submit event. We'll handle the submission with JavaScript so we can easily stream the assistant's response into the page.
// get references to our elements
const form = document.querySelector("form")!;
const assistant = document.querySelector("#assistant")!;

// add a submit listener
form.addEventListener("submit", async (e) => {
  // prevent full page reload, instead handle with JS
  e.preventDefault();

  // set a loading state
  assistant.innerHTML = "Loading...";

  const { action, method } = form; // values from the corresponding attributes

  // creates FormData with all the elements from the form
  const body = new FormData(form);

  // make a request to our API
  const res = await fetch(action, { method, body });

  // obtain the reader from the body stream
  const reader = res.body?.pipeThrough(new TextDecoderStream()).getReader();

  if (!res.ok || !reader) {
    assistant.innerHTML = "nope";
    return;
  }

  // clear loading state
  assistant.innerHTML = "";

  while (true) {
    // read each value from the stream
    const { done, value } = await reader.read();
    if (done) break;

    // add each chunk to the assistant <div>
    if (value) assistant.innerHTML += value;
  }
});
Now if we submit our form, given the /chat action, we should see a 404 error in the console and our nope message rendered in the assistant <div>. We need to add a new /chat route on the backend to handle this request.
Backend
Setup
First, let's get the message from the form with req.formData. We can just return the message back as text to start with.
// src/server/+app.ts
import { html } from "client:page";
import * as z from "zod";

export default {
  async fetch(req: Request) {
    const url = new URL(req.url);

    if (url.pathname === "/") {
      return new Response(html, { headers: { "content-type": "text/html" } });
    }

    // create new route /chat
    if (req.method === "POST" && url.pathname === "/chat") {
      const data = await req.formData();

      // get the message based on the textarea's `name` attribute,
      // falling back if it's missing or empty
      const message = z
        .string()
        .min(1)
        .catch("Empty message.")
        .parse(data.get("message"));

      // just send back for now
      return new Response(message, {
        headers: { "content-type": "text/plain" },
      });
    }

    return new Response("Not found", { status: 404 });
  },
};
Now when the form is submitted, the same message should be rendered below.
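You can also sanity-check the route without the form, for example from the browser console:

// quick manual test of the /chat route
const fd = new FormData();
fd.append("message", "Hello!");
const res = await fetch("/chat", { method: "POST", body: fd });
console.log(await res.text()); // logs "Hello!"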
OpenAI Client
To generate a response using the Responses API, create an src/server/ai.ts module, import the OpenAI client, and provide your API key.
// @/server/ai.ts
// side effect import sets up environment variables
import "dotenv/config";
import { OpenAI } from "openai";
if (!process.env.OPENAI_API_KEY) throw new Error("OPENAI_API_KEY is not set.");
export const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
Now we can import this module and use the client to create an AI response.
// src/server/+app.ts
import * as ai from "@/server/ai";
import { html } from "client:page";

export default {
  async fetch(req: Request) {
    // ...

    if (req.method === "POST" && url.pathname === "/chat") {
      // ...

      const response = await ai.client.responses.create({
        input: message,
        model: "gpt-4.1-nano", // cheapest model
      });

      return new Response(response.output_text, {
        headers: { "content-type": "text/plain" },
      });
    }
  },
};
Send the response.output_text back as our response. You've now created a simple chat application with the Responses API!
Streaming
Instead of waiting for the entire message to buffer on the server before sending it back, we can stream the response to the client as it comes in, giving the user a faster response.
Add the stream: true property to the responses.create argument.
const response = await ai.client.responses.create({
  input: message,
  model: "gpt-4.1-nano",
  stream: true,
});
Now the response is an AsyncIterable stream, so we can iterate through each ResponseStreamEvent to send the data as it streams in.
Create a new ReadableStream body to handle the stream.
const body = new ReadableStream<string>({
  async start(c) {
    for await (const event of response) {
      // TODO: enqueue the text
    }

    c.close(); // end the stream
  },
}).pipeThrough(new TextEncoderStream());
The ResponseStreamEvent has a type property that distinguishes what kind of event is being sent. The "response.output_text.delta" type is the one we are looking for; it contains the change in the output text since the last event in its delta property.
const body = new ReadableStream({
  async start(c) {
    for await (const event of response) {
      if (event.type === "response.output_text.delta") {
        c.enqueue(event.delta); // send the difference since last time
      }
    }

    c.close();
  },
}).pipeThrough(new TextEncoderStream());
Now we can pass this stream into our Response constructor to stream the contents from our /chat route on the fly.
// ...
return new Response(body, { headers: { "content-type": "text/plain" } });
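For reference, here's the whole /chat branch assembled from the snippets above:

if (req.method === "POST" && url.pathname === "/chat") {
  const data = await req.formData();
  const message = z.string().min(1).catch("Empty message.").parse(data.get("message"));

  // create a streaming response
  const response = await ai.client.responses.create({
    input: message,
    model: "gpt-4.1-nano",
    stream: true,
  });

  // forward each text delta to the client as it arrives
  const body = new ReadableStream({
    async start(c) {
      for await (const event of response) {
        if (event.type === "response.output_text.delta") c.enqueue(event.delta);
      }

      c.close();
    },
  }).pipeThrough(new TextEncoderStream());

  return new Response(body, { headers: { "content-type": "text/plain" } });
}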
Persisting conversation state
OpenAI makes it possible to retrieve previous messages from the same conversation using an ID. This is nice because you do not have to resend previous messages during a multi-message conversation, or store anything in a database of your own.
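For contrast, here's a rough sketch of what managing the history yourself would look like; the earlier messages are placeholders:

// without previous_response_id, every prior turn must be resent
const response = await ai.client.responses.create({
  model: "gpt-4.1-nano",
  input: [
    { role: "user", content: "First question..." },
    { role: "assistant", content: "First answer..." },
    { role: "user", content: message },
  ],
});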
Obtain the id from the FormData and pass it into the previous_response_id property. Set store: true to instruct OpenAI to store the conversation.
//...

// get the id from the hidden input element
const id = z.string().nullable().parse(data.get("id"));

const response = await ai.client.responses.create({
  input: message,
  model: "gpt-4.1-nano", // cheapest model
  stream: true,
  previous_response_id: id,
  store: true,
});
When streaming, the id is contained in the "response.completed" event instead of directly on the response object. Let's send this id to the client at the end of the stream.
// ...
else if (event.type === "response.completed" && !id) {
  c.enqueue(event.response.id);
}
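In context, the stream body now looks like this:

const body = new ReadableStream({
  async start(c) {
    for await (const event of response) {
      if (event.type === "response.output_text.delta") {
        c.enqueue(event.delta); // send the difference since last time
      } else if (event.type === "response.completed" && !id) {
        // append the new response id to the end of the stream
        c.enqueue(event.response.id);
      }
    }

    c.close();
  },
}).pipeThrough(new TextEncoderStream());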
Then, when the form is submitted again, it will contain the id within the FormData.
// src/client/main.ts
//...

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  const idPrefix = "resp_";

  if (value.includes(idPrefix)) {
    // parse the response id
    const [rest, id] = value.split(idPrefix);

    // in case it was sent with something before
    assistant.innerHTML += rest;

    // append a new hidden input to the form with the value of the id
    const input = document.createElement("input");
    input.type = "hidden";
    input.name = "id";
    input.value = idPrefix + id;
    form.append(input);
  } else if (value) {
    assistant.innerHTML += value;
  }
}
Now the assistant will remember the previous messages.
Retrieving previous messages
If you need to get the previous messages in a conversation, you'll need to make two requests.
const [previous, latest] = await Promise.all([
  ai.client.responses.inputItems.list(response.id),
  ai.client.responses.retrieve(response.id),
]);

previous; // all previous messages excluding the latest
latest; // the latest message
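The list result is paginated, and the SDK's pages support async iteration, so reading the full conversation might look something like this:

// log every previous input item in the conversation
for await (const item of previous) {
  console.log(item);
}

// the latest assistant message
console.log(latest.output_text);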
Conclusion
The Responses API provides a nice way to interact with OpenAI’s LLMs. The final project is located on GitHub. Thanks for reading!