Version, test, and monitor every prompt and agent with robust evals, tracing, and regression sets.
This library provides convenient access to the PromptLayer API from applications written in JavaScript.
```sh
npm install promptlayer
```

Optional peer dependencies (learn more):

```sh
npm install promptlayer @openai/agents
npm install promptlayer @anthropic-ai/claude-agent-sdk
```

To follow along, you need a PromptLayer API key. Once logged in, go to Settings to generate a key.
Create a client and fetch a prompt template from PromptLayer:
```typescript
import { PromptLayer } from "promptlayer";

async function main() {
  const pl = new PromptLayer({
    apiKey: process.env.PROMPTLAYER_API_KEY,
  });

  const prompt = await pl.templates.get("support-reply", {
    input_variables: {
      customer_name: "Ada",
      question: "How do I reset my password?",
    },
  });

  console.log(prompt?.prompt_template);
}

main();
```

SDK methods that make network requests return promises.
You can also use the client as a proxy around supported provider SDKs:
```sh
npm install openai
```

```typescript
import OpenAI from "openai";
import { PromptLayer } from "promptlayer";

async function main() {
  const pl = new PromptLayer({
    apiKey: process.env.PROMPTLAYER_API_KEY,
  });

  const PromptLayerOpenAI: typeof OpenAI = pl.OpenAI;
  const openai = new PromptLayerOpenAI();

  const response = await openai.chat.completions.create({
    model: "gpt-4.1-mini",
    messages: [{ role: "user", content: "Say hello in one short sentence." }],
    // @ts-ignore PromptLayer proxy option
    pl_tags: ["proxy-example"],
  });

  console.log(response);
}

main();
```

`PromptLayer(...)` accepts these parameters:
- `apiKey: string | undefined`: Your PromptLayer API key. If omitted, the SDK looks for `PROMPTLAYER_API_KEY`.
- `enableTracing: boolean = false`: Enables OpenTelemetry tracing export to PromptLayer.
- `baseURL: string | undefined`: Overrides the PromptLayer API base URL. If omitted, the SDK uses `PROMPTLAYER_BASE_URL` or the default API URL.
- `throwOnError: boolean = true`: Controls whether SDK methods throw errors or return `null` or fallback values for many API errors.
- `cacheTtlSeconds: number = 0`: Enables in-memory prompt-template caching when greater than `0`.
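The defaults above can be pictured as the following resolution logic. This is an illustrative sketch, not the SDK's actual implementation; `resolveOptions` is a hypothetical helper:

```typescript
// Hypothetical sketch of how the documented defaults could resolve.
// Not part of the promptlayer package.
interface ClientOptions {
  apiKey?: string;
  enableTracing?: boolean;
  baseURL?: string;
  throwOnError?: boolean;
  cacheTtlSeconds?: number;
}

function resolveOptions(opts: ClientOptions = {}) {
  // apiKey falls back to the environment; missing keys are an error.
  const apiKey = opts.apiKey ?? process.env.PROMPTLAYER_API_KEY;
  if (!apiKey) {
    throw new Error(
      "PromptLayer API key missing: pass apiKey or set PROMPTLAYER_API_KEY"
    );
  }
  return {
    apiKey,
    enableTracing: opts.enableTracing ?? false,
    baseURL:
      opts.baseURL ?? process.env.PROMPTLAYER_BASE_URL ?? "https://api.promptlayer.com",
    throwOnError: opts.throwOnError ?? true,
    cacheTtlSeconds: opts.cacheTtlSeconds ?? 0,
  };
}
```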
The SDK relies on the following environment variables:
| Variable | Required | Description |
|---|---|---|
| `PROMPTLAYER_API_KEY` | Yes, unless passed as `apiKey` | API key used to authenticate requests to PromptLayer. |
| `PROMPTLAYER_BASE_URL` | No | Overrides the PromptLayer API base URL. Defaults to `https://api.promptlayer.com`. |
| `PROMPTLAYER_TRACEPARENT` | No | Optional trace context passed through the Claude Agents integration. |
The main resources surfaced by PromptLayer are:
| Resource | Description |
|---|---|
| `client.templates` | Prompt template retrieval, listing, publishing, and cache invalidation. |
| `client.run()` and `client.runWorkflow()` | Helpers for running prompts and workflows. |
| `client.logRequest()` | Manual request logging. |
| `client.track` | Request annotation utilities for metadata, prompt linkage, scores, and groups. |
| `client.group` | Group creation for organizing related requests. |
| `client.wrapWithSpan()` | Helper for tracing your own functions and sending those spans to PromptLayer when tracing is enabled. |
| `client.skills` | Skill collection pull, publish, and update operations. |
| `client.OpenAI` and `client.Anthropic` | Provider proxies that wrap those SDKs and log requests to PromptLayer. |
Note: When tracing is enabled, spans are exported to PromptLayer using OpenTelemetry.
Optional modules that are imported directly rather than accessed through the client:
| Module | Description |
|---|---|
| `promptlayer/openai-agents` | Tracing utilities for the OpenAI Agents SDK that instrument agent runs and export their traces to PromptLayer. |
| `promptlayer/claude-agents` | Configuration utilities for the Claude Agent SDK that load the PromptLayer plugin and required environment settings so Claude agent runs send traces to PromptLayer. |
The SDK throws JavaScript Error instances for validation failures, missing API keys, unsupported providers, and PromptLayer API errors.
| Error case | Description |
|---|---|
| Missing API key | The `PromptLayer` constructor throws if no API key is passed and `PROMPTLAYER_API_KEY` is not set. |
| Validation failure | Some resource methods validate inputs before making a request, such as score ranges and skill collection providers. |
| PromptLayer API error | Non-success PromptLayer responses throw with the API error message when `throwOnError` is enabled. |
| Provider SDK error | Provider SDK calls made through `client.run()` or a provider proxy surface the underlying provider error. |
| Workflow failure | `client.runWorkflow()` can return `{ success: false, message }` for some workflow-start failures, and throws for errors such as timeouts or no successful output node. |
By default, the client throws these errors. If you initialize `PromptLayer` with `throwOnError: false`, many resource methods return `null`, `false`, an empty result, or the original provider response instead of throwing on PromptLayer API errors.
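The two modes correspond roughly to the following throw-versus-fallback pattern. This is a generic illustration of the behavior, not PromptLayer's internal code; `withFallback` is a hypothetical helper:

```typescript
// Hypothetical helper illustrating throw-vs-fallback error handling.
// Not part of the promptlayer package.
async function withFallback<T>(
  fn: () => Promise<T>,
  fallback: T,
  throwOnError: boolean
): Promise<T> {
  try {
    return await fn();
  } catch (err) {
    if (throwOnError) throw err; // default: surface the API error
    return fallback;             // throwOnError: false: return a fallback value
  }
}
```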
When enabled, the SDK caches fetched prompt templates in memory for faster repeat reads, locally re-renders them with new variables, and falls back to stale cache on temporary API failures.
- Caching is disabled by default and is enabled by setting `cacheTtlSeconds` when creating `PromptLayer`.
- The cache applies to prompt templates fetched through `client.templates.get(...)`.
- Cached entries are stored in memory and keyed by prompt name, version, label, provider, and model.
- Requests that include `metadata_filters` or `model_parameter_overrides` bypass the cache.
- Templates that require server-side rendering behavior, such as placeholder messages or tool-variable expansion, are not cached for local rendering.
- If a cached template is stale and PromptLayer returns a transient error, the SDK can serve the stale cached version as a fallback.
- You can clear cached entries with `client.invalidate(...)` or `client.templates.invalidate(...)`.
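The TTL-plus-stale-fallback behavior described above can be sketched with a minimal in-memory cache. This is an illustrative sketch of the concept, not the SDK's implementation; `TtlCache` is a hypothetical class:

```typescript
// Hypothetical sketch of an in-memory TTL cache with stale fallback.
// Not part of the promptlayer package.
type Entry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private store = new Map<string, Entry<T>>();
  constructor(private ttlSeconds: number) {}

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlSeconds * 1000 });
  }

  // Return a fresh value if within TTL; otherwise refresh, falling back to
  // the stale cached value when the refresh throws (e.g. a transient error).
  async get(key: string, refresh: () => Promise<T>): Promise<T> {
    const entry = this.store.get(key);
    if (entry && entry.expiresAt > Date.now()) return entry.value;
    try {
      const fresh = await refresh();
      this.set(key, fresh);
      return fresh;
    } catch (err) {
      if (entry) return entry.value; // serve stale as a fallback
      throw err;                     // nothing cached: surface the error
    }
  }

  invalidate(key: string): void {
    this.store.delete(key);
  }
}
```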