Cerebras/vscode-cerebras-chat

Cerebras VS Code Extension

Build with the world's fastest AI inference—directly in VS Code, powered by Cerebras.

Make GitHub Copilot run 10× faster with the world’s fastest inference API. Cerebras Inference powers the world’s top coding models at 2,000 tokens/sec, making code generation instant and enabling super-fast agentic flows. Get your free API key to get started today.

Get Started

API Key Setup

Here's how you can use Cerebras models in VS Code:

  1. Get your free API key from Cerebras Cloud.
  2. Install the Cerebras VS Code extension.
  3. Set up GitHub Copilot if you haven't already done so.
  4. In the GitHub Copilot chat interface, select Manage Models and choose Cerebras.
  5. Paste in your API key when prompted.
  6. Choose which models to enable.
  7. You're all set! Happy coding 🎉

Note: Bring-your-own-key is not supported for GitHub Copilot Enterprise subscriptions at this time.
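Before wiring the key into Copilot, you can sanity-check it against the Cerebras Cloud API directly. A minimal sketch, assuming the API is OpenAI-compatible at `https://api.cerebras.ai/v1/chat/completions` and that `llama3.1-8b` is a valid model ID (check the Cerebras Cloud docs for the current endpoint and model names):

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible chat-completions endpoint for Cerebras Cloud.
CEREBRAS_URL = "https://api.cerebras.ai/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completions request; the caller decides whether to send it."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 32,
    }
    return urllib.request.Request(
        CEREBRAS_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    key = os.environ.get("CEREBRAS_API_KEY")
    if key:
        req = build_request(key, "llama3.1-8b", "Say hello in one word.")
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = json.loads(resp.read())
            print(body["choices"][0]["message"]["content"])
    else:
        print("Set CEREBRAS_API_KEY to run a live check.")
```

A `200` response with a completion confirms the key is active; a `401` means the key was pasted incorrectly.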

Supported Models

This extension supports GLM 4.7 in agent mode, as well as the following models in chat mode:

| Model | Token Speed |
| --- | --- |
| OpenAI GPT OSS | ~3,000 tokens/sec |
| Z.ai GLM 4.7 | ~1,000 tokens/sec |
| Qwen 3 235B Instruct (Preview) | ~1,400 tokens/sec |
| Llama 3.1 8B | ~2,200 tokens/sec |
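To put the throughput figures in context, a rough back-of-envelope (ignoring network latency and time to first token):

```python
def stream_time_seconds(tokens: int, tokens_per_sec: float) -> float:
    """Approximate wall-clock time to stream a completion of `tokens` tokens."""
    return tokens / tokens_per_sec

# Time to stream a 500-token completion at the listed speeds.
for model, tps in [
    ("OpenAI GPT OSS", 3000),
    ("Z.ai GLM 4.7", 1000),
    ("Qwen 3 235B Instruct", 1400),
    ("Llama 3.1 8B", 2200),
]:
    print(f"{model}: ~{stream_time_seconds(500, tps):.2f} s")
```

At these rates a typical chat response finishes streaming in well under a second, which is what makes multi-step agentic flows feel instant.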

Advanced Tips

Here's how you can accomplish more with Cerebras:

What is Cerebras?

Cerebras Systems delivers the world's fastest AI inference for leading open models on top of its revolutionary AI hardware and software.

Cerebras consistently delivers chart-topping speeds for leading open models like Qwen 3 480B Coder and OpenAI's GPT OSS 120B, according to independent measurements by Artificial Analysis and OpenRouter.

At the heart of Cerebras' technology is the Wafer-Scale Engine (WSE), which is purpose-built for ultra-fast AI training and inference. The Cerebras WSE is the world's fastest processor for AI, delivering unprecedented speed that no number of GPUs can match. Learn more about our novel hardware architecture here.
