This is a hands-on experience designed for you to work through a series of quests, each designed to guide you through the process of building AI apps step by step.
🧠 Concepts you will cover include:
- GitHub Models
- Azure AI Foundry VS Code Extension
- Azure AI Foundry Portal
- AI Toolkit VS Code Extension
- Azure Developer CLI (azd)
- Express.js
- Vite, Lit
- LangChain.js
- Azure AI Agents Service
- MCP Tools
- Automation with GenAIScript
This build-a-thon is organized into quests: choose the one that matches your goals and click its badge to begin.
Each quest has a required activity (e.g., push code). After you complete it, GitHub Actions will automatically unlock your next step.
Tips:
- ✅ Recommended path: Start with the first quest and go in order for the best learning experience.
- 🔄 To restart, click the Reset button at the top of any page.
Click on a quest and follow the instructions to get started.
To reset your progress and select a different quest, click this button:
Before you start, you will need:
- A GitHub account
- Visual Studio Code installed
- Node.js installed
You will build a local Gen AI prototype using JavaScript and TypeScript. This prototype will allow you to experiment with different AI models, parameters, and prompts.
GitHub Models is a **free** service that provides access to a variety of AI models from different providers and a playground to experiment with them. You can use these models to build your own AI applications (prototypes), or just to learn more about how AI works.
With GitHub Models, you can use GitHub Personal Access Tokens (PAT) to authenticate and access the models locally, or use a single API key to access all the models.
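As a sketch of what PAT authentication looks like at the HTTP level, a minimal Node.js request might be assembled like this. The endpoint URL and model name below are assumptions for illustration; the Use this model snippet in the catalog gives the exact values for your model.

```javascript
// Sketch: authenticating to a GitHub Models inference endpoint with a PAT.
// Endpoint and model name are placeholders -- copy the real ones from the
// "Use this model" snippet in the catalog.
const endpoint = "https://models.inference.ai.azure.com/chat/completions";
const token = process.env.GITHUB_TOKEN ?? "<your-pat>";

const request = {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${token}`, // the PAT authenticates the request
  },
  body: JSON.stringify({
    model: "gpt-4o-mini", // any model name from the catalog
    messages: [{ role: "user", content: "Say hello in one word." }],
  }),
};

// To actually send it: const res = await fetch(endpoint, request);
console.log(JSON.parse(request.body).model); // prints "gpt-4o-mini"
```

The same `GITHUB_TOKEN` environment variable is what the generated `sample.js` reads later in this quest.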
- Right click GitHub Models and open in a new tab
Note: Open the link **in a new tab** so you can keep this page open for reference. You can use the Split Screen feature in Edge to keep these instructions and GitHub Models/VS Code open side by side.
- Click on explore the full model catalog to see the available models.
You will see a broad range of models listed in the catalog.
🤔 But which model should you use for what?
- Scroll down to the Filter section to see the available filters. You can filter the models by:
- Publisher: Cohere, DeepSeek, Meta, Mistral AI, Microsoft (research), Azure OpenAI Service, and more.
- Category: Conversation (models optimized for dialogue use cases), Agents, Multimodal (models capable of processing input in multiple formats - audio, visual etc.), Reasoning, and more.
- Select a model from the list to open the model card. The model card provides detailed information about the model, and may include:
  - Model Abstract: A brief description of the model and its capabilities.
  - Model Architecture: The data used to train the model and its input and output modalities (e.g., text-image pairs), the model size (parameters), context length (how much text the model can process at once), training date (knowledge cut-off date/data freshness), supported languages, and more.
  - Model Use cases: Primary and out-of-scope use cases for the model, responsible AI considerations, content filtering configurations, and more.
  - Model License: The license under which the model is released, including any restrictions on use or distribution.
  - Model Benchmarks: A summary of the model's performance on the benchmarks used to evaluate it, including accuracy, speed, and other relevant metrics. Metrics may include:
    - MMLU (Massive Multitask Language Understanding) Pass@1 - Knowledge and reasoning across science, math, and humanities.
    - DROP - Reading comprehension and numerical reasoning capabilities.
    - among others
- After selecting a model and reviewing the model card, you can use the Playground to experiment with the model. The playground provides a user-friendly interface for testing the model's capabilities and understanding how it works.
You can directly send questions (prompts) to the model and see how it responds. Throughout the session, you can monitor the token usage and the model's response time at the top of the chat UI.
- To check your token usage against your GitHub Models free quota (input/output tokens, latency), click on the Input: Output: Time note at the top right of the chat UI to open Model usage insights.
- Before going further, on the right side of the playground, switch from Details to Parameters to see the available parameters that you can adjust to change the model's behavior.
The parameters include:
- Max Tokens: The maximum number of tokens the model can generate in response to a prompt. Adjusting this parameter can help control the length of the model's output.
- Temperature: Controls the randomness of the model's output. A higher temperature (e.g., 0.8) makes the output more random, while a lower temperature (e.g., 0.2) makes it more focused and deterministic.
- Top P: Controls the diversity of the model's output. A higher value (e.g., 0.9) allows for more diverse outputs, while a lower value (e.g., 0.1) makes the output more focused on the most likely tokens.
- Presence Penalty: Controls the model's tendency to repeat itself. A higher value (e.g., 1.0) discourages repetition, while a lower value (e.g., 0.0) allows for more repetition.
- Frequency Penalty: Similar to the presence penalty, this parameter controls the model's tendency to repeat the same words or phrases.
- Stop: A list of tokens that, when generated, will stop the model's output.
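As a rough illustration, the playground parameters above map onto fields of an OpenAI-style chat-completion request. The snake_case field names below follow that convention and are an assumption; check them against the SDK you actually use.

```javascript
// Sketch: playground parameters expressed as request fields
// (field names assume an OpenAI-style chat completions API).
const generationOptions = {
  max_tokens: 256,        // cap on how many tokens the model may generate
  temperature: 0.2,       // low = focused/deterministic, high = more random
  top_p: 0.9,             // nucleus sampling: higher = more diverse output
  presence_penalty: 0.5,  // discourage returning to topics already present
  frequency_penalty: 0.5, // discourage repeating the same words/phrases
  stop: ["\n\n"],         // generation halts when any of these appear
};

const body = {
  model: "gpt-4o-mini", // placeholder model name
  messages: [{ role: "user", content: "Summarize GitHub Models in a sentence." }],
  ...generationOptions,
};

console.log(Object.keys(generationOptions).length); // prints 6
```

Tuning these in code gives you the same control you have with the sliders in the playground.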
You can continue interacting with the model in the playground, as you adjust the parameters to ensure you get the desired output.
🤔 How do models compare across different prompts and parameters?
- GitHub Models provides a Compare feature that allows you to compare the performance of different models on the same prompt. This is useful for understanding how different models respond to the same input and can help you choose the best model for your specific use case.
- Click on the Compare button at the top right of the playground.
- Select the models you want to compare from the drop-down list of available models.
- This will open a chat UI for the selected models side by side, and your prompt will be sent to both models.
In the example provided, you can compare the performance of a reasoning model and a conversation model on the same prompt to understand their strengths and limitations.
Now that you have a better understanding of the models from the GitHub Models playground, let's look at how to use them in JavaScript code.
- Clone the repository to your local machine using the following command:

  ```sh
  git clone <your-repo-url>
  ```

  Replace `<your-repo-url>` with the URL of your GitHub repository.
- Open the cloned repository in Visual Studio Code:

  ```sh
  cd <your-repo-name>
  code .
  ```

  Replace `<your-repo-name>` with the name of your cloned repository.
- On the far right, click on Use this model and select Language: JavaScript and SDK: Azure AI Inference SDK.
Follow the instructions provided to:
- Download the contoso website hand-drawn sketch from this link (right click and open in new tab), and save it as `contoso_layout_sketch.jpg` in the same directory as your `sample.js` file.

  Note: If you aren't using a multimodal model, swap out the `modelName` in the code sample with a multimodal model of your choice. You can find a list of multimodal models in the GitHub Models catalog.
- Update the code to pass the image to the model as input.

  Note: You can use GitHub Copilot to help you with this task.
- Run the code and check the output in the console.
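An image is typically passed to a multimodal model as part of the user message content, either as a URL or inline as a base64 data URL. Here is a minimal sketch, with a placeholder buffer standing in for the real file and OpenAI-style content parts assumed; adapt it to the shapes your SDK expects.

```javascript
// Sketch: building a multimodal message with an inline image.
// In sample.js you would read the real sketch, e.g.:
//   const imageBytes = fs.readFileSync("contoso_layout_sketch.jpg");
// A placeholder buffer keeps this snippet self-contained.
const imageBytes = Buffer.from("placeholder-image-bytes");
const dataUrl = `data:image/jpeg;base64,${imageBytes.toString("base64")}`;

const messages = [
  { role: "system", content: "You generate HTML from hand-drawn sketches." },
  {
    role: "user",
    content: [
      { type: "text", text: "Write the HTML for the website in this sketch." },
      { type: "image_url", image_url: { url: dataUrl } }, // image travels inline
    ],
  },
];

console.log(messages[1].content[1].type); // prints "image_url"
```

The key idea is that the user message content becomes an array of parts, mixing text and image entries, instead of a single string.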
The AI Toolkit in Visual Studio Code is a powerful extension that provides a set of tools and features to help you build AI applications more efficiently.
- Click on the Extensions icon in the left sidebar of Visual Studio Code, search for AI Toolkit, and install it.
- Similar to GitHub Models, with the AI Toolkit now installed you can browse the catalog of available models and use the Playground to experiment with them, all inside VS Code.
Let's execute the exercise above using the AI Toolkit in VS Code.
- Select a multimodal model from the catalog and open the Playground.
- In the playground, upload the `contoso_layout_sketch.jpg` image and enter a prompt to write the HTML code for the website.
- On the generated code, click on the New file icon to copy the generated code into a new file. Save it as `index.html` in the same directory as your `sample.js` file.
- Do the same for the CSS code and save it as `style.css` in the same directory.
- You can preview the generated code and iterate on it to improve it (optionally using GitHub Copilot).
To complete this quest and **automatically update** your progress, you **must** push your code to the repository as described below.
Checklist
- Have a `sample.js` file at the root of your project
- The file **must** include a reference to your `GITHUB_TOKEN` environment variable
- In the terminal, run the following commands to add, commit, and push your changes to the repository:

  ```sh
  git add .
  git commit -m "Working with GitHub Models and AI Toolkit"
  git push
  ```
- After pushing your changes, **wait about 15 seconds** for GitHub Actions to update your README.
To skip this quest and select a different one, click this button:
Here are some additional resources to help you learn more about experimenting with AI models and building prototypes:
- About GitHub Models
- Choosing the right AI model for your task
- Introducing GitHub Models: A new generation of AI engineers building on GitHub
- 📹 DEM500: Prototype, build, and deploy AI apps quickly with GitHub Models
- What is the AI Toolkit for Visual Studio Code?
- Lesson 1: Introduction to Generative AI and LLMs for JavaScript Developers
- Lesson 2: Writing your first AI app
- Lesson 3: Prompt Engineering
- Microsoft AI Tools Extension Pack, a curated set of essential extensions for building generative AI applications and agents in VS Code

















