Titus-waititu/JSAI-Build-a-thon

🎉 Welcome to the JS AI Build-a-thon!

This is a hands-on experience designed for you to work through a series of quests, each designed to guide you through the process of building AI apps step by step.

🧠 Concepts you will cover include:

  • GitHub Models
  • Azure AI Foundry VS Code Extension
  • Azure AI Foundry Portal
  • AI Toolkit VS Code Extension
  • Azure Developer CLI (azd)
  • Express.js
  • Vite, Lit
  • LangChain.js
  • Azure AI Agents Service
  • MCP Tools
  • Automation with GenAIScript

πŸ—ΊοΈ How it works

This build-a-thon is organized into quests β€” choose the one that matches your goals and click its badge to begin.

Each quest has a required activity (e.g., push code). After you complete it, GitHub Actions will automatically unlock your next step.

Tips:
⭐ Recommended path: start with the first quest and go in order for the best learning experience.

🔄 To restart, click the Reset button at the top of any page.

✅ Activity: Select a quest

Click on a quest and follow the instructions to get started.

[Quest badges — one per quest]

🤖 Quest: I want to build a local Gen AI prototype

To reset your progress and select a different quest, click this button:

Reset Progress

📋 Pre-requisites

  1. A GitHub account
  2. Visual Studio Code installed
  3. Node.js installed

πŸ“ Overview

You will build a local Gen AI prototype using JavaScript and TypeScript. This prototype will allow you to experiment with different AI models, parameters, and prompts.

🧠 GitHub Models

GitHub Models is a FREE service that provides access to a variety of AI models from different providers and a playground to experiment with them. You can use these models to build your own AI applications (prototypes), or just to learn more about how AI works.

With GitHub Models, you can use GitHub Personal Access Tokens (PAT) to authenticate and access the models locally, or use a single API key to access all the models.
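As a rough sketch of what PAT-based access looks like in code, the snippet below calls a chat-completions style endpoint with a token read from the GITHUB_TOKEN environment variable. The endpoint URL and model name here are assumptions for illustration only; always copy the current values from the "Use this model" panel in the catalog.

```javascript
// Minimal sketch: call an OpenAI-compatible chat endpoint with a PAT.
// ENDPOINT and the model id are illustrative assumptions, not official values.
const ENDPOINT = "https://models.inference.ai.azure.com/chat/completions";

// Build the request headers from a token (your PAT).
function buildHeaders(token) {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${token}`,
  };
}

// Send a single prompt and return the assistant's reply text.
async function ask(prompt) {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: buildHeaders(process.env.GITHUB_TOKEN),
    body: JSON.stringify({
      model: "gpt-4o-mini", // any model id from the catalog
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Usage (requires GITHUB_TOKEN to be set):
// ask("What is an LLM?").then(console.log);
```

Keeping the token in an environment variable, rather than hard-coding it, means the same file can be pushed to a public repository safely.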

  To begin, right-click GitHub Models and open it in a new tab.

Note

Open the link IN A NEW TAB so you can keep this page open for reference. You can use Edge's Split Screen feature to keep these instructions and GitHub Models/VS Code open side by side.

  1. Click on explore the full model catalog to see the available models.

    GH Models full catalog

    You will see a broad range of models listed in the catalog.

    🤔 But which model should you use for what?

  2. Scroll down to the Filter section to see the available filters. You can filter the models by:

    • Publisher: Cohere, DeepSeek, Meta, Mistral AI, Microsoft (research), Azure OpenAI Service, and more.
    • Category: Conversation (models optimized for dialogue use cases), Agents, Multimodal (models capable of processing input in multiple formats - audio, visual etc.), Reasoning, and more.
  3. Select a model from the list to open the model card. The model card provides detailed information about the model, and may include:

    A. README

    • Model Abstract: A brief description of the model and its capabilities.

    • Model Architecture: The data used to train the model and their modalities for input and output (text-image pairs), the model size (parameters), model context length (how much text the model can process at once), training date (knowledge cut-off date/data freshness for the model), supported languages, and more.

      Model Architecture

    B. Transparency Note

    • Model Use cases: Primary and out-of-scope use cases for the model, responsible AI considerations, content filtering configurations and more.

      Model Transparency notice

    C. License

    • Model License: The license under which the model is released, including any restrictions on use or distribution.

      Model License notice

    D. Evaluation Report

    • Model Benchmarks: A summary of the model's performance on various benchmarks, including accuracy, speed, and other relevant metrics, with details on how the model performed on each benchmark. Metrics may include:

      • MMLU Pass@1 (Measuring Massive Multitask Language Understanding) - Knowledge and reasoning across science, math, and humanities.
      • DROP (Discrete Reasoning Over Paragraphs) - Measures reading comprehension and numerical reasoning capabilities.
      • among others

      Model Evaluation

  4. After selecting a model and reviewing the model card, you can use the Playground to experiment with the model. The playground provides a user-friendly interface for testing the model's capabilities and understanding how it works.

    Playground button

    You can directly send questions (prompts) to the model and see how it responds. Throughout the session, you can monitor the token usage and the model's response time at the top of the chat UI.

    Playground token usage note

  5. To check your token usage against your GitHub Models free quota (input/output tokens, latency), click on the Input: Output: Time note at the top right of the chat UI to open Model usage insights.

    Playground token usage card

  6. Before going further, on the right side of the playground, switch from Details to Parameters to see the available parameters that you can adjust to change the model's behavior.

    Playground parameters

    The parameters include:

    • Max Tokens: The maximum number of tokens the model can generate in response to a prompt. Adjusting this parameter can help control the length of the model's output.
    • Temperature: Controls the randomness of the model's output. A higher temperature (e.g., 0.8) makes the output more random, while a lower temperature (e.g., 0.2) makes it more focused and deterministic.
    • Top P: Controls the diversity of the model's output. A higher value (e.g., 0.9) allows for more diverse outputs, while a lower value (e.g., 0.1) makes the output more focused on the most likely tokens.
    • Presence Penalty: Controls the model's tendency to repeat itself. A higher value (e.g., 1.0) discourages repetition, while a lower value (e.g., 0.0) allows for more repetition.
    • Frequency Penalty: Similar to the presence penalty, this parameter controls the model's tendency to repeat the same words or phrases; the penalty grows with how often a token has already appeared.
    • Stop: A list of tokens that, when generated, will stop the model's output.

    You can continue interacting with the model in the playground, as you adjust the parameters to ensure you get the desired output.
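The same knobs you adjust in the playground map to fields on a chat-completions request body. The sketch below shows one way to bundle them, with defaults that are purely illustrative, not official values.

```javascript
// Sketch: playground parameters expressed as request-body fields.
// The default values below are illustrative assumptions only.
const DEFAULTS = {
  max_tokens: 512,       // cap on generated output length
  temperature: 0.7,      // higher = more random output
  top_p: 0.95,           // nucleus-sampling cutoff
  presence_penalty: 0,   // discourage revisiting topics already present
  frequency_penalty: 0,  // discourage repeating the same tokens
  stop: null,            // e.g. ["\n\n"] to stop at a blank line
};

// Merge caller-supplied parameter overrides on top of the defaults.
function buildRequestBody(model, prompt, overrides = {}) {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
    ...DEFAULTS,
    ...overrides, // caller-supplied parameters win
  };
}

// A focused, deterministic request: low temperature, short output.
const body = buildRequestBody("gpt-4o-mini", "Summarize this in one line.", {
  temperature: 0.2,
  max_tokens: 64,
});
```

Lowering temperature and max_tokens this way is a common recipe for short, repeatable answers, mirroring what you would do with the sliders in the playground.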

🤔 How do models compare across different prompts and parameters?

  1. GitHub Models provides a Compare feature that allows you to compare the performance of different models on the same prompt. This is useful for understanding how different models respond to the same input and can help you choose the best model for your specific use case.

    Click on the Compare button at the top right of the playground.

    Compare

    Select the models you want to compare from the list of available models from the drop-down.

    This will open a chat UI for the selected models side by side, and your prompt will be sent to both models.

    Compare chat example

    In the example provided, you can compare the performance of a reasoning model and a conversation model on the same prompt to understand their strengths and limitations.

πŸ‘¨β€πŸ’» Playground to VS Code

Now that you have a better understanding of the models from the GitHub Models playground, let's look at how to use them in JavaScript code.

Clone the repository

  1. Clone the repository to your local machine using the following command:

    git clone <your-repo-url>

    Replace <your-repo-url> with the URL of your GitHub repository.

  2. Open the cloned repository in Visual Studio Code.

    cd <your-repo-name>
    code .

    Replace <your-repo-name> with the name of your cloned repository.

Get sample code

  1. On the far right, click on Use this model and select Language: JavaScript and SDK: Azure AI Inference SDK.

    Use model

    Follow the instructions provided to:

    • Get a free developer key (a Personal Access Token (classic)) and store it in an environment variable using bash, PowerShell, or the command line.

    • Install dependencies

    • Run the basic code sample. Ensure your local file is named sample.js.

      Run node sample file

📌 Exercise: Convert a hand-drawn sketch to a web page

  1. Download the contoso website hand-drawn sketch from this link (right click and open in a new tab) and save it as contoso_layout_sketch.jpg in the same directory as your sample.js file.

    Note

    If you aren't using a multimodal model, swap out the modelName in the code sample with a multimodal model of your choice. You can find a list of multimodal models in the GitHub Models catalog.

  2. Update the code to pass the image to the model as input.

    Note: You can use GitHub Copilot to help you with this task.

    Update code with GitHub Copilot

  3. Run the code and check the output in the console.

    Run sample passing image

🧰 Use AI Toolkit in VS Code

The AI Toolkit in Visual Studio Code is a powerful extension that provides a set of tools and features to help you build AI applications more efficiently.

  1. Click on the Extensions icon in the left sidebar of Visual Studio Code, search for AI Toolkit and install.

  2. Similar to GitHub Models, with the AI Toolkit now installed, you can browse through the catalog of available models, and use the Playground to experiment with the models, all on VS Code.

    AI Toolkit catalog

    Let's execute the exercise above using the AI Toolkit in VS Code.

  3. Select a multimodal model from the catalog and open the Playground.

    In the playground, upload the contoso_layout_sketch.jpg image and enter a prompt to write the HTML code for the website.

  4. On the generated code, click on the New file icon to copy the generated code into a new file. Save it as index.html in the same directory as your sample.js file.

    AI Toolkit -html

    Do the same for the CSS code and save it as style.css in the same directory.

    You can preview the generated code and iterate on the code to improve it (optionally using GitHub Copilot).

    AI Toolkit - html preview

✅ Activity: Push sample.js code to your repository

Quest Checklist

To complete this quest and AUTOMATICALLY UPDATE your progress, you MUST push your code to the repository as described below.

Checklist

  • Have a sample.js file at the root of your project
  • The file MUST include a reference to your GITHUB_TOKEN environment variable
  1. In the terminal, run the following commands to add, commit, and push your changes to the repository:

    git add .
    git commit -m "Working with GitHub Models and AI Toolkit"
    git push
  2. After pushing your changes, WAIT ABOUT 15 SECONDS FOR GITHUB ACTIONS TO UPDATE YOUR README.

To skip this quest and select a different one, click this button:

Skip to another quest

📚 Further Reading

Here are some additional resources to help you learn more about experimenting with AI models and building prototypes:
