This sample shows how to deploy an AI-powered GitHub repository chat tool using Mastra, a TypeScript AI framework. Mastra-nextjs allows you to chat with and understand any GitHub repository by fetching file trees, contents, pull requests, and issues, making it easy to navigate and understand codebases of any size.
- Repository Analysis: Enter a GitHub repository URL and instantly start a conversation about it
- Code Exploration: Navigate file trees, view file contents, and understand code structure
- PR & Issue Access: Query information about pull requests and issues directly in chat
- Large Codebase Support: Powered by Google's Gemini Flash model with its large context window
- Intuitive UI: Built with assistant-UI for a seamless chat experience with retries, copy, and message branching
- Download Defang CLI
- (Optional) If you are using Defang BYOC, authenticate with your cloud provider account
- (Optional for local development) Docker CLI
To run the application locally for development, use the development compose file:
```
docker compose -f compose.dev.yaml up
```
When running locally with Docker Compose, you will need to set the `GOOGLE_GENERATIVE_AI_API_KEY` environment variable to your Google API key. You can get the API key from Google AI Studio.
When running locally with Docker Compose, you are limited to the models on Google's list of supported models.
This will:
- Start PostgreSQL with volume persistence for local development
- Expose PostgreSQL on port 5432 for direct access if needed
- Start the Next.js application on port 3000 with hot reload
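For reference, a development compose file along these lines would provide the behavior described above. This is a sketch under assumptions — the sample's actual `compose.dev.yaml` may use different service names, images, and variable wiring:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres        # local-only default, not for production
    ports:
      - "5432:5432"                      # exposed for direct access
    volumes:
      - db-data:/var/lib/postgresql/data # volume persistence

  app:
    build: .
    ports:
      - "3000:3000"                      # Next.js dev server
    environment:
      GOOGLE_GENERATIVE_AI_API_KEY: ${GOOGLE_GENERATIVE_AI_API_KEY}
      DATABASE_URL: postgres://postgres:postgres@db:5432/postgres
    volumes:
      - .:/app                           # bind mount so hot reload sees edits
    depends_on:
      - db

volumes:
  db-data:
```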
You can access mastra-nextjs at http://localhost:3000 once the containers are running.
For this sample, you will need to provide the following configuration. Note that if you are using the 1-click deploy option, you can set these values as secrets in your GitHub repository and the action will automatically deploy them for you.
The password for your Postgres database. You need to set this before deploying for the first time.
You can easily set this to a random string using `defang config set POSTGRES_PASSWORD --random`
You can easily set this using `defang config set LLM_MODEL=<SELECTED_MODEL>`
The large language model to use for the AI-powered chat. This can be set to models like `anthropic.claude-3-5-sonnet-20241022-v2:0` for AWS or `gemini-2.5-flash` for Google Cloud. Here is a list of supported models for GCP and AWS. For AWS, make sure you request access to the model in the AWS Bedrock console, and for GCP, make sure you have requested access to the model in the GCP Vertex AI console.
You can easily set this using `defang config set DB_SSL=<true|false>`
Set to true to enable SSL when deploying to AWS or GCP. Set to false to disable SSL, which is required for the Defang Playground.
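In a Node.js app, a setting like `DB_SSL` typically gets translated into the database driver's connection options. A minimal sketch following node-postgres conventions — the helper name is hypothetical, and the sample's actual wiring may differ:

```typescript
// Sketch: turn the DB_SSL config value into a node-postgres-style
// connection option. buildPoolConfig is an illustrative name, not
// the sample's actual code.
function buildPoolConfig(connectionString: string, dbSsl: string | undefined) {
  return {
    connectionString,
    // AWS/GCP managed Postgres: DB_SSL=true; Defang Playground: DB_SSL=false
    ssl: dbSsl === "true" ? { rejectUnauthorized: false } : false,
  };
}

const cfg = buildPoolConfig("postgres://user:pass@host:5432/db", "true");
console.log(cfg.ssl); // an object when SSL is enabled, false otherwise
```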
You can easily set this using `defang config set GITHUB_TOKEN=<YOUR_GITHUB_TOKEN>`
A GitHub personal access token to increase API rate limits when fetching repository data. This is optional but recommended for better performance. Setting the permissions to public repositories only is sufficient, unless you want to access private repositories that you have access to.
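When the token is present, it is typically attached as a bearer token on GitHub API requests; unauthenticated requests fall back to the much lower anonymous rate limit. A sketch (the helper name is illustrative, not the sample's code):

```typescript
// Sketch: build request headers for the GitHub REST API, attaching
// the optional GITHUB_TOKEN to raise the rate limit when available.
function githubHeaders(token?: string): Record<string, string> {
  const headers: Record<string, string> = {
    Accept: "application/vnd.github+json",
  };
  if (token) {
    headers.Authorization = `Bearer ${token}`; // authenticated: higher rate limit
  }
  return headers;
}
```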
- Enter a GitHub repository URL in the input field (e.g., https://github.com/DefangLabs/defang)
- Start chatting with mastra-nextjs about the repository
- Use commands like:
- "Show me the file structure"
- "What are the recent pull requests?"
- "Explain the purpose of [filename]"
- "How many open issues are there?"
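Before any of these queries can be answered, the pasted URL has to be split into an owner and a repository name for the GitHub API. A minimal sketch — the function name and regex are illustrative, not the sample's actual code:

```typescript
// Sketch: extract owner/repo from a GitHub repository URL.
function parseRepoUrl(url: string): { owner: string; repo: string } | null {
  const m = url.match(/^https:\/\/github\.com\/([^\/]+)\/([^\/#?]+)/);
  return m ? { owner: m[1], repo: m[2].replace(/\.git$/, "") } : null;
}

console.log(parseRepoUrl("https://github.com/DefangLabs/defang"));
// → { owner: "DefangLabs", repo: "defang" }
```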
Mastra-nextjs uses a tool-based approach rather than traditional RAG systems, making it more efficient for large codebases. When you provide a repository URL, Mastra-nextjs uses tools to:
- Fetch the repository's file tree
- Access file contents on demand
- Retrieve information about pull requests and issues
- Store conversation history using Mastra's memory package
The large context window of Gemini Flash allows the agent to hold more code in memory, making the conversation more coherent and informed.
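The on-demand fetching described above maps naturally onto GitHub's public REST API. A rough sketch of how each tool might build its request URL — the function and tool names are illustrative assumptions, and only the endpoint paths follow the documented GitHub REST API:

```typescript
// Sketch: map each repository tool to the GitHub REST endpoint it
// would call. Tool names here are illustrative, not the sample's.
type RepoTool = "fileTree" | "fileContents" | "pullRequests" | "issues";

function endpointFor(tool: RepoTool, owner: string, repo: string, path = ""): string {
  const base = `https://api.github.com/repos/${owner}/${repo}`;
  switch (tool) {
    case "fileTree":
      return `${base}/git/trees/HEAD?recursive=1`; // whole file tree in one call
    case "fileContents":
      return `${base}/contents/${path}`;           // one file, fetched on demand
    case "pullRequests":
      return `${base}/pulls?state=all`;
    case "issues":
      return `${base}/issues?state=open`;
  }
}
```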
> [!NOTE]
> Download Defang CLI
When deploying to the Playground, resource constraints mean only Google Gemini Flash or Flash-Lite models are supported, so we recommend using one of the Flash models from this list. If you want to use other models, please use Defang BYOC.
Deploy your application to the Defang Playground by opening up your terminal and typing:
```
defang compose up
```
If you want to deploy to your own cloud account, you can use Defang BYOC.
> [!WARNING]
> Extended deployment time: This sample creates a managed PostgreSQL database which may take upwards of 20 minutes to provision on first deployment. Subsequent deployments are much faster (2-5 minutes).
This sample was based on Mastra's repo-chat sample.
Title: Mastra & Next.js
Short Description: An AI-powered tool for chatting with GitHub repositories using Mastra and Google Gemini.
Tags: AI, GitHub, Mastra, Next.js, PostgreSQL, TypeScript
Languages: TypeScript, JavaScript, Docker