Welcome to your Python capstone project! You'll be working with a FastAPI + PostgreSQL application that helps people track their daily learning journey. This will prepare you for deploying to the cloud in the next phase.
By the end of this capstone, your API should be working locally and ready for cloud deployment.
- Getting Started
- Development Workflow
- Development Tasks
- Data Schema
- AI Analysis Guide
- Troubleshooting
- Extras
- License
## Getting Started

### Prerequisites

- Git installed on your machine
- Docker Desktop installed and running
- VS Code with the Dev Containers extension
Run these commands on your host machine (your local terminal, not inside a container):

1. Fork this repository to your GitHub account by clicking the "Fork" button.

2. Clone your fork to your local machine:

   ```bash
   git clone https://github.com/YOUR_USERNAME/journal-starter.git
   ```

3. Navigate into the project folder:

   ```bash
   cd journal-starter
   ```

4. Open in VS Code:

   ```bash
   code .
   ```
Environment variables live in a `.env` file (which is git-ignored so you don't accidentally commit secrets). This repo ships with a template named `.env-sample`.

Copy the sample file to create your real `.env`. Run this from the project root on your host machine:

```bash
cp .env-sample .env
```

Next, open the project in its dev container:

- Install the Dev Containers extension in VS Code (if not already installed)
- Reopen in container: when VS Code detects the `.devcontainer` folder, click "Reopen in Container"
  - Or use the Command Palette (`Cmd/Ctrl + Shift + P`): `Dev Containers: Reopen in Container`
- Wait for setup: the API container will automatically install Python and its dependencies and configure your environment. The PostgreSQL database container will also be created automatically.
In a terminal on your host machine (not inside VS Code), run:

```bash
docker ps
```

You should see the postgres service running.
In the VS Code terminal (inside the dev container), verify you're in the project root:

```bash
pwd
# Should output: /workspaces/journal-starter (or similar)
```

Then start the API from the project root:

```bash
./start.sh
```

- Visit the API docs: http://localhost:8000/docs
- Create your first entry: in the docs UI, use the POST `/entries` endpoint to create a new journal entry.
- View your entries: use the GET `/entries` endpoint to see what you've created!

🎯 Once you can create and see entries, you're ready to start the development tasks!
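If you'd rather exercise the API from code than from the docs UI, here's a minimal Python sketch of the same two steps. It assumes the API is running at http://localhost:8000 and that the `requests` library is available (it isn't part of the project's dependencies, so install it separately); the field names come from the Data Schema section below.

```python
# Sketch: create and list journal entries against the locally running API.
# Assumes the API is up at http://localhost:8000 and `requests` is installed.
import requests

BASE_URL = "http://localhost:8000"

# Create an entry; the three fields match the Data Schema section below.
new_entry = {
    "work": "Set up the dev container and ran the API locally",
    "struggle": "Understanding how the PostgreSQL container is wired up",
    "intention": "Start on the development tasks",
}
response = requests.post(f"{BASE_URL}/entries", json=new_entry)
print(response.status_code, response.json())

# List all entries to confirm the new one was saved.
print(requests.get(f"{BASE_URL}/entries").json())
```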
## Development Workflow

This project comes with several features already built for you: creating entries, listing entries, updating entries, and deleting all entries. The remaining features are left for you to implement.
We have provided tests so you can verify your implementations are correct without manual testing. When you first run the tests, some will pass (for the pre-built features) and some will fail (for the features you need to build). Your goal is to make all tests pass.
📍 Where to run commands: all commands in this section should be run from the project root in the VS Code terminal (inside the dev container). Do not `cd` into subdirectories like `api/` or `tests/`; run everything from the top-level project folder.
From the project root in the VS Code terminal, install dev dependencies:

```bash
uv sync --all-extras
```

Then run the tests to see the starting state:

```bash
uv run pytest
```

You should see output like this, with several failing tests:

```
FAILED tests/test_api.py::TestGetSingleEntry::test_get_entry_by_id_success - assert 501 == 200
FAILED tests/test_api.py::TestGetSingleEntry::test_get_entry_not_found - assert 501 == 404
FAILED tests/test_api.py::TestDeleteEntry::test_delete_entry_success - assert 501 == 200
FAILED tests/test_api.py::TestDeleteEntry::test_delete_entry_not_found - assert 501 == 404
FAILED tests/test_api.py::TestAnalyzeEntry::test_analyze_entry_not_found - assert 501 == 404
======================== 5 failed, 30 passed ========================
```

The 30 passing tests cover features that are already built for you (creating entries, listing entries, updating, etc.). The 5 failing tests are the features you need to implement. Each `assert 501 == 200` means the endpoint is returning "Not Implemented" (501) instead of a successful response (200).
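To make those 501s concrete: an unimplemented FastAPI endpoint is typically a stub along these lines (the actual stubs in this repo may look slightly different). Your job in the tasks below is to replace the stub body with a real implementation.

```python
# Sketch of what an unimplemented endpoint stub looks like. The real
# stubs live in api/routers/journal_router.py and may differ slightly.
from fastapi import APIRouter, HTTPException

router = APIRouter()

@router.get("/entries/{entry_id}")
async def get_entry(entry_id: str):
    # Raising 501 here is what makes the test's `assert 501 == 200` fail.
    raise HTTPException(status_code=501, detail="Not Implemented")
```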
After completing all tasks, you should see:

```
============================= 35 passed ==============================
```
1. Create a branch

   Branches let you work on features in isolation without affecting the main codebase. From the project root, create one for each task:

   ```bash
   git checkout -b feature/your-feature-name
   ```

2. Implement the feature

   Write your code in the `api/` directory. Check the TODO comments in the files for guidance on what to implement.

3. Run the tests

   After implementing a feature, run the tests from the project root to check if your implementation is correct:

   ```bash
   uv run pytest
   ```

   pytest is a testing framework that runs automated tests to verify your code works as expected.

   - Tests failing? Read the error messages; they tell you exactly what's wrong (e.g., `assert 501 == 200` means your endpoint is still returning "Not Implemented").
   - Tests passing? Great, your implementation is correct! Move on to the next step.

   Example: before implementing `GET /entries/{entry_id}`:

   ```
   FAILED tests/test_api.py::TestGetSingleEntry::test_get_entry_by_id_success - assert 501 == 200
   FAILED tests/test_api.py::TestGetSingleEntry::test_get_entry_not_found - assert 501 == 404
   ```

   After implementing it correctly:

   ```
   tests/test_api.py::TestGetSingleEntry::test_get_entry_by_id_success PASSED
   tests/test_api.py::TestGetSingleEntry::test_get_entry_not_found PASSED
   ```

   💡 Tip: Use `uv run pytest -v` for verbose output to see each test's pass/fail status, or `uv run pytest -v --tb=short` to also see concise error details.

4. Lint and type-check

   Run the linter from the project root to check code style and catch common mistakes:

   ```bash
   uv run ruff check api/
   ```

   A linter is a tool that analyzes your code for potential errors, bugs, and style issues without running it. Ruff is a fast Python linter that checks for things like unused imports, incorrect syntax, and code that doesn't follow Python style conventions (PEP 8).

   Run the type checker from the project root to ensure proper type annotations:

   ```bash
   uv run ty check api/
   ```

   A type checker verifies that your code uses type hints correctly. Type hints (like `def get_entry(entry_id: str) -> dict:`) help catch bugs early by ensuring you're passing the right types of data to functions. ty is a fast Python type checker.
5. Commit and push (only after tests pass!)

   Once the tests for your feature are passing, commit your changes and push to GitHub. Run from the project root:

   ```bash
   git add .
   git commit -m "Implement feature X"
   git push -u origin feature/your-feature-name
   ```

6. Create a Pull Request

   On GitHub, open a Pull Request (PR) to merge your feature branch into `main`. This is where code review happens. Once approved, merge the PR.

⚠️ Do not modify the test files. Make the tests pass by implementing features in the `api/` directory. If a test is failing, it means there's something left to implement; read the error message for clues!
## Development Tasks

### Task 1: Logging Setup

- Branch: `feature/logging-setup`
- Configure logging in `api/main.py`
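If you're unsure where to begin, here is a minimal sketch of a logging setup. The level, format string, and placement are assumptions; follow the TODO comments in `api/main.py` for the actual requirements.

```python
# Sketch: basic logging configuration near the top of api/main.py.
# The level and format here are illustrative, not the required answer.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)

logger = logging.getLogger(__name__)
logger.info("Journal API starting up")
```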
### Task 2: Missing API Endpoints

Get a single entry:

- Branch: `feature/get-single-entry`
- Implement `GET /entries/{entry_id}` in `api/routers/journal_router.py`
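As a rough sketch of the shape this endpoint usually takes; the `fetch_entry` helper below is hypothetical, so use whatever data-access function the repo's service layer actually provides (check the TODO comments):

```python
# Sketch: GET /entries/{entry_id} in api/routers/journal_router.py.
# `fetch_entry` stands in for the repo's real service-layer call.
from fastapi import APIRouter, HTTPException

router = APIRouter()

async def fetch_entry(entry_id: str) -> dict | None:
    """Hypothetical placeholder for the repo's real data-access function."""
    ...

@router.get("/entries/{entry_id}")
async def get_entry(entry_id: str):
    entry = await fetch_entry(entry_id)
    if entry is None:
        # The tests expect 404 when the entry doesn't exist.
        raise HTTPException(status_code=404, detail="Entry not found")
    return entry
```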
Delete an entry:

- Branch: `feature/delete-entry`
- Implement `DELETE /entries/{entry_id}` in `api/routers/journal_router.py`
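The DELETE endpoint follows the same pattern. Continuing the router sketch above, with another hypothetical service call (`remove_entry`, assumed here to report whether anything was actually deleted):

```python
# Sketch: DELETE /entries/{entry_id}, continuing the router sketch above.
# `remove_entry` is hypothetical; use the repo's real service function.
@router.delete("/entries/{entry_id}")
async def delete_entry(entry_id: str):
    deleted = await remove_entry(entry_id)
    if not deleted:
        # The tests expect 404 when the entry doesn't exist.
        raise HTTPException(status_code=404, detail="Entry not found")
    return {"detail": f"Entry {entry_id} deleted"}
```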
### Task 3: AI-Powered Entry Analysis

- Branch: `feature/ai-analysis`
- Implement `analyze_journal_entry()` in `api/services/llm_service.py`
- Implement `POST /entries/{entry_id}/analyze` in `api/routers/journal_router.py`

This endpoint should return sentiment, a 2-sentence summary, and 2-4 key topics. See the AI Analysis Guide below for details on the expected response format and LLM provider setup.
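For orientation, here is a rough sketch of `analyze_journal_entry()` using the OpenAI SDK (any provider from the AI Analysis Guide below works). The model name, prompt wording, and JSON-mode usage are all assumptions to adapt; it also assumes `OPENAI_API_KEY` is in your `.env` and the `openai` package has been added to `pyproject.toml`.

```python
# Sketch: api/services/llm_service.py using the OpenAI SDK.
# Model name and prompt are illustrative; adapt to your chosen provider.
import json
from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

async def analyze_journal_entry(entry: dict) -> dict:
    prompt = (
        "Analyze this journal entry. Respond with JSON containing "
        "'sentiment' (positive/neutral/negative), 'summary' (2 sentences), "
        f"and 'topics' (2-4 key topics):\n{json.dumps(entry)}"
    )
    response = await client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: use any model your provider offers
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # ask for parseable JSON
    )
    return json.loads(response.choices[0].message.content)
```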
### Task 4: Data Model Improvements

- Branch: `feature/data-model-improvements`
- Add validators to `api/models/entry.py`
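As an example of the kind of validator this task is after, assuming the project uses Pydantic v2 (the exact rules to enforce come from the TODO comments in `api/models/entry.py`; rejecting blank strings is just one possibility):

```python
# Sketch: a custom validator in api/models/entry.py, assuming Pydantic v2.
from pydantic import BaseModel, Field, field_validator

class Entry(BaseModel):
    work: str = Field(..., max_length=256)
    struggle: str = Field(..., max_length=256)
    intention: str = Field(..., max_length=256)

    @field_validator("work", "struggle", "intention")
    @classmethod
    def not_blank(cls, value: str) -> str:
        # max_length alone still allows empty or whitespace-only strings.
        if not value.strip():
            raise ValueError("field must not be empty or whitespace")
        return value.strip()
```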
### Task 5: Cloud CLI Setup

- Branch: `feature/cloud-cli-setup`
- Uncomment one CLI tool in `.devcontainer/devcontainer.json`
## Data Schema

Each journal entry follows this structure:
| Field | Type | Description | Validation |
|---|---|---|---|
| id | string | Unique identifier (UUID) | Auto-generated |
| work | string | What did you work on today? | Required, max 256 characters |
| struggle | string | What's one thing you struggled with today? | Required, max 256 characters |
| intention | string | What will you study/work on tomorrow? | Required, max 256 characters |
| created_at | datetime | When entry was created | Auto-generated UTC |
| updated_at | datetime | When entry was last updated | Auto-updated UTC |
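Expressed in code, the schema might look roughly like the Pydantic v2 model below. This is a sketch: the `default_factory` calls show one way the auto-generated fields could be produced, and the real model in `api/models/entry.py` may differ.

```python
# Sketch: the entry schema above as a Pydantic v2 model.
from datetime import datetime, timezone
from uuid import uuid4
from pydantic import BaseModel, Field

class Entry(BaseModel):
    id: str = Field(default_factory=lambda: str(uuid4()))
    work: str = Field(..., max_length=256)
    struggle: str = Field(..., max_length=256)
    intention: str = Field(..., max_length=256)
    created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
    updated_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
```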
## AI Analysis Guide

For Task 3: AI-Powered Entry Analysis, your endpoint should return this format:

```json
{
  "entry_id": "123e4567-e89b-12d3-a456-426614174000",
  "sentiment": "positive",
  "summary": "The learner made progress with FastAPI and database integration. They're excited to continue learning about cloud deployment.",
  "topics": ["FastAPI", "PostgreSQL", "API development", "cloud deployment"],
  "created_at": "2025-12-25T10:30:00Z"
}
```

LLM Provider Setup:
- Choose a provider and read their docs: OpenAI | Anthropic | Azure OpenAI | AWS Bedrock | GCP Vertex AI
- Add the required environment variables to your `.env` file
- Add your SDK to `pyproject.toml` and run `uv sync` from the project root (inside the dev container)
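Whichever provider you choose, it helps to fail fast when the credential is missing rather than hitting an opaque SDK error later. A small sketch; `OPENAI_API_KEY` is just the OpenAI example, and other providers use different variable names:

```python
# Sketch: fail fast when the LLM credential is missing. The variable
# name OPENAI_API_KEY is an example; use what your provider requires.
import os

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError(
        "OPENAI_API_KEY is not set: add it to your .env file "
        "and rebuild/reopen the dev container."
    )
```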
## Troubleshooting

**API won't start?**

- Make sure you're running `./start.sh` from the project root inside the dev container
- Check PostgreSQL is running: `docker ps` (on your host machine)
- Restart the database: `docker restart your-postgres-container-name` (on your host machine)

**Can't connect to database?**

- Verify the `.env` file exists with the correct `DATABASE_URL`
- Restart the dev container: `Dev Containers: Rebuild Container`

**Dev container won't open?**

- Ensure Docker Desktop is running
- Try: `Dev Containers: Rebuild and Reopen in Container`
## Extras

- Explore Your Database: connect to PostgreSQL and run queries directly
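If you'd like to explore from Python rather than a SQL client, here is a sketch using `psycopg` (not a project dependency, so install it first, e.g. `uv add "psycopg[binary]"`). The table name `entries` is an assumption; check the schema the project actually creates.

```python
# Sketch: query the database directly with psycopg (v3). Assumes
# DATABASE_URL is a standard postgres:// URI and the table is named
# "entries"; verify both against the project's actual setup.
import os
import psycopg

with psycopg.connect(os.environ["DATABASE_URL"]) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT id, work, created_at FROM entries LIMIT 5")
        for row in cur.fetchall():
            print(row)
```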
## License

MIT License - see LICENSE for details.

Contributions welcome! Open an issue to get started.