Turn your GitHub profile into a full manga page -- powered by your own AI backend.
MangaREADME Generator is an open-source web app that transforms profile data into multi-panel manga pages ready for GitHub READMEs. It follows a BYOB (Bring Your Own Backend) architecture: run Stable Diffusion locally via the companion manga-readme Python package (Diffusers), or plug in an API key for OpenAI, Stability AI, Replicate, or HuggingFace. No vendor lock-in, no data stored server-side.
- Overview
- Gallery
- Features
- manga-readme Server (Recommended)
- Supported Providers
- Getting Started
- Provider Setup
- Project Structure
- Configuration
- Contributing
- License
| INPUT | CUSTOMIZE | GENERATE | EXPORT |
|---|---|---|---|
| Character description, tech stack, projects | Manga style, layout, prompts, speech bubbles | Provider creates each panel via your backend | Download PNG, copy Markdown for GitHub |
- Input -- Describe your character, list your tech stack and projects
- Customize -- Choose a manga style and layout; edit prompts and speech bubbles in the Bubble Editor
- Generate -- Connect your provider and generate each panel image
- Export -- Download the final manga page as PNG and copy the Markdown embed snippet
| Category | Details |
|---|---|
| BYOB Architecture | Connect any supported backend -- local server or cloud API |
| manga-readme PyPI Package | One-command local AI server powered by Diffusers |
| 5 Manga Styles | Shonen, Shojo, Seinen, Chibi, Cyberpunk |
| 7 Panel Layouts | 2x2 grid, 3x1, 1-2-1, hero, action, comic strip, profile |
| Bubble Editor | Canva-like editing: move, resize, edit, add/remove bubbles |
| Speech Bubbles | Speech, thought, shout, narration, whisper; multiple per panel |
| Visual Effects | Speed lines, screentone, halftone, sparkle, impact, radial blur, vignette |
| LoRA Support | Load LoRAs with adjustable weights via prompt tags |
| Model Selection | DreamShaper 8, SDXL, Animagine XL, Realistic Vision, and more |
| One-Click Export | PNG download and ready-to-paste Markdown snippet |
The easiest way to generate images locally. Install from PyPI, run one command, and connect the frontend.
```bash
# For CUDA GPU (recommended) -- install PyTorch first:
pip install torch --index-url https://download.pytorch.org/whl/cu121

# Then install manga-readme:
pip install manga-readme

# Start the server:
manga-readme serve
```

The server starts on http://127.0.0.1:7860 with DreamShaper 8 pre-loaded. The model is downloaded automatically on first launch.
```bash
manga-readme serve --model sdxl
manga-readme serve --model animagine-xl
manga-readme serve --model realistic-vision
```

```bash
manga-readme list-models
```

| Alias | Architecture | Resolution | Repo |
|---|---|---|---|
| dreamshaper-8 | SD 1.5 | 512x512 | Lykon/dreamshaper-8 |
| sdxl | SDXL | 1024x1024 | stabilityai/stable-diffusion-xl-base-1.0 |
| sd15 | SD 1.5 | 512x512 | runwayml/stable-diffusion-v1-5 |
| sd21 | SD 2.1 | 768x768 | stabilityai/stable-diffusion-2-1 |
| animagine-xl | SDXL | 1024x1024 | cagliostrolab/animagine-xl-3.1 |
| realistic-vision | SD 1.5 | 512x512 | SG161222/Realistic_Vision_V5.1_noVAE |
| absolute-reality | SD 1.5 | 512x512 | digiplay/AbsoluteReality_v1.8.1 |
You can also pass any HuggingFace repo id: `manga-readme serve --model user/my-model`.
Place `.safetensors` or `.pt` LoRA files in a directory and pass it to the server:

```bash
manga-readme serve --lora-dir ./my-loras
```

LoRAs are applied via prompt tags (`<lora:name:0.7>`). The frontend LoRA picker sends these tags automatically. You can also load LoRAs directly from HuggingFace Hub by using the repo id as the LoRA name.
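The `<lora:name:weight>` tag format is simple enough to parse with a regex. A minimal sketch of how such tags could be extracted from a prompt (illustrative only; the package's actual parser in `pipeline.py` may differ):

```python
import re

# Matches <lora:name:weight> tags, e.g. <lora:inkstyle:0.7>
LORA_TAG = re.compile(r"<lora:([^:>]+):([0-9.]+)>")

def extract_lora_tags(prompt: str):
    """Return the prompt with tags stripped, plus (name, weight) pairs."""
    loras = [(name, float(weight)) for name, weight in LORA_TAG.findall(prompt)]
    cleaned = LORA_TAG.sub("", prompt).strip()
    cleaned = re.sub(r"\s{2,}", " ", cleaned)  # collapse leftover double spaces
    return cleaned, loras

cleaned, loras = extract_lora_tags(
    "manga panel, 1boy, ink lines <lora:inkstyle:0.7> dramatic lighting"
)
# cleaned -> "manga panel, 1boy, ink lines dramatic lighting"
# loras   -> [("inkstyle", 0.7)]
```

The cleaned prompt goes to the text encoder while the (name, weight) pairs select which LoRA files to load and how strongly to apply them.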
A CUDA GPU with 6 GB+ VRAM is recommended. CPU inference works but is very slow. The server auto-detects CUDA and uses fp16 when available.
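Because the server speaks an A1111-compatible protocol, panels can also be generated from a plain script. A minimal client sketch, assuming the server is running locally and exposes the standard A1111 `/sdapi/v1/txt2img` endpoint (`build_txt2img_payload` and `generate_panel` are illustrative helper names, not part of the package):

```python
import base64
import json
import urllib.request

def build_txt2img_payload(prompt, steps=30, width=512, height=512, cfg_scale=7.5):
    """Request body for an A1111-style /sdapi/v1/txt2img endpoint."""
    return {
        "prompt": prompt,
        "negative_prompt": "lowres, bad anatomy",
        "steps": steps,
        "width": width,
        "height": height,
        "cfg_scale": cfg_scale,
    }

def generate_panel(prompt, base_url="http://127.0.0.1:7860"):
    """POST to the local server and return the first image as PNG bytes."""
    body = json.dumps(build_txt2img_payload(prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/sdapi/v1/txt2img",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # A1111-style responses return base64-encoded images
    return base64.b64decode(result["images"][0])
```

Usage: `open("panel.png", "wb").write(generate_panel("manga panel, 1boy, ink lines"))`, with the server started via `manga-readme serve`.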
| Provider | Type | Auth | LoRA / Model Select | Notes |
|---|---|---|---|---|
| manga-readme (Diffusers) | Local server | None | Yes | Recommended -- pip install manga-readme |
| Automatic1111 / Forge / SD.Next | Local server | None | Yes | Alternative local backend, requires --api flag |
| OpenAI (DALL-E 3) | Cloud API | API key | No | High quality, fixed sizes (1024x1024, 1792x1024, 1024x1792) |
| Google (Nano Banana / Gemini) | Cloud API | API key | No | Gemini image generation via Google API |
| Stability AI | Cloud API | API key | No | SD3, SDXL, Ultra via REST API |
| Replicate | Cloud API | API key | No | Run open-source models on cloud GPUs |
| HuggingFace Inference | Cloud API | Optional token | No | Free tier available, token increases rate limits |
| Requirement | Version |
|---|---|
| Node.js | 18+ |
| npm | 9+ |
| Browser | Any modern (Chrome, Edge, Firefox, Safari) |
| Python | 3.10+ (for manga-readme server) |
| Backend | manga-readme server or any supported provider |
```bash
git clone https://github.com/rodrigoguedes09/personal-page.git
cd personal-page
npm install
npm run dev
```

Open http://localhost:3000 and follow the 4-step wizard.
```bash
pip install manga-readme
manga-readme serve
```

In the frontend, select Local Server (manga-readme) and click Test Connection.
```bash
npm run build
npm start
```

To use the manga-readme server as the provider:

```bash
pip install manga-readme
manga-readme serve --model dreamshaper-8
```

In the app:
- Select Local Server (manga-readme) as the provider
- Server URL is http://127.0.0.1:7860 by default
- Click Test Connection
- Select a model from the dropdown (all registered models appear)
- Optionally add LoRAs with custom weights (0.0 -- 1.5)
Start your server with the API enabled and CORS configured:
```bash
./webui.sh --api --cors-allow-origins=http://localhost:3000
```

The same Local Server provider in the frontend works with A1111/Forge out of the box.
- Get an API key from platform.openai.com
- Select OpenAI (DALL-E) in the provider panel
- Paste your API key and test the connection
- Get a Gemini API key from Google AI Studio
- Select Google (Nano Banana) in the provider panel
- Paste your API key and test the connection
- Optionally choose a Gemini model in the model dropdown after connection
- Get an API key from platform.stability.ai
- Select Stability AI and enter the key
- Get an API token from replicate.com
- Select Replicate and enter the token
- Default model: `stability-ai/sdxl`
- Optionally get a free token from huggingface.co/settings/tokens
- Select HuggingFace Inference
- Works without a token (rate-limited); token increases throughput
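For reference, the classic HuggingFace Inference API accepts a JSON body with an `inputs` field and returns the generated image as raw bytes. A sketch of building such a request (`build_request` is an illustrative helper, not part of this project):

```python
import json
import urllib.request

API_BASE = "https://api-inference.huggingface.co/models"

def build_request(model_id, prompt, token=None):
    """urllib Request for the classic HF Inference API (text-to-image)."""
    headers = {"Content-Type": "application/json"}
    if token:  # optional -- a token raises the anonymous rate limit
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(
        f"{API_BASE}/{model_id}",
        data=json.dumps({"inputs": prompt}).encode(),
        headers=headers,
    )

# req = build_request("Lykon/dreamshaper-8", "manga panel, ink lines")
# with urllib.request.urlopen(req) as resp:
#     png_bytes = resp.read()  # response body is the image itself
```

The frontend's `huggingface.ts` client wraps the same pattern with retry logic for cold-start (model loading) responses.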
```
personal-page/
├── src/                          Next.js frontend
│   ├── app/
│   │   ├── layout.tsx            Root layout with metadata and fonts
│   │   ├── page.tsx              Main 4-step wizard page
│   │   └── globals.css           Manga-themed Tailwind styles
│   ├── components/
│   │   ├── header.tsx            App header with GitHub link
│   │   ├── footer.tsx            App footer
│   │   ├── provider-config.tsx   Provider selection, connection, LoRA/model config
│   │   ├── user-input-form.tsx   Step 1: Character description and profile data
│   │   ├── manga-canvas.tsx      Canvas-based manga page renderer
│   │   ├── generation-view.tsx   Step 3: Generation controls and progress
│   │   ├── export-options.tsx    Step 4: PNG/Markdown export
│   │   ├── progress-bar.tsx      Manga-styled progress indicator
│   │   └── webgpu-status.tsx     GPU capability badge (informational)
│   ├── hooks/
│   │   ├── use-generation.ts     Generation orchestration hook
│   │   └── use-webgpu.ts         WebGPU detection hook
│   ├── lib/
│   │   ├── providers/
│   │   │   ├── index.ts          Provider factory and metadata registry
│   │   │   ├── local-sd.ts       Local server API client (manga-readme / A1111)
│   │   │   ├── openai.ts         OpenAI DALL-E client
│   │   │   ├── stability.ts      Stability AI client
│   │   │   ├── replicate.ts      Replicate client with polling
│   │   │   └── huggingface.ts    HuggingFace Inference client with retry logic
│   │   ├── constants.ts          Style prompts, generation presets, defaults
│   │   ├── prompt-engine.ts      User data to manga prompt mapper
│   │   ├── manga-layout.ts       7 panel layout algorithms
│   │   ├── canvas-renderer.ts    Canvas drawing with effects and bubbles
│   │   ├── export.ts             PNG and Markdown export
│   │   ├── utils.ts              Utility functions
│   │   └── webgpu.ts             WebGPU detection and capability check
│   ├── store/
│   │   └── app-store.ts          Zustand global state
│   └── types/
│       └── index.ts              TypeScript type definitions
├── server/                       manga-readme Python package (PyPI)
│   ├── manga_readme/
│   │   ├── __init__.py           Package version
│   │   ├── __main__.py           python -m manga_readme support
│   │   ├── cli.py                CLI entry point (serve, list-models)
│   │   ├── server.py             FastAPI server with A1111-compatible API
│   │   ├── pipeline.py           Diffusers pipeline manager (load, LoRA, txt2img)
│   │   └── models.py             Curated model registry
│   ├── pyproject.toml            Package metadata and dependencies
│   ├── README.md                 PyPI package documentation
│   └── LICENSE                   MIT
└── images/                       Project screenshots and example exports
```
| Decision | Rationale |
|---|---|
| BYOB Provider System | No vendor lock-in; users choose their own AI backend |
| Diffusers-based Server | Industry-standard library, easy model/LoRA management, pip-installable |
| A1111-compatible API | Server speaks the same protocol as A1111 -- frontend works with both |
| Provider Interface | All providers implement ImageProvider -- uniform API, easy to extend |
| Zustand | Lightweight global state over React Context for flat, scalable stores |
| Canvas 2D | Direct pixel control for manga effects and efficient PNG export |
| LoRA Tag Injection | Standard <lora:name:weight> tags parsed and applied by the server |
| Style | Aesthetic |
|---|---|
| Shonen | Bold, high-energy action with dramatic lighting |
| Shojo | Soft tones, floral accents, expressive eyes |
| Seinen | Detailed, mature, photorealistic manga |
| Chibi | Cute, super-deformed characters |
| Cyberpunk | Neon-lit, tech-heavy futuristic aesthetic |
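Internally, each style maps to a prompt fragment that is combined with the user's description (see `prompt-engine.ts` and `constants.ts`). A hypothetical sketch of such a mapping; the actual prompt strings in the app will differ:

```python
# Hypothetical style-to-prompt mapping -- the app's real prompt fragments
# live in src/lib/constants.ts and are not reproduced here.
STYLE_PROMPTS = {
    "shonen":    "bold ink lines, dynamic action pose, dramatic lighting",
    "shojo":     "soft pastel tones, floral accents, large expressive eyes",
    "seinen":    "detailed realistic shading, mature atmosphere",
    "chibi":     "super-deformed, cute, oversized head, simple background",
    "cyberpunk": "neon lighting, futuristic cityscape, high-tech detail",
}

def build_panel_prompt(style, subject):
    """Compose a panel prompt from a style prefix and the user's subject."""
    return f"manga panel, {STYLE_PROMPTS[style]}, {subject}"
```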
| Preset | Steps | Resolution | Guidance |
|---|---|---|---|
| Fast | 15 | 512x512 | 7.0 |
| Balanced | 30 | 512x512 | 7.5 |
| Quality | 40 | 768x768 | 8.0 |
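These presets correspond directly to standard Diffusers pipeline arguments (`num_inference_steps`, `width`/`height`, `guidance_scale`). A sketch of the mapping as a plain dict; the app's actual preset definitions live in `src/lib/constants.ts`:

```python
# Generation presets from the table above, expressed as keyword arguments
# for a Diffusers StableDiffusionPipeline call.
PRESETS = {
    "fast":     {"num_inference_steps": 15, "width": 512, "height": 512, "guidance_scale": 7.0},
    "balanced": {"num_inference_steps": 30, "width": 512, "height": 512, "guidance_scale": 7.5},
    "quality":  {"num_inference_steps": 40, "width": 768, "height": 768, "guidance_scale": 8.0},
}

# Usage with a loaded pipeline (sketch):
# image = pipe(prompt, **PRESETS["balanced"]).images[0]
```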
```bash
manga-readme serve [OPTIONS]

  --host       Bind address (default: 127.0.0.1)
  --port       Port number (default: 7860)
  --model      Model alias or HF repo id (default: dreamshaper-8)
  --lora-dir   LoRA directory (default: ./loras)
  --no-half    Disable fp16 (use on CPU or if NaN outputs)
  --reload     Auto-reload for development
```
| Component | Technology |
|---|---|
| Frontend | Next.js 14 (App Router) |
| Language (Frontend) | TypeScript 5 |
| Styling | Tailwind CSS 3.4 |
| State | Zustand 5 |
| Icons | Lucide React |
| Export | html-to-image |
| Backend | FastAPI |
| AI Engine | HuggingFace Diffusers |
| Language (Backend) | Python 3.10+ |
- Fork the repository
- Create a feature branch (`git checkout -b feature/my-feature`)
- Commit your changes (`git commit -m 'feat: add my feature'`)
- Push to the branch (`git push origin feature/my-feature`)
- Open a Pull Request
```bash
# Frontend
npm run dev      # Start dev server at http://localhost:3000
npm run build    # Production build
npm run start    # Serve production build
npm run lint     # Run ESLint

# Backend
pip install -e server/        # Install in editable mode
manga-readme serve --reload   # Dev server with auto-reload
manga-readme list-models      # Print model registry
```

- Create `src/lib/providers/<name>.ts` implementing the `ImageProvider` interface
- Add the provider type to `ProviderType` in `src/types/index.ts`
- Register it in the factory switch in `src/lib/providers/index.ts`
- Add metadata to `PROVIDER_META` in the same file
- Add UI for provider-specific settings in `src/components/provider-config.tsx`
- Edit `server/manga_readme/models.py`
- Add a `ModelEntry` with repo_id, alias, label, arch, and default resolution
- Rebuild and publish: `cd server && python -m build && twine upload dist/*`
MIT
Built with Next.js, Diffusers, and manga ink





