A modern, self-hosted EPUB reader that combines intelligent AI assistance with local privacy protection.
- Complete EPUB Support: Full compatibility with EPUB 2.0 and 3.0 standards
- Chapter Navigation: Intuitive table of contents with progress tracking
- Image Rendering: High-quality image display within chapters
- Responsive Design: Optimized for desktop, tablet, and mobile devices
- Dark Mode: Eye-friendly reading mode for low-light environments
- OpenAI: Full API integration with GPT models
- LM Studio: Local AI model support with automatic configuration
- Ollama: Complete local LLM integration for offline usage
- Provider Switching: Seamless switching between AI providers
- Privacy Protection: Local processing options for sensitive content
- Modern Tech Stack: FastAPI + Alpine.js + TailwindCSS
- Docker Support: Complete containerization with Docker Compose
- API-First Design: RESTful APIs for easy integration
- Extensible Architecture: Plugin-ready system for custom features
- Comprehensive Tooling: Development and deployment utilities
- Multi-Language Support: English and Simplified Chinese
- RTL Text Support: Right-to-left language compatibility
- Localized UI: Complete interface translation
- Dynamic Language Switching: Runtime language changes
# Clone the repository
git clone https://github.com/ohmyscott/reader3.git
cd reader3
# Configure environment
cp .env.example .env
# Edit .env with your AI provider settings
# Start the application
docker compose up -d
# Access the application
open http://localhost:8123

# Prerequisites

- Python 3.10+
- Node.js 16+ (for frontend development)
# Install dependencies
pip install uv
uv sync
# Start the server
uv run python server.py
# Or use the operations script
./ops.sh dev start

┌────────────────┐
│ User Interface │
└───────┬────────┘
        │
        ▼
┌───────────────────────────────────────────────────────────┐
│            Frontend (Alpine.js + TailwindCSS)             │
└─────────────────────────────┬─────────────────────────────┘
                              │
                              ▼
┌───────────────────────────────────────────────────────────┐
│                      FastAPI Backend                      │
└─────────────────────────────┬─────────────────────────────┘
                              │
     ┌───────────┬────────────┼─────────────┬───────────┐
     ▼           ▼            ▼             ▼           ▼
┌─────────┐ ┌─────────┐ ┌───────────┐ ┌─────────┐ ┌─────────┐
│ EPUB    │ │ AI      │ │ Provider  │ │ TinyDB  │ │ File    │
│ Parser  │ │ Service │ │Abstraction│ │ Storage │ │ System  │
└─────────┘ └────┬────┘ └─────┬─────┘ └─────────┘ └─────────┘
                 │            │
           ┌─────┴──────┬─────┴──────┐
           ▼            ▼            ▼
     ┌─────────┐  ┌─────────┐  ┌─────────┐
     │ OpenAI  │  │LM Studio│  │ Ollama  │
     │ API     │  │ API     │  │ API     │
     └─────────┘  └─────────┘  └─────────┘
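The provider-abstraction layer in the diagram can be sketched as follows. This is a minimal illustration, not the project's actual code — the names `ProviderConfig` and `resolve_provider` are hypothetical — but the settings keys and defaults mirror the sample configuration in this README.

```python
from dataclasses import dataclass

@dataclass
class ProviderConfig:
    """Connection settings for one OpenAI-compatible AI provider."""
    base_url: str
    model: str
    api_key: str = ""  # only the OpenAI provider needs a real key

def resolve_provider(env: dict) -> ProviderConfig:
    """Pick the active provider from an env-style mapping.

    Defaults match the sample .env values shown in this README.
    """
    name = env.get("AI_PROVIDER", "ollama")
    if name == "openai":
        return ProviderConfig(
            base_url=env.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
            model=env.get("OPENAI_MODEL", "gpt-4o-mini"),
            api_key=env.get("OPENAI_API_KEY", ""),
        )
    if name == "lmstudio":
        return ProviderConfig(
            base_url=env.get("LMSTUDIO_BASE_URL", "http://localhost:1234/v1"),
            model=env.get("LMSTUDIO_MODEL", ""),
        )
    if name == "ollama":
        return ProviderConfig(
            base_url=env.get("OLLAMA_BASE_URL", "http://localhost:11434/v1"),
            model=env.get("OLLAMA_MODEL", "llama3.1:8b"),
        )
    raise ValueError(f"unknown AI_PROVIDER: {name!r}")
```

Because every provider is reduced to the same `ProviderConfig` shape, switching providers amounts to changing the single `AI_PROVIDER` variable.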
Create a .env file with your preferred AI provider configuration:
# AI Provider Selection (openai, lmstudio, ollama)
AI_PROVIDER=ollama
# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_MODEL=gpt-4o-mini
# LM Studio Configuration
LMSTUDIO_BASE_URL=http://localhost:1234/v1
LMSTUDIO_MODEL=your_local_model
# Ollama Configuration
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=llama3.1:8b
# Application Settings
PORT=8123
HOST=0.0.0.0
BOOKS_DIR=./books
UPLOAD_DIR=./uploads

| Provider | API Key Required | Base URL | Privacy | Cost |
|---|---|---|---|---|
| OpenAI | Yes | api.openai.com | Cloud | Pay-per-token |
| LM Studio | No | localhost:1234 | Local | Free |
| Ollama | No | localhost:11434 | Local | Free |
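All three base URLs in the table end in `/v1` because LM Studio and Ollama expose OpenAI-compatible endpoints, so a single request builder can serve every provider. A minimal sketch (the function name is hypothetical, not from the project):

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str, api_key: str = ""):
    """Build an OpenAI-compatible /chat/completions request.

    Works for all three providers in the table; only OpenAI
    requires the Authorization header.
    """
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body
```

For example, `build_chat_request("http://localhost:11434/v1", "llama3.1:8b", "Summarize this chapter")` produces a request any HTTP client can POST to a local Ollama instance.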
reader3/
├── 📁 frontend/            # Frontend application
│   ├── 📄 index.html       # Main application page
│   ├── 📄 reader.html      # Reader interface
│   ├── 📁 css/             # Stylesheets
│   ├── 📁 js/              # JavaScript modules
│   └── 📁 locales/         # Internationalization
├── 📁 api/                 # API modules
├── 📁 data/                # Data storage
├── 📁 docs/                # Documentation
├── 🐳 docker-compose.yml   # Docker configuration
├── 🐳 Dockerfile           # Container definition
├── 📄 server.py            # FastAPI application
├── 📄 reader3.py           # EPUB processing utility
├── 📄 ops.sh               # Operations script
└── 📄 requirements.txt     # Python dependencies
# Start development server
./ops.sh dev start
# Check service status
./ops.sh dev ps
# Stop development server
./ops.sh dev stop

We recommend using the operations script for production deployment:
# Quick production setup
./ops.sh prod start
# Or use Docker Compose directly
docker-compose -f docker-compose.prod.yml up -d
# Scale the application
docker-compose -f docker-compose.prod.yml up -d --scale reader3=3
# Check production status
./ops.sh prod ps

| Metric | Value |
|---|---|
| Startup Time | < 2s |
| Memory Usage | < 512MB (base) |
| Book Processing | < 5s per 1000 chapters |
| Concurrent Users | 100+ |
| API Response Time | < 500ms (local AI) |
Minimum:
- CPU: 2 cores
- RAM: 4GB
- Storage: 10GB
- OS: Linux/macOS/Windows
Recommended:
- CPU: 4 cores
- RAM: 8GB
- Storage: 50GB SSD
- OS: Linux with Docker
# Application management (development)
./ops.sh dev start # Start development server
./ops.sh dev stop # Stop development server
./ops.sh dev restart # Restart development server
./ops.sh dev ps # Check service status
# Application management (production)
./ops.sh prod start # Start production server
./ops.sh prod stop # Stop production server
./ops.sh prod restart # Restart production server
./ops.sh prod ps # Check production status
# Build production images
./ops.sh prod build # Build Docker images
# File management
./ops.sh ls # Show EPUB statistics
./ops.sh clean lru # Clean old files
./ops.sh clean lru 5         # Keep 5 most recent files

# Testing is not yet implemented (TODO)
# Future testing capabilities will include:
# - Unit tests
# - Integration tests
# - End-to-end tests
# - Test coverage reporting

# Manual testing can be performed by:
# - Uploading and reading EPUB files
# - Testing AI assistant functionality
# - Verifying provider switching
# - Checking responsive design

We welcome contributions! Please read our Contributing Guidelines for details.
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit your changes: `git commit -m 'Add amazing feature'`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a Pull Request
- Follow PEP 8 for Python code
- Write comprehensive tests for new features
- Update documentation for API changes
- Use the issue template for bugs
- Provide detailed reproduction steps
- Include system information and logs
- User Guide - Complete usage instructions
- API Reference - REST API documentation
- Developer Guide - Development setup
- Deployment Guide - Production deployment
- Troubleshooting - Common issues
- Local AI Options: Process sensitive content locally
- API Key Protection: Secure storage and masking
- Input Validation: Comprehensive input sanitization
- CORS Configuration: Proper cross-origin settings
- Rate Limiting: API request throttling
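The input-validation step can be illustrated with a small sketch. The function name and size cap here are hypothetical, not the project's actual rules; the checks rely on the fact that every EPUB is a ZIP archive, which begins with the `PK\x03\x04` magic bytes.

```python
MAX_UPLOAD_BYTES = 100 * 1024 * 1024  # hypothetical 100 MB cap

def validate_epub_upload(filename: str, data: bytes) -> tuple[bool, str]:
    """Basic sanity checks before accepting an uploaded EPUB.

    Verifies the extension, enforces a size limit, and confirms the
    ZIP magic bytes so arbitrary files can't masquerade as books.
    """
    if not filename.lower().endswith(".epub"):
        return False, "only .epub files are accepted"
    if len(data) > MAX_UPLOAD_BYTES:
        return False, "file too large"
    if not data.startswith(b"PK\x03\x04"):
        return False, "not a valid ZIP/EPUB container"
    return True, "ok"
```

Checks like these run before any parsing, so malformed uploads are rejected cheaply at the API boundary.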
Please report security issues privately to [email protected]
This project is licensed under the MIT License - see the LICENSE file for details.
- FastAPI - Modern web framework
- Alpine.js - Minimal JavaScript framework
- Tailwind CSS - Utility-first CSS framework
- Project Gutenberg - Free EPUB books
- TinyDB - Lightweight database
- Documentation: docs/
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: [email protected]