Enterprise-grade biomedical research behind a chat interface - Access PubMed, clinical trials, FDA drug labels, and run complex Python analyses through natural language. Powered by specialized biomedical data APIs.
Traditional biomedical research is fragmented across dozens of databases and platforms. Bio changes everything by providing:
- 🔬 Comprehensive Medical Data - PubMed articles, ClinicalTrials.gov data, FDA drug labels, and more
- 🔍 One Unified Search - Powered by Valyu's specialized biomedical data API
- 🐍 Advanced Analytics - Execute Python code in secure Daytona sandboxes for statistical analysis, pharmacokinetic modeling, and custom calculations
- 📊 Interactive Visualizations - Beautiful charts and dashboards for clinical data
- 🌐 Real-Time Intelligence - Web search integration for breaking medical news
- 🏠 Local AI Models - Run with Ollama or LM Studio for unlimited, private queries using your own hardware
- 🎯 Natural Language - Just ask questions like you would to a colleague
- PubMed & arXiv Search - Access to millions of scientific papers and biomedical research
- Clinical Trials Database - Search ClinicalTrials.gov for active and completed trials
- FDA Drug Labels - Access comprehensive drug information from DailyMed
- Drug Information - Detailed medication data, warnings, and contraindications
- Interactive Charts - Visualize clinical data, drug efficacy, and patient outcomes with publication-ready charts
- Python Code Execution - Run pharmacokinetic calculations, statistical tests, and ML models in secure sandboxes
- Multi-Source Research - Automatically aggregates data from multiple biomedical sources
- Export & Share - Download results, share analyses, and collaborate
Bio supports two distinct operating modes:
🌐 Production Mode (Default)
- Uses Supabase for authentication and database
- OpenAI/Vercel AI Gateway for LLM
- Rate limiting (5 queries/day for free tier)
- Billing and usage tracking via Polar
- Full authentication required
💻 Development Mode (Recommended for Local Development)
- No Supabase required - Uses local SQLite database
- No authentication needed - Auto-login as dev user
- Unlimited queries - No rate limits
- No billing/tracking - Polar integration disabled
- Works offline - Complete local development
- Ollama/LM Studio integration - Use local LLMs for privacy and unlimited usage
For Production Mode:
- Node.js 18+
- npm or yarn
- OpenAI API key
- Valyu API key (get one at platform.valyu.ai)
- Daytona API key (for code execution)
- Supabase account and project
- Polar account (for billing)
For Development Mode (Recommended for getting started):
- Node.js 18+
- npm or yarn
- Valyu API key (get one at platform.valyu.ai)
- Daytona API key (for code execution)
- Ollama or LM Studio installed (optional but recommended)
1. Clone the repository

   ```bash
   git clone https://github.com/yorkeccak/bio.git
   cd bio
   ```

2. Install dependencies

   ```bash
   npm install
   ```

3. Set up environment variables

   Create a `.env.local` file in the root directory.

   For Development Mode (Easy Setup):

   ```bash
   # Enable Development Mode (No Supabase, No Auth, No Billing)
   NEXT_PUBLIC_APP_MODE=development

   # Valyu API Configuration (Required)
   VALYU_API_KEY=your-valyu-api-key

   # Daytona Configuration (Required for Python execution)
   DAYTONA_API_KEY=your-daytona-api-key
   DAYTONA_API_URL=https://api.daytona.io  # Optional
   DAYTONA_TARGET=latest                   # Optional

   # Local LLM Configuration (Optional - for unlimited, private queries)
   OLLAMA_BASE_URL=http://localhost:11434   # Default Ollama URL
   LMSTUDIO_BASE_URL=http://localhost:1234  # Default LM Studio URL

   # OpenAI Configuration (Optional - fallback if local models unavailable)
   OPENAI_API_KEY=your-openai-api-key
   ```
   For Production Mode:

   ```bash
   # OpenAI Configuration (Required)
   OPENAI_API_KEY=your-openai-api-key

   # Valyu API Configuration (Required)
   VALYU_API_KEY=your-valyu-api-key

   # Daytona Configuration (Required)
   DAYTONA_API_KEY=your-daytona-api-key

   # Supabase Configuration (Required)
   NEXT_PUBLIC_SUPABASE_URL=your-supabase-url
   NEXT_PUBLIC_SUPABASE_ANON_KEY=your-supabase-anon-key
   SUPABASE_SERVICE_ROLE_KEY=your-service-role-key

   # Polar Billing (Required)
   POLAR_WEBHOOK_SECRET=your-polar-webhook-secret
   POLAR_UNLIMITED_PRODUCT_ID=your-product-id

   # App Configuration
   NEXT_PUBLIC_APP_URL=http://localhost:3000
   ```
4. Run the development server

   ```bash
   npm run dev
   ```

5. Open your browser

   Navigate to http://localhost:3000
   - Development Mode: You'll be automatically logged in as `dev@localhost`
   - Production Mode: You'll need to sign up/sign in
Development mode provides a complete local development environment without any external dependencies beyond the core APIs (Valyu, Daytona). It's perfect for:
- Local Development - No Supabase setup required
- Offline Work - All data stored locally in SQLite
- Testing Features - Unlimited queries without billing
- Privacy - Use local Ollama models, no cloud LLM needed
- Quick Prototyping - No authentication or rate limits
When `NEXT_PUBLIC_APP_MODE=development`:

1. Local SQLite Database (`.local-data/dev.db`)
   - Automatically created on first run
   - Stores chat sessions, messages, charts, and CSVs
   - Full schema matching the production Supabase tables
   - Easy to inspect with `sqlite3 .local-data/dev.db` (see the example after this list)

2. Mock Authentication
   - Auto-login as dev user (`dev@localhost`)
   - No sign-up/sign-in required
   - Unlimited tier access with all features

3. No Rate Limits
   - Unlimited chat queries
   - No usage tracking
   - No billing integration

4. LLM Selection (in priority order)
   - Ollama models (if installed) - used first, unlimited and free
   - LM Studio models (if installed) - alternative local option with a GUI
   - OpenAI (if an API key is provided) - fallback if no local models are available
   - See the local models indicator in the top-right corner for provider switching
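The dev database can be inspected with sqlite3's built-in meta-commands. A quick session (table names follow the schema described above; `chat_sessions` and `charts` are also referenced later in this README):

```bash
sqlite3 .local-data/dev.db
# Inside the sqlite3 shell:
.tables                        # list all tables
.schema chat_sessions          # show the DDL for one table
SELECT COUNT(*) FROM charts;   # sanity-check row counts
.quit
```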
Bio supports both Ollama and LM Studio for running local LLMs. Both are free, private, and work offline - choose based on your preference.
💡 You can use both! Bio detects both automatically and lets you switch between them with a provider selector in the UI.
Ollama provides unlimited, private LLM inference on your local machine - completely free and runs offline!
🚀 Quick Setup (No Terminal Required):
1. Download Ollama App
   - Visit ollama.com and download the app for your OS
   - Install and open the Ollama app
   - It runs in your menu bar (macOS) or system tray (Windows/Linux)

2. Download a Model
   - Open the Ollama app and browse available models
   - Download `qwen2.5:7b` (recommended - best for biomedical research with tool support)
   - Or choose from `llama3.1`, `mistral`, or `deepseek-r1`
   - That's it! Bio will automatically detect and use it
3. Use in Bio
   - Start the app in development mode
   - The Ollama status indicator appears in the top-right corner
   - It shows your available models; click to select which one to use
   - Icons show capabilities: 🔧 (tools) and 🧠 (reasoning)
⚡ Advanced Setup (Terminal):
If you prefer using the terminal:
```bash
# Install Ollama
brew install ollama  # macOS
# OR
curl -fsSL https://ollama.com/install.sh | sh  # Linux

# Start Ollama service
ollama serve

# Download recommended models
ollama pull qwen2.5:7b      # Recommended - excellent tool support
ollama pull llama3.1:8b     # Alternative - good performance
ollama pull mistral:7b      # Alternative - fast
ollama pull deepseek-r1:7b  # For reasoning/thinking mode
```

💡 It Just Works:
- Bio automatically detects Ollama when it's running
- No configuration needed
- Automatically falls back to OpenAI if Ollama is unavailable
- Switch between models anytime via the local models popup
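Before launching Bio, you can confirm the install end-to-end with a quick smoke test using standard Ollama CLI commands:

```bash
# Confirm the models you pulled are present
ollama list

# One-off prompt to confirm the model loads and responds
ollama run qwen2.5:7b "Summarize what a Phase 3 clinical trial is in one sentence."
```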
LM Studio provides a beautiful GUI for running local LLMs - perfect if you prefer visual interfaces over terminal commands!
🎨 Easy Setup with GUI:
1. Download LM Studio
   - Visit lmstudio.ai and download for your OS
   - Install and open LM Studio
   - The app provides a full GUI for managing models

2. Download Models
   - Click the 🔍 Search icon in LM Studio
   - Browse available models or search for:
     - `qwen/qwen3-14b` (recommended - excellent tool support)
     - `openai/gpt-oss-20b` (OpenAI's open-source model with reasoning)
     - `google/gemma-3-12b` (Google's model with good performance)
     - `qwen/qwen3-4b-thinking-2507` (reasoning model)
   - Click download and wait for it to complete
   - Models are cached locally for offline use
3. Start the Server
   - Click the LM Studio logo in your menu bar (top-right corner on macOS)
   - Select "Start Server on Port 1234..."
   - The server starts immediately - you'll see the status change to "Running"
   - That's it! Bio will automatically detect it
4. Important: Configure Context Window

   ⚠️ CRITICAL: This app uses extensive tool descriptions that require adequate context length.
   - In LM Studio, when loading a model, click the model settings (gear icon)
   - Set Context Length to at least 8192 tokens (16384+ recommended)
   - If you see errors like "tokens to keep is greater than context length", your context window is too small
   - Without sufficient context length, you'll get errors when the AI tries to use tools
   - This applies to all models in LM Studio - configure each model individually
5. Use in Bio
   - Start the app in development mode
   - The local models indicator appears in the top-right corner
   - If both Ollama and LM Studio are running, you'll see a provider switcher
   - Click to select which provider and model to use
   - Icons show capabilities: 🔧 (tools) and 🧠 (reasoning)
⚙️ Configuration:
- Default URL: `http://localhost:1234`
- Can be customized in `.env.local` with `LMSTUDIO_BASE_URL=http://localhost:1234`
💡 LM Studio Features:
- Real-time GPU/CPU usage monitoring
- Easy model comparison and testing
- Visual prompt builder
- Chat history within LM Studio
- No terminal commands needed
If you have both Ollama and LM Studio running, Bio automatically detects both and shows a beautiful provider switcher in the local models popup:
- Visual Selection: Click provider buttons with logos
- Seamless Switching: Switch between providers without reloading
- Independent Models: Each provider shows its own model list
- Automatic Detection: No manual configuration needed
The provider switcher appears automatically when multiple providers are detected!
Not all models support all features. Here's what works:
Tool Calling Support (Execute Python, search databases, create charts):
- ✅ qwen2.5, qwen3, deepseek-r1, deepseek-v3
- ✅ llama3.1, llama3.2, llama3.3
- ✅ mistral, mistral-nemo, mistral-small
- ✅ See full list in Ollama popup (wrench icon)
Thinking/Reasoning Support (Show reasoning steps):
- ✅ deepseek-r1, qwen3, magistral
- ✅ gpt-oss, cogito
- ✅ See full list in Ollama popup (brain icon)
What happens if a model lacks tool support?
- You'll see a friendly dialog explaining limitations
- Can continue with text-only responses
- Or switch to a different model that supports tools
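You can also check a model's capabilities from the terminal. Recent Ollama releases print a capabilities section (with entries like `tools` or `thinking`) in the output of `ollama show`; the exact fields vary by Ollama version:

```bash
# Inspect model metadata, including its capabilities section
ollama show qwen2.5:7b
```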
✅ Full Chat History
- All conversations saved to local SQLite
- Persists across restarts
- View/delete old sessions
✅ Charts & Visualizations
- Created charts saved locally
- Retrievable via markdown syntax
- Rendered from local database
✅ CSV Data Tables
- Generated CSVs stored in SQLite
- Inline table rendering
- Full data persistence
✅ No Hidden Costs
- No OpenAI API usage (when using Ollama)
- No Supabase database costs
- No authentication service costs
View Database:

```bash
sqlite3 .local-data/dev.db
# Then run SQL queries:
SELECT * FROM chat_sessions;
SELECT * FROM charts;
```

Reset Database:

```bash
rm -rf .local-data/
# Database recreated on next app start
```

Backup Database:

```bash
cp -r .local-data/ .local-data-backup/
```

Development → Production:
- Remove or comment out `NEXT_PUBLIC_APP_MODE=development`
- Add all Supabase and Polar environment variables
- Restart the server

Production → Development:
- Add `NEXT_PUBLIC_APP_MODE=development`
- Restart the server
- The local database is created automatically
Note: Your production Supabase data and local SQLite data are completely separate. Switching modes doesn't migrate data.
Sidebar won't open on homepage:
- Fixed! Sidebar now respects dock setting even on homepage
Local models not detected:
- Ollama:
  - Make sure Ollama is running: `ollama serve`
  - Check the Ollama URL in `.env.local` (default: `http://localhost:11434`)
  - Verify models are installed: `ollama list`
- LM Studio:
  - Click the LM Studio menu bar icon → "Start Server on Port 1234..."
  - Check the LM Studio URL in `.env.local` (default: `http://localhost:1234`)
  - Verify at least one model is downloaded in LM Studio
  - The server must be running for Bio to detect it
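You can also probe both servers directly with curl. Ollama lists installed models at `/api/tags`, and LM Studio serves an OpenAI-compatible API, so `/v1/models` lists available models. If these return JSON, Bio should be able to detect the provider:

```bash
# Ollama: returns a JSON list of installed models
curl http://localhost:11434/api/tags

# LM Studio: OpenAI-compatible endpoint listing available models
curl http://localhost:1234/v1/models
```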
Database errors:
- Delete and recreate: `rm -rf .local-data/`
- Check file permissions on the `.local-data/` directory

Auth errors:
- Verify `NEXT_PUBLIC_APP_MODE=development` is set
- Clear browser localStorage and cache
- Restart the dev server
This guide walks you through setting up Bio for production with full authentication, billing, and database functionality.
Valyu provides specialized biomedical data - PubMed articles, clinical trials, FDA drug labels, and more. Without this API key, the app cannot access biomedical data.
- Go to platform.valyu.ai
- Sign up for an account
- Navigate to API Keys section
- Create a new API key
- Copy your API key (starts with `valyu_`)
Used for AI chat responses, natural language understanding, and function calling.
- Go to platform.openai.com
- Create an account or sign in
- Navigate to API keys
- Create a new secret key
- Copy the key (starts with `sk-`)
Used for secure Python code execution, enabling data analysis, visualizations, and statistical calculations.
- Go to daytona.io
- Sign up for an account
- Get your API key from the dashboard
- Copy the key
- Go to supabase.com
- Create a new project
- Wait for the project to be provisioned (2-3 minutes)
- Go to Project Settings → API
- Copy these values:
  - `Project URL` → `NEXT_PUBLIC_SUPABASE_URL`
  - `anon public` key → `NEXT_PUBLIC_SUPABASE_ANON_KEY`
  - `service_role` key → `SUPABASE_SERVICE_ROLE_KEY` (keep this secret!)
- In Supabase Dashboard, go to SQL Editor
- Click New Query
- Copy the contents of `supabase/schema.sql` and run it
- In the SQL Editor, create another new query
- Copy the contents of `supabase/policies.sql` and run it
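If you prefer not to paste SQL into the dashboard, the same files can be run with `psql` against your project's Postgres connection string (found under Project Settings → Database; `SUPABASE_DB_URL` below is just a placeholder name):

```bash
# SUPABASE_DB_URL stands in for your project's connection string
psql "$SUPABASE_DB_URL" -f supabase/schema.sql
psql "$SUPABASE_DB_URL" -f supabase/policies.sql
```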
1. Go to Authentication → Providers in Supabase
2. Enable the Email provider (enabled by default)
3. Optional: Enable OAuth providers (Google, GitHub, etc.)
   - For Google: Add OAuth credentials from Google Cloud Console
   - For GitHub: Add OAuth app credentials from GitHub Settings
4. Go to Authentication → URL Configuration
5. Add your site URL and redirect URLs:
   - Site URL: `https://yourdomain.com` (or `http://localhost:3000` for testing)
   - Redirect URLs: `https://yourdomain.com/auth/callback`
Polar provides subscription billing and payments.
- Go to polar.sh
- Create an account
- Create your products:
- Pay Per Use plan (e.g., $9.99/month)
- Unlimited plan (e.g., $49.99/month)
- Copy the Product IDs
- Go to Settings → Webhooks
- Create a webhook:
  - URL: `https://yourdomain.com/api/webhooks/polar`
  - Events: Select all `customer.*` and `subscription.*` events
- Copy the webhook secret
If you don't want billing:
- Skip this section
- Remove billing UI from the codebase
- All users will have unlimited access
Create `.env.local` in your project root:

```bash
# App Configuration
NEXT_PUBLIC_APP_MODE=production
NEXT_PUBLIC_APP_URL=https://yourdomain.com

# Valyu API (Required - powers all biomedical data)
# Get yours at: https://platform.valyu.ai
VALYU_API_KEY=valyu_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# OpenAI Configuration
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Daytona Configuration (Code Execution)
DAYTONA_API_KEY=your-daytona-api-key
DAYTONA_API_URL=https://api.daytona.io
DAYTONA_TARGET=latest

# Supabase Configuration
NEXT_PUBLIC_SUPABASE_URL=https://xxxxxxxxxxxxx.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...

# Polar Billing (Optional - remove if not using billing)
POLAR_WEBHOOK_SECRET=whsec_xxxxxxxxxxxxxxxxxxxxx
POLAR_UNLIMITED_PRODUCT_ID=prod_xxxxxxxxxxxxxxxxxxxxx
```

1. Push your code to GitHub
2. Go to vercel.com
3. Import your repository
4. Add all environment variables from `.env.local`
5. Deploy!
Important Vercel Settings:
- Framework Preset: Next.js
- Node.js Version: 18.x or higher
- Build Command: `npm run build`
- Output Directory: `.next`
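If you use the Vercel CLI rather than the dashboard, environment variables can be added one at a time; `vercel env add` prompts for each value. A sketch for two of the keys:

```bash
# Prompts interactively for the value, scoped to production
vercel env add VALYU_API_KEY production
vercel env add OPENAI_API_KEY production
# ...repeat for the remaining variables, then deploy
vercel --prod
```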
- Netlify: Similar to Vercel
- Railway: Good for full-stack apps
- Self-hosted: Use Docker with PM2 or similar
1. Test Authentication:
   - Visit your site
   - Try signing up with email
   - Check that the user appears in the Supabase Users table

2. Test Polar Webhooks:
   - Subscribe to a plan
   - Check the Supabase users table for a `subscription_tier` update
   - Check the Polar dashboard for webhook delivery

3. Test Biomedical Data:
   - Ask a question like "What are recent clinical trials for melanoma?"
   - Verify the Valyu API is returning data
   - Check that charts and CSVs are saving to the database
Authentication Issues:
- Verify Supabase URL and keys are correct
- Check redirect URLs in Supabase dashboard
- Clear browser cookies/localStorage and try again
Database Errors:
- Verify all tables were created successfully
- Check RLS policies are enabled
- Review Supabase logs for detailed errors
Billing Not Working:
- Verify Polar webhook secret is correct
- Check Polar dashboard for webhook delivery status
- Review app logs for webhook processing errors
No Biomedical Data:
- Verify Valyu API key is set correctly in environment variables
- Check Valyu dashboard for API usage/errors
- Test API key with a curl request to Valyu
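A minimal connectivity check with curl might look like the following. The endpoint path and payload are illustrative assumptions, not confirmed by this README; check Valyu's current API reference at platform.valyu.ai before relying on them:

```bash
# Hypothetical request shape - verify the endpoint and fields against Valyu's docs
curl -X POST https://api.valyu.network/v1/deepsearch \
  -H "Content-Type: application/json" \
  -H "x-api-key: $VALYU_API_KEY" \
  -d '{"query": "recent clinical trials for melanoma"}'
# An auth error (401/403) means the key is wrong; a JSON result means it works
```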
Rate Limiting:
- Check the `user_rate_limits` table in Supabase
- Verify the user's subscription tier is set correctly
- Review the rate limit logic in `/api/rate-limit`
Do:
- Keep `SUPABASE_SERVICE_ROLE_KEY` secret (never expose it client-side)
- Use environment variables for all secrets
- Enable RLS on all Supabase tables
- Regularly rotate API keys
- Use HTTPS in production
- Enable Supabase Auth rate limiting
Don't:
- Commit `.env.local` to git (add it to `.gitignore`; see the example below)
- Expose service role keys in client-side code
- Disable RLS policies
- Use the same API keys for dev and production
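For reference, a minimal `.gitignore` covering the secrets and local state this README creates (the `.local-data/` entries keep the dev SQLite database and its backups out of version control):

```gitignore
# Local secrets - never commit
.env.local
.env*.local

# Dev-mode SQLite database and backups
.local-data/
.local-data-backup/
```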
Supabase:
- Monitor database usage in Supabase dashboard
- Set up database backups (automatic in paid plan)
- Review auth logs for suspicious activity
Polar:
- Monitor subscription metrics
- Handle failed payments
- Review webhook logs
Application:
- Set up error tracking (Sentry, LogRocket, etc.)
- Monitor API usage (Valyu, OpenAI, Daytona)
- Set up uptime monitoring (UptimeRobot, Better Uptime)
Try these powerful queries to see what Bio can do:
- "What are the latest clinical trials for CAR-T therapy in melanoma?"
- "Find recent PubMed papers on CRISPR gene editing safety"
- "Calculate the half-life of warfarin based on these concentrations"
- "Search for drug interactions between metformin and lisinopril"
- "Analyze Phase 3 clinical trial data for immunotherapy drugs"
- "Create a chart comparing efficacy rates of different COVID-19 vaccines"
With Local Models (Ollama/LM Studio):
- Run unlimited queries without API costs
- Keep all your medical research completely private
- Perfect for sensitive patient data analysis
- Choose your preferred interface: terminal (Ollama) or GUI (LM Studio)
- Frontend: Next.js 15 with App Router, Tailwind CSS, shadcn/ui
- AI: OpenAI GPT-5 with function calling + Ollama/LM Studio for local models
- Data: Valyu API for comprehensive biomedical data
- Code Execution: Daytona sandboxes for secure Python execution
- Visualizations: Recharts for interactive charts
- Real-time: Streaming responses with Vercel AI SDK
- Local Models: Ollama and LM Studio integration for private, unlimited queries
- Secure API key management
- Sandboxed code execution via Daytona
- No storage of sensitive medical data
- HTTPS encryption for all API calls
- HIPAA-compliant architecture (when self-hosted)
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.
- Built with Valyu - The unified biomedical data API
- Powered by Daytona - Secure code execution
- UI components from shadcn/ui
Made with ❤️ for biomedical researchers