Spec2RTL is an autonomous AI assistant that transforms high-level hardware specifications into verified, ready-to-use Verilog RTL and a corresponding SystemVerilog testbench. It leverages a multi-agent workflow, a self-correction loop, and a custom knowledge base to automate the most tedious parts of the hardware design lifecycle.
This isn't just a code generator; it's a collaborative partner that plans, generates, validates, critiques, and refines its own work, presenting a polished result for final human approval.
- Specification-Driven Design: Takes a natural language markdown file as the single source of truth.
- Knowledge-Aware Generation (RAG): Utilizes a custom knowledge base (e.g., for simulator-specific rules or project coding standards) to generate more accurate and compliant code.
- Autonomous Workflow:
  - AI-Powered Planning: First, it creates a high-level plan for how it will tackle the generation tasks.
  - Parallel Code Generation: Generates RTL and testbench code concurrently for maximum speed.
- Self-Correction Loop:
  - Automated Validation: Uses `iverilog` to instantly check the generated code for syntax errors.
  - AI Self-Critique: An AI agent acts as a peer reviewer, checking the code for logical flaws, coverage gaps, and inconsistencies.
  - Recursive Debugging: If any errors are found (either by the compiler or the AI critique), the system automatically attempts to fix them and re-validates, looping until the code is correct.
- Human-in-the-Loop: You are the final authority. The AI presents the validated, verified code for your approval before any files are written.
Spec2RTL orchestrates a sophisticated multi-agent workflow:
```mermaid
flowchart TD
    subgraph "Phase 1: Planning"
        A[Select Spec] --> B(Retrieve Knowledge via RAG)
        B --> C(AI Creates a Plan)
    end
    subgraph "Phase 2: Generation"
        C --> D(Generate RTL & Testbench in Parallel)
    end
    subgraph "Phase 3: Autonomous Verification"
        D --> E{Compile Code}
        E -- Syntax OK --> F{AI Code Review}
        E -- Syntax Error --> G{Self-Correction AI}
        F -- Logical Issue --> G
        G --> E
    end
    subgraph "Phase 4: Finalization"
        F -- No Issues --> H(Human Approval)
        H -- Approve --> I(Write Files & Sim Script)
        I --> Z[Done]
        H -- Reject --> Z
    end
```
- Python: 3.10 or newer.
- A C Compiler: `gcc` is required for one of the dependencies.
- Icarus Verilog: The tool used for code validation.
  - On Ubuntu/Debian: `sudo apt-get install iverilog`
  - On macOS (with Homebrew): `brew install icarus-verilog`
- An AI Backend: You need access to an AI model, either locally via Ollama or in the cloud via Azure OpenAI.
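Before installing, you can sanity-check that the toolchain is in place. These commands only print versions; the exact output will vary by system.

```bash
# Confirm the required tools are available
python3 --version   # should report 3.10 or newer
gcc --version       # needed to build one of the Python dependencies
iverilog -V         # Icarus Verilog, used for automated validation
```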
First, clone the repository:

```bash
git clone https://github.com/cirkitly/spec2rtl.git
cd spec2rtl
```

It's highly recommended to use a virtual environment.
```bash
# Create a virtual environment
python3 -m venv env

# Activate it
source env/bin/activate

# Install all required Python packages
pip install -r requirements.txt
```

The application uses a .env file to manage API keys and endpoints. First, create your own .env file by copying the example:

```bash
cp .env.example .env
```

Next, open the new .env file and fill it out according to one of the options below.
Edit your .env file to look like this, replacing the placeholders with your actual Azure credentials.
```
# .env file for Azure
LLM_PROVIDER="azure"
AZURE_OPENAI_ENDPOINT=https://<your-resource-name>.openai.azure.com/
AZURE_OPENAI_API_KEY=<your-azure-openai-key>
AZURE_OPENAI_DEPLOYMENT=<your-deployment-name>
AZURE_OPENAI_API_VERSION=2024-05-01-preview
```

Note: AZURE_OPENAI_DEPLOYMENT is the custom name you gave the model when you deployed it in Azure.
If you prefer to run models locally, first download and run Ollama. Then, pull the required models from your terminal:
```bash
# 1. Download the main language model (for writing code)
ollama pull llama3

# 2. Download the embedding model (for RAG)
ollama pull mxbai-embed-large
```

The .env.example is already set up for Ollama, so you just need to ensure LLM_PROVIDER="ollama" is set in your .env file.
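For reference, the Ollama configuration is minimal; the snippet below mirrors the Azure example above and is the only setting that needs to change:

```
# .env file for Ollama
LLM_PROVIDER="ollama"
```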
Place your hardware module specifications as .md files inside the specs/ directory. A uart_spec.md is included as an example.
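A spec is just plain-language markdown describing the module. The excerpt below is a hypothetical illustration (it is not the bundled uart_spec.md); the module name, ports, and behavior shown are assumptions:

```markdown
# 8-bit Counter Specification (hypothetical example)

## Ports
- clk   : input, rising-edge clock
- rst_n : input, active-low synchronous reset
- en    : input, count enable
- count : output [7:0], current count value

## Behavior
On each rising clock edge, clear count to 0 when rst_n is low;
otherwise, when en is high, increment count by 1 and wrap at 255.
```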
Place any relevant documentation, coding standards, or compatibility rules as .txt files in the knowledge_source/ directory. The AI will use this information to generate better, more compliant code.
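Knowledge entries are free-form text. The snippet below is a hypothetical example of the kind of rules you might record:

```text
Project coding standards (hypothetical example)

- Keep RTL within the Verilog-2005 subset supported by Icarus Verilog.
- Use snake_case for signal names and one module per file.
- Testbenches may use the SystemVerilog features enabled by iverilog -g2012.
```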
From the project's root directory, run the main script:
```bash
python3 main.py
```

- Select a Spec: The tool will list all specifications it found. Enter the number corresponding to the one you want to work on.
- Watch it Work: The AI will create a plan, generate the code, and run its validation and self-critique loops. You will see status updates in real-time.
- Approve the Result: Once the AI is satisfied with its work, it will present the final Verilog RTL and SystemVerilog testbench for your review. If you approve, it will save the files.
The generated files will be placed in a new directory inside output/.
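The exact file names depend on the spec, but for the bundled UART example the layout will look roughly like this (everything other than output/uart_spec/ and run_sim.sh is illustrative):

```text
output/uart_spec/
├── uart.v          # generated Verilog RTL (illustrative name)
├── uart_tb.sv      # generated SystemVerilog testbench (illustrative name)
└── run_sim.sh      # simulation script used below
```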
```bash
# Navigate to the output directory
cd output/uart_spec

# Run the simulation script
./run_sim.sh
```

You should see the iverilog compilation and the testbench execution results printed to your console. Congratulations, you've gone from spec to simulation in minutes!
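The run_sim.sh script is generated by the tool, so its exact contents may differ; a typical Icarus Verilog flow it might wrap looks like this (file names are assumptions):

```bash
#!/usr/bin/env bash
# Sketch of a typical iverilog flow; the generated script may differ.
set -e

# Compile the RTL and the SystemVerilog testbench (-g2012 enables SV support)
iverilog -g2012 -o sim uart.v uart_tb.sv

# Run the compiled simulation
vvp sim
```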