Note
This repo is almost entirely slop/vibe code, since I know almost nothing of C++.
A JUCE plugin that exposes the neural DX7 model as a MIDI device, generating DX7 SysEx patches using the trained neural network.
- Neural DX7 patch generation using libtorch
- Real-time MIDI SysEx output
- Interactive latent space control
- Embedded model (no external files needed)
- Supports VST3, AU, and Standalone formats

Requirements:

- CMake 3.15+
- C++17 compatible compiler
- LibTorch (PyTorch C++ library)
- JUCE framework (included as submodule)

Quick start:

```bash
# Install system dependencies (Ubuntu/Debian)
make deps

# Set up JUCE and dependencies
make setup

# Create a dummy model file for testing
make dummy-model

# Build the project
make build

# Run the standalone version
make run
```

To set things up manually instead:

- Clone JUCE as a submodule:

  ```bash
  git submodule add https://github.com/juce-framework/JUCE.git JUCE
  git submodule update --init --recursive
  ```
- Install LibTorch:
  - Download from https://pytorch.org/get-started/locally/
  - Extract and set `CMAKE_PREFIX_PATH` to the LibTorch directory

- Generate the model file:
  - Follow instructions in `models/README.md`
  - Place `dx7_vae_model.pt` in the `models/` directory
 
Then configure and build:

```bash
mkdir build
cd build
cmake .. -DCMAKE_PREFIX_PATH=/path/to/libtorch
make -j8
```

- Load the plugin in your DAW or run the standalone version
- Use the 8 sliders to control the neural model's latent space (see the sketch after this list)
- Click "Generate & Send" to send DX7 patches via MIDI SysEx
- Click "Randomize" to set random latent values
- Connect to a DX7, Dexed, or other compatible FM synthesizer
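
For a sense of what happens when you move a slider and click "Generate & Send": the 8 slider values form a latent vector, the network decodes it into 155 DX7 voice parameters, and the result is framed as SysEx. The sketch below is illustrative only; it assumes the exported TorchScript module exposes a `decode` method taking a (1, 8) tensor and returning 155 values in [0, 127], and the `latentToPatch` helper is not the plugin's actual API.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>
#include <torch/script.h>

// Decode an 8-dimensional latent vector into 155 DX7 voice parameters.
// Assumes the TorchScript module has a `decode` method (an assumption
// about the exported model, not a documented interface).
std::array<uint8_t, 155> latentToPatch(torch::jit::script::Module& model,
                                       const std::array<float, 8>& sliders)
{
    torch::NoGradGuard noGrad;  // inference only, no autograd bookkeeping

    auto z = torch::tensor(std::vector<float>(sliders.begin(), sliders.end()))
                 .reshape({1, 8});
    auto out = model.get_method("decode")({z}).toTensor().flatten();

    // Quantize to the 7-bit parameter range expected by the DX7.
    std::array<uint8_t, 155> params{};
    for (int i = 0; i < 155; ++i)
        params[i] = static_cast<uint8_t>(
            std::clamp(std::lround(out[i].item<float>()), 0L, 127L));
    return params;
}
```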

Available make targets:

- `make all` - Set up and build the project
- `make setup` - Initialize submodules and dependencies
- `make build` - Build the project
- `make clean` - Clean the build directory
- `make install` - Install built plugins
- `make model` - Check for the required model file
- `make deps` - Install system dependencies
- `make dummy-model` - Create a dummy model for testing
- `make run` - Run the standalone application
- `make package` - Create a distribution package

The main components:

- `DX7VoicePacker`: Handles DX7 SysEx format encoding/decoding (sketched below)
- `NeuralModelWrapper`: Manages libtorch model inference
- `MidiGenerator`: Handles MIDI output and device management (sketched below)
- `PluginProcessor`/`Editor`: JUCE plugin interface
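
To make the `DX7VoicePacker` and `MidiGenerator` roles concrete, here is a minimal sketch of the two steps they cover: framing 155 voice parameters as a DX7 single-voice SysEx dump and pushing the bytes through a JUCE MIDI output. The function names and the first-available-device choice are illustrative, not the actual class API.

```cpp
#include <array>
#include <cstdint>
#include <numeric>
#include <vector>
#include <juce_audio_devices/juce_audio_devices.h>

// Frame a 155-byte DX7 single-voice parameter dump as SysEx:
// F0 43 0n 00 01 1B <155 data bytes> <checksum> F7
std::vector<uint8_t> packSingleVoice(const std::array<uint8_t, 155>& params,
                                     uint8_t channel = 0)
{
    std::vector<uint8_t> msg {
        0xF0,                                 // SysEx start
        0x43,                                 // Yamaha manufacturer ID
        static_cast<uint8_t>(0x00 | channel), // sub-status 0 + device number
        0x00,                                 // format 0: single voice
        0x01, 0x1B                            // byte count 155 as two 7-bit bytes
    };
    msg.insert(msg.end(), params.begin(), params.end());

    // Checksum: two's complement of the 7-bit sum of the data bytes.
    unsigned sum = std::accumulate(params.begin(), params.end(), 0u);
    msg.push_back(static_cast<uint8_t>((128 - (sum & 0x7F)) & 0x7F));
    msg.push_back(0xF7);                      // SysEx end
    return msg;
}

// Send the framed message to the first available MIDI output.
void sendSysEx(const std::vector<uint8_t>& msg)
{
    auto devices = juce::MidiOutput::getAvailableDevices();
    if (devices.isEmpty())
        return;
    if (auto out = juce::MidiOutput::openDevice(devices[0].identifier))
        out->sendMessageNow(juce::MidiMessage(msg.data(), (int) msg.size()));
}
```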
The neural model is embedded as binary data in the executable, so no external model files are needed at runtime.
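
The embedding could work like the sketch below, assuming the `.pt` bytes are compiled in as a data symbol (e.g. via JUCE's BinaryData or an objcopy step); the `dx7_vae_model_data`/`dx7_vae_model_size` names are placeholders. `torch::jit::load` accepts any `std::istream`, so the model never has to touch the filesystem:

```cpp
#include <cstddef>
#include <sstream>
#include <string>
#include <torch/script.h>

// Placeholder symbols for the embedded .pt bytes; the real names depend
// on how the binary data is generated (JUCE BinaryData, objcopy, ...).
extern const char*       dx7_vae_model_data;
extern const std::size_t dx7_vae_model_size;

torch::jit::script::Module loadEmbeddedModel()
{
    // Wrap the in-memory bytes in a stream and deserialize from it.
    std::istringstream stream(std::string(dx7_vae_model_data,
                                          dx7_vae_model_size));
    auto module = torch::jit::load(stream);
    module.eval();  // inference mode (disables dropout and the like)
    return module;
}
```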
To use a real trained model instead of the dummy:
- Train the model using the Python code in the parent directory:

  ```bash
  cd ../projects/dx7_vae
  python experiment.py
  ```

- Export it to TorchScript format:

  ```python
  import torch
  from agoge import InferenceWorker

  # Load your trained model
  model = InferenceWorker('hasty-copper-dogfish', 'dx7-vae', with_data=False).model

  # Convert to TorchScript
  scripted_model = torch.jit.script(model)

  # Save the scripted model
  scripted_model.save('dx7_vae_model.pt')
  ```

- Replace the dummy model file with the real one.