- Vision: Build Your Own Superintelligence
- Mission: Empower all enterprises to create AI with expertise and memory.
- Values:
- 🎀 Intensely Cute 🎀
- Laser-focused
- Customers Customers Customers
Copied from the Lamini Roadmap slide for convenience. For up-to-date descriptions, please review the slide:
- Repeatable revenue: $3M ARR, 20 customers
- Lighthouse customer: 1 large enterprise promotes us, 10 customers follow
- Ease of use: 10 self-serve users convert to enterprise
- Deeptech moat: uphold Moore’s + scaling laws
- AMD moat: memory tuning/RAFT on AMD’s roadmap
Public Eng. Prod. Metrics Dashboard: Still a work in progress; meant to help sales and product understand Lamini Cloud. The dashboard is public and can be shared with customers if deemed appropriate.
The dashboard has a weekly report; if you would like to receive it, please contact @yaxiong.
TODO: Define metrics needed to support each of the above company objectives.
We are looking to support the most popular models and list them explicitly; a list of orgs to examine:
-
Cursor is recommended for development work. You should receive an invite in your
lamini.ai email inbox to join Cursor.
- Cursor is a fork of VSCode, so most VSCode features still work in Cursor.
- Cursor adds a chat panel to the right of the window.
- cmd+k to write commands for Cursor.
- cmd+l to put the current line or selected text into the chat context.
- Use chat without any context to ask questions, just like using GPT or Claude; that part is no different.
Install Docker Desktop on Mac, click the Docker icon, and click Settings -> Resources to set the resources:
Install VSCode on Mac and install the recommended extensions for VSCode; you'll see a popup panel when you first launch VSCode and open the lamini-platform repo.
# Used for executing `./lamini` script
brew install bash
git clone git@github.com:lamini-ai/lamini-platform.git
cd lamini-platform
# Include the provided git config file
git config --global include.path "$(pwd)/.gitconfig"
# Launch local lamini instance
./lamini up
# Open localhost:5001 to access the Lamini Platform.
# If localhost:5001 shows the normal Lamini Platform web UI, and what you want to test can be done on the web UI,
# then you can ignore most of the warning/error logs from the lamini up command.

Everything is written in Python; we use Python 3.10.6 and venv.
The official code editor is VSCode.
Install the recommended extensions.
Install the code command-line launcher. You can then change to the desired directory
and run code . to open the current directory in VSCode.
cd lamini-platform
code .

First, install pyenv. On Ubuntu:
curl https://pyenv.run | bash

Or on Mac:
brew install pyenv
brew install pyenv-virtualenv

After installation finishes, follow the printed instructions and add the following lines to your shell's rc file:
# pyenv and pyenv-virtualenv
export PYENV_ROOT="$HOME/.pyenv"
[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

Restart your shell for the changes to take effect.
On Ubuntu, first install the packages required to build Python 3.10.6:
# Install packages required for installing python 3.10.6
sudo apt install -y libbz2-dev libncurses-dev libffi-dev libreadline-dev libsqlite3-dev lzma-dev liblzma-dev

Then you can install and activate Python on Ubuntu or Mac as follows:
# Install the desired python version if not installed yet
pyenv install 3.10.6
# Activate it as the default version on your host
pyenv global 3.10.6

Different subfolders have different dependencies. For example, lamini-platform/ml has requirements.txt and requirements-torch.txt. Another example is lamini-platform/infra, which has only a single requirements.txt. For the folder that you plan to work out of, navigate into it and create a Python virtual environment, which allows you to keep dependencies separate from each other. For more information about Python virtual environments, check the official documentation.
git clone git@github.com:lamini-ai/lamini-platform.git

For the subfolder you're working out of (e.g. ml, sdk, infra, etc.):
cd lamini-platform/<subfolder>
python -m venv .venv
source .venv/bin/activate

You should see a prefix of (.venv) on your command line. To leave the virtual environment, run deactivate; the prefix should disappear.
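As a quick sanity check, you can confirm from Python itself whether a virtual environment is active. This is a minimal sketch, not part of the repo:

```python
import sys

def in_virtualenv() -> bool:
    """Return True when running inside a venv.

    Inside a venv, sys.prefix points at the .venv directory,
    while sys.base_prefix still points at the base interpreter.
    """
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```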
After activating your virtual environment, install pip packages included in the requirements files.
First, install the system packages that mpi4py depends on (needed by ml/requirements.txt).
On Mac:
brew install mpi4py

On Linux:
sudo apt install -y libopenmpi-dev

Then install the packages from all requirements files for your project subfolder:
- If inside lamini-platform/ml/:
pip install \
    --requirement requirements.txt \
    --requirement requirements-torch.txt
- If inside lamini-platform/infra/:
pip install \
    --requirement requirements.txt
- If inside lamini-platform/sdk/:
pip install \
    --requirement requirements.txt
- If inside lamini-platform/ (root project directory):
pip install \
    --requirement requirements-pytest.txt

See instructions in deployments/testing/README.md.
- Write/change a SQLAlchemy file.
- Import the SQLAlchemy class in the __init__ file.
- Write a Postgres *.sql migration file. Start by generating this file using dbmate:
cd platform/infra && dbmate new <YOUR_MIGRATION_NAME>
- Write unit tests to verify your Python code writes correct data to the table.
- See test_download.py for an example.
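Since tests currently verify table writes against SQLite rather than Postgres, the shape of such a unit test can be sketched with the stdlib sqlite3 module. The table and column names below are hypothetical, for illustration only; real tests exercise the actual SQLAlchemy models:

```python
import sqlite3

def write_and_read_status(status: str):
    """Insert a row and read it back; stands in for the code under test.

    The 'downloads' table is a hypothetical example, not a real schema.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE downloads (id INTEGER PRIMARY KEY, status TEXT)")
    conn.execute("INSERT INTO downloads (status) VALUES (?)", (status,))
    conn.commit()
    rows = conn.execute("SELECT status FROM downloads").fetchall()
    conn.close()
    return rows

def test_writes_correct_data():
    # Verify the row landed in the table with the expected values.
    assert write_and_read_status("done") == [("done",)]
```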
After adding new migration scripts to infra/db/migrations,
you need to test the changes with ./lamini up.
./lamini up calls docker compose to launch Lamini Platform locally.
Afterwards, use docker ps to examine the list of Docker containers and confirm they are all up.
You need to docker exec into the database container and examine the database schema directly.
docker exec -it lamini-platform-database-oltp-1 bash
psql postgresql://postgres:secret@localhost:5432/default_db
default_db=# \dt

Use \d to verify a table's schema, and select statements to verify the values.
default_db=# \d <table-name>
select * from <table-name>

NOTE: The dev-migrations-1 container will not appear in the docker ps output above.
It runs dbmate on the SQL scripts in infra/db/migrations.
After the migration is done, dev-migrations-1 exits.
./lamini up calls docker compose, which saves container state after containers are stopped.
This can complicate changing database schemas during development. For example, suppose you created a db migration .sql file under infra/db/migrations but then need to revise the database schema. You would normally need to create new migration .sql files; that is OK, but too noisy.
In this case, you can modify your new db migration .sql file, then remove the database container and recreate it:
docker ps -a
docker rm lamini-platform-database-oltp-1

NOTE: DO NOT EDIT an already merged db migration sql file. That could cause your change to not be reflected when deployed, if the installation site has already executed the old version of your migration file. Always create a new db migration sql file.
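For reference, dbmate migration files contain paired up/down sections; a minimal sketch of what a newly generated migration might look like (the table and columns are hypothetical, for illustration only):

```sql
-- migrate:up
CREATE TABLE example_jobs (
    id SERIAL PRIMARY KEY,
    status TEXT NOT NULL
);

-- migrate:down
DROP TABLE example_jobs;
```

dbmate prefixes the filename with a timestamp, which is how migration ordering is preserved across deployments.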
TODO: Introduce a test fixture to launch a Postgres instance during tests, so that we can check Postgres directly. Right now, tests have to use SQLite to verify data written to tables.
We are dogfooding the Google Python Style Guide, without enforcement: VSCode has pylint in the recommended extensions, and pylintrc is copied from the Google Python style guide. The lint check in lint.yaml is advisory, i.e., CI still passes even when violations are detected.
This allows debugging python code with normal vscode and other tooling.
NOTE: You must install all of the requirements.txt files in your development environment.
NOTE #2: This might not work as expected; file an issue at Dev Env.
Unit tests live under the corresponding directories of the source code being tested. Set the necessary environment variable, TEST_ONLY_LAMINI_CONFIG_FILE_PATH, to run the unit tests.
export TEST_ONLY_LAMINI_CONFIG_FILE_PATH=infra/configs/llama_config.yaml
pytest infra/test sdk/test ...

All tests can be run with the lamini script.
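A test that depends on this variable might read it like so; this is a sketch, and the helper name is hypothetical:

```python
import os

def load_config_path() -> str:
    """Read the config path the unit tests rely on.

    Fails early with a clear message if the variable is missing,
    rather than producing a confusing error deep inside a test.
    """
    path = os.environ.get("TEST_ONLY_LAMINI_CONFIG_FILE_PATH")
    if not path:
        raise RuntimeError(
            "Set TEST_ONLY_LAMINI_CONFIG_FILE_PATH before running pytest"
        )
    return path

# Mirrors the export shown above.
os.environ["TEST_ONLY_LAMINI_CONFIG_FILE_PATH"] = "infra/configs/llama_config.yaml"
print(load_config_path())
```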
The lamini script at the root of the repository is auto-generated using a tool called bashly;
more details are under cmd. lamini runs tests in a container image built from a Dockerfile.
You can use bearer=test_token to bypass the usual restrictions, for example feature gating.
Make sure you have already installed all pip packages.
These tests run against a real Lamini Platform.
These tests run in all-staging-actions.yaml.
You first need to launch a complete Lamini Platform with Docker containers to test your changes:
- Spin up Lamini Platform locally with docker-compose:
./lamini up
You can specify the GPU server type with export BASE_NAME=amd or export BASE_NAME=nvidia:
# Launch lamini platform on amd GPU server.
export BASE_NAME=amd
docker compose up --build
# Launch lamini platform on nvidia GPU server.
export BASE_NAME=nvidia
docker compose up --build
- Navigate to localhost:3000 to get a LOCAL_KEY API key. Log in through Google and navigate to the account tab.
- Production and staging keys can be found at app.lamini.ai and staging.powerml.co.
- Set up the Lamini Platform API endpoint and API key through environment variables. After executing the following lines, the integration tests will make RPC calls to the staging endpoint using the supplied API key:
export LAMINI_API_URL=https://staging.lamini.ai
export LAMINI_API_KEY=[API_KEY]
- Run tests with pytest directly:
pytest e2e_test
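The steps above configure the tests entirely through environment variables; a sketch of how a client might assemble the endpoint URL and auth header from them (the Bearer header format is an assumption for illustration, not confirmed by this doc):

```python
import os

def lamini_request_config():
    """Build the endpoint URL and Authorization header from the
    LAMINI_API_URL / LAMINI_API_KEY environment variables."""
    url = os.environ["LAMINI_API_URL"]
    key = os.environ["LAMINI_API_KEY"]
    # Bearer auth is an assumption, used here only for illustration.
    return url, {"Authorization": f"Bearer {key}"}

# Example values; real runs use the exports shown above.
os.environ.setdefault("LAMINI_API_URL", "https://staging.lamini.ai")
os.environ.setdefault("LAMINI_API_KEY", "example-key")
print(lamini_request_config())
```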
Coverage reports are automatically generated by GitHub CI and reported to Coveralls.io. Coveralls.io settings details:
- STATUS UPDATES / CHECKS: This posts a report to the pull request; if the coverage percentage is reduced by more than 2%, or the overall coverage percentage is < 65%, the check reports failure.
- PULL REQUESTS: No report to the pull request.
- Target coverage percentage: 70%+. We only monitor the overall coverage percentage from the badge at the top of this page. If there is a significant drop, we'll address it in an intense one-week test enhancement.
- Build and push a container image with your changes. Follow the instructions in the section Build and push container image.
- Update testing/configs/helm_config.yaml for your target K8s cluster.
- Run deployments/helm-generation/run_python.sh testing <image-tag> to generate helm charts under deployments/{lamini,persistent-lamini}; <image-tag> here is only the tag, not the image repo.
- For a fresh installation, run:
helm install persistent-lamini deployments/persistent-lamini --namespace lamini --create-namespace
helm install lamini deployments/lamini --namespace lamini --create-namespace
- For an upgrade, run:
helm uninstall lamini --namespace lamini
helm install lamini deployments/lamini --namespace lamini --create-namespace
This just uninstalls and reinstalls the charts.
To add credits to a user's account on production or staging, use the add_credits.py script:
python3 devops/add_credits.py

If you run into a psycopg2 error, you may need to install the psycopg2 package first:
pip3 install psycopg2

To remove unused imports:
autoflake --ignore-init-module-imports --remove-all-unused-imports --in-place **/*.py

Flaky tests fail sometimes, but not always. They indicate either that the code cannot handle exceptional situations in the runtime environment, or that the tests are written in an overly strict manner.
Mark flaky tests with reruns, if you do not have time to fix it immediately:
@pytest.mark.flaky(reruns=1, reason="issues/1234")
def test_foo():
    ...

If you need to rerun a test more than 2 times (3 times in total) to make it pass reliably, you should consider the test broken.
Specifying reruns on pytest command line, pytest --reruns 3 test_file.py,
does not affect the above marker. We can also set reruns = 3 in pytest.ini,
which also does not affect the above marker.
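The rerun semantics can be illustrated with a plain-Python sketch (no pytest plugin involved): a test that fails on its first attempt still passes under reruns=1, because the runner allows one retry.

```python
def run_with_reruns(test_fn, reruns: int) -> bool:
    """Run test_fn up to 1 + reruns times; True if any attempt passes."""
    for _ in range(1 + reruns):
        try:
            test_fn()
            return True
        except AssertionError:
            continue
    return False

attempts = {"n": 0}

def flaky_test():
    attempts["n"] += 1
    # Fails on the first attempt only, simulating a flaky test.
    assert attempts["n"] > 1

print(run_with_reruns(flaky_test, reruns=1))  # passes on the retry
```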
Follow the "command line installation on mac" section of the official AWS CLI installation instructions.
Run aws configure to set up AWS credentials. The Access Key ID and Secret Access Key can be found in 1Password:
search for "AWS lamini.ai root login". Choose us-west-2 as the default region and json as the output format.
If you run into blockers, consult the official instructions.
Use eksctl to manage EKS clusters. Eksctl is a "batteries-included" tool that manages
many dependent aspects of an EKS cluster. For example, it automatically installs GPU plugins when
requesting GPU instances, which is not the case when creating a cluster manually with the EKS web UI or the aws CLI.
To install eksctl on mac:
brew install eksctl

# REGION=<your-region> then supply --region=${REGION} to the following commands
# The default region should work for you.
# List clusters in the current region
eksctl get cluster
# This will write kube config for the cluster
eksctl utils write-kubeconfig --cluster=<cluster-name>
# Test that kubectl can access the EKS cluster
kubectl get node