The latest version of the CK automation suite supported by MLCommons™ is v2.5.8 (Apache 2.0 license).
We plan to develop a new version of the CK framework (v3) as a collaborative effort within different MLCommons workgroups - please feel free to join this community effort!
Versions 1.x, including v1.17.0 and v1.55.5 (BSD license), are stable but no longer officially supported. Please get in touch and we will help you upgrade your infrastructure to the latest MLCommons technology!
Collective Knowledge framework (CK) helps to organize software projects as a database of reusable components with common automation actions and extensible meta descriptions based on FAIR principles (findability, accessibility, interoperability and reusability) as described in our journal article (shorter pre-print).
Our goal is to help researchers and practitioners share, reuse and extend their knowledge in the form of portable workflows, automation actions and reusable artifacts with a common API, CLI, and meta description. See how CK helps to automate benchmarking, optimization and design space exploration of AI/ML/software/hardware stacks, simplifies MLPerf™ inference benchmark submissions and supports collaborative, reproducible and reusable ML Systems research:
- ACM TechTalk
- MLPerf inference benchmark v1.1 automation demo
- AI/ML/MLPerf™ automation workflows and components from the community
- Reddit discussion about reproducing 150 papers
- Our reproducibility initiatives: methodology, checklist, events
- Automating the MLPerf™ inference benchmark and packaging ML models, data sets and frameworks as CK components with a unified API and meta description
- Developing customizable dashboards for MLPerf™ to help end-users select ML/SW/HW stacks on a Pareto frontier: aggregated MLPerf™ results
- Providing a common format to share artifacts at ML, systems and other conferences: video, Artifact Evaluation
- Redesigning CK together with the community based on user feedback: incubator
- Other real-world use cases from MLPerf™, Qualcomm, Arm, General Motors, IBM, the Raspberry Pi Foundation, ACM and other great partners.
Follow this guide to install the CK framework on your platform.
CK supports the following platforms:
| Platform | As a host platform | As a target platform |
|---|---|---|
| Generic Linux | ✓ | ✓ |
| Linux (Arm) | ✓ | ✓ |
| Raspberry Pi | ✓ | ✓ |
| MacOS | ✓ | ± |
| Windows | ✓ | ✓ |
| Android | ± | ✓ |
| iOS | TBD | TBD |
| Bare-metal (edge devices) | - | ± |
Here we show how to pull a GitHub repo in the CK format and use a unified CK interface to compile and run any program (image corner detection in our case) with any compatible data set on any compatible platform:
```bash
python3 -m pip install ck                           # install the CK framework
ck pull repo:mlcommons@ck-mlops                     # pull a public GitHub repo in the CK format
ck ls program:*susan*                               # list compatible programs
ck search dataset --tags=jpeg                       # find compatible data sets by tags
ck detect soft --tags=compiler,gcc                  # detect GCC on the host
ck detect soft --tags=compiler,llvm                 # detect LLVM on the host
ck show env --tags=compiler                         # show registered compiler environments
ck compile program:image-corner-detection --speed
ck run program:image-corner-detection --repeat=1 --env.MY_ENV=123 --env.TEST=xyz
```
You can check the output of this program in the following directory:

```bash
cd `ck find program:image-corner-detection`/tmp
ls
```

which should contain the processed image `processed-image.pgm`.
You can now view this image with detected corners.
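If your image viewer does not handle the PGM format, you can convert the output to PNG, for example with the Pillow library (an assumption for illustration; Pillow is not a CK dependency):

```python
# Hypothetical helper: convert the PGM produced by image-corner-detection to PNG.
# Requires Pillow (python3 -m pip install pillow), which reads PGM and writes PNG.
from PIL import Image

img = Image.open('processed-image.pgm')   # output of the CK program above
img.save('processed-image.png')           # save a PNG copy for easier viewing
# img.show()                              # or open it directly in the default viewer
```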
Check CK docs for further details.
We have prepared adaptive CK containers to demonstrate MLOps capabilities. You can run them as follows:
```bash
ck pull repo:mlcommons@ck-mlops
ck build docker:ck-template-mlperf --tag=ubuntu-20.04   # build the container image
ck run docker:ck-template-mlperf --tag=ubuntu-20.04     # run it
```
You can create multiple virtual CK environments with templates to automatically install different CK packages and workflows, for example for MLPerf™ inference:
```bash
ck pull repo:mlcommons@ck-venv
ck create venv:test --template=mlperf-inference-main          # create a virtual CK environment from a template
ck ls venv                                                    # list virtual CK environments
ck activate venv:test                                         # enter the environment
ck pull repo:mlcommons@ck-mlops
ck install package --ask --tags=dataset,coco,val,2017,full    # install a CK package (COCO 2017 validation set)
ck show env                                                   # view registered CK environments
```
All CK modules, automation actions and workflows are accessible as micro-services with a unified JSON I/O API, making it easier to integrate them with web services and CI platforms, as described here.
- See docs
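For illustration, the same automation actions can be invoked from Python through the `ck.kernel.access` entry point, which takes and returns a single JSON-compatible dictionary. Below is a minimal sketch of the CLI command `ck search dataset --tags=jpeg`, assuming the `ck` package and the `mlcommons@ck-mlops` repository are installed; result keys such as `lst` may differ across CK versions:

```python
# Minimal sketch of the unified JSON I/O API:
# the Python equivalent of "ck search dataset --tags=jpeg".
import ck.kernel as ck

r = ck.access({'action': 'search',
               'module_uoa': 'dataset',
               'tags': 'jpeg'})

# Every CK action returns a dictionary with a 'return' code (0 on success)
# and an 'error' message otherwise.
if r['return'] > 0:
    raise RuntimeError(r.get('error', 'unknown CK error'))

# Print the name and location of each component found (assumed result format).
for entry in r.get('lst', []):
    print(entry.get('data_uoa', ''), entry.get('path', ''))
```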
We have developed the cKnowledge.io portal (similar to PyPI) to help the community organize and find CK workflows and components:
- Search CK components
- Browse CK components
- Find reproduced results from papers
- Test CK workflows to benchmark and optimize ML Systems
The community provides Docker containers to test CK and its components across different ML/SW/HW stacks (design space exploration):
- A set of Docker containers to test the basic CK functionality using some MLPerf inference benchmark workflows: https://github.com/mlcommons/ck-mlops/tree/main/docker/test-ck
Users can extend the CK functionality via CK modules or external GitHub repositories in the CK format as described here.
Please check this documentation if you want to extend the CK core functionality and modules.
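As an illustrative sketch only, the `add` automation action behind the `ck add` command can also be called from Python to create a new repository, module and component; all names below (`my-repo`, `my-module`, `my-component`) are placeholders and the exact input keys may vary across CK versions:

```python
# Hedged sketch: creating a new CK repository, module and component via
# ck.kernel.access (mirrors "ck add repo:my-repo --quiet", "ck add module:my-module"
# and "ck add my-module:my-component" on the command line).
import ck.kernel as ck

def ck_do(i):
    """Run a CK action and fail loudly on error."""
    r = ck.access(i)
    if r['return'] > 0:
        raise RuntimeError(r.get('error', 'unknown CK error'))
    return r

# Placeholder names; 'quiet' suppresses interactive questions (assumed key).
ck_do({'action': 'add', 'module_uoa': 'repo', 'data_uoa': 'my-repo', 'quiet': 'yes'})

# A module groups reusable automation actions for one class of components.
ck_do({'action': 'add', 'module_uoa': 'module', 'data_uoa': 'my-module',
       'repo_uoa': 'my-repo'})

# A component (entry) handled by that module, with an extensible meta description.
ck_do({'action': 'add', 'module_uoa': 'my-module', 'data_uoa': 'my-component',
       'repo_uoa': 'my-repo', 'dict': {'tags': 'demo'}})
```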
Note that we plan to redesign the CK core to be more Pythonic (we wrote the first prototype without object-oriented abstractions so that it could be ported to bare-metal devices in C, but we eventually dropped this idea).
We would like to thank all contributors and collaborators for their support, fruitful discussions, and useful feedback! See more acknowledgments in the CK journal article.