
BioCLIP 2

This repository contains the code for training and evaluating BioCLIP 2 (testing and visualizing embeddings). It builds on BioCLIP and OpenCLIP. BioCLIP 2 is trained on the TreeOfLife-200M dataset and achieves state-of-the-art performance on both species classification and other biological visual tasks. The BioCLIP 2 website is hosted from the gh-pages branch of this repository.

BioCLIP 2 is a CLIP model trained on a new 200M-image dataset of biological organisms with fine-grained taxonomic labels. It outperforms general-domain baselines on a wide range of biology-related tasks, including zero-shot and few-shot classification.

Table of Contents

  1. Model
  2. Training and Evaluation Commands
  3. Paper, Website, and Data
  4. Citation

Model

The main differences in training between BioCLIP 2 and BioCLIP are the model architecture and the introduction of experience replay. BioCLIP 2 uses a ViT-L/14 CLIP architecture pre-trained on LAION-2B. Alongside contrastive training on biological organism data, we also include part of the LAION-2B data for experience replay. To reduce the influence of the domain gap between hierarchical taxonomic labels and natural-language image captions, we use two separate visual projectors on top of the visual encoder; this part of the code is in transformer.py. We provide the weights of BioCLIP 2 in the BioCLIP 2 model repo.
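
For reference, the sketch below loads the released weights through OpenCLIP's Hugging Face hub interface and embeds a single image. The hub identifier hf-hub:imageomics/bioclip-2 and the example file name are assumptions; check the BioCLIP 2 model repo for the exact id.

```python
# Minimal sketch of loading BioCLIP 2 via OpenCLIP's Hugging Face hub interface.
# The hub id "hf-hub:imageomics/bioclip-2" and "example.jpg" are assumptions, not verified values.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:imageomics/bioclip-2")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # a photo of any organism
with torch.no_grad():
    features = model.encode_image(image)
    features = features / features.norm(dim=-1, keepdim=True)  # unit-normalized embedding
print(features.shape)
```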

Training and Evaluation Commands

Training

The TreeOfLife-200M images can be downloaded from their original sources with distributed-downloader. TreeOfLife-toolbox/docs contains instructions for the full download into the proper format, along with the code to construct the webdataset for training (both repositories are included in the supplementary material). img2dataset can be used to download data from the first three metadata parquet files of LAION-2B-en; we use the first 4,000 downloaded tar files for experience replay. Finally, download the validation set from TreeOfLife-10M (download instructions), as we use it for evaluation during training.
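
As a rough illustration of the LAION-2B-en replay download, the sketch below uses img2dataset's Python API; the local paths are hypothetical placeholders, and the URL/TEXT column names follow the standard LAION-2B-en metadata schema.

```python
# Minimal sketch: fetching LAION-2B-en images for experience replay with img2dataset.
# "laion2b-en-metadata/" and "laion2b-en-replay/" are hypothetical local paths.
from img2dataset import download

download(
    url_list="laion2b-en-metadata/",     # directory holding the first three metadata parquet files
    input_format="parquet",
    url_col="URL",
    caption_col="TEXT",
    output_format="webdataset",
    output_folder="laion2b-en-replay/",  # keep the first 4,000 downloaded tar files for replay
    processes_count=16,
    thread_count=64,
    image_size=256,
)
```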

Clone this repository, then install the requirements:

conda env create -f requirements-training.yml

To train the model, run:

sbatch slurm/train.sh

Evaluation

Species classification

We evaluated BioCLIP 2 on the same test sets used for BioCLIP, as well as a newly curated camera trap test set, IDLE-OO Camera Traps.

The metadata used in evaluation is provided in data/annotation, including NABirds, Rare Species, and other benchmarks from Meta Album. All evaluation parameters are described in src/evaluation/README.md. Please update the directories in slurm/eval.sh to reflect the locations of these data and metadata, then run:

sbatch slurm/eval.sh
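
For orientation, the sketch below shows the kind of zero-shot species classification this evaluation performs; the prompt template and candidate species are illustrative rather than the exact ones used in src/evaluation, and the hub id is the same assumption as above.

```python
# Illustrative zero-shot species classification with BioCLIP 2 (prompts and labels are examples only).
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:imageomics/bioclip-2")
tokenizer = open_clip.get_tokenizer("hf-hub:imageomics/bioclip-2")
model.eval()

species = ["Haliaeetus leucocephalus", "Aquila chrysaetos"]  # hypothetical candidate labels
text = tokenizer([f"a photo of {name}." for name in species])
image = preprocess(Image.open("query.jpg")).unsqueeze(0)     # hypothetical query image

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)   # similarity -> class probabilities

print(dict(zip(species, probs.squeeze(0).tolist())))
```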

Other biological visual tasks

We also evaluated BioCLIP 2 on datasets covering biological visual tasks that go beyond species classification.

Please update the directories in slurm/eval_other.sh to reflect the locations of these datasets, then run:

sbatch slurm/eval_other.sh
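
Evaluations like these typically operate on frozen image embeddings rather than zero-shot text matching; the sketch below extracts and saves normalized embeddings for a folder of images. The directory layout, batch size, and output file name are assumptions.

```python
# Minimal sketch: extracting frozen BioCLIP 2 image embeddings for downstream biological tasks.
# "task_images/" and "embeddings.pt" are hypothetical placeholders.
from pathlib import Path

import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:imageomics/bioclip-2")
model.eval()

paths = sorted(Path("task_images/").glob("*.jpg"))
chunks = []
with torch.no_grad():
    for start in range(0, len(paths), 32):  # batch size 32 is arbitrary
        batch = torch.stack([preprocess(Image.open(p)) for p in paths[start:start + 32]])
        feats = model.encode_image(batch)
        chunks.append(feats / feats.norm(dim=-1, keepdim=True))

torch.save({"paths": [str(p) for p in paths], "embeddings": torch.cat(chunks)}, "embeddings.pt")
```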

Paper, Website, and Data

We have a preprint on arXiv and a project website.

Our data is published on Hugging Face: TreeOfLife-200M and IDLE-OO Camera Traps. Step-by-step download instructions for TreeOfLife-200M are available in TreeOfLife-toolbox.

Citation

Please cite our papers and the associated repositories if you use our code or results.

@article{gu2025bioclip,
  title = {{B}io{CLIP} 2: Emergent Properties from Scaling Hierarchical Contrastive Learning},
  author = {Jianyang Gu and Samuel Stevens and Elizabeth G Campolongo and Matthew J Thompson and Net Zhang and Jiaman Wu and Andrei Kopanev and Zheda Mai and Alexander E. White and James Balhoff and Wasila M Dahdul and Daniel Rubenstein and Hilmar Lapp and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su},
  year = {2025},
  eprint = {2505.23883},
  archivePrefix = {arXiv},
  primaryClass = {cs.CV},
  url = {https://arxiv.org/abs/2505.23883},
}

Our code (this repository):

@software{bioclip2code,
  author = {Jianyang Gu and Samuel Stevens and Elizabeth G. Campolongo and Matthew J. Thompson and Net Zhang and Jiaman Wu and Zheda Mai},
  doi = {10.5281/zenodo.15644363},
  title = {{B}io{CLIP} 2},
  version = {1.0.1},
  month = {sep},
  year = {2025}
}

Also consider citing OpenCLIP and BioCLIP:

@software{ilharco_gabriel_2021_5143773,
  author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig},
  title = {OpenCLIP},
  year = {2021},
  doi = {10.5281/zenodo.5143773},
}

Original BioCLIP Paper:

@inproceedings{stevens2024bioclip,
  title = {{B}io{CLIP}: A Vision Foundation Model for the Tree of Life},
  author = {Samuel Stevens and Jiaman Wu and Matthew J Thompson and Elizabeth G Campolongo and Chan Hee Song and David Edward Carlyn and Li Dong and Wasila M Dahdul and Charles Stewart and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2024},
  pages = {19412-19424}
}

Original Code:

@software{bioclip2023code,
  author = {Samuel Stevens and Jiaman Wu and Matthew J. Thompson and Elizabeth G. Campolongo and Chan Hee Song and David Edward Carlyn},
  doi = {10.5281/zenodo.10895871},
  title = {BioCLIP},
  version = {v1.0.0},
  year = {2024}
}

License

BioCLIP 2 is released under the MIT License. Some elements of the code are copyright by others (see LICENSE); detailed provenance information is provided in HISTORY.md.
