AgentVerse is designed to facilitate the deployment of multiple LLM-based agents in various applications. It provides two primary frameworks: task-solving and simulation.
[ICLR 2025] "GraphRouter: A Graph-based Router for LLM Selections", Tao Feng, Yanzhen Shen, Jiaxuan You
Agentic Web: Weaving the Next Web with AI Agents.
Crawl4AI: Open-source LLM-friendly Web Crawler & Scraper. Don't be shy, join here: https://discord.gg/jP8KfhDhyN
An open-source framework for collaborative AI agents, enabling diverse, distributed agents to team up and tackle complex tasks through internet-like connectivity.
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
An open protocol enabling communication and interoperability between opaque agentic applications.
Distributed Peer-to-Peer Web Search Engine and Intranet Search Appliance
pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation
Provides a common interface to many IR ranking datasets.
State-of-the-Art Text Embeddings
Whitepaper of DIN (Decentralized Intelligence Network)
Open-source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
A decentralized learning research framework
allRank is a framework for training learning-to-rank neural models based on PyTorch.
Learning to Tokenize for Generative Retrieval (NeurIPS 2023)
This list of writing prompts covers a range of topics and tasks, including brainstorming research ideas, improving language and style, conducting literature reviews, and developing research plans.
Rich is a Python library for rich text and beautiful formatting in the terminal.
All-in-one open-source AI framework for semantic search, LLM orchestration and language model workflows
Decentralized deep learning in PyTorch. Built to train models on thousands of volunteer machines across the world.
Python implementation of Tribler's IPv8 p2p-networking layer
Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading