
v1.0.4 - Ollama integration and improvements

@aksg87 released this 05 Aug 12:29

What's Changed

  • Added Ollama language model integration – Full support for local LLMs via Ollama
  • Docker deployment support – Production-ready docker-compose setup with health checks
  • Comprehensive examples – Quickstart script and detailed documentation in examples/ollama/
  • Fixed OllamaLanguageModel parameter – Renamed model to model_id for consistency with the other language models (#57); see the before/after snippet following this list
  • Enhanced CI/CD – Added Ollama integration tests that run on every PR
  • Improved documentation – Consistent API examples across all language models
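
For anyone upgrading from v1.0.3, the rename from #57 looks like this (a minimal sketch, assuming the model is constructed directly rather than through lx.extract):

# v1.0.3 (old): the Ollama model name was passed as `model`
# model = lx.inference.OllamaLanguageModel(model="gemma2:2b")

# v1.0.4 (new): the parameter is `model_id`, consistent with the
# other language model classes
model = lx.inference.OllamaLanguageModel(model_id="gemma2:2b")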

Technical Details

  • Supports all Ollama models (gemma2:2b, llama3.2, mistral, etc.)
  • Secure setup with localhost-only binding by default (a quick reachability check is sketched after this list)
  • Integration tests use lightweight models for faster CI runs
  • Docker setup includes automatic model pulling and health checks
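
Because the server binds to localhost only, a quick reachability check before extraction can save a confusing timeout. A minimal sketch using plain requests (this helper is illustrative and not part of the langextract API; Ollama's root endpoint returns a short status message when the server is up):

import requests

def ollama_is_up(url: str = "http://localhost:11434") -> bool:
    # Ollama answers GET / with "Ollama is running" when healthy.
    try:
        return requests.get(url, timeout=2).ok
    except requests.RequestException:
        return False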

Usage Example

import langextract as lx

# prompt, examples, and input_text are defined inline so the snippet
# runs as-is; the medication data is an illustrative placeholder.
prompt = "Extract medication names mentioned in the text."
examples = [
    lx.data.ExampleData(
        text="Patient was given 250 mg amoxicillin.",
        extractions=[lx.data.Extraction(extraction_class="medication",
                                        extraction_text="amoxicillin")],
    ),
]
input_text = "The physician prescribed ibuprofen for the headache."

result = lx.extract(
    text_or_documents=input_text,
    prompt_description=prompt,
    examples=examples,
    language_model_type=lx.inference.OllamaLanguageModel,
    model_id="gemma2:2b",
    model_url="http://localhost:11434",
    fence_output=False,
    use_schema_constraints=False
)
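
The call returns an annotated document whose extractions can be inspected directly, for example:

# Each extraction records its class and the matched source text.
for extraction in result.extractions:
    print(extraction.extraction_class, "->", extraction.extraction_text)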

Quick setup: Install Ollama from ollama.com, run ollama pull gemma2:2b, then ollama serve.

For detailed installation, Docker setup, and more examples, see examples/ollama/.

Full Changelog: v1.0.3...v1.0.4