🎉 Announcing Spice v1.7.0! This release introduces major improvements in performance, search, embeddings, and model integration.
Highlights:
✔️ Apache DataFusion v49 Upgrade: Faster query planning, dynamic filters, TopK pushdown, compressed spill files, new ordered-set aggregates, and regex functions.
✔️ Real-Time Full-Text Search: Index CDC streams instantly for low-latency search on new data.
✔️ EmbeddingGemma Support: Use Google’s high-quality embedding model for semantic search and retrieval.
✔️ /v1/search API Improvements: Backed by new text_search and vector_search table functions for greater performance.
✔️ Embedding Request Caching: Reduce cost and latency by caching repeated embedding requests.
✔️ OpenAI Responses API Enhancements: Tool calls with streaming for real-time interactions.
📖 Check out the release blog for all of the updates and more info: https://hubs.ly/Q03K_zcq0
Spice AI
Technology, Information and Internet
Seattle, Washington | 3,216 followers
The Data and AI Stack in One Engine
About us
Spice AI is an open-source data and AI platform that helps development teams build more responsive and intelligent applications. Spice combines SQL query federation & acceleration, hybrid search & retrieval, and LLM inference in a high-performance, lightweight runtime—so you can query data in place, across operational and analytical data sources, without ETL or complex integrations. Deploy anywhere—edge, cloud, or on-premise—and ship faster applications with less infrastructure management and greater security.
- Website: https://spice.ai
- Industry: Technology, Information and Internet
- Company size: 11-50 employees
- Headquarters: Seattle, Washington
- Type: Privately Held
- Founded: 2021
Locations
- Primary: Seattle, Washington 98104, US
Employees at Spice AI
- Roger Frey, President & COO, Spice AI
- Edward Hooper, CEO & Co-founder at GXE. VC at Cardinia Ventures. Previously, Co-founder at Omny Studio (acquired by Triton Digital)
- Billy Rusteen, Legal Dude at My In-House Coach | Helping attorneys land their first in-house job
- Phillip LeBlanc, Co-Founder and CTO at Spice AI
Updates
-
Spice AI reposted this
Why do most enterprise AI projects fail? Often it's because accessing the right data with the appropriate search function, embedding it into an agentic workflow, and serving it with low latency have been unreasonably complex, cost-prohibitive, or both. Spice has taken a new approach to solving this problem - read our latest blog to learn more: https://hubs.ly/Q03KVn4q0
-
“We were wrangling with the embeddings for some weeks and found it quite frustrating. As soon as we deployed Spice, those problems were gone.” - Rachel Wong, CTO at Basis Set
Basis Set needed to search continuously refreshed data on 10,000+ people and companies without the burden of managing embeddings or pipelines. Using Spice's data and AI engine, Basis Set investors can now run natural language searches directly against real-time & disparate datasets - ultimately delivering accurate, data-grounded insights that help them spot opportunities earlier and act faster. Read the case study for the full story: https://lnkd.in/ghFPJegH
-
Instead of wiring together separate systems and ETL pipelines for data and AI, you can do it all in Spice in one runtime. In this demo, Advay Patil demonstrates querying and accelerating data in Spice - and then calling OpenAI's Responses API endpoint from the same interface for additional insights. All it takes is a few lines of YAML! 📺 Watch the full demo: https://hubs.ly/Q03J-lmC0 📖 Docs: https://hubs.ly/Q03J-t7W0
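For illustration, a minimal spicepod.yaml in this style might look like the sketch below. The dataset source, names, and parameter keys are assumptions for the example rather than the demo's exact config - see the docs link above for the authoritative schema.
```yaml
version: v1beta1
kind: Spicepod
name: responses_demo          # hypothetical pod name

datasets:
  # Illustrative source; the demo's actual dataset may differ.
  - from: s3://my-bucket/orders/
    name: orders
    params:
      file_format: parquet
    acceleration:
      enabled: true           # keep a fast local copy for queries

models:
  # Registers an OpenAI model with the runtime so it can be called
  # from the same interface used for SQL queries.
  - from: openai:gpt-4.1
    name: assistant
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }
```
With the model and dataset registered together, queries and model calls go through the same runtime rather than two separate systems.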
-
Developers use text-to-SQL to make data easier to query and to speed up exploration without hand-writing every statement. Spice offers a dedicated text-to-SQL endpoint for these use cases, giving you more control over how queries are generated and reducing the chance of hallucinated SQL. The cookbook recipe walks through setup, query execution, tracing, and even running with local models - check it out: https://lnkd.in/g7ZSYXqB
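As a rough sketch (the dataset, model name, and parameter keys below are assumptions, not the recipe's exact config), the text-to-SQL flow needs datasets and a model registered in spicepod.yaml that the dedicated endpoint can use to generate SQL:
```yaml
version: v1beta1
kind: Spicepod
name: text_to_sql_demo        # hypothetical pod name

datasets:
  - from: postgres:public.orders   # illustrative source table
    name: orders
    acceleration:
      enabled: true

models:
  # Model used by the runtime to translate natural language into SQL.
  # A local model can be swapped in here instead, as the recipe shows.
  - from: openai:gpt-4.1
    name: sql_assistant
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }
```
A natural-language question sent to the dedicated endpoint is then translated into SQL and executed against the registered datasets; the recipe covers the exact request shape, tracing, and local-model setup.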
-
Power BI is one of the most widely used BI tools in the enterprise. Connecting it to operational databases, warehouses, and object stores, however, often requires complex ETL pipelines, duplicated storage, or slow queries across disparate sources. With Spice’s new Power BI Data Connector, you can skip that complexity: - Run federated SQL queries across operational and analytical data sources - Accelerate large datasets locally with DuckDB + Arrow for sub-second dashboards - Rely on open standards like ADBC + Flight SQL for Arrow-native performance 📖 Check out the blog and give the connector a try! https://hubs.ly/Q03JzHtV0
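As a hedged example of the acceleration side (the source table, connector, and refresh interval here are illustrative assumptions, not settings from the blog), a warehouse table can be materialized locally so dashboard queries never leave the runtime:
```yaml
datasets:
  # Illustrative warehouse table; any supported Spice connector works here.
  - from: databricks:samples.nyctaxi.trips
    name: trips
    acceleration:
      enabled: true
      engine: duckdb                  # accelerate locally with DuckDB
      refresh_check_interval: 60s     # keep the local copy fresh
```
Power BI then connects to the Spice runtime through the new connector over ADBC + Flight SQL, so dashboards query the DuckDB-accelerated copy instead of the upstream warehouse.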
-
A common use case for Spice is data acceleration - federating data from multiple sources and making it consistently fast at the application layer in the Spice runtime. But what if you also need to enforce data integrity constraints on top of that? Spice Co-Founder & CTO Phillip LeBlanc shows how our new advanced upsert functionality makes it possible. 🔑 Highlights: - Prevent duplicates with primary keys - Deduplicate messy incoming data automatically - Resolve conflicts with last-write-wins semantics This means you get database-like reliability in the Spice runtime without ETL or pipelines. 👉 Learn about additional Spice constraints in the docs: https://hubs.ly/Q03J89zS0
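As a sketch of what this can look like in config (the key names and on-conflict format below are assumptions based on the constraints docs linked above, so verify the exact schema there):
```yaml
datasets:
  - from: kafka:orders             # illustrative stream of possibly messy data
    name: orders
    acceleration:
      enabled: true
      engine: sqlite               # an engine that can enforce key constraints
      primary_key: order_id        # rejects duplicate rows for the same key
      on_conflict:
        order_id: upsert           # last-write-wins: newer rows replace older ones
```
With a primary key plus an upsert conflict policy, duplicate or updated records arriving from the stream are resolved in place instead of piling up.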
-
In this demo, Senior Software Engineer Sergei Grebnov walks through how easy it is to join streaming data from Kafka with lookup tables in S3 using a single SQL query in Spice. The same config connects to OpenAI's GPT-4.1 model, so you can start interacting with your datasets in natural language. Together, this allows you to: - Query Kafka topics and S3 objects from one interface - Join data across sources without pipelines - Ask questions like “How many orders are in our Kafka topic?” and get results instantly With Spice, federated queries and LLMs run side-by-side in one lightweight runtime. Watch the full demo here: https://hubs.ly/Q03H-GJZ0 And if you're interested in digging in, check out these resources: 👉 Kafka Connector: https://hubs.ly/Q03H-M0c0 👉 Query Federation Docs: https://hubs.ly/Q03H-L-g0
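The configuration pattern behind the demo looks roughly like this (topic names, bucket paths, and connector params are illustrative assumptions rather than the demo's exact values):
```yaml
version: v1beta1
kind: Spicepod
name: kafka_s3_demo                 # hypothetical pod name

datasets:
  # Streaming orders from a Kafka topic.
  - from: kafka:orders
    name: orders
    params:
      kafka_bootstrap_servers: broker:9092

  # Customer lookup table stored as Parquet in S3.
  - from: s3://my-bucket/customers/
    name: customers
    params:
      file_format: parquet

models:
  # The same config registers the LLM used for natural-language questions.
  - from: openai:gpt-4.1
    name: assistant
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }

# With both datasets registered, one federated query can join them, e.g.
# SELECT c.name, COUNT(*) AS orders
# FROM orders o JOIN customers c ON o.customer_id = c.id
# GROUP BY c.name;
```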
-
Modern apps and AI agents rely on fast, real-time data served from both analytical and operational data stores. The challenge is getting that data into the application without building complex ETL pipelines, caches, or stitching together multiple databases. Spice solves this by combining query federation + acceleration, search, & LLM inference in a single runtime that sits next to your application. In this clip, Spice Founder and CEO Luke Kim shows how with just a few lines of config in Spice, you can easily accelerate any underlying data source with DuckDB and instantly cut query times from 1 second to ~100ms. Spice manages the updates so you get the performance benefits without extra work. Check out the docs for more info on getting started with Spice Acceleration: https://hubs.ly/Q03HPqKT0 And for the full demo (that adds hybrid search and LLM inference), go here: https://hubs.ly/Q03HPtFy0
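The 'few lines of config' in the clip follow this general shape (the source table and refresh interval are assumptions for illustration; see the acceleration docs above for the full set of options):
```yaml
datasets:
  - from: postgres:public.events      # illustrative federated source
    name: events
    acceleration:
      enabled: true
      engine: duckdb                  # materialize the data locally in DuckDB
      refresh_check_interval: 10s     # runtime keeps the local copy up to date
```
Queries against `events` are then served from the local DuckDB copy, which is where the drop from roughly 1 second to ~100ms comes from, while the runtime handles refreshes in the background.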