As LLMs take on reasoning and intelligence, one challenge becomes critical: context. AI systems need fast, accurate access to structured and unstructured data, vectors, and long-term memory. That’s where we come in. As the real-time context engine for AI, Redis helps teams build and scale AI agents, chatbots, search, and recommendation systems by delivering the most relevant data with low latency. Catch up with Manvinder Singh, VP of AI Product Management at Redis, in this conversation with Cerebral Valley about how Redis has evolved beyond just caching to support modern AI workloads. Full conversation here: https://lnkd.in/g6RTb32q
Redis
Software Development
Mountain View, CA 286,087 followers
The world's fastest data platform.
About us
Redis is the world's fastest data platform. We provide cloud and on-prem solutions for caching, vector search, and more that seamlessly fit into any tech stack. With fast setup and fast support, we make it simple for digital customers to build, scale, and deploy the fast apps our world runs on.
- Website: http://redis.io
- Industry: Software Development
- Company size: 501-1,000 employees
- Headquarters: Mountain View, CA
- Type: Privately Held
- Founded: 2011
- Specialties: In-Memory Database, NoSQL, Redis, Caching, Key Value Store, real-time transaction processing, Real-Time Analytics, Fast Data Ingest, Microservices, Vector Database, Vector Similarity Search, JSON Database, Search Engine, Real-Time Index and Query, Event Streaming, Time-Series Database, DBaaS, Serverless Database, Online Feature Store, and Active-Active Geo-Distribution
Locations
- 700 E. El Camino Real, Suite 250, Mountain View, CA 94041, US (Primary)
- Bridge House, 4 Borough High Street, London, England SE1 9QQ, GB
- 94 Yigal Alon St., Alon 2 Tower, 32nd Floor, Tel Aviv, Tel Aviv 6789140, IL
- 316 West 12th Street, Suite 130, Austin, Texas 78701, US
Updates
Planned software maintenance shouldn’t break apps. Smart client handoffs in Redis Software and Redis Cloud keep apps online during maintenance by proactively reconnecting clients to new endpoints and temporarily relaxing timeouts until upgrades complete. The result: reliable, disruption-free maintenance for critical workloads. Read more in our blog and docs:
- https://lnkd.in/gBbbNscA
- https://lnkd.in/gfNwa-NK
Semantic caching reuses prior LLM results to cut costs, reduce latency, and stabilize throughput, but high hit rates require careful tuning with embeddings, similarity thresholds, TTL/eviction, deduplication, and observability. Redis LangCache is a managed semantic cache that exposes these controls so teams can optimize cache effectiveness without heavy custom work. Here are 10 practical techniques you can start using today to optimize your semantic cache: https://lnkd.in/gc_2NTdn
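To make the tuning knobs above concrete, here is a minimal in-memory sketch of the semantic-caching idea: store (embedding, response) pairs and serve a cached answer when a new query's embedding is similar enough. This is a toy illustration, not LangCache's API; the class name, threshold, and TTL values are assumptions for the example.

```python
import math
import time

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class ToySemanticCache:
    """In-memory stand-in for a semantic cache: reuses a prior response
    when a query embedding is close enough to a stored one."""

    def __init__(self, threshold=0.9, ttl_seconds=300):
        self.threshold = threshold  # similarity cutoff: higher = fewer but safer hits
        self.ttl = ttl_seconds      # entries expire so stale answers age out
        self.entries = []           # list of (embedding, response, stored_at)

    def get(self, query_embedding):
        now = time.time()
        self.entries = [e for e in self.entries if now - e[2] < self.ttl]
        best = max(self.entries,
                   key=lambda e: cosine(query_embedding, e[0]),
                   default=None)
        if best and cosine(query_embedding, best[0]) >= self.threshold:
            return best[1]  # cache hit: reuse the prior LLM response
        return None         # cache miss: caller invokes the LLM, then put()s

    def put(self, query_embedding, response):
        self.entries.append((query_embedding, response, time.time()))

cache = ToySemanticCache(threshold=0.9)
cache.put([1.0, 0.0, 0.1], "Answer about Redis persistence")
hit = cache.get([0.98, 0.02, 0.12])  # nearly the same direction -> hit
miss = cache.get([0.0, 1.0, 0.0])    # orthogonal query -> miss
```

The threshold and TTL are exactly the knobs the post describes: raise the threshold to avoid serving a wrong-but-similar answer, and shorten the TTL when the underlying data changes often.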
About 90% of your database traffic is probably reads. And yet we keep throwing more CPU, bigger disks, and more replicas at PostgreSQL. If you’ve ever worked on a read-heavy system like e-commerce, SaaS dashboards, or user profiles, you’ve felt this pain:
- Pages get slower during traffic spikes
- Reads compete with writes
- Caching logic spreads across services
- Cache invalidation becomes a nightmare
Most teams reach for the cache-aside pattern. It works… until it doesn’t. But there’s a better approach: stop reacting to reads and start preparing for them. Redis Data Integration (RDI) gives you refresh-ahead out of the box:
- Continuous sync from PostgreSQL to Redis
- Automatic handling of inserts, updates, and deletes
- Read-optimized Redis models (Hashes, JSON)
- Config-driven transformations, no app code required
Ricardo Ferreira, Lead Developer Advocate at Redis, wrote a full deep dive on how this works, why cache-aside falls short, and how to implement refresh-ahead cleanly with RDI. If you’re still solving the “90% reads” problem with yesterday’s caching patterns, this one’s for you: https://lnkd.in/g7sMwsEp
After you're done reading, try it yourself: sign up for the public preview of Redis Data Integration in Redis Cloud: https://lnkd.in/eVReES3e
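The difference between the two patterns can be sketched in a few lines. This is a toy illustration with Python dicts standing in for PostgreSQL and Redis; the function names are invented for the example, and real RDI propagates changes via change data capture with no application code.

```python
# Toy stand-ins for PostgreSQL and Redis so the two read patterns compare side by side.
postgres = {"user:1": {"name": "Ada"}}  # system of record
redis_cache = {}                        # cache
db_reads = 0                            # counts how often reads fall through to the DB

def read_from_postgres(key):
    global db_reads
    db_reads += 1
    return postgres[key]

# Pattern 1: cache-aside. Every read checks the cache and, on a miss,
# falls through to the database and backfills the cache.
def cache_aside_get(key):
    if key in redis_cache:
        return redis_cache[key]
    value = read_from_postgres(key)  # miss: the reader pays the DB round trip
    redis_cache[key] = value
    return value

# Pattern 2: refresh-ahead. Changes to the source are pushed into the
# cache as they happen, so reads never touch the database.
def refresh_ahead_on_write(key, value):
    postgres[key] = value
    redis_cache[key] = value  # cache is kept warm at write time

def refresh_ahead_get(key):
    return redis_cache[key]   # reads are served from Redis alone

cache_aside_get("user:1")                            # first read misses, hits Postgres
refresh_ahead_on_write("user:2", {"name": "Grace"})  # write keeps the cache warm
refresh_ahead_get("user:2")                          # read served without Postgres
```

The point of the sketch: under cache-aside, a cold or invalidated key makes the reader pay the database round trip; under refresh-ahead, that cost moves to write time, so read latency stays flat during traffic spikes.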
Redis reposted this
Now that the call for papers closed last night, we have confirmed the first speakers for the Dutch AI Conference. Over the coming weeks we will steadily add more (we have 488 submissions to review in total). For now there are three tracks. In the technical track you’ll find the first best-practice session, ‘Reduce LLM calls with vector search design patterns’ by Raphael De Lio of Redis. In the business track, for example, ‘AI development governance, the 3 collaboration models’ by Frederik V. In the inspiration track we have confirmed Iulia Feroli with the talk ‘how I built my own intelligent robot arm from scratch’. The conference is not for profit; we want to put every ticket euro into good speakers. That means that if ticket sales go well enough, we will add a fourth track. More sessions can already be found in the schedule on the site: https://lnkd.in/ejNqPyxk #ai #conference #bestpractices
Prompt engineering is overrated. Context engineering isn’t. Redis Context Engine Lead and Featureform co-founder Simba Khadder joins Demetrios Brinkmann from the MLOps community to unpack why the hardest part of building agents today isn’t the model, it’s the context. They dive into:
- Why feature stores aren’t dead (and where they still deliver real ROI)
- Why naive RAG was only a first step
- How agents actually fail when context is fragmented
- Why Redis is evolving into a context platform for AI systems
If you’re building agentic systems and feel like the model is “smart but stuck,” this conversation is for you: https://lnkd.in/gFYQcKvX
Redis reposted this
We are live with our Deep Dive on Redis with VP of AI Product Management Manvinder Singh! Redis is much more than a database 🛢 Redis is a real-time data platform that has long been synonymous with performance and speed in the web and mobile application stack, and is now evolving into a core context engine for AI applications. Built to support low-latency, high-throughput workloads, Redis enables developers to power AI agents, chatbots, search systems, and recommendation engines by providing fast access to structured and unstructured data, vectors, and agent memory. Its goal is to help teams improve accuracy, reduce latency, and scale AI applications efficiently by bringing the most relevant context to LLM-powered systems. In this conversation, Manvinder shares how Redis is positioning itself as the context engine for the AI stack, the challenges of building in a rapidly evolving ecosystem, and his vision for how agentic systems and context engineering will shape the future of AI-powered applications. Link below 👇
We recently announced our AI agent builder, an interactive code generator that creates production-ready AI agents powered by Redis in minutes. The new AI agent builder works like a chat: you tell it what you want, it asks a few follow-up questions, and in minutes you get complete, production-ready Python code. The two types of agents currently supported are recommendation engines and conversational assistants. More languages and more agent types are coming soon. Read more: https://lnkd.in/eJSC945n
No idea what kids these days are saying? You’re not old. Your data is. With LangCache’s fully managed semantic caching, your AI apps will always slay. No cap. Try LangCache on Redis Cloud for free: https://lnkd.in/gD2mzvMb
✅ Hot cocoa bar (with sprinkles)
✅ Holiday cookies
✅ Great company
✅ AI hackathon and demos
We've got you covered tonight in San Francisco. It doesn't get more festive than this. 🤖🎄 Sign up below 👇
🎄Redis is co-sponsoring a Cocoa & Coding Holiday Hack Night tomorrow evening with Composio, Manus AI & Build Club in SF! You'll have a chance to learn how people are using Redis in their AI workflows, and Composio will show what they’ve been working on too. There’s an optional hack if you want to tinker, but it’s equally fine to just hang out, meet people, and talk shop over cocoa. 🕢 7:00 PM - 10:00 PM 📍Composio office in SF 🔗 https://luma.com/tx7jqjtc If you’re around and want to connect and build with other devs, join us!