🚀 The future of CX: Voice AI Agents! Meet the Regal Voice AI Agents: they autonomously handle millions of calls with human-like AI, around the clock. Whether it’s 2 AM or 2 PM, customers get instant, personalized responses.
⚡ For you: lower wait times, better CSAT.
⚡ For your customers: seamless support, anytime.
👉 Want to experience it yourself? Demo our AI Phone Agent today: https://lnkd.in/eviTKFmj
REGAL
Technology, Information and Internet
Transform your customer experience with Voice AI Agents.
About us
Regal is the AI Agent Platform for enterprise CX. Generative AI Agents are transforming customer expectations and the types of customer experiences businesses can build. The biggest opportunity is in support, sales, and operations calls at consumer businesses. Regal helps overcome three hurdles:
1. We make it easy to build, test, deploy, and monitor autonomous AI Agents that are low-latency, omniscient, and always available.
2. We connect with your first-party customer data to perfect every customer conversation.
3. We give you the A/B testing tools to test a blend of Regal AI Agents and your AI-enhanced human agents, and to build a culture of continuous improvement in your contact center without requiring engineering resources.
- Website: https://www.regal.ai?utm_source=linkedin&utm_medium=social&utm_content=about
- Industry: Technology, Information and Internet
- Company size: 51-200 employees
- Headquarters: New York
- Type: Privately Held
- Founded: 2020
Locations
- Primary: New York, US
Employees at REGAL
- Jake Saper, General Partner @ Emergence Capital
- Scott Gifis, Girl Dad x3. Operator. Investor. Advisor. Board Member.
- David Frankel, Managing Partner, Founder Collective
- Jack Aspir, Strategic Growth & M&A Leader | Scaling Multi-Location Healthcare Businesses | Delivering Competitive Acquisitions, EBITDA Growth & Long-Term Value…
Updates
-
Bad customer support is expensive for everyone. Alex Levin shares how Regal’s custom AI agents cut costs by 50%+, unlock billions in revenue, and deliver the fast, human-like support today’s customers expect.
-
📣 We’re Hiring a Product Marketing Associate! We’re looking for a detail-obsessed, technically curious product marketer who’s excited about the AI space and ready to move fast. This isn’t a “watch from the sidelines” role — you’ll be hands-on driving Regal.ai’s go-to-market strategy, storytelling, and enablement for our AI voice agent platform. This is a great opportunity to:
✅ Shape how Regal positions and launches products to the market
✅ Create content — from blogs to videos — that establishes Regal as the category leader in voice AI
✅ Translate complex features into clear, compelling value for both technical and non-technical prospects
👉 Apply here: https://lnkd.in/e4-nPSFp
#Hiring #ProductMarketing #MarketingJobs #SaaS #JoinUs #RegalAI
-
We are excited to announce that Regal will be at HLTH Inc. 2025 next month. As leaders in transforming healthcare experiences with AI agents, we’re driving innovation that makes patient and member interactions smarter, faster, and more human. Stop by our booth to:
➡️ Explore how we’re helping the healthcare industry transform with AI agents
➡️ Join live demos of how to build an AI Agent
➡️ Take part in fun activations designed to engage
Our team Lex Sivakumar, Yael Goldstein, Stephanie Sociedade, Ethan Goldberg, and Eliza Loftus will be on site and ready to connect. If you’ll be at HLTH, come say hi and see firsthand how AI is reshaping patient and member experiences.
#HLTH2025 #HealthcareInnovation #AI #VoiceAI #RegalAI
-
You’ve built. You’ve tested. You’ve launched. But here’s the question: is your AI agent actually working? Unlike human agents, your AI won’t give you a gut check on how the day is going. AI agents communicate performance through data alone. So, how do you know how your AI is performing right now? In aggregate? How is it impacting your KPIs? The three layers of monitoring AI agent performance in Regal:
1️⃣ Monitoring deployments live with real-time visibility
So you don’t run the risk of real-time call degradation going unnoticed. Live stats refresh every 10 seconds—tracking dispositions, task completions, call durations, and transfers across all agents, queues, and campaigns.
2️⃣ Tying AI Agent impact directly to key KPI trends
So you don’t deploy updates without understanding downstream impact. Track performance over time in Looker dashboards tailored to your goals (conversion, callback rates, short hangups, sentiment shifts). Validate A/B tests. Compare human vs. AI outcomes.
3️⃣ Uncovering deeper patterns and insights with Custom AI Analysis
So you don’t miss high-signal patterns like compliance misses, objection themes, or common drop-off topics. Use Custom AI Analysis to extract structured data from LLM-reviewed transcripts. Combine with Regal Improve’s unsupervised clustering to identify new insight categories automatically.
👇 Full breakdown of how to measure AI Agent success with Regal: https://lnkd.in/eS-WFKcT
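To make the third layer concrete, here is a minimal sketch of what "extracting structured data from LLM-reviewed transcripts" can look like. It is not Regal's Custom AI Analysis API; the `llm_complete` placeholder, the prompt, and the field names are illustrative assumptions only.

```python
import json

# Placeholder LLM call: stands in for whichever provider you use. It returns a
# canned review so the sketch runs end-to-end without external dependencies.
def llm_complete(prompt: str) -> str:
    return '{"compliance_miss": false, "objection_theme": "pricing", "drop_off_topic": null}'

REVIEW_PROMPT = (
    "Review the call transcript below and answer in JSON with keys "
    '"compliance_miss" (true/false), "objection_theme" (short label or null), '
    'and "drop_off_topic" (short label or null).\n\nTranscript:\n{transcript}'
)

def review_transcript(transcript: str) -> dict:
    """Ask the LLM for structured signals about a single call."""
    raw = llm_complete(REVIEW_PROMPT.format(transcript=transcript))
    return json.loads(raw)

def aggregate(reviews: list[dict]) -> dict:
    """Roll per-call signals up into the kind of trend a dashboard would chart."""
    total = len(reviews)
    misses = sum(bool(r.get("compliance_miss")) for r in reviews)
    themes: dict[str, int] = {}
    for r in reviews:
        theme = r.get("objection_theme")
        if theme:
            themes[theme] = themes.get(theme, 0) + 1
    return {
        "compliance_miss_rate": misses / total if total else 0.0,
        "objection_theme_counts": themes,
    }

if __name__ == "__main__":
    calls = ["Agent: Hi, how can I help? Customer: That's too expensive for me..."]
    print(aggregate([review_transcript(t) for t in calls]))
```

The point of the sketch is the shape of the pipeline: per-call structured review, then aggregation into rates and theme counts that a dashboard or clustering step can consume.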
-
What an incredible evening at our CX & AI executive dinner last week! We brought innovation leaders together for a private dinner filled with great food, conversation, and insights on where AI is taking the industry next. Thank you to everyone who joined us. Here’s a quick look back at the evening →
-
Three things that winners do differently when implementing voice AI, according to Jarrod from TaskUs 👇
1. Align Product and Ops more tightly. “Where we see projects fail is when the teams who actually run customer support or sales are not aligned with product or tech teams. You just get this massive gap in terms of ROI realization.”
2. Treat knowledge as a living system. “Clients [don’t] anticipate up front all the care and feeding that’s going to be necessary to manage these systems over time… It’s not because the technology is flawed or high-maintenance. It’s because policies change, processes change, products change… And people don’t manage their in-house knowledge very well to begin with.”
3. Define ROI upfront, and iterate relentlessly. “Is it revenue growth, is it conversion rate, is it cost reduction at scale? You gotta understand [your ROI] up front and then measure it, document it, and go back, otherwise the business gets impatient.”
Shoutout to Sahil and Jarrod for an amazing conversation at Ai4. Recap below. Check out the full discussion to see how you can get voice AI right from the start >> https://lnkd.in/g_qbTTiz
-
REGAL reposted this
When did CX become the practice of frustrating customers to stop them from doing what they were actually trying to do? It's not a good look. Endless phone tree options. Zero resolution. It's time for a reset. With the low variable cost of voice AI Agents, you can support everyone at the moment they need it. AI Agents don’t route you in circles. They understand the request, access your customer data, and complete the task - without handing it off five times. We all deserve better customer service.
-
Scaling AI agent QA can be a nightmare. Manual testing can’t cover end-to-end scenarios. That’s why we’re expanding our Simulations feature with automated Evaluations. Simulation Evaluations provide scenario-specific QA for every simulation you run in Regal, speeding up pre-launch testing while giving greater visibility into edge cases. What you get with this update 👇
Faster, more granular evaluation:
→ Evaluate each scenario against explicit success criteria, exposing failures in conversation flow, objection resolution, data capture, and function execution.
Unified scoring logic:
→ Evaluations rely on the same backend logic as Regal scorecards, making QA predictable across pre-launch and production. You can align testing logic, metrics, and outcomes across every simulation and live interaction.
Actionable insights:
→ Pass/fail outcomes highlight exactly where prompts, branching logic, or custom actions need adjustment, reducing QA cycles and accelerating iteration.
And it’s built for complexity and scale: LLMs act as the simulated contact AND the evaluator, interpreting variations in phrasing, intent, and context to flag subtle failures that rule-based testing misses. The result is pinpointed pre-launch testing at scale, which means faster iteration and deployment cycles, and AI agents that execute as intended from day one.
See how it works: https://lnkd.in/gKjyV5x8
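As a rough illustration of the "LLM as both simulated contact and evaluator" pattern described above (this is not Regal's Simulations API; every name, prompt, and helper below is a hypothetical stand-in), a scenario-level pass/fail check could be sketched like this:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    persona: str                 # how the simulated contact should behave
    success_criteria: list[str]  # explicit checks, e.g. "agent booked a new appointment time"

# Placeholder LLM call: returns canned text so the sketch runs without a provider.
def llm(prompt: str) -> str:
    return "PASS: the agent satisfied this criterion."

def simulated_contact_turn(scenario: Scenario, history: list[str]) -> str:
    """The LLM plays the customer, producing the next utterance for the voice agent to handle."""
    prompt = f"Act as this caller: {scenario.persona}\nConversation so far:\n" + "\n".join(history)
    return llm(prompt)

def evaluate(scenario: Scenario, transcript: list[str]) -> dict[str, bool]:
    """The LLM plays the evaluator, grading the finished transcript against each success criterion."""
    convo = "\n".join(transcript)
    results = {}
    for criterion in scenario.success_criteria:
        verdict = llm(f"Transcript:\n{convo}\n\nDid the agent satisfy: {criterion}? Reply PASS or FAIL with a reason.")
        results[criterion] = verdict.strip().upper().startswith("PASS")
    return results

if __name__ == "__main__":
    scenario = Scenario(
        persona="A frustrated customer calling to reschedule an appointment",
        success_criteria=["agent confirmed identity", "agent booked a new appointment time"],
    )
    transcript = [
        "Customer: I need to move my appointment.",
        "Agent: Sure, can I confirm your date of birth first?",
    ]
    print(evaluate(scenario, transcript))  # e.g. {'agent confirmed identity': True, ...}
```

The sketch mirrors the idea in the post: grading against explicit, reusable success criteria is what keeps pre-launch simulation results comparable to production scorecards.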