More than a decade after Amazon launched Alexa, the company is debuting Alexa+, a significant leap over its predecessor. Like so many products making headlines these days, Alexa+ is built on generative AI (genAI). Amazon is using the technology to overhaul the user experience with improved conversational abilities and personalization—think a jump from basic command processing (e.g., “Alexa, what’s the forecast?”) to complex task execution. The new Alexa can manage smart homes, make reservations, navigate the internet autonomously, and even perform multi-step operations without repeated wake words.
This next-generation version brings together Amazon Bedrock’s large language models with a specialized “experts” framework that coordinates thousands of services and devices (Introducing Alexa+, the next generation of Alexa). At its core is a suite of powerful LLMs—Amazon’s own (codenamed “Nova”) alongside third-party models—allowing Alexa+ to dynamically select the best AI engine for any given task. One of these is Anthropic’s Claude, a state-of-the-art LLM in which Amazon invested $8B, as Reuters has reported. By tapping multiple models, Alexa+ can hold more human-like, open-ended conversations and continually improve its answers via machine learning.
In practice, this marks a step closer to the voice functionality envisioned in the film Her (2013), where Samantha maintains contextual awareness across conversations, proactively anticipates needs, and handles complex tasks without explicit instructions. While Alexa 1.0 could check the weather and answer order-related queries, the latest iteration is billed as something closer to a personal secretary. And while Alexa+ doesn’t aspire to the emotional intelligence or consciousness portrayed in the film, its ability to manage multi-turn dialogues and independently complete sequences of actions represents a meaningful move in that direction.
Alexa 1.0 wasn’t exactly a big money-maker for Amazon. According to WSJ, the devices division that includes Alexa lost $25 billion between 2017 and 2021 alone. Amazon’s strategy—selling Echo speakers at cost while hoping to generate revenue through voice shopping—largely failed as customers primarily used Alexa for free services like playing music, setting timers and checking weather. “We worried we’ve hired 10,000 people and we’ve built a smart timer,” one former senior employee told the Journal. This financial reality has pushed CEO Andy Jassy to pivot from founder Jeff Bezos’s approach of prioritizing adoption over profitability.
Alexa 1.0 also shared several limitations with Apple’s Siri and other similar assistants: these systems require precise phrasing, struggle with sequential tasks, and offer minimal personalization.
The integration with Prime membership creates immediate strategic pressure on competitors. Non-Prime users will pay $19.99 monthly for Alexa+ capabilities, but existing Prime subscribers receive these advanced functions without additional cost.
The landscape for voice assistants appears increasingly binary: evolve dramatically or face obsolescence. Apple’s Siri and Google Assistant have followed incremental improvement paths, leaving significant opportunity for a leap forward. AI companies like OpenAI and xAI are also shaking up the space. OpenAI was first to move, launching Advanced Voice Mode in September 2024 and bringing real‑time, emotion‑sensitive, voice-based interaction to its platform. It was later expanded to the web and mobile apps, allowing users to ask complex questions and switch between languages. Elon Musk’s xAI is set to launch its own answer with the release of voice functionality for Grok-3.