Hume AI, a New York-based Series B startup and AI research lab, is at the forefront of developing emotionally intelligent voice interactions by integrating its Empathic Voice Interface (EVI) with Anthropic's Claude AI models.

The collaboration aims to enhance human-computer communication by enabling AI systems to understand and respond to the emotional nuances of human speech.

Setting a new standard for conversational, emotionally intelligent AI, Hume AI's EVI 2 introduces a new voice-to-voice model architecture capable of fast, fluent communication, setting it apart from other AI chatbots. It understands the tone of the user's voice and can produce a wide range of personalities, accents, and speaking styles. EVI 2 can replace or integrate with major large language models (LLMs), giving developers the flexibility to build applications that require emotionally intelligent voice interactions.
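For developers, swapping in an external LLM is a configuration choice rather than a rewrite. The sketch below shows the general shape of such an integration over a WebSocket session; the endpoint URL, auth scheme, and message fields are assumptions for illustration, not Hume's documented API.

```python
import asyncio
import json
import os

import websockets  # pip install websockets

# Hypothetical EVI endpoint and auth scheme; consult Hume's docs for
# the real URL, authentication, and message schema.
EVI_URL = "wss://api.hume.ai/v0/evi/chat"
API_KEY = os.environ["HUME_API_KEY"]


async def chat() -> None:
    async with websockets.connect(f"{EVI_URL}?api_key={API_KEY}") as ws:
        # Ask EVI to route language generation through an external LLM
        # (field names here are illustrative, not Hume's actual schema).
        await ws.send(json.dumps({
            "type": "session_settings",
            "language_model": {"provider": "anthropic", "model": "claude"},
        }))
        # Send a user utterance as text; a production app would stream audio.
        await ws.send(json.dumps({"type": "user_input", "text": "Hello!"}))
        # Print whatever events the server streams back.
        async for raw in ws:
            event = json.loads(raw)
            print(event.get("type"), event)


asyncio.run(chat())
```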

According to a Hume AI spokesperson, EVI 2 has powered more than 2 million minutes of voice AI conversations, illustrating its scalability and impact. Anthropic's prompt caching has reduced costs by 80% and latency by 10%, making Claude efficient and cost-effective; more than 36% of EVI developers choose Claude over other external LLMs.
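The cost and latency gains come from reusing a long, static prompt prefix (such as a voice assistant's persona and instructions) across turns instead of reprocessing it every time. A minimal sketch using Anthropic's Python SDK, which marks a prompt block as cacheable via `cache_control`; the model name and prompt text are placeholders:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

LONG_SYSTEM_PROMPT = "..."  # persona, instructions, and examples reused every turn

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; choose a current Claude model
    max_tokens=256,
    system=[
        {
            "type": "text",
            "text": LONG_SYSTEM_PROMPT,
            # Cache the prefix up to and including this block, so later
            # turns pay only for the new user message, not the prefix.
            "cache_control": {"type": "ephemeral"},
        },
    ],
    messages=[{"role": "user", "content": "I had a rough day at work."}],
)
print(response.content[0].text)
```

On repeated calls with an identical prefix, cached input tokens are billed at a reduced rate and skip reprocessing, which is what makes long persona prompts practical for high-volume voice traffic.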

Chatbots like ChatGPT, Gemini Live, and Meta AI offer advanced voice modes that engage users in human-like conversations, with responses tailored to their preferences. Now EVI, Hume AI's flagship product, processes live audio input and generates responses that demonstrate understanding of vocal cues. By analyzing the tone, rhythm, and timbre of speech, EVI determines the right moments to engage and produces empathic language with an appropriate tone. This capability is driven by Hume's empathic large language model (eLLM), which guides the generation of both language and speech.
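Hume has not published the eLLM's internals, but the idea of gating responses on prosody can be illustrated with a simple heuristic: score incoming speech frames for signals like falling pitch and trailing silence, and respond only once the speaker appears to have yielded the turn. Every feature name and threshold below is invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class ProsodyFrame:
    """Per-frame prosodic features (illustrative, not Hume's schema)."""
    pitch_slope: float   # negative = falling intonation
    energy: float        # loudness, normalized 0..1
    silence_ms: float    # trailing silence observed so far


def should_respond(frames: list[ProsodyFrame]) -> bool:
    """Crude end-of-turn detector: speak when the user trails off.

    A real system would use a learned model over tone, rhythm, and
    timbre; the thresholds here are arbitrary placeholders.
    """
    if not frames:
        return False
    last = frames[-1]
    falling_intonation = last.pitch_slope < -0.2
    trailed_off = last.energy < 0.1 and last.silence_ms > 600
    return falling_intonation or trailed_off


# Example: a quiet frame with 700 ms of trailing silence -> time to respond.
frames = [ProsodyFrame(pitch_slope=-0.05, energy=0.05, silence_ms=700.0)]
print(should_respond(frames))  # True
```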


