Introduction
Microsoft is giving Copilot a face. With an experimental feature named Portraits, users can choose from roughly 40 animated avatars to accompany their AI conversations. The avatars are not photorealistic, but they come with voices and show real-time facial expressions, head movements, and lip-sync during chat. The goal is to make AI-driven dialogue more engaging and easier to follow, especially during brainstorming sessions or when preparing for interviews.
Portraits in Copilot Labs
The feature is accessible through Copilot Labs, Microsoft’s experimental playground for AI features. The roughly 40 portrait options cover a range of styles, from friendly to professional, and all are designed to convey emotion without relying on photorealism. By offering distinct personalities, the feature aims to help users connect with the AI on a more intuitive level while keeping the interface simple and legible.
Real-time animation and voice
According to Microsoft, the system pairs the avatar with speech so the conversation feels more natural. The avatar’s mouth moves as it speaks, and its facial expressions and head movements respond dynamically to the discussion. This is achieved without building heavy 3D models; instead, it relies on VASA-1, a technology developed by Microsoft researchers to enable visual, real-time conversations.
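Microsoft has not published an API for Portraits, so the following Python sketch is purely illustrative: it shows one simple way a lightweight 2D avatar could derive per-frame mouth movement directly from synthesized speech, using only an audio amplitude envelope rather than any 3D model. All names, frame rates, and numbers here are assumptions, not part of Copilot.

```python
import numpy as np

# Hypothetical illustration: derive per-frame "mouth openness" for a 2D avatar
# from an audio amplitude envelope. This is NOT Microsoft's implementation;
# it only sketches why no heavy 3D model is needed for basic lip-sync.

SAMPLE_RATE = 16_000      # audio samples per second (assumed)
FRAME_RATE = 30           # avatar animation frames per second (assumed)
SAMPLES_PER_FRAME = SAMPLE_RATE // FRAME_RATE

def mouth_openness(audio: np.ndarray) -> np.ndarray:
    """Map an audio waveform (float32, -1..1) to openness values in 0..1,
    one value per animation frame."""
    # Trim to a whole number of frames and compute RMS energy per frame.
    n_frames = len(audio) // SAMPLES_PER_FRAME
    frames = audio[: n_frames * SAMPLES_PER_FRAME].reshape(n_frames, SAMPLES_PER_FRAME)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    # Normalize, then smooth so the mouth does not flicker between frames.
    openness = rms / (rms.max() + 1e-8)
    kernel = np.ones(3) / 3
    return np.convolve(openness, kernel, mode="same")

if __name__ == "__main__":
    # One second of fake "speech": a modulated tone standing in for TTS output.
    t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
    fake_speech = (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 220 * t)
    values = mouth_openness(fake_speech.astype(np.float32))
    print(f"{len(values)} frames, openness range {values.min():.2f}..{values.max():.2f}")
```

In a production system the amplitude envelope would typically be replaced by phoneme- or viseme-level timing from the speech synthesizer, but the principle of driving a flat avatar straight from the audio stream, rather than posing a rigged 3D head, is the same.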
How it works: VASA-1 and visual conversations
VASA-1 is a technology that lets AI chat include visible cues, such as expressions, head tilts, and lip movements, without requiring complex 3D rendering. The Verge has highlighted this approach, noting that it emphasizes expressiveness and efficiency over photorealism. In short, users get a more lifelike, responsive chat partner without heavy demands on graphics resources.
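The VASA-1 model itself is a research project and is not publicly available, so the outline below is only a hypothetical sketch of an audio-driven talking-face pipeline in that general spirit: a single portrait image plus audio is mapped to compact motion latents and then to 2D frames, with no 3D mesh or rig involved. Every class and function name here is invented for illustration.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical outline of an audio-driven talking-face pipeline in the spirit
# of VASA-1: one portrait image plus audio in, 2D video frames out. The real
# model's internals are not public; these stubs stand in for learned components
# purely to show the data flow (note there is no 3D mesh or rig anywhere).

@dataclass
class MotionLatent:
    """Compact per-frame description of face motion."""
    expression: np.ndarray   # small embedding, not blendshape weights
    head_pose: np.ndarray    # yaw / pitch / roll
    lip_state: np.ndarray    # audio-derived mouth shape

def encode_identity(portrait_rgb: np.ndarray) -> np.ndarray:
    """Stub: extract an appearance embedding from a single still image."""
    return portrait_rgb.mean(axis=(0, 1))  # placeholder for a learned encoder

def audio_to_motion(audio_chunk: np.ndarray) -> MotionLatent:
    """Stub: map ~33 ms of audio to a motion latent (the learned, expressive part)."""
    energy = float(np.sqrt((audio_chunk ** 2).mean()))
    return MotionLatent(
        expression=np.full(8, energy),
        head_pose=np.zeros(3),
        lip_state=np.array([energy]),
    )

def render_frame(identity: np.ndarray, motion: MotionLatent) -> np.ndarray:
    """Stub: a learned generator would condition on identity AND motion;
    this placeholder ignores motion and just tints a blank frame."""
    frame = np.zeros((256, 256, 3), dtype=np.float32)
    frame[:] = identity / 255.0
    return frame

def run_pipeline(portrait_rgb: np.ndarray, audio: np.ndarray, frame_samples: int):
    identity = encode_identity(portrait_rgb)
    for start in range(0, len(audio) - frame_samples + 1, frame_samples):
        motion = audio_to_motion(audio[start:start + frame_samples])
        yield render_frame(identity, motion)

if __name__ == "__main__":
    portrait = np.random.randint(0, 255, (256, 256, 3)).astype(np.float32)
    speech = np.random.randn(16_000).astype(np.float32) * 0.1   # 1 s of fake audio
    frames = list(run_pipeline(portrait, speech, frame_samples=16_000 // 30))
    print(f"rendered {len(frames)} frames of shape {frames[0].shape}")
```

The design point this sketch tries to capture is the one the article describes: all of the expressiveness lives in compact, audio-conditioned motion codes and a 2D generator, which is why the approach can run responsively without demanding graphics resources.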
Use cases: when avatars can help
Portraits can be especially helpful during collaborative sessions where a lively partner can spark creativity. They may also assist in rehearsal contexts, such as preparing for a job interview, where an avatar can model responses, tone, and pacing. Teachers, students, and researchers exploring a topic might benefit from an avatar-guided dialogue that clarifies ideas and maintains engagement. By providing a more tangible conversational partner, avatars can enhance comprehension and memory even in text-heavy tasks.
Considerations and limits
As an experimental feature, Portraits is not guaranteed to be available in every Copilot experience. Availability, performance, and controls may vary by device and network conditions. Privacy and accessibility questions come with animated avatars that speak in real time, including how voice data is managed and how expressions are interpreted. Some users may find constant motion distracting, so Microsoft is likely to refine preferences and opt-out options as the feature evolves.
The road ahead
Portraits signals a broader push to make AI assistants feel more human and approachable. If Microsoft expands Copilot Labs with additional personas and richer interaction patterns, animated avatars could become a standard feature in AI chat interfaces, complementing voice, text, and other multimodal inputs. The potential is substantial for enterprise tools, education, and everyday productivity, provided users retain control over when and how avatars appear during conversations.
Conclusion
Microsoft’s Portraits adds a recognizable face to Copilot, turning the AI assistant into a more expressive conversational partner. By combining animated avatars with real-time voice, facial expressions, and lip movements via VASA-1 technology, the company seeks to improve brainstorming, interview prep, and topic exploration. As an evolving feature, Portraits invites users to assess whether a visual avatar enhances clarity and collaboration in AI-assisted work.