Mira Murati’s Thinking Machines Lab Introduces Interaction Models: A Native Multimodal Architecture for Real-Time Human-AI Collaboration

AimostAll news brief curated from MarkTechPost.

Source details

Original source
MarkTechPost
Published
2026-05-13
Primary topic
Creative AI

Why it matters

Image, video, music, design, voice, and creative workflow updates across generative media tools. Read the original source for the full report, then use the directory shortcuts below to compare the products and workflows the story points toward.

What happened

Thinking Machines Lab has introduced a research preview of TML-Interaction-Small, a 276B-parameter Mixture-of-Experts model with 12B active parameters, built around a multi-stream, time-aligned micro-turn architecture that processes 200 ms chunks of audio, video, and text simultaneously, eliminating the need for external voice-activity-detection harnesses. Unlike standard turn-based models that freeze perception while generating, the system runs two components in parallel: a real-time interaction model that maintains a continuous full-duplex exchange with the user, and an asynchronous background model that handles sustained reasoning and tool use while sharing the full conversation context throughout.
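The dual-component design described above can be sketched as two concurrent loops over a shared conversation context: a real-time loop that perceives and responds one 200 ms micro-turn at a time, and a background loop that reasons over the accumulating history without blocking it. This is a minimal illustrative sketch using Python's asyncio; all names (`SharedContext`, `realtime_interaction`, `background_reasoning`) are hypothetical and the model calls are stubbed out, since the actual architecture and API have not been published.

```python
# Illustrative sketch only; not TML's implementation. Model inference
# is replaced with trivial stand-ins to show the concurrency pattern.
import asyncio
from dataclasses import dataclass, field

CHUNK_MS = 200  # micro-turn length described in the announcement


@dataclass
class SharedContext:
    # Both components read from and append to the same history.
    events: list = field(default_factory=list)


async def realtime_interaction(ctx, chunks):
    # Full-duplex loop: consume one time-aligned micro-turn
    # (audio, video, text) and respond without pausing perception.
    for audio, video, text in chunks:
        ctx.events.append(("perceive", audio, video, text))
        ctx.events.append(("respond", f"ack:{text}"))
        await asyncio.sleep(0)  # yield so the background model can run


async def background_reasoning(ctx, stop):
    # Asynchronous component: sustained reasoning / tool use over the
    # same shared context while the real-time loop keeps talking.
    while not stop.is_set():
        perceived = [e for e in ctx.events if e[0] == "perceive"]
        if perceived:
            ctx.events.append(("reason", len(perceived)))
        await asyncio.sleep(0)


async def run(chunks):
    ctx = SharedContext()
    stop = asyncio.Event()
    bg = asyncio.create_task(background_reasoning(ctx, stop))
    await realtime_interaction(ctx, chunks)  # runs concurrently with bg
    stop.set()
    await bg
    return ctx


chunks = [("a0", "v0", "hello"), ("a1", "v1", "world")]
ctx = asyncio.run(run(chunks))
print([e[0] for e in ctx.events])
```

The key property the sketch captures is that "perceive" and "respond" events keep flowing while "reason" events are produced in between them, rather than perception freezing until a reasoning turn completes.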

What to do next

Use the related tools layer to compare output quality, control surfaces, and pricing before adopting the creative workflow.


This AimostAll brief summarizes the linked source so readers can scan AI developments quickly and jump to the original reporting when needed.


Directory context

Tools, models, and guides to go deeper

Move from the headline to product evaluation with topic-matched tool pages, model references, and buyer guides.
