Modern surgery is a spatial problem. Surgeons operate within complex three-dimensional anatomy where millimeters matter, coordinating people, instruments, and decisions in real time. Yet the digital systems surrounding the operating room have no understanding of that environment.
“The way surgery is performed and the way it’s measured are completely misaligned,” says Thomas Knox, CEO of VitVio. “We’re asking teams to operate in a physical system that the existing technology can’t actually see.”
Without direct visibility into the room, hospitals rely on manual inputs to reconstruct what happened. Take the simplest example: critical moments such as case start, incision, and closure are recorded manually after the fact, often inaccurately. Across hundreds of cases, those gaps compound into lost time, inefficiencies, and distorted operational insight.
Where current systems break down
Hospitals have EHRs, scheduling platforms, and analytics tools, but none can observe the operating room itself. Existing approaches attempt to fill this gap, but fall short:
- Manual inputs are delayed and inconsistent
- 2D vision cannot understand depth, interaction, or context
- Real-time location systems (RTLS) lack the precision to capture workflows
“This isn’t a data problem, it’s a sensing problem,” Knox says. “We’re trying to describe a complex physical environment without any system that can actually understand it.”
Making the operating room context-aware
VitVio’s 3D spatial AI introduces that missing layer by enabling direct understanding of the operating room as a physical system. Using 3D sensing, it continuously interprets how staff, instruments, and workflows move and interact in space.
Instead of relying on manual input, it can:
- Detect when a case actually starts and ends
- Track phase durations in real time to identify delays
- Automatically orchestrate teams
- Track tool usage to support tray rationalization
- Generate operative documentation automatically
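To make the idea concrete, consider the first two capabilities: once the system emits timestamped events for each detected phase, phase durations fall out of simple arithmetic rather than manual charting. The sketch below is purely illustrative (the event names, data shapes, and function are hypothetical, not VitVio's actual API):

```python
from dataclasses import dataclass

@dataclass
class PhaseEvent:
    phase: str        # hypothetical label, e.g. "case_start", "incision"
    timestamp: float  # seconds since an arbitrary reference, for illustration

def phase_durations(events: list[PhaseEvent]) -> dict[str, float]:
    """Derive phase durations from an ordered event stream.

    Each event marks the start of a phase; a phase ends when the
    next event arrives, so the final event closes out the case.
    """
    durations = {}
    for current, nxt in zip(events, events[1:]):
        durations[current.phase] = nxt.timestamp - current.timestamp
    return durations

# Illustrative event stream (times in seconds)
events = [
    PhaseEvent("case_start", 0.0),
    PhaseEvent("incision", 900.0),
    PhaseEvent("closure", 5400.0),
    PhaseEvent("case_end", 6300.0),
]
print(phase_durations(events))
# {'case_start': 900.0, 'incision': 4500.0, 'closure': 900.0}
```

The same derived durations could then feed delay alerts or turnover metrics downstream, without any clinician entering a time by hand.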
Once the system can interpret what’s happening in the room, downstream systems can respond automatically, and clinicians can focus on the patient.
“Clinicians shouldn’t be acting as data entry clerks,” Knox adds. “When the system can make sense of the room, the rest should take care of itself automatically.”

Turning context into operational outcomes
Once the operating room is digitized, operational performance becomes far more precise. Systems no longer rely on approximations. They respond to what is actually happening.
Hospitals can:
- Increase throughput without burning out staff
- Reduce turnover time by pinpointing physical bottlenecks
- Improve billing capture and reduce revenue leakage
- Reduce costs and material waste
- Free staff to focus on patients
“Once you can actually see what’s happening, you stop guessing,” Knox says. “You move from estimation to precision.”
A new operating model for the OR
By making the operating room context-aware, 3D spatial AI turns it into a system that can be continuously measured and improved.
“The operating room still isn’t perceived as a 3D spatial system,” Knox explains. “Once it is, everything changes.”
Rather than coordinating disconnected events, hospitals can manage the OR as a single, integrated system: one that understands itself in real time and continuously optimizes processes to benefit both staff and patients.
