
Humans and AI, Working as an Ensemble

November 14, 2025 | Topic: Industry


Author: Dave Marra

The journey from prompts to partners: this is the next generation of applied AI.

A 737 sits cold on the tarmac in Boston. A maintenance technician kneels beneath the wing, hands in the service panel. He is equipped with a complete maintenance productivity system integrated with aircraft telemetry, the airline’s maintenance history, operational logistics data, and the latest system data, direct from the OEM. The system delivers reasoned insights to the operator and relays real-time activity back to the airline’s enterprise data platform.

It doesn’t “suggest.” It knows.

The bleed‑air valve that routes hot air for cabin pressurization and wing de‑icing has begun to stick. Replace it. While the panel is open, retorque a loose clamp and swap the filter.

The technician acknowledges and begins work while the system handles the rest: reserves parts, updates the work order, pings the gate, and writes the actions back to the enterprise ontology. Minutes later, the aircraft is ready for pushback, avoiding a future delay.

This is the next phase of applied AI—a categorically new segment of technology: body‑borne productivity systems that give the operator reasoned insights direct from the enterprise ontology, and give the ontology a coherent view of the enterprise from a distinctly human perspective. AI systems see what you see, hear what you hear, and act with you, not for you. Humans and machines operate as an ensemble, each learning from, supervising, and correcting the other in real time.

Why does the airport scene matter? Because operations live and die by uptime, throughput, and safety. The jet pushes on time, not because of a flashy overlay, but because the frontline productivity system is the substrate on which the enterprise ontology and its humans collaborate. That’s the difference between AI that suggests and AI that closes the loop.

When a system can see what you see, hear what you hear, and act alongside you, the operator stops being a passive recipient of analytics and becomes part of the sensing fabric: the eyes and ears of the enterprise ontology. It’s not enough for the operator to consume insights; the operator’s system should capture ground truth, structure it, and push it upstream as the work happens.
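What “capture ground truth, structure it, and push it upstream” might look like can be sketched in a few lines. Everything here, the Observation record, the publish_observation helper, and the endpoint, is illustrative only, not an actual Rivet or airline API:

```python
# Hypothetical sketch: structuring a frontline observation and pushing it
# upstream as the work happens. All names and endpoints are illustrative.
import json
import time
import urllib.request
from dataclasses import dataclass, asdict, field

@dataclass
class Observation:
    operator_id: str    # who saw it
    asset_id: str       # what it is about (e.g., an aircraft tail number)
    event_type: str     # e.g., "bleed_air_valve_sticking"
    detail: dict        # structured payload for the enterprise data platform
    timestamp: float = field(default_factory=time.time)

def publish_observation(obs: Observation, endpoint: str) -> int:
    """Serialize the observation and post it to the enterprise data platform."""
    body = json.dumps(asdict(obs)).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=body,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example: the technician's system flags a sticking bleed-air valve.
obs = Observation(
    operator_id="tech-042",
    asset_id="N737BA",
    event_type="bleed_air_valve_sticking",
    detail={"recommended_action": "replace"},
)
# publish_observation(obs, "https://ontology.example.com/observations")  # hypothetical endpoint
```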

On a production line, the operator shouldn’t halt work to “program” a robot. They should task it like a teammate: “Analyze the telemetry from Station 7 and flag anomalies.” “Inspect this weld line for micro-fractures.” In theater, a Soldier shouldn’t require weeks of training and a rack of controllers to drive autonomous systems. They should issue natural commands, with the expectation that the system understands the area of responsibility as any other human in the loop would: “Sweep the eastern ridge of NAI X, keep a 200-meter offset, report heat signatures.” As swarms of robots proliferate, natural‑language tasking and shared context are the only scalable way to turn autonomy into meaningful, productive action.
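A rough sketch of what such a natural command might compile into before it reaches an autonomy stack follows; the task schema, field names, and the stub parser are assumptions for illustration, not a real tasking interface:

```python
# Hypothetical sketch: compiling a spoken command into a structured task.
# The schema and field names are illustrative, not an actual robot API.
from dataclasses import dataclass

@dataclass
class SweepTask:
    area_of_interest: str   # named area of interest, e.g. "NAI X, eastern ridge"
    standoff_m: float       # offset to maintain from the feature, in meters
    report: list[str]       # sensor products to report back

def compile_command(utterance: str) -> SweepTask:
    """Stand-in for an intent parser; in practice, a language model grounded
    in the shared map would fill these fields from the utterance."""
    # Hard-coded here to show the target structure, not the parsing itself.
    return SweepTask(
        area_of_interest="NAI X, eastern ridge",
        standoff_m=200.0,
        report=["heat_signatures"],
    )

task = compile_command(
    "Sweep the eastern ridge of NAI X, keep a 200-meter offset, report heat signatures")
print(task)
```

The point of the structure is that the same schema can task one robot or a swarm, while the operator only ever speaks in terms of the mission.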

Shared context requires a shared picture of reality: a network of perception that spans the operator, the machines beside them, and the environment itself.

That’s what recognition really means. It isn’t limited to what’s detected by a drone’s camera or the sensors worn on a single pair of glasses. It’s the collective understanding created when every sensor — human-borne, robot-borne, and environmental — contributes to the same cognitive map.

  • On the flight line: distinguish aircraft by tail number and maintenance status; flag hot brakes; map safe approach zones from engine temperature and wheel‑well clearance; identify which aircraft is yours, which panels are open, which tools are checked out, and which hazards are active.
  • On the factory floor: spot a spilled fluid, a moving crane, or a robotic arm sweeping its arc; recognize the pallet you need, the torque wrench in your hand, and the lockout tag still on the power box.
  • On the battlefield: parse friend, foe, and neutral; distinguish dismounts from vehicles and cold weapons from hot barrels; detect thermal anomalies, signal emissions, and movement across sectors; register terrain features, line‑of‑sight constraints, and named areas of interest—without a human glued to a map.

When recognition expands beyond the lens and into the fabric of the operation, the system stops interpreting the world and starts participating in it. It ties every object and event—valve, pallet, vehicle, target, person—back to logistics, constraints, and history, enabling operators to act faster, safer, and smarter where it matters most.
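One way to picture a single entry in that shared cognitive map is sketched below; every name and field is invented for illustration:

```python
# Hypothetical sketch: one entry in a shared cognitive map, tying a perceived
# object back to enterprise context. All identifiers are illustrative.
from dataclasses import dataclass, field

@dataclass
class MapEntity:
    entity_id: str                  # e.g. "valve:BAV-31-42"
    entity_type: str                # "valve", "pallet", "vehicle", "person", ...
    observed_by: list[str]          # which sensors currently contribute to it
    position: tuple[float, float, float]  # location in the shared spatial frame
    constraints: list[str] = field(default_factory=list)    # e.g. "hot surface"
    history_refs: list[str] = field(default_factory=list)   # maintenance / log records
    logistics_refs: list[str] = field(default_factory=list) # parts, work orders

valve = MapEntity(
    entity_id="valve:BAV-31-42",
    entity_type="valve",
    observed_by=["glasses:tech-042", "drone:ramp-07"],
    position=(42.1, -7.3, 1.8),
    constraints=["hot surface", "panel open"],
    history_refs=["wo:2025-11-14-0031"],
    logistics_refs=["part:bleed-air-valve-in-stock"],
)
```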

Everyone talks about large language models. Typing a prompt and getting a reasoned answer is just the first horizon. Horizon two is asking the AI to do a task on your behalf. Horizon three is a true ensemble. The system says, “I know the telemetry and the enterprise context; you’re here in the factory or on the battlefield; consider X, Y, Z outcomes and P, D, Q remediations.” The human sets intent and decision space, approves or adjusts, and the system executes, including the administrative tail: opening and closing work orders, reserving parts, updating logs, coordinating autonomy stacks, and writing back to scheduling and safety.
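A minimal sketch of that horizon-three loop, with the proposal structure, action names, and approval step all hypothetical:

```python
# Hypothetical sketch of the horizon-three loop: the system proposes, the
# human approves or adjusts, and the system executes the administrative tail.
from dataclasses import dataclass

@dataclass
class Proposal:
    summary: str
    actions: list[str]   # the administrative tail the system runs if approved

def propose(context: dict) -> Proposal:
    # In practice, produced by a model reasoning over telemetry + enterprise data.
    return Proposal(
        summary="Replace sticking bleed-air valve while the panel is open",
        actions=["reserve_part", "open_work_order",
                 "update_gate_schedule", "write_back_to_ontology"],
    )

def run(action: str) -> None:
    print(f"executing {action}")   # stand-in for calls into enterprise systems

proposal = propose({"aircraft": "N737BA"})
print(proposal.summary)
approved = True   # in practice, the operator approves or adjusts on-device
if approved:
    for action in proposal.actions:
        run(action)
```

The human stays on the intent and approval side of the loop; the system owns the bookkeeping.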

How do we enable this? AI must understand the world the way humans do. There’s no substitute for a pair of glasses that can see what you see and hear what you hear, so action models can learn from journeymen or master tradesmen and then teach and guide novices to perform with the same quality and efficiency. That’s how a plant raises its floor without lowering its ceiling: apprenticeship at scale.

Rivet is not another iteration of augmented reality. We’re building task systems for people who do hard jobs in hard places. The visual layer is a means; the mission is a closed loop from perception to reasoning to action under real‑world constraints.

Under a wing, on a line, or downrange, the measure of technology is simple: does it make the operator faster, safer, or more capable?

Body-borne systems that can see the nouns, control the robots, and read and write to command and control will. They’ll turn data into decisions, and decisions into action, in seconds, not hours. The technician’s jet still takes off, but what really moves is the enterprise.

AI has left the lab. It’s on the front line.

That’s the mission of Rivet.


Contact the Rivet team

For interest in Rivet Integrated Task Systems and frontline collaborations, get in touch at info@rivet.us.

For press inquiries, reach us at comms@rivet.us.
