[DRAFT — PENDING TECHNICAL REVIEW BY OPERATIONS LEAD]
FIELD NOTE / 002 L04 · SOFTWARE MAR 2026 7 MIN

Chain-of-custody systems for field equipment tracking.

Standard inventory systems model warehouses. Field operations are not warehouses. After three failed attempts, here is what we settled on.

We build custody and inventory software for teams that don't fit normal inventory systems. Most of our clients share a single operational pattern: expensive kit that moves between sites, gets sub-rented, gets loaned to subcontractors, and is occasionally lost, stolen, or silently retired. Standard inventory systems treat assets as static things in warehouses. Field operations are not warehouses.

Three attempts later, here's what we settled on.

What off-the-shelf systems get wrong

Off-the-shelf inventory software (SAP, Odoo, NetSuite, the lot) models inventory as:

Asset X is at Location Y in quantity N.

This model assumes you know where things are. Field operations assume the opposite — you know where things were last seen, and the accuracy of that knowledge decays with time. The question "where is the OTDR meter?" has a probabilistic answer, not a definitive one.

Attempt one: location-based model

We started with the standard model. Inventory at Warsaw depot, inventory at Berlin depot, inventory assigned to Crew 3. Crews were supposed to check things out and back in.

Result: within six months, the system's view of reality and actual reality had diverged by 23%. Crews forgot to check in. A unit assigned to "Crew 3" might actually be sitting in the back of a van at a subcontractor's site 400 km from where the system thought it was.

Attempt two: scan-on-move

We required every movement to be scanned — QR codes on equipment, scan on handoff, scan on arrival. Audit trail per unit.

Result: better in theory, same in practice. Crews on a 6am start don't scan things. Crews working in a trench don't scan things. Crews at the end of a 14-hour day absolutely don't scan things. The scan compliance rate was about 60% on a good day.

Attempt three: the model that works

We stopped trying to record ground truth and started recording confidence.

Every asset has a location belief — the last place the system had evidence of it. Alongside the belief, we record a confidence decay — how much the system trusts the belief based on how old it is and what kind of evidence created it.

asset_id:    OTDR-047
location:    Warsaw depot
confidence:  0.42
last_evidence: {
  type: "handoff_scan",
  date: "2026-03-14T08:22:00Z",
  operator: "M. Kowal"
}
decay_rate:  0.015/day

Confidence decays passively. If the asset is scanned into a depot, confidence goes to 0.98. If it's observed on a project's equipment list, confidence rises to 0.85. If no evidence arrives for two weeks, confidence falls into "probably missing" territory.
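That update rule can be sketched in a few lines. Assumptions to flag: the decay is modelled as linear per-day (the 0.015/day figure in the record suggests this, though the note doesn't rule out an exponential curve), and the handoff-scan weight of 0.90 is invented — only the 0.98 and 0.85 weights appear above. Class and function names are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Evidence weights: 0.98 (depot scan) and 0.85 (project list) are from the
# note; 0.90 for a handoff scan is an assumed in-between value.
EVIDENCE_CONFIDENCE = {
    "depot_scan": 0.98,
    "project_list": 0.85,
    "handoff_scan": 0.90,  # assumption, not stated in the note
}

@dataclass
class AssetBelief:
    asset_id: str
    location: str
    confidence: float
    last_evidence_date: datetime
    decay_rate: float  # confidence lost per day, e.g. 0.015

    def current_confidence(self, now: datetime) -> float:
        """Passive linear decay since the last piece of evidence."""
        days = (now - self.last_evidence_date).total_seconds() / 86400
        return max(0.0, self.confidence - self.decay_rate * days)

    def record_evidence(self, kind: str, location: str, when: datetime) -> None:
        """New evidence resets the belief to that evidence type's weight."""
        self.confidence = EVIDENCE_CONFIDENCE[kind]
        self.location = location
        self.last_evidence_date = when
```

With the record above as input, two weeks of silence after a handoff scan would drag a 0.42 belief down to 0.21 — well inside "probably missing" territory.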

The UI then surfaces what matters — the low-confidence assets — and routes them to the operations lead for a reconciliation check. The high-confidence assets are ignored until their confidence decays.
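That triage step is essentially a filter-and-sort. A minimal self-contained sketch, assuming the same linear decay as the record above; the field names, the 0.5 threshold, and the 12-item cap are all illustrative choices, not the production values:

```python
from datetime import datetime, timezone

def current_confidence(asset: dict, now: datetime) -> float:
    """Decayed belief: stored confidence minus linear loss per elapsed day."""
    days = (now - asset["last_evidence_date"]).total_seconds() / 86400
    return max(0.0, asset["confidence"] - asset["decay_rate"] * days)

def reconciliation_queue(assets: list[dict], now: datetime,
                         threshold: float = 0.5, limit: int = 12) -> list[dict]:
    """Assets whose decayed confidence has drifted below the threshold,
    worst first, capped at what one operations lead can act on."""
    drifting = [a for a in assets if current_confidence(a, now) < threshold]
    drifting.sort(key=lambda a: current_confidence(a, now))
    return drifting[:limit]
```

The cap matters as much as the threshold: the point is a short queue someone actually clears, not a complete report nobody reads.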

Why this works when scan-on-move didn't

Three reasons:

  1. It accepts that data will be incomplete and models the incompleteness. Rather than punishing missing scans, it prices them into the model.
  2. It focuses attention. Operations leads don't see a wall of assets, they see the 12 items that are drifting out of confidence.
  3. It creates a structured prompt for physical reconciliation. Once a month, the 30 lowest-confidence items get found or written off. The system learns from the outcome.

OBSERVATION

Across our three custody deployments, confidence-based tracking reduced "unaccounted" equipment from ~12% annual loss to ~3% annual loss. The remaining 3% is genuine loss — actual theft, actual damage, actual crews who quit with a bag of kit. The earlier 9% was bookkeeping.

Implementation details

Evidence sources we currently weight: depot check-in scans, handoff scans between crews, and appearances on project equipment lists. A depot scan carries the most weight; a project-list observation less, since it doesn't prove anyone physically handled the unit.

Decay rates are tuned per asset class. A fusion splicer (expensive, handled carefully) decays slower than a bundle of patch cables (cheap, commonly lost).
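For illustration, per-class tuning can be as simple as a lookup table. The splicer/patch-cable contrast is from the note; every specific rate below except the OTDR's 0.015/day is invented, and the class names are hypothetical.

```python
# Confidence lost per day, tuned per asset class (illustrative values).
DECAY_RATE_PER_DAY = {
    "fusion_splicer": 0.005,   # expensive, handled carefully: trust lingers
    "otdr_meter":     0.015,   # the rate from the example record above
    "patch_cables":   0.050,   # cheap, commonly lost: trust evaporates fast
}

def decay_rate_for(asset_class: str, default: float = 0.02) -> float:
    """Look up a tuned rate, falling back to a conservative default."""
    return DECAY_RATE_PER_DAY.get(asset_class, default)
```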

What this builds on top of

The confidence model sits on top of a fairly standard data structure — Odoo inventory, or a custom Postgres schema, or spreadsheet-backed for very small teams. What's custom is the confidence layer, the decay calculation, and the reconciliation workflow. Most of the engineering is in the UX: making low-confidence items easy to fix, hard to ignore.

We open-sourced a reference implementation of the confidence layer for teams building their own. [DRAFT: link to repo when ready]

[DRAFT AUTHOR] — [Role, specialisation]
Edited for the field notes series, Oxenex. Corrections to notes@oxenex.eu.