When Compliance Data Enters the Courtroom – and Fails
Article Summary
Compliance data that satisfies audits can fail completely when tested as legal evidence. Most systems are not designed to prove their own integrity, creating a hidden risk that only emerges under forensic scrutiny.
Introduction
Across healthcare, testing laboratories, and regulated environments, digital systems quietly record what organisations rely on most: proof that critical processes were carried out correctly.
Sterilisation cycles. Device testing. Environmental controls. Audit trails.
These records are routinely described as “Compliance Data”. But there is a moment – it arrives more often than you might think, and it is decisive – when that same data stops being operational and becomes something else entirely. It becomes Evidence. And in that moment, many organisations discover – often for the first time – that their systems were never designed for that purpose.
The Assumption That Breaks
Most organisations operate on a simple, unspoken assumption: If a system records something, that record can be relied upon.
In day-to-day operations, that assumption holds. Systems produce reports, logs, dashboards. Auditors review them. Regulators accept them. Operational Systems often run on Assumption. Courts never do. Courts require Proof.
The question is not: “Does your system record the data?” The question is: “Can you prove that this data is what you say it is?”
That is not a technical distinction. It is a legal fault line. Most organisations discover that fault line only when they are already standing on it.

The Human Story Behind a Legal Principle
In 1993, a petrol station manager named Shepherd was accused of theft after discrepancies appeared in a computerised till system. There was no direct evidence. The case depended on what the system reported.
He denied wrongdoing and argued that the system itself could have been wrong. The case reached the House of Lords (then the United Kingdom’s highest Court), which held that a computer system is presumed to be operating properly unless there is evidence to the contrary.
Shepherd lost. His conviction was upheld.
That principle continues to influence how Courts approach Computer-Generated Evidence today. This is not abstract. Real executives stand in real Courtrooms, defending the accuracy of their systems while those systems are examined, in minute detail, under hostile forensic challenge.
Before Everything Else: Admissibility
Before a Court considers whether digital Evidence is persuasive, it asks a more fundamental question: Is it Admissible at all? If it is not, the material is never considered. In practical terms, it is as if it does not exist.
In legal terms, Inadmissible Data is indistinguishable from no data at all.
Not One Test – Several
Admissibility is not a single hurdle. It is a set of conditions. In substance, the Court will require answers to questions including:
- Was the record created at or close to the time of the event?
- Can a proper audit trail be demonstrated?
- Is there a clear and unbroken chain of custody?
- Has the integrity of the system that produced the data been positively established?
- Is the storage environment secure and resistant to alteration?
- Has human intervention introduced the possibility of error?
These are not technical preferences. They are Conditions. If they cannot be satisfied, the material may not be admitted at all.
Remember: Inadmissible Data = No Data
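Several of these conditions – contemporaneous creation, a demonstrable audit trail, resistance to alteration – can be designed in at the moment a record is created. As a minimal sketch (an illustration of the principle, not a production design or any particular standard), a hash-chained audit trail ties each entry to its predecessor, so any later alteration is detectable:

```python
import hashlib
import json
from datetime import datetime, timezone


def append_record(chain, event):
    """Append an event to a hash-chained audit trail.

    Each entry carries a UTC timestamp taken at creation and a SHA-256
    hash covering both the event and the previous entry's hash, so
    altering any earlier entry breaks every hash that follows it.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)
    return entry


def verify_chain(chain):
    """Re-derive every hash; return False on any break in the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```

If any earlier entry is altered, its re-derived hash no longer matches the `prev_hash` stored by the next entry, and verification fails from that point onward – which is precisely the property an admissibility challenge will probe.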
What That Means – In Reality
A system can appear to function perfectly in daily operation. It can generate reports. It can satisfy audits. It can pass internal review. And still fail at this first vital legal step. If it does, the consequence is absolute: The data does not go before the Court. It cannot prove anything.
Why This Problem Has Become Harder, Not Easier
The modern shift to cloud computing has made this problem more acute.
Data is now created and stored across distributed systems, often abstracted away from the organisation relying on it. Infrastructure is virtualised. Databases are managed by third parties. System states are transient and rarely captured in a reproducible form.
From an operational perspective, this is efficient. From an evidential perspective, it introduces a critical problem:
The organisation may no longer be able to demonstrate how the system was configured or behaving at the precise moment the record was created – let alone satisfy the admissibility conditions set out above.
Control has, in many cases, been traded for convenience – without a clear understanding of the evidential cost.

Where Cases Are Actually Won or Lost
Cases do not turn on the presumption. They turn on what happens when it is challenged. The real battleground is this: Can the party relying on the data prove that the system was operating properly when the data was created?
And here is the uncomfortable truth: Very few systems in use today are capable of doing that in a way that would withstand forensic challenge. Most organisations have no record – no artefact – showing the state of the system at the moment the data came into existence.
So when the question is asked: “Was the system working properly at that exact time?” The answer is rarely Evidence. It is belief. “We believe the system was working.”
That is not Evidence. That is assertion. And assertion does not survive scrutiny.
The Failure Point Nobody Sees Coming
The problem is not that systems record data. The problem is that they do not record themselves.
There is usually:
- No contemporaneous record of system state
- No verifiable configuration snapshot
- No independent integrity marker tied to the event
- No chain linking data creation to system condition
So when reliability is put in issue, there is nothing to show. Nothing to prove. And at that point, the data begins to lose its Admissibility.
Most systems are designed to record events. Very few are designed to produce records that can later be proven – independently and under challenge – to be what they claim to be.
That is not a question of storage. It is a question of design.
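What a record that “records itself” might look like can be sketched in a few lines. The helper and state fields below are illustrative assumptions, not a prescribed design: each record carries a contemporaneous snapshot of the environment that produced it, plus an integrity hash binding the event to that snapshot at the moment of creation.

```python
import hashlib
import json
import platform
import sys
from datetime import datetime, timezone


def capture_system_state():
    """Capture a minimal snapshot of the producing environment.

    Illustrative fields only: a real system would also record the
    application version, configuration checksums, calibration status,
    and similar artefacts relevant to proving proper operation.
    """
    return {
        "python_version": sys.version,
        "platform": platform.platform(),
        "hostname": platform.node(),
    }


def create_evidential_record(event):
    """Bind an event to the system state that existed when it was recorded."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "system_state": capture_system_state(),
    }
    # Hash the canonical serialisation so the event, its timestamp and
    # the system-state snapshot are sealed together as one artefact.
    payload = json.dumps(record, sort_keys=True).encode()
    record["integrity_hash"] = hashlib.sha256(payload).hexdigest()
    return record
```

The point is not these particular fields; it is that the answer to “was the system working properly at that exact time?” becomes an artefact created at the event, rather than an assertion made years later.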
The Cliff Edge
In everyday operation, organisations assume their records are fine. And they are – until they are not. Because once a credible admissibility challenge is raised, everything changes. The presumption falls away. The question becomes immediate and unforgiving: “Show me. Show me that the system was working properly at the moment this record was created.”
And for most systems, that is the Cliff Edge. Up to that point, everything appears secure. Beyond it, everything can collapse – and collapse will be swift.
If your Data were tested tomorrow, would it be Admissible – or would it not?
Closing Thought
Most organisations will never see their data tested in Court. But many do. And when they do, the question is simple: “Why should this be allowed into Evidence?” If the answer is “because our system recorded it”, that will not be enough. If that answer fails, everything built on that data fails with it. This is not a failure of technology. It is a failure of evidential design.
The question is no longer whether your systems record what happened. It is whether, when it matters most, your data is Admissible.
Remember: No Admissibility = No Data
What Comes Next
This problem sits inside systems already in use – quietly generating records that may one day be relied upon as adducible Evidence.
In the Articles that follow, we will examine where the problem actually lies:
- Why storing data is not the same as proving it.
- Why testing and certification records can become vulnerable under challenge.
- Why decontamination records can sit at the centre of disputes – and yet prove nothing.
- What Courts actually look for when they examine audit trails, timestamps and chains of custody.
- And what it means to design systems capable not just of recording events – but of proving them.
If your Data were tested tomorrow, would it be Admissible – or would it not?
Disclaimer. The views and opinions expressed in this article are solely those of the author and do not necessarily reflect the official policy or position of Test Labs Limited. The content provided is for informational purposes only and is not intended to constitute legal or professional advice. Test Labs assumes no responsibility for any errors or omissions in the content of this article, nor for any actions taken in reliance thereon.
