Temperature control is foundational, but how well do we really understand the ranges we rely on?
From cryogenic to controlled ambient, we use a shared language of set points, limits, and alarm thresholds, yet inconsistencies in definitions, assumptions, and practices persist. This piece aims to unpack some of the most common sources of confusion and highlight where precision in terminology, mapping, and monitoring makes all the difference.
What’s in a temperature?
Cold storage is something we’ve come to take for granted, much like electricity or running water. Yet, in biomedical and pharmaceutical environments, understanding the exact temperature your product is stored at can mean the difference between preserved integrity and total loss. The distinction between knowing your temperature and really knowing your temperatures is subtle but crucial.
Across the industry, many temperature ranges are thrown around, often without full understanding of what they truly mean or imply. Take ‘cryogenic’, for instance. This term might be used loosely to refer to storage below:
-135°C – often considered the glass transition temperature, below which water becomes a stable, amorphous solid rather than forming damaging ice crystals. Historically, this has been a key threshold for storing biological samples like cells or tissues.
-150°C – now frequently considered a stricter, more modern minimum for long-term cryogenic preservation.
-196°C – the boiling point of liquid nitrogen, which is the gold standard for deep cryogenic storage.
All of these could be labelled “cryogenic,” but they each carry very different implications for risk, stability, and equipment needs. So when someone says, “This product must be stored cryogenically,” it’s essential to ask: how cold, exactly?
Then there’s -80°C, which gets even messier. In practice, “-80°C” can be interpreted as any of the following:
Anything below -45°C or -60°C, especially in loosely regulated setups.
A general target range from -45°C to -80°C.
A confusing mathematical range like -62.5°C ±17.5°C, which technically spans -80°C to -45°C.
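Wherever a “±” specification appears, it helps to expand it into explicit limits before comparing it with anything else. A minimal sketch (the function name is my own, not an industry convention):

```python
def tolerance_to_range(set_point, tolerance):
    """Convert a 'set point ± tolerance' spec into explicit (low, high) limits."""
    return (set_point - tolerance, set_point + tolerance)

# The "-62.5°C ±17.5°C" spec above really spans -80°C to -45°C:
low, high = tolerance_to_range(-62.5, 17.5)
print(low, high)  # -80.0 -45.0
```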
Here’s why this ambiguity matters: water—the common component in nearly all biological materials—can still undergo ice crystal formation and molecular motion at certain sub-zero temperatures. Locking up "all available water" isn’t as simple as going below 0°C. In fact, some water remains mobile even at temperatures as low as -20°C or -40°C, making biological materials vulnerable unless they are stored at much colder temperatures.
Inconsistent definitions lead to critical errors. A product intended to be stored at -80°C might end up at -45°C for hours—or even days—due to miscommunication or equipment limitations. This can compromise the product’s viability, even though the temperature “sounds cold enough.”
Let’s talk terminology: what’s in a number?
Let’s start with some nomenclature: even the way we write a defined temperature range matters more than you might think.
For example, “2 to 8°C” vs “2–8°C” (with a dash) might seem identical. Most people would write or accept the latter. But what happens when you're specifying negative temperatures?
Consider:
“-6 to -8°C” is clear.
“-6 - -8°C” becomes visually cluttered and prone to misinterpretation—does it mean -14°C? Is that a typo?
This is why spelling it out as “to” rather than using dashes can help avoid mistakes, especially when dealing with negative values.
Now, consider “6 ±4°C” vs. “2 to 8°C”: they may look similar at first glance, but they’re used in very different ways.
If I specify 6°C, I expect the product to be held at exactly that, allowing only a tight tolerance—say, ±4°C. That means any variation is assessed around a fixed point.
In contrast, a temperature range of 2 to 8°C implies flexibility: the unit can drift anywhere within that range. One day it might be 3°C, and another day it might be 7°C; that's acceptable. This looser tolerance might be fine for some products—but disastrous for others that need pinpoint consistency.
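The difference is easy to make concrete. A hedged sketch, using hypothetical helper names, that checks a single reading against each style of specification:

```python
def in_tolerance(reading, set_point, tol):
    """Fixed set point: every reading is judged against one centre value."""
    return abs(reading - set_point) <= tol

def in_range(reading, low, high):
    """Range spec: any value between the limits is acceptable."""
    return low <= reading <= high

# The same 9.0°C reading gets a different verdict under each spec:
print(in_tolerance(9.0, 6.0, 4.0))  # True  (within 6°C ±4°C)
print(in_range(9.0, 2.0, 8.0))      # False (outside 2 to 8°C)
```

The point is not the specific numbers but that the two specifications answer different questions, so a reading can pass one and fail the other.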
This becomes critical when you start examining temperature mapping and measurement uncertainty.
Why map storage environments?
Once a storage range is defined, the environment must be mapped, that is, tested to ensure every area of the storage unit maintains the required conditions. It's not enough for the digital display on the front to show the right temperature. Internal airflow patterns, equipment layout, and proximity to doors or fans all create microenvironments that might be several degrees warmer or cooler than the average.
Time and again, products have been compromised because they were stored in a “compliant” unit that hadn’t been mapped properly, leaving hidden hotspots or cold zones where items fall out of specification.
Before mapping, a reference probe (often a calibrated, high-accuracy sensor) is used to compare against the unit's internal reading. If the unit is set to -20°C but the reference probe shows -24°C, the discrepancy needs to be addressed—usually by applying an offset—so both agree on the actual temperature.
Of course, probe accuracy matters here too. If the unit can only read to the nearest degree but the reference reads to 0.01°C, matching becomes more difficult. A risk-based approach should be used, and ideally, the reference and unit probes should be positioned as closely together as possible.
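The offset step can be sketched as follows; the names and values are illustrative only, matching the -20°C/-24°C example above:

```python
def display_offset(unit_display, reference_reading):
    """Correction to apply to the unit's display so it agrees with the
    calibrated reference probe (negative = display reads warm)."""
    return reference_reading - unit_display

# Unit displays -20°C while the calibrated reference reads -24°C:
print(display_offset(-20.0, -24.0))  # -4.0, i.e. the display reads 4°C warm
```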
How mapping is done
Mapping needs to be representative. At Biovault, we typically perform 8-point or 16-point mapping, depending on the size of the unit and the range being tested.
For compressor-driven fridges or freezers (which cycle on and off), the mapping should run for at least 24 hours to capture the full temperature fluctuations over several compressor cycles. Some cycles can take more than 6 hours.
For cryogenic storage (such as liquid nitrogen freezers), longer mapping periods of up to a week may be necessary, especially when nitrogen fills occur only once a week. It’s essential to define the real operational extremes—both high and low—that occur within the unit over time.
Once mapping is complete, you're left with thousands of data points. The temptation is to average the data: it’s easy, and often makes the results look great. But this is a mistake.
You’re not trying to prove that your average temperature was in range. You’re trying to prove that no part of the storage environment was ever out of range.
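One way to see the danger of averaging, using a small made-up set of mapping readings:

```python
# Hypothetical 24-hour readings (°C) from one probe position:
readings = [5.8, 6.1, 6.4, 8.3, 6.0, 5.7, 6.2, 5.9]

low, high = 4.0, 8.0  # the specified storage range

average_ok = low <= sum(readings) / len(readings) <= high
every_point_ok = all(low <= r <= high for r in readings)

print(average_ok)      # True: the mean looks compliant
print(every_point_ok)  # False: the 8.3°C excursion breaches the range
```

The mean hides the excursion entirely; only the point-by-point check reveals it.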
A real-world example
Let’s say we need to store a product at 6°C ±2°C.
Initial equipment qualification shows:
When set to 6°C, the reference probe reads 6.2°C.
The fridge has been adjusted to 5.9°C, and the reference probe now reads 6.0°C (UKAS traceable).
The unit has three shelves. We map 8 points:
Bottom shelf: back left, front right
Middle shelf: front left, back right
Top shelf: back left, front right
One probe in the centre
One near the reference probe
After 24 hours:
Lowest temperature: 5.5°C (bottom left)
Highest: 7.5°C (top right)
The probe adjacent to the reference remains within 0.2°C of it
Although the fridge maintains a temperature of 6.0°C at the reference point, the internal environment varies between 5.5°C and 7.5°C. That range matters: the coldest mapped point sits 0.5°C below the reference, and the warmest sits 1.5°C above it. To guarantee compliance with the ±2°C requirement (4°C to 8°C), we would have to set:
Lower limit: 4.5°C
Upper limit: 6.5°C
Any tighter, and parts of the fridge would fall outside acceptable parameters.
We also observe a 0.2°C measurement uncertainty, indicating that even the most accurate reference probes can differ slightly from the unit display. When this is factored in, the effective safe range shrinks further—to 4.7°C to 6.3°C.
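The arithmetic behind those limits can be written out explicitly. This is a sketch of the reasoning, not Biovault’s actual procedure:

```python
def safe_reference_limits(product_low, product_high,
                          coldest_offset, warmest_offset, uncertainty):
    """Limits for the reference probe such that every mapped point, allowing
    for measurement uncertainty, stays inside the product specification.

    coldest_offset: how far below the reference the coldest mapped spot sits (°C)
    warmest_offset: how far above the reference the warmest mapped spot sits (°C)
    """
    lower = product_low + coldest_offset + uncertainty
    upper = product_high - warmest_offset - uncertainty
    return lower, upper

# Worked example: spec 6°C ±2°C (4 to 8°C), coldest spot 0.5°C below the
# reference, warmest 1.5°C above, 0.2°C measurement uncertainty:
lo, hi = safe_reference_limits(4.0, 8.0, 0.5, 1.5, 0.2)
print(round(lo, 1), round(hi, 1))  # 4.7 6.3
```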
In some cases, the only safe option might be to take a shelf out of use—or to replace the equipment altogether.
The role of alarm points
One last point: alarm thresholds are often misunderstood or misused. Many facilities simply set the alarm limits to match the product limits. This can be dangerous.
At Biovault, we store biological samples in a cryogenic state at -196°C. The product limit is -150°C—but our alarm triggers at -175°C.
Why? Because if we waited until -150°C to raise the alarm, we’d have no time to respond. At -175°C, we have hours—sometimes over a day—to take corrective action or move samples to a backup system.
Likewise, our blood product fridge has a 2 to 8°C range, but alarm points are set at 3.4°C and 7.0°C, allowing staff a full hour (under normal conditions) to respond before the limits are reached. Since the fridge is only used during working hours, this is an acceptable risk level.
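The buffer an alarm point buys can be estimated from a drift rate. The 2°C-per-hour figure below is purely illustrative, not a measured warm-up rate for any real unit:

```python
def response_time_hours(alarm_point, product_limit, drift_rate_per_hour):
    """Hours between the alarm firing and the product limit being breached,
    assuming a steady (hypothetical) drift rate in °C per hour."""
    return abs(product_limit - alarm_point) / drift_rate_per_hour

# Cryogenic store: alarm at -175°C, product limit -150°C. With an assumed
# warm-up of 2°C per hour, the alarm buys roughly half a day:
print(response_time_hours(-175.0, -150.0, 2.0))  # 12.5
```

Setting the alarm at the product limit itself would make this buffer zero, which is exactly the danger described above.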
So what is in a temperature?
Potentially, your treatment, your patient’s outcome, and a lifetime of scientific research.
That’s why your product deserves the best storage. It must be:
Mapped and validated
Independently monitored
Equipped with remote alarms and power backup
And you must ask questions:
What are the set points?
What are the alarm thresholds?
What contingency plans are in place?
Can you see proof of mapping?
Because ultimately, “cold” isn’t good enough. Only the right temperature is.
Precision matters at every level.
If you're trusting a storage environment with your materials, demand clarity: mapped data, traceable calibration, alarm logic, and defined contingencies. The details aren’t just technical — they’re critical.