My future in-laws were kind (and brave) enough to invite my crew, along with the other grandkids (ages five to fifteen), on an amazing trip to the Bahamas this spring break. Three days of water slides, Bahama Mamas, and a very optimistic attempt to stay off my computer.
One moment that stuck with me didn't happen at the resort. It happened at the airport.
As we went through customs, each kid walked up, handed over their passport, and looked straight into the camera. No hesitation. No confusion. Just a kind of quiet confidence that they knew exactly what to do and that it would work.
And it did. Every time.
But what really stood out was that they didn't just know how to use it; they trusted it. To them, it wasn't invasive. It was just part of how things work. And in their minds, it made sense. It helped the line move faster. It helped keep them safe.
That's a meaningful shift. Because for a long time, the dominant question around facial biometrics was whether people would ever be comfortable with it.
"Aren't the privacy risks too high?"
I don't hear it much anymore.
Not because people stopped caring about privacy. People still care about privacy (as they should).
What’s changed is that several things happened at once.
- The risk of getting security wrong stopped being just theoretical.
- The technology became so common that people use it daily without thinking; even a five-year-old can navigate it easily.
- Responsible facial biometrics became quicker and easier than the systems they replaced.
- Decision-makers started to understand more clearly that authentication and surveillance are not the same thing. And not every biometric system is the same. They’ve dug in and know the right questions to ask.
No single one of those shifts would have been enough on its own. But together, they changed the calculus. The privacy concern didn't shrink. The case for modern access control just became too strong, from too many directions, to keep treating it as optional.
We are now in an era where the harder question isn't "do we deploy facial biometrics?" It's "what happens if we don't?"

The shift in public perception: familiarity outpaced policy
The resistance to early facial biometrics was genuine and understandable.
As recently as 2023, Pew Research found that seven out of ten Americans opposed employers using facial recognition to analyze workers’ expressions. Privacy advocates rightly raised concerns about opaque data practices, risks of centralized storage, and surveillance.
These concerns reflected how early systems were designed. Many were built for surveillance and data mining: broad scanning, centralized databases, image retention, and secondary uses of data. Those systems rightly faced scrutiny.
Modern facial authentication systems, like our Rock, operate differently. They don’t scan crowds. They don’t identify unknown individuals. They don’t match against external databases.
They answer a specific question at a specific moment: Is this the authorized person at this access point? And they do only that. The biometric data is encrypted, mathematically irreversible, separated from identifying information, and subject to strict retention and deletion policies.
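The architectural difference between 1:1 authentication and 1:N surveillance can be made concrete. The sketch below is a hypothetical illustration, not Alcatraz's actual implementation: it assumes the system stores a single numeric template per credential and answers only a yes/no question about that one template. The `MATCH_THRESHOLD` value and function names are illustrative.

```python
import numpy as np

# Hypothetical sketch of 1:1 facial verification (not any vendor's real code).
# A 1:N surveillance system would compare a face against an entire database;
# a 1:1 authentication system answers only one question: does this live
# capture match the single template enrolled for this credential?

MATCH_THRESHOLD = 0.8  # illustrative; real systems tune this to a false-accept target

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_embedding: np.ndarray, enrolled_template: np.ndarray) -> bool:
    """Return True only if the live capture matches this one enrolled template.

    The template is a numeric vector derived from the face, not an image:
    it cannot be reversed into a photograph, and it is never compared
    against anyone else's data.
    """
    return cosine_similarity(live_embedding, enrolled_template) >= MATCH_THRESHOLD
```

The design point is that the match function never receives a database of other people's templates; there is simply no code path for identifying an unknown person.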
Decision-makers learned more about the technology. Lawmakers shifted focus from banning biometrics outright to addressing harmful uses. Then something happened that no policy could have accomplished on its own.
People didn’t just discuss the technology; they started using it.
Face ID. Airport e-gates. Contactless check-ins. Identity verification that’s seamless and effortless. The technology didn’t just advance; it became familiar and convenient enough that older methods started to seem unnecessary and unsafe.
Familiarity changed the question. Convenience provided the answer.
The sense of safety sealed the deal.
The data reflects that shift.
By August 2025, Morning Consult found that American support for facial recognition had risen from 42% in 2019 to 48%, with majorities approving its use across TSA, financial institutions, schools, and personal devices. SITA’s 2025 Passenger IT Insights report found that 79% of travelers are now comfortable sharing biometric data, up from 74% the year before, and 62% prefer biometric checkpoints over traditional border controls.
Preference is different from acceptance.
Acceptance means, “I’ll tolerate this.”
Preference means, “This works better.”
Once technology crosses that line, people adopt it for convenience and reliability, not as a verdict on privacy. The same pattern played out with ATMs, contactless payments, and mobile boarding passes: comfort and familiarity drove adoption.

The pace here is rapid. Airports show it clearly: per SITA's 2024 report, 43% have implemented biometric boarding and 38% biometric check-in, with full deployment targeted by 2026. This isn't experimentation; it's an architectural shift. Studies in 2024 and 2025 show that trust, familiarity, and perceived security benefits, especially in controlled settings, drive acceptance more than privacy concerns do. Designing for trust accelerates adoption, and facial biometrics are leading the way.

What inaction actually costs
For years, organizations have been cautious about analyzing the risks of deploying biometric systems—privacy issues, regulatory requirements, and compliance obligations. What they haven't dedicated the same effort to is analyzing the risk of not deploying these systems. That gap is now being addressed, but not in the way most companies would prefer. It's being addressed through legal action.
The Security Industry Association published a recent piece titled "Negligent Security and the Role of Our Industry," written by Charles Johnson of NGA Security Advisors. The article outlines what is happening in civil courts across the country: property owners, building managers, and HOAs are being held legally accountable not only for crimes that occurred but also for crimes that could have been prevented had they adopted available modern security technology.
The figures are alarming. In Florida alone, around 342 negligent security cases were filed in one fiscal year, averaging roughly 28.5 per month. One Florida law firm claims to have recovered nearly $1 billion in verdicts and settlements since 2007 across almost 500 negligent security cases. In New York, the 345 Park Avenue case, filed in 2025, alleges negligent security stemming from the failure to deploy modern access control and AI-enabled detection systems, with multimillion-dollar exposure.
This pattern is becoming familiar. An unauthorized person enters a building, often by following someone through a secured entry point. An incident then occurs. The key question becomes whether that entry point was properly controlled.
A recent independent study by Security Management, surveying over 400 security stakeholders, found that 48% reported their facilities had been compromised by tailgating in the past two years, and 54% had been left at risk by doors propped open or simply left unlocked. The Security, Resiliency & Technology Integration Forum reports that 41% of security executives believe the cost of a serious tailgating incident ranges from $2 million to "too high to measure."
The staffing situation worsens the problem. A survey of 400 security guard firms revealed that 34% had staffing levels significantly below pre-pandemic numbers. We are asking fewer people to secure more space, more entry points, and more valuable assets. And in many cases, we continue relying on badges and PIN codes.
These systems don't fail because of rare edge cases. They fail in ways that are well understood, and those well-understood failures are the hardest to defend after they're exploited.
To make things worse, when tenants or employees voice concerns or request stronger access controls, the focus sharpens. Notice is firmly established. Foreseeability is no longer hypothetical. A commercially available solution is no longer an abstraction; it is on the record. At that point, the conversation changes.
It’s no longer about whether a system raises regulatory questions. It becomes much more direct.
When legacy security, known to repeatedly fail, is still in use, it inevitably raises a harder question: Why were visible upgrades, like a new executive conference room, prioritized over modernizing the technology that protects your people and data?

What adoption actually looks like right now
According to Verizon's 2025 Mobile Security Index, based on a survey of 762 professionals across small, medium, and large businesses and public sector organizations, 62% of organizations currently use biometric authentication. A separate survey by ExpressVPN found that 58.3% of employers currently use biometric access controls for employee physical access. These are not pilot programs. These are operational deployments at scale.
Alcatraz alone protects over 5 million workers worldwide. Seven out of ten of the largest AI companies use our facial authentication platform. That includes deployments like Scott Data, a data center that replaced legacy fingerprint readers with facial biometric MFA, achieving what they describe as "multi-factor at the speed of single-factor," and Martin Luther King Jr. Community Hospital in Los Angeles, which uses facial authentication for additional security and verification in controlled healthcare spaces.

Rock solid advice for risk professionals
For those advising on these decisions, the situation has become more straightforward. This is no longer a theoretical weighing of abstract risks. It is a comparison between two types of exposure.
On one side sits a clear, manageable compliance framework built on known requirements.
On the other sits an open-ended liability assessed after an incident, measured against what could have been done.
One is structured and predictable; the other is reconstructed in hindsight. And hindsight often simplifies the issue in ways that are not always helpful.
So what do you do?
1. Map the actual risk exposure on both sides. Model biometric compliance risks (defined, administrative, insurable) against the negligent security litigation risk (open-ended, escalating, often uninsurable after an incident). The difference is significant.
2. Treat a written security request as legal notice. Once a tenant or employee formally requests modern access control, the foreseeability element of a negligent security claim is effectively established. Document the response carefully.
3. Differentiate the technology. Facial authentication is legally and architecturally different from facial recognition. Regulators, courts, and privacy advocates recognize this distinction. And with the right communication plan, your employees, tenants, and customers will too.
4. Incorporate compliance into the system design, not just documentation. Consent workflows, encrypted templates, defined retention schedules, and deletion controls are not mere legal formalities. They form the technical foundation that makes a biometric system defensible under BIPA, GDPR, and all emerging state laws. It sounds daunting, but not if you do it every day. It’s pretty simple, and we can walk you through it all.
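The "compliance in the design, not just the documentation" point can be sketched as code. This is a hypothetical illustration, not a real product's data model: the field names, the 30-day retention window, and the helper functions are all assumptions chosen to show retention and deletion as something the system enforces automatically rather than a policy that lives in a binder.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: retention and consent encoded as data the system
# enforces, rather than policy that exists only in documentation.
# The 30-day window and field names are illustrative assumptions.

RETENTION_PERIOD = timedelta(days=30)

@dataclass
class BiometricEnrollment:
    subject_id: str             # kept separate from the template itself
    consent_given_at: datetime  # explicit consent is recorded, never assumed
    last_used_at: datetime

    def is_expired(self, now: datetime) -> bool:
        """An enrollment past the retention window is due for automatic deletion."""
        return now - self.last_used_at > RETENTION_PERIOD

def purge_expired(enrollments: list[BiometricEnrollment],
                  now: datetime) -> list[BiometricEnrollment]:
    """Deletion control: keep only enrollments still inside the retention window."""
    return [e for e in enrollments if not e.is_expired(now)]
```

Run on a schedule, a purge like this turns a retention policy into a system behavior, which is exactly the kind of built-in control that makes a deployment defensible under BIPA, GDPR, and similar statutes.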
5. The presence of some security is not enough. The $3.9 million Duval County jury verdict in December 2025 made the point bluntly: "The presence of some security does not automatically equal adequate security." Legacy systems, badges, PINs, and guards may have been reasonable precautions in 2015. They are increasingly difficult to defend as reasonable in 2026 when purpose-built alternatives exist and are widely deployed.

When systems are designed to be trusted, adoption follows.
When systems are designed to be trusted, adoption doesn’t have to be earned; it happens naturally.
You see it in the smallest moments, like a group of kids at an airport in the Bahamas pausing in front of a camera, looking into it with quiet confidence, and moving forward without a second thought.
The executive who starts her day a little smoother because she didn’t spill her latte digging for a badge.
The doctor who walks through secure hospital doors with a glance, focused not on protocols but on patients.
They don’t wonder whether the technology belongs in their world; they simply trust that it does. This illustrates the strength of intentional design—developing not only functional systems but also technology that builds trust by how it serves, protects, and respects its users.
Because when people trust what we build, progress follows effortlessly.




