Is one photo worth a thousand credentials?

A deep investigation into smartphone face biometrics, spoofing benchmarks, and the structural weaknesses of mobile authentication systems.

The Biometric Update report is directionally right, but the deeper story is more uncomfortable than the headline: this is not one isolated flaw, and it is not just a “cheap Android phones” problem. It is the visible symptom of a two-track market. On one track are devices and apps that treat face biometrics as a convenience feature, often with 2D camera-only unlock. On the other are systems that treat biometrics as a security boundary, with stricter spoof-resistance, secure pipelines, and hard limits on what a weak biometric is allowed to authorize. The danger sits in the gap between those two worlds.

Which? says it has tested 208 phones since October 2022 and found that 133 of them—64%—could be unlocked with a simple printed 2D photo of the owner. It says the problem worsened in 2024, when 72% of phones it tested failed, up from 53% in 2023, before improving slightly in 2025 to 63%. Its current list of affected brands includes Asus, Fairphone, Honor, HMD, Motorola, Nokia, Nothing, OnePlus, Oppo, Realme, Samsung, Vivo and Xiaomi. Which? also says Samsung’s Galaxy S26 series passed its latest spoof tests, while earlier Galaxy S25 models did not.

That headline number matters, but the finer point matters more: in many cases, the weakness is not that the biometric system can approve a bank transfer or a wallet payment. The weakness is that it can unlock the phone itself, and once the home screen is open, a thief may inherit access to email, messages, photos, wallet history, and enough logged-in services to begin account recovery or social-engineering attacks. Which? explicitly warns about password resets via email access and exposure of Google Wallet history and partial card details.

What the attack actually is

The attack vector in the Which? investigation is blunt: a 2D printed photograph of the enrolled user’s face. That matters because this is not a Hollywood-grade deepfake or a lab-only exploit. It is low-cost, low-skill spoofing. Which? describes the affected systems as “easily fooled,” and Android’s own biometric testing guidance shows why this class of attack is so fundamental: Android’s spoof testing for face explicitly includes printed photos, photos on displays, videos on displays, and 3D masks as in-scope presentation attack instruments. Google’s test guidance also notes that folding a printed photo at the cheeks can materially improve spoofing success, and that testing should probe different horizontal and vertical angles in 10-degree increments to find the device’s most permissive position.

That last point is revealing. The problem is not merely “phone looked at a flat image.” The problem is that many face-unlock implementations remain permissive under realistic capture conditions: acute angles, imperfect lighting, replay on screens, and minor manipulations that make a 2D artefact look more face-like to a classifier. Android’s own protocol warns testers to hide borders, bezels, and hands holding the image precisely because many systems are better at noticing framing mistakes than noticing that the “face” is fake.
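The angle sweep described above is easy to picture as a grid search. The sketch below enumerates candidate presentation positions in 10-degree increments, as Google's test guidance suggests; the ±30-degree range is an assumption for illustration, not a figure from the guidance.

```python
# Sketch of an angle sweep in the spirit of Android's face spoof-test
# guidance: probe horizontal (yaw) and vertical (pitch) presentation
# angles in 10-degree increments to find a device's most permissive
# position. The +/-30 degree range is an illustrative assumption.
def angle_grid(step=10, max_angle=30):
    angles = range(-max_angle, max_angle + 1, step)
    return [(yaw, pitch) for yaw in angles for pitch in angles]

positions = angle_grid()
# A tester would present the printed artefact at each (yaw, pitch)
# position; a single permissive position is enough for the attack.
```

Even this toy version makes the point: a device only has to be permissive at one of the 49 positions in the grid for the spoof to succeed.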

Why iPhones, newer Pixels, and a few others fare better

Apple’s relative immunity in this story is not mysterious. Which? and Biometric Update both point to depth sensing in Face ID. Google, meanwhile, says the Pixel 8’s Face Unlock meets Android’s highest biometric class, which is why it can be used for banking apps and Google Wallet. Official Android documentation says only Class 3 biometrics can integrate with both BiometricPrompt and the Keystore in the strongest way; Class 1 cannot expose an app-facing biometric API at all, and Class 2 can integrate with BiometricPrompt but not Keystore-backed operations. Google Wallet’s own help pages say it does not accept Class 1 or Class 2 biometric unlock for payment verification.
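The class policy just described can be modeled as a small capability table. This is an illustrative sketch of the documented Android rules, not the real Android API; the function name and structure are invented for clarity.

```python
# Toy model of Android's biometric class policy as documented:
# Class 3 ("strong") can integrate with both BiometricPrompt and
# Keystore-backed operations; Class 2 ("weak") gets BiometricPrompt
# but no Keystore integration; Class 1 ("convenience") exposes no
# app-facing biometric API at all. Illustrative sketch only.
POLICY = {
    3: {"biometric_prompt": True,  "keystore": True},
    2: {"biometric_prompt": True,  "keystore": False},
    1: {"biometric_prompt": False, "keystore": False},
}

def allowed_for_payment(biometric_class: int) -> bool:
    # Google Wallet-style rule: payment verification effectively
    # requires the full Class 3 capability set.
    caps = POLICY[biometric_class]
    return caps["biometric_prompt"] and caps["keystore"]
```

Under this model, only Class 3 clears the bar for payments, which is exactly the segmentation Google Wallet's help pages describe.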

So the market already contains an implicit admission that weaker mobile face biometrics are not trustworthy enough for high-value actions. The OS and major apps increasingly know this. The user often does not. That is why Which? shifted part of its criticism from raw spoofability to disclosure: it says Motorola and OnePlus were the biggest offenders on warnings, with 27 models since October 2022 that were bypassable in its lab without what it considers an adequate upfront warning; it also says Xiaomi warned on 26 vulnerable phones tested between 2023 and 2025, and Samsung provided upfront warnings on nine devices over three years.

This did not begin in 2026

A key point from the reporting trail is that 2026 is a continuation, not a discovery. In May 2023, Which? reported that multiple newly tested phones from Motorola, Nokia, Samsung, Oppo, Vivo, Xiaomi and Honor could be spoofed by a printed 2D photo. It listed specific vulnerable models and noted that many were cheap or mid-range, but not all: the Motorola Razr 2022 was a premium example. It also argued then that the affected phones likely behaved like Class 1 biometrics in practice, even if buyers experienced them as “face unlock.”

The 2026 piece is therefore best read as a trend line. Which?’s own chronology is: substantial failures in 2023, a worse 2024, slight improvement in 2025, but still a majority of tested devices failing. That suggests vendors did not misjudge the risk once; they repeatedly shipped face unlock that was good enough for convenience and bad enough to become a consumer-security liability.

Past benchmark-style tests from the last two years

There are at least four useful reference points from the past two years, and they are not all measuring the same thing.

1) Which? consumer-lab testing, 2023–2026

This is the most consumer-relevant benchmark because it tests shipping phones as bought, with native face unlock enabled, and asks a practical question: can a thief with a printed face image get in? Since October 2022, Which? says 133 of 208 tested phones failed; 53% failed in 2023, 72% in 2024, and 63% in 2025. In 2023, it already found major brands vulnerable to simple printed-photo spoofing. The strength of this benchmark is realism. The weakness is that it is not a formal Android certification test and does not disclose all lab mechanics publicly.

2) DHS RIVTD / RIVR Track 3, 2024–2026

The U.S. Department of Homeland Security’s Remote Identity Validation testing is not a phone lock-screen test; it is a liveness/PAD evaluation for remote identity systems. But it is one of the most informative recent public benchmarks for selfie-based biometric defense. DHS says its Track 3 PAD evaluation used multiple smartphones and attack species, with metrics reported as worst-case performance across smartphones and species. In the public results, none of the six active PAD systems met the 3% BPCER benchmark; only two active systems rejected all attacks. Among passive systems, 9 of 15 met the 3% BPCER benchmark, but only PAD-P1 and PAD-P9 rejected all attacks. DHS also reports steep variance by attack class and device: for passive systems, max APCER by class reached 88% for Class A attacks, 98% for Class B, and 100% for Class C in the worst cases.

That is a crucial corrective to simplistic vendor claims. Even outside native phone unlock, state-of-the-art PAD remains highly variable under cross-device and cross-attack testing. The strongest systems look strong; the field as a whole does not.
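The "worst case across smartphones and species" framing in the DHS results is worth making concrete. The sketch below computes APCER per device-and-attack pair and takes the maximum; all numbers are invented for illustration.

```python
# APCER (Attack Presentation Classification Error Rate) is the fraction
# of presentation attacks wrongly accepted. DHS-style reporting takes
# the MAX (worst case) across smartphones and attack species rather
# than the average. All counts below are invented for illustration.
def apcer(accepted_attacks, total_attacks):
    return accepted_attacks / total_attacks

# (smartphone, attack_species) -> (accepted, total), hypothetical data
results = {
    ("phone_a", "print"):  (1, 100),
    ("phone_a", "replay"): (7, 100),
    ("phone_b", "print"):  (0, 100),
    ("phone_b", "replay"): (52, 100),
}

worst_case = max(apcer(a, t) for a, t in results.values())
# An average would hide phone_b's replay weakness; the max exposes it.
```

This is why averaged vendor metrics can look reassuring while worst-case benchmarks like RIVTD do not: a system that is excellent on four device-attack combinations and terrible on a fifth is, in practice, terrible.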

3) LivDet-Face 2024 and the SOTERIA mobile-device research dataset

The academic benchmark closest in spirit to the Which? problem is the 2024 SOTERIA mobile PAD work from Idiap. Its dataset uses nine smartphones and several attack types, including print, replay on mobile, replay on TV, and projection. The paper reports that a strong face-verification model without PAD was extremely vulnerable: average IAPMR across the print and replay scenarios was 89.2%; print attacks averaged 99.3%; replay on mobile 88.3%; replay on TV 90.3%; replay by projector 81.9%. Lighting changes barely helped, and curving the paper photo also did not meaningfully reduce the system’s vulnerability.

This is not a direct scorecard of Samsung or Xiaomi handsets, but it reinforces the same technical truth: plain face matching on mobile imagery is highly vulnerable unless PAD is designed in from the start. LivDet-Face 2024, meanwhile, uses ISO-style metrics such as APCER and BPCER and explicitly notes that print and projection attacks remain challenging categories in the competition.
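The IAPMR metric the SOTERIA paper uses is simple to compute, and worth seeing next to the published rates. The sketch below uses invented per-scenario counts that loosely echo the scenario types in that work; these are not the paper's actual counts.

```python
# IAPMR (Impostor Attack Presentation Match Rate): the fraction of
# attack presentations that a face-verification system WITHOUT PAD
# matches to the enrolled identity. Counts below are invented for
# illustration, loosely echoing the SOTERIA scenario types.
def iapmr(matched_attacks, total_attacks):
    return matched_attacks / total_attacks

scenarios = {
    "print":         (993, 1000),
    "replay_mobile": (883, 1000),
    "replay_tv":     (903, 1000),
    "projection":    (819, 1000),
}
rates = {name: iapmr(m, t) for name, (m, t) in scenarios.items()}
# With no PAD in the pipeline, nearly every print attack matches.
```

An IAPMR near 1.0 for print attacks is the quantitative version of the Which? finding: without dedicated spoof detection, a matcher treats a good photo of a face as the face.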

4) iBeta PAD letters, 2024–2025

The opposite picture emerges in narrower vendor-run conformance tests. iBeta’s methodology page says Level 2 testing allows 2–4 days per subject or species, uses up to six attack species, caps artefact cost at $300, and allows 1% penetration or match rate. Its posted confirmation letters show many vendors passing Level 2 on specific device-and-version combinations. One 2025 example: VIDA’s app on a Samsung Galaxy A54 underwent 750 total attacks across five species and iBeta reported 0% APCER. iBeta’s methodology page also makes clear that these letters are conformance to a specified setup, not blanket certification of every deployment.
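The pass criterion in an iBeta Level 2-style run reduces to a threshold check. The sketch below applies the stated 1% penetration cap to the VIDA example figures reported above; the function is a hypothetical illustration, not iBeta's actual tooling.

```python
# Sketch of an iBeta Level 2-style pass check: the methodology allows
# at most a 1% penetration (spoof acceptance) rate. With 750 attacks
# and 0 acceptances (the VIDA example above), APCER is 0% and the run
# passes. The helper is illustrative, not iBeta's actual tooling.
def passes_level2(accepted, total, max_rate=0.01):
    return (accepted / total) <= max_rate

# 0 of 750 accepted  -> 0.0% APCER, within the 1% cap
# 8 of 750 accepted  -> ~1.07% APCER, over the cap
```

Note how narrow this is: the result attests to a specific app build on a specific handset under a capped artefact budget, which is exactly why iBeta frames its letters as conformance to a setup rather than blanket certification.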

The lesson is not that the industry solved PAD in 2025. It is that strong PAD can be built and independently demonstrated in controlled remote-ID flows, while mass-market phone face unlock often still is not built to that bar.

Where the weakness really lives: sensor, software, or policy?

All three.

The first weakness is hardware poverty. Many Android phones still rely on a standard RGB camera for face unlock, and a single flat camera image has no native depth. Apple's Face ID and some rarer Android "Pro" systems use 3D mapping or equivalent extra sensing; the Pixel 8 and later use enough additional modeling and safeguards to qualify as Class 3, according to Google. Most devices that fail Which?'s tests appear to rely on far weaker 2D stacks.

The second weakness is insufficient presentation attack detection. Android’s own documentation strongly recommends liveness detection for all biometric modalities and attention detection for face. Europol’s 2025 report frames this as the core defense split: hardware-based PAD adds more data; software PAD analyzes the same data for signs of attack. If vendors do neither well, a face recognizer becomes a photo recognizer.

The third weakness is the secure pipeline. Android’s compatibility rules say Class 2+ camera biometrics must prevent frames from being altered, keep processing isolated, and protect against direct data injection; the Android 16 CDD explicitly says camera-based biometrics must operate in a mode that prevents frames from being altered outside the isolated environment, and that Class 2 implementations “must” resist direct injection that would falsely authenticate a user.

That matters because the threat has already moved beyond photos at the lens. iProov’s 2025 and 2026 threat reporting says digital injection attacks and face swaps surged, and that a new attack vector detected in late 2024 could potentially bypass most current remote identity verification systems. Its 2026 report says injection attacks targeting iOS devices surged 1,151% in the second half of 2025 and rose 741% year over year. Biometric Update also reports that ISO is now developing a dedicated biometric injection-attack-detection standard, ISO/IEC 25456, precisely because the threat is no longer only “presentation” at the capture device.

So the weakness is no longer just “does the camera mistake paper for a face?” It is also “can an attacker bypass the camera entirely and feed the system synthetic or replayed data?” That is a different failure mode, but it belongs to the same broader story: mobile biometric systems are often built around convenience assumptions that are no longer safe.

How vendors and platforms have been patching this

The most meaningful patch has been segmentation: weaker biometrics are being fenced away from higher-value actions.

Google Wallet now requires Class 3 biometric unlock for payment verification and says Class 1 and Class 2 biometrics are not accepted for that purpose. That is a direct mitigation against exactly the kind of weak 2D unlock Which? is warning about. Which? likewise reports that most UK banking apps and Google Wallet recognize low-security 2D systems and force fingerprint or PIN instead, while some banks require Class 2 or Class 3, or only Class 3.

Google’s own hardware patch has been to move some Pixel devices into the strongest Android biometric class. Google said in 2023 that Pixel 8 Face Unlock meets the highest Android biometric class, enabling access to compatible banking and payment apps. Which? now says Pixel 8, 9 and 10 are exceptions among Android phones because their 2D face systems are “significantly more secure.”

Samsung’s patching is more mixed. Which? says the Galaxy S26 series passed its latest spoofing tests while the S25 did not. Samsung also tells users, and reiterated publicly, that face unlock is a convenience feature and that stronger actions require fingerprint or PIN. Separately, Samsung added an “Improve accuracy” option for fingerprint enrollment in the S26 line, a reminder that the company’s own preferred secure path remains the fingerprint reader, not face unlock.

On the Android platform side, the CDD has also become more explicit: strong biometrics need lower spoof acceptance, hardware-backed keystores, isolated processing, secure camera operation, and protection against direct injection. And standardization is catching up on the remote-ID side, where ISO/IEC 25456 is being developed specifically for injection attacks.

The general weaknesses of mobile biometric authentication

A modern mobile biometric system usually fails in one or more of seven recurring ways.

First, it lacks depth or multi-sensor evidence, so a flat artefact can pass as a face. That is the core Which? finding.

Second, it has weak PAD or no meaningful liveness/attention checks. Android strongly recommends both; many vendors still treat them as optional or minimal.

Third, it lacks a secure end-to-end pipeline. If camera frames or biometric intermediates can be altered before matching, injection attacks become possible. Android explicitly treats this as a compatibility requirement for stronger classes.

Fourth, its fallback and post-unlock model is weak. Even when a wallet or bank blocks low-grade biometrics, the unlocked device may still expose email, messaging, session cookies, app histories, and recovery paths. Which? is right to emphasize that the unlocked home screen can itself be the breach.

Fifth, many systems are insufficiently stress-tested across attack diversity. Android’s own protocol says manufacturers should calibrate across different PAI species and subject diversity; DHS found worst-case performance varied sharply by smartphone and attack class; iProov says vendors often test against only a fraction of known attack combinations.

Sixth, fairness and age effects remain under-discussed. DHS reported that active PAD systems often produced substantially higher BPCER for older users, with differences of up to 48%. A system whose error rates vary that much across user groups is both less usable for some people and harder to tune to a single secure threshold.

Seventh, vendors still obscure the risk. Which? defines an adequate warning as a clear, prominent setup-time notice that the phone may be bypassed by a 2D photo or a similar-looking person. It says too many manufacturers still fail even that basic transparency test.

Bottom line

The significance of this news is not simply that “many phones can be fooled by a photo.” It is that the smartphone ecosystem has quietly accepted a category of biometric that is too weak for security but polished enough to be marketed as security. The platforms know the difference—that is why Google Wallet rejects Class 1 and Class 2 biometrics, and why Android differentiates Class 1, 2 and 3 so sharply. Researchers know the difference—that is why SOTERIA, LivDet, DHS RIVTD and iBeta all exist. But consumers still encounter a single friendly button: “Use Face Unlock.”

The practical conclusion is blunt. For many Android phones, face unlock is still best understood as a convenience feature for low-risk access, not as a trustworthy identity boundary. Over the last two years, the better systems have gotten stronger, the standards have gotten stricter, and app-level controls have gotten smarter. But the mass market is still full of devices where a printed face remains dangerously close to a master key.

For higher-security use cases such as banking, digital signing, and medical applications, that is exactly why the more durable answer is neither weak local device biometrics nor centralized biometric databases, but decentralized biometrics that combine top-tier matching with certified liveness, injection resistance, and deepfake protection while preserving biometric privacy. This is where Youverse fits naturally into the conclusion of the story. Its approach to identity verification and biometric authentication points toward a model built for stronger assurance rather than convenience-grade unlock.

A particularly relevant example is YouLive, a certified liveness product designed to address the very categories of weakness laid out in this investigation, including spoofing, injection, and deepfake attacks. And for organizations that need decentralized face authentication rather than device-bound or centrally stored biometrics, YouAuth represents the architectural shift that this market increasingly needs.

The same logic also applies to digital identity wallets. If a wallet relies only on local device biometrics, it inherits the weaknesses of those biometrics. The stronger path is to base wallets on credentials that are intrinsically biometric rather than merely gated by device unlock, which is why the security argument explored in this Youverse analysis of biometrics and EUDI wallet security matters so much in this debate.

Until that shift is complete, a printed photograph remains dangerously close to a master key.
