Software Integrity


Touch ID: Yea or nay?

Unsurprisingly, within days of the iPhone 5S release, German hackers were able to produce a fingerprint prosthetic allowing an attacker to defeat Apple’s TouchId. Media coverage abounds, as has reaction to the attack and discussion of biometrics, multi-factor authentication, and, of course, the death of the PIN/password.

Unfortunately, the password’s death has been reported prematurely

None of us like the password: it’s impossible to get users to create, rotate, and protect strong passwords. Storing them is a pain as well. Users’ promiscuous use of a single password across many sites makes compromise more damaging.

For now, and for the foreseeable future, we’re stuck with passwords, PINs, or passphrases. Why? Not in spite of multi-factor authentication (MFA), but because of it. MFA, as it is commonly broken down, relies on 1) something you know, 2) something you have, and 3) something you are. Whether it’s Google Authenticator (a virtual “fob” you have) or Apple’s TouchId (something you are), you haven’t got MFA unless you combine at least two of the three.
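The “two of three” rule above can be sketched in a few lines. This is purely illustrative; the factor names and the check itself are my own shorthand, not any real authentication API:

```python
# Illustrative sketch of the "two of three" MFA rule. The factor
# categories and this check are assumptions for exposition only.
KNOW, HAVE, ARE = "something_you_know", "something_you_have", "something_you_are"

def is_multi_factor(presented):
    """True only if at least two *distinct* factor categories are presented."""
    return len(set(presented)) >= 2

# A password alone, or a fingerprint alone, is single-factor:
assert not is_multi_factor([KNOW])
assert not is_multi_factor([ARE])
# Two passwords are still just one factor category:
assert not is_multi_factor([KNOW, KNOW])
# Password + fingerprint qualifies as MFA:
assert is_multi_factor([KNOW, ARE])
```

The point of the `set` is that repeating a factor from the same category (two passwords, two fingers) doesn’t buy you a second factor.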

In an MFA scheme, biometrics are designed to augment (not replace) passwords to bolster the security of AuthN. I think we security folk agree that we’re more comfortable designing MFA-based AuthN solutions than swapping out one scheme for another.

Now, onto TouchId.

We always start with a threat model. Omitting the gory details, consider the two main attack goals getting coverage:

  • Physical Forgery – A.k.a. Operation “Gummy Bear”®: A threat (thief) gains physical access to fingerprint(s) and (perhaps later) the device, and impersonates the user by manufacturing a working forgery of the authenticating print.
  • Data-based Impersonation – (or, “that finger’s compromised, I have 8 more ‘password’ resets”): A threat gains access to stored fingerprint data through HW or SW attack. The stolen data is sufficient either to generate a working forgery or to be presented directly to the authentication subsystem, bypassing the reader. Attack vectors could include interposing between application- or kernel-level fingerprint and authentication APIs, or attacking the “Secure Enclave” itself.

When you consider or discuss the 5S biometrics, remember to classify your goal or attack vector in the above terms. The published attack, of course, represents physical forgery.

Conclusions

At this point, we have to conclude that, from either the retail consumer’s or the enterprise user’s perspective, TouchId presently withstands attack unless a high-value target of choice is attacked (physically, by a co-located attacker) by a concerted threat.

  1. Physical forgery requires professional-grade CSI skills and physical access to both the device and a high-quality, smudge-free print of the correct finger.
  2. Permanent impersonation requires kernel or hardware hacking skills and physical (perhaps destructive) access to a device.
  3. Retail AAPL consumers are bound to be quite satisfied with both the genuine-accept accuracy and the false-accept rates this technology provides.
  4. Concerted threats attacking a high-value target of choice will succeed, just as they always have.

What underlies these conclusions?

Select elements of the threat model (physical forgery)

Succeeding at physical forgery (developing a working forgery) requires a capable threat possessing:

  • Attack Surface 1 – lengthy physical access to a sufficiently intact, un-smudged print of the correct finger
  • Attack Surface 2 – physical access to the victim’s device
  • Skill – Professional-grade CSI capabilities
  • Materials – a 2400 dpi camera, a 1200 dpi laser printer, and … a substrate on which to imprint the forgery, such as glue

Further considerations:

Because of the way a user holds their iPhone, the most likely finger they’ll use for TouchId is their thumb. This, as I understand it, is also the least likely finger to leave a sufficiently clear print on other surfaces for theft.

Apple patents refer to various sensor-based verifications that don’t appear to have prevented the published forgery. These include:

  • “electric field”,
  • “infrared image”,
  • “complex impedance” detection,
  • and “optical light absorption”

This could be because they’re presently disabled for this first release, because they’ve been tuned beyond an effective threshold, or (less likely) the contrived victim/attacker pair didn’t trigger failure with these other sensors (i.e. I tried to break my own phone with a forgery of my own fingerprint). I expect Apple to follow up on implementing these patent ideas and keep attackers in a cat-and-mouse game that leaves this security control on the “effective” side of the balancing act.

Select elements of the threat model (data-based vectors)

Because Apple builds proprietary systems rather than open ones, it’s less clear what attack vectors will result in successfully stealing biometric data for reuse elsewhere. Back to our breakdown: developing a working electronic or physical forgery from electronic data requires a capable threat possessing:

  • Attack Surface 1: lengthy physical device access (perhaps destructive)
  • Skills –
    • Kernel exploitation (custom or use of kit) or a rooted victim device
    • Ability to attack hardware / overcome tamper-resistance
  • Materials – Retail / commercial-grade electronics equipment (oscilloscope, probes, etc.)

Further considerations:

Whether a rooted phone makes it possible to acquire fingerprint data or any cryptographic material used in the TouchId scheme is presently unclear. If Apple took an approach similar to what we see in other payment systems, use of a secure element would not be compromised by a kernel break. Further exploration is also necessary to determine whether a kernel break would allow an attacker to short-circuit the “Success”/“Fail” return value from TouchId verification.
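The return-value concern is worth making concrete. The sketch below contrasts a naive design, where verification hands back a bare boolean an attacker could patch, with a secure-element style design, where key material is released only on a genuine match. Both functions are hypothetical illustrations of the design space, not Apple’s actual APIs:

```python
# Hypothetical illustration of the "short-circuit the Success/Fail
# return value" concern. Neither function reflects Apple's real
# implementation; both are assumptions for the sake of argument.

def naive_unlock(sensor_match: bool) -> bool:
    # If the kernel is compromised, an attacker who can flip this one
    # boolean in memory effectively owns authentication.
    return sensor_match

def enclave_style_unlock(sensor_match: bool, sealed_key: bytes):
    # A secure-element design releases key material only on a genuine
    # match; flipping a boolean in kernel memory yields nothing usable,
    # because the secret never leaves the element on failure.
    return sealed_key if sensor_match else None

assert naive_unlock(True) is True
assert enclave_style_unlock(False, b"secret") is None
```

If TouchId gates key release inside the Secure Enclave rather than returning a flag to the kernel, a kernel break alone would be much less valuable.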

General Biometric Considerations

One Fingerprint, Many Fingerprint Products – There are plenty of fingerprint-based biometric products. Unless Apple stores the raw fingerprint minutiae data, swiping this information from an iPhone will not allow the threat to create a forgery that will work on another vendor’s reader/verification system. It might not even work on a replacement iPhone 5S, depending on whether the enrollment process randomly selects which minutiae features it relies on, or on how many it has collected at the time of theft (we do know it’s supposed to ‘learn’ over time). Keep this in mind: completely owning the raw data from one iPhone may not be valuable in practice. It’s certainly not the spooky “but the NSA owns you” specter people are raising regarding TouchId.

Probabilistic Matching – Biometric authentication produces a binary result just like username/password, but the underlying algorithm is probabilistic, unlike password verification. This means that any biometric product must tune both its False Acceptance Rate (FAR) and its False Rejection Rate (FRR).

FAR is an inverse proxy for security. The higher the FAR, the less secure the system. But the higher the FAR, the more peanut-butter-covered, grubby, or greasy scans verification accepts, and thus the less annoyed a user is.

FRR is a proxy for annoyance. The higher the FRR, the more times a user swears because their phone remains locked and they miss that 8MP shot of someone doing something absurd.

Modern systems are rather accurate.
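To make the FAR/FRR tradeoff concrete, here is a toy sketch. The match scores and thresholds are fabricated for illustration; real matchers produce similarity scores from minutiae comparison, and vendors tune the threshold against large test populations:

```python
# Toy sketch of FAR/FRR tuning. All scores are fabricated; they are
# not measurements of TouchId or any real sensor.
genuine_scores  = [0.91, 0.85, 0.78, 0.96, 0.88, 0.60]   # authorized user
impostor_scores = [0.10, 0.35, 0.72, 0.20, 0.15, 0.05]   # other people

def far(threshold):
    """False Acceptance Rate: fraction of impostor scores accepted."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def frr(threshold):
    """False Rejection Rate: fraction of genuine scores rejected."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

# Raising the threshold trades user annoyance (FRR) for security (FAR):
for t in (0.5, 0.7, 0.9):
    print(f"threshold={t}: FAR={far(t):.2f}, FRR={frr(t):.2f}")
```

The loop shows the balancing act the post describes: a strict threshold drives FAR toward zero but rejects more genuine attempts, and vice versa.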

Summarizing: when you tune a modern algorithm to correctly identify the authorized individual 95-100% of the time, your false acceptance rate doesn’t exceed 10%. For non-twin populations, asking users to “try again” only 1.5% of the time and opening the phone up to attack by < 2% of the population seems like an exceptionally good tradeoff. Ignoring legal standards for fingerprint evidence, there’s a big difference between saying,

“the person who murdered Joe Bob was the defendant Fred or one of 143.6 million other people”

and

“only 2% of people who steal Corey’s phone can defeat the fingerprint scanner without the help of a CSI team and a quality forgery pulled from his pint glass while he was in the can.”
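The arithmetic behind those two quotes is worth spelling out. Assuming (my assumption, not stated in the post) a world population of roughly 7.18 billion at the time, a 2% false-accept rate yields almost exactly the 143.6 million figure, and the same rate means about 1 in 50 random thieves gets lucky:

```python
# Back-of-envelope arithmetic behind the two quotes above. The ~7.18
# billion world-population figure is an assumption on my part.
false_accept_rate = 0.02          # "< 2% of the population"
world_population  = 7_180_000_000

print(false_accept_rate * world_population)   # -> roughly 143.6 million people
print(1 / false_accept_rate)                  # -> roughly 1 thief in 50
```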