Posted by John Steven on September 25, 2013
Unsurprisingly, German hackers were able to produce a fingerprint prosthetic allowing an attacker to defeat Apple’s TouchID within days of the iPhone 5S release. Media coverage abounds, as have reactions to the attack and discussion about biometrics, multi-factor authentication, and, of course, the death of the PIN/password.
None of us like the password: it’s impossible to get users to create, rotate, and protect strong passwords. Storing them is a pain as well. Users’ promiscuous use of a single password across many sites makes compromise more damaging.
For now, and for the foreseeable future, we’re stuck with passwords, PINs, or pass phrases. Why? Not in spite of multi-factor authentication (MFA) but because of it. MFA, as it is commonly broken down, relies on 1) something you know, 2) something you have, and 3) something you are. Whether it’s the Google Authenticator (a virtual “fob” you have) or Apple’s TouchID (something you are), you haven’t got MFA unless you include at least two of the three.
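The “something you have” factor behind Google Authenticator is a time-based one-time password (RFC 6238): an HMAC over a 30-second counter derived from a shared secret. A minimal sketch of that virtual “fob,” assuming a base32-encoded secret (the parameter names here are my own, not Google’s):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The verifying server runs the same computation and typically accepts codes from a window of adjacent 30-second steps to tolerate clock skew.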
In an MFA scheme, biometrics are designed to augment (not replace) passwords to bolster the security of AuthN. I think we security folk agree we’re more comfortable designing MFA-based AuthN solutions than swapping out one scheme for another.
Now, onto TouchID.
We always start with a threat model. Omitting gory detail, consider the two main attack goals getting coverage: 1) stealing biometric data electronically for reuse elsewhere, and 2) physically forging a fingerprint to defeat the sensor.
When you consider or discuss the 5S biometrics, remember to classify your goal or attack vector in the above terms. The published attack, of course, represents physical forgery.
At this point, we have to conclude that, from either the retail consumer’s or the enterprise user’s perspective, TouchID presently withstands attack unless a high-value target of choice is attacked physically by a co-located, concerted threat.
What underlies these conclusions?
Succeeding in a physical forgery requires a capable threat possessing:
Because of the way a user holds their iPhone, the most likely finger they’ll use for TouchID is their thumb. This, as I understand it, is also the finger least likely to leave a sufficiently clear print on other surfaces for theft.
Apple patents refer to various sensor-based verifications that don’t appear to have prevented the published forgery. These include:
This could be because they’re presently disabled for this first release, because they’ve been tuned beyond an effective threshold, or (less likely) because the contrived victim/attacker pair didn’t trigger failure with these other sensors (e.g., as if I tried to break my own phone with a forgery of my own fingerprint). I expect Apple to follow up on implementing these patent ideas and keep attackers in a cat-and-mouse game that leaves this security control on the “effective” side of the balancing act.
Because Apple builds proprietary systems rather than open ones, it’s less clear what attack vectors will result in successfully stealing biometric data for reuse elsewhere. Back to our breakdown: developing a working electronic or physical forgery from electronic data requires a capable threat possessing:
Whether a rooted phone makes it possible to acquire fingerprint data or any cryptographic material used in the TouchID scheme is presently unclear. If Apple took an approach similar to other payment systems’ use of a secure element, that material would not be compromised by a kernel break. Further exploration is also necessary to determine whether a kernel break would allow an attacker to short-circuit the “Success”/“Fail” return value from TouchID verification.
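To illustrate why that return value matters: a design that releases key material on a match, rather than a boolean, is much harder to short-circuit. The sketch below is hypothetical (class and method names are mine, and the exact-compare “matcher” is a stand-in for real minutiae matching), not a description of Apple’s implementation:

```python
import os
from hashlib import sha256

class SecureElement:
    """Toy secure element: releases a data-protection key only on a match."""

    def __init__(self, enrolled_template):
        self._template_digest = sha256(enrolled_template).digest()
        # Key never leaves the element unless verification succeeds.
        self._protection_key = os.urandom(32)

    def verify(self, candidate):
        # Stand-in matcher: exact compare, for illustration only.
        if sha256(candidate).digest() == self._template_digest:
            return self._protection_key  # released only on a genuine match
        return None  # no boolean for a kernel break to flip; just no key

se = SecureElement(b"alice-fingerprint-template")
forged = se.verify(b"mallory-forgery")        # None: nothing to decrypt with
genuine = se.verify(b"alice-fingerprint-template")
```

If the OS merely branches on True/False, a kernel-level attacker patches the branch; if protected data is sealed under the released key, a forged “Success” yields nothing usable.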
One Fingerprint, Many Fingerprint Products – There are plenty of fingerprint-based biometric products. Unless Apple stores the raw fingerprint minutiae data, swiping this information from an iPhone will not allow the threat to create a forgery that works on another vendor’s reader/verifier. It might not even work on a replacement iPhone 5S, depending on whether the enrollment process randomly selects which minutiae factors it relies on, or on how many it has collected at the time of theft (we do know it’s supposed to ‘learn’ over time). Keep this in mind: completely owning the raw data from one iPhone may not be valuable in practice. It’s certainly not the spooky “but the NSA owns you” specter people are raising regarding TouchID.
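A toy illustration of that point, assuming (my assumption, not a documented Apple behavior) that enrollment keeps a randomly chosen, hashed subset of the extracted minutiae: the template stolen from one device neither reconstructs the raw print nor matches another device’s independently chosen subset.

```python
import hashlib
import random

def enroll(minutiae, keep=8, seed=None):
    """Store hashes of a random subset of minutiae, not the raw print."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(minutiae), keep)
    return {hashlib.sha256(m.encode()).hexdigest() for m in chosen}

raw = {f"minutia-{i}" for i in range(40)}  # stand-in for extracted features
device_a = enroll(raw, seed=1)
device_b = enroll(raw, seed=2)             # a replacement phone enrolls anew
overlap = device_a & device_b              # partial at best
```

Stealing `device_a`’s template gets you eight one-way hashes keyed to that device’s random selection; it transfers poorly even to an identical product, let alone another vendor’s matcher.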
Probabilistic Matching – Biometric authentication produces a binary result just like username/password, but the underlying algorithm is probabilistic, unlike password verification. This means that any biometric product must tune both its False Acceptance Rate (FAR) and its False Rejection Rate (FRR).
FAR is a proxy for (the inverse of) security and for annoyance. The higher the FAR, the less secure the system. But the higher the FAR, the greater the number of peanut-butter-covered, grubby, or greasy scans verification accepts, and thus the less annoyed a user is.
FRR is a proxy for annoyance. The higher the FRR, the more times a user swears because their phone remains locked and they miss that 8MP shot of someone doing something absurd.
Modern systems are rather accurate.
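The FAR/FRR trade-off above is just a threshold on a match score. A toy sketch with made-up score distributions (all numbers are my assumptions, not measurements of any real product): raising the threshold buys security (lower FAR) at the price of annoyance (higher FRR).

```python
import random

random.seed(7)

# Hypothetical match scores: genuine attempts cluster high, impostors low.
genuine = [random.gauss(0.80, 0.10) for _ in range(10_000)]
impostor = [random.gauss(0.30, 0.12) for _ in range(10_000)]

def far_frr(threshold):
    """FAR = impostors accepted; FRR = genuine users rejected."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

for t in (0.40, 0.55, 0.70):
    far, frr = far_frr(t)
    print(f"threshold={t:.2f}  FAR={far:.3%}  FRR={frr:.3%}")
```

Vendors tune the threshold (and the matcher itself) to pick a point on this curve; “accurate” means the two distributions barely overlap, so both rates can be small at once.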
Summarizing: when you tune a modern algorithm to correctly identify the authorized individual 95-100% of the time, your false acceptance rate doesn’t exceed 10%. For non-twin populations, asking users to “try again” only 1.5% of the time and opening the phone up to attack by < 2% of the population seems like an exceptionally good tradeoff. Ignoring legal standards for fingerprint evidence, there’s a big difference between saying,
“the person who murdered Joe Bob was the defendant Fred or one of 143.6 million other people”
and “only 2% of people who steal Corey’s phone can defeat the fingerprint scanner without the help of a CSI team and a quality forgery pulled from his pint glass while he was in the can.”
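For what it’s worth, the 143.6 million figure falls out of simple arithmetic against world population (roughly 7.18 billion in 2013; the population figure is my approximation):

```python
world_population = 7.18e9   # rough 2013 world population (assumption)
false_accept_rate = 0.02    # the 2% figure from the text

matches = world_population * false_accept_rate
print(f"{matches:,.0f}")    # 143,600,000
```

Which is why “one of 143.6 million other people” is the honest way to phrase a 2% false-acceptance claim at global scale.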