Introduction
In a series of posts investigating various Apple patents and patent applications, we happened to touch upon certain aspects of the security and privacy techniques that are built into Apple's products.
On Sept. 17, Apple rolled out an entire section of its website devoted to explaining its privacy policies and technologies. This coincided with several important improvements to the techniques it is using.
The reaction to this was polarized. On the one hand, many applauded Apple not just for taking strong steps to secure and protect users' data, but for being open and transparent about what is protected and how. On the other hand, many doubted Apple's motives, insisting that there must be some sort of loophole, that Apple is lying, or that this is all some sort of insidious Apple plot. Some mistakenly compared these disclosures to Google's, which do little to explain how things are actually protected (and which, by the way, describe systems that are substantively less protective than Apple's).
In this report we'll take a closer look at Apple's practices, with a careful review of Apple's recently-published iOS Security white paper.
Security and Privacy
When people express concern with security and privacy, they are referring to a wide variety of potential risks that actually fall into different categories. At its most basic, the concern is that one's personal (and potentially private) "data" may be accessed by others.
This, however, covers a wide variety of potential issues. Some people are only concerned if what is accessed is "data" (for example, the contents of an email) and don't mind if "metadata" is accessed (data about data; for example, the email addresses of the sender and recipient of an email).
Some people only worry that their data may be intercepted by someone with bad intentions (e.g. credit card thieves, account hostage-takers, etc.) and don't care if the data is used in the ordinary course of performing some business method (e.g. parsing your email to see that you might be interested in buying a new car, and serving you ads relating to new cars). Some folks are concerned that the government or the police may access their data, while others are not. Of those worried about the government, some don't mind as long as access is permitted only with a court warrant, while others don't want their data accessible under any circumstances.
Some people are so concerned about use of "their" data that they object even to being included in bulk statistics made available to advertisers ("30% of our male customers visit Amazon.com during business hours").
And some folks object even to the use of their data in a manner that's absolutely necessary for the service they are willingly using to work (typically these folks don't really think about, or understand, that providing certain services requires the use of data).
Given this wide spectrum of concerns, it is unlikely that even the most stringent policies will please everyone. And, particularly where Apple is concerned, rabid Apple-haters will always find something to complain about. But, hopefully, a plain English explanation of what Apple does and does not do will help mitigate some of the more hysterical accusations being hurled around.
Cryptography
The field of cryptography covers a lot of ground, and can very easily get too mathematical for some folks to comfortably grasp. But understanding Apple's privacy disclosures is a lot easier with a basic understanding of some cryptographic concepts.
Typically cryptography is used for two purposes: authentication and secrecy. Authentication is the process of proving that someone is who they say they are (or that a device is what it purports to be, or that a message came from who it purports to have come from). In simple terms, authentication is proving that someone or something isn't a phony.
Secrecy, on the other hand, refers to making sure that information cannot be accessed by anyone other than those who are meant to be in on the secret.
People usually divide cryptography into two branches, "symmetric" and "asymmetric." Each can, under the right circumstances, be used both for secrecy and authentication.
Typically we talk about using a "key" to transform unprotected "plaintext" into protected "ciphertext." This transformation is performed through the use of a cryptographic algorithm.
The simplest type of cryptography is symmetric cryptography. In symmetric cryptography the sender of a message, and the message's intended recipient, share a secret (often called a "shared secret"). This secret is shared ahead of any message being sent. In the olden days this required quite an effort, and usually required some sort of direct or indirect contact ahead of time just to agree on how future messages would be encrypted.
The life cycle of information using symmetric cryptography is represented in the figure at right. Plaintext undergoes the process of encryption to become ciphertext. The encryption uses the shared secret key. The ciphertext may be decrypted (using the same shared secret key) to become the original plaintext.
"Caesar's cipher" is an ancient example of symmetric cryptography.
Caesar's cipher involved two parties agreeing on a shared secret translation between two alphabets. As in the above figure, for each letter in the alphabet, another letter is substituted. So, for example "APPLE" becomes "ZHHFX." When someone receives a message containing the sequence "ZHHFX" he or she uses the same table to convert the ciphertext back to plaintext "APPLE."
This can serve both as authentication of the message and to secure the message. First, assuming that the recipient knows that no one other than the sender has the shared secret (let's assume this fact is true), then if the recipient can decode the text and the result is legible, it must have been encrypted by the sender. Therefore the recipient can authenticate that the message was sent from the sender. Moreover, as a stranger cannot read the message (since the stranger doesn't have the shared secret key), the message is secure.
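To make this concrete, here is a small Python sketch of a substitution cipher of this kind. It is purely illustrative: the "shared secret" is just a number both parties agreed on in advance, and the substitution table is derived from it, so it will not reproduce the exact APPLE-to-ZHHFX table above.

```python
import random
import string

def make_table(shared_secret: int) -> dict:
    """Build a substitution table from a shared secret (here, just a number).

    Both sender and recipient run this with the same secret, so they end up
    with the same table without ever transmitting the table itself.
    """
    letters = list(string.ascii_uppercase)
    shuffled = letters[:]
    random.Random(shared_secret).shuffle(shuffled)  # deterministic for a given secret
    return dict(zip(letters, shuffled))

def encrypt(plaintext: str, table: dict) -> str:
    return "".join(table.get(c, c) for c in plaintext.upper())

def decrypt(ciphertext: str, table: dict) -> str:
    reverse = {v: k for k, v in table.items()}
    return "".join(reverse.get(c, c) for c in ciphertext.upper())

table = make_table(shared_secret=2014)      # the secret number is an arbitrary example
ciphertext = encrypt("APPLE", table)
assert decrypt(ciphertext, table) == "APPLE"
```

(Needless to say, a simple substitution cipher is trivially breakable today; the point is only to illustrate the shared-secret idea.)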
One of the major difficulties with symmetric cryptography is distribution of the shared secret. After all, it wouldn't be very much fun if in order to visit a secure website I first had to drive to corporate headquarters for the website and whisper a secret word into someone's ear.
Asymmetric cryptography helps solve this problem.
In asymmetric cryptography there are two keys: a private key and a public key. The private key is known only by one individual. No one else in the world needs to know about it (in fact, the assumption is that no one else in the world knows it). The public key can be known by anyone in the world, and its dissemination is actually encouraged.
The public key and the private key are related to each other in a very special way. They go together. The public key works with the private key and only with the private key. The private key only works with that public key.
It's easy to come up with private and public keys that go together, but very hard (preferably requiring many, many years of guessing) to figure out what the private key is if you only have the public key.
Consider Alice and Bob. Alice wants to send Bob a message securely, so that only Bob can read it. The first thing Alice does is get Bob's public key. Bob's public key is just that - it's public. Everyone knows it. Perhaps there's a directory that provides a public key for each potential message recipient, and everyone uses the same directory. Or maybe Bob previously conveyed his key to Alice, or broadcast it on national TV, or printed it in the newspaper, or had it tattooed on his arm. In any event, Alice knows Bob's public key.
So Alice encrypts her plaintext message to Bob using Bob's public key. This creates ciphertext that cannot be read by anyone unless it is decrypted. The only way to decrypt it, however, is by using Bob's private key. No other key will work. And only Bob, and no one else, has that private key. Bob did not have to share it with Alice. It's his own personal, private, business.
Now consider that Bob receives the message and wants to make sure it came from Alice, and not someone masquerading as Alice; after all, everyone knows Bob's public key, so the fact that the message is encrypted with Bob's public key says nothing about who encrypted the message. Asymmetric cryptography offers us a solution to this problem as well.
After encrypting the message to produce ciphertext, Alice applies the same algorithm, this time using her own private key to the ciphertext. This is referred to as "signing" the message with her key. When she does this, she produces a signature - a signed version of the message. (She could also start with a plaintext message and sign it if she doesn't care about security, of course). Now, when Bob receives the message, he needs to "authenticate" it to make sure it came from Alice.
Bob merely runs the same encryption routine on the message, but using Alice's public key. The result is the message Alice sent. If the message is not usable after authenticating it using Alice's public key, then it wasn't signed with Alice's private key, so it was not sent by Alice.
Of course, once Alice's "signature" is stripped off in this manner, Bob can go ahead and decrypt the message (if necessary) using his own private key.
The summary of all this is that encryption allows one to prove identity and to transform messages so that only the intended recipient can read them. The other takeaway is that to read a message that has been encrypted, or to sign a message, one must know the proper secret key. If the key is actually secret, then no one can forge a message and no one can read a message not intended for them.
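Here is a minimal sketch of the encrypt-then-sign flow described above, using the third-party Python "cryptography" package (pip install cryptography). One real-world detail: libraries expose signing and verification as operations distinct from encryption, rather than literally "encrypting with the private key" as in the simplified description above. The key sizes, padding choices, and message are arbitrary examples.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Each party generates a key pair; only the public halves are ever shared.
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public, alice_public = bob_private.public_key(), alice_private.public_key()

message = b"Meet me at the usual place"

# Alice: encrypt for Bob (secrecy), then sign the ciphertext (authenticity).
ciphertext = bob_public.encrypt(message, oaep)
signature = alice_private.sign(ciphertext, pss, hashes.SHA256())

# Bob: check the signature with Alice's public key, then decrypt with his private key.
alice_public.verify(signature, ciphertext, pss, hashes.SHA256())  # raises if forged
assert bob_private.decrypt(ciphertext, oaep) == message
```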
One last point. We discussed that the use of public keys requires certainty that the public key actually belongs to the person we think it does. This is often handled by the use of "certificates" or "public key certificates." The idea behind this is that there is some central entity or agency that everyone "trusts" (insofar as we all agree that this agency holds a canonical database which accurately provides the public keys associated with people).
If Alice wants to communicate with Bob securely, Alice asks the central "certificate authority" for Bob's public key. The certificate authority signs Bob's public key with its own private key to produce a "certificate." Now we only have to keep track of one public key - the certificate authority's. We can ask the certificate authority for everyone else's public key. When they send it to us we can check to make sure it really came from the certificate authority. Similarly, if someone else gives us Bob's public key we can check the signature and make sure it really came from the certificate authority.
System Security
Apple says it designed the iOS system so that each component is "trusted" and validates the system as a whole. As your iOS device operates, each step is analyzed to ensure that the components are operating as they should so as to protect your data.
Secure boot chain
If someone can modify the operating system on your iPhone at the lowest levels - say by intercepting calls to Apple's servers and providing a bogus iOS update - then all security on your device goes out the window. For this reason Apple has taken steps to try and ensure that the operating system running on your device is bit-for-bit blessed by them.
When you turn on your iPhone, the CPU is hardwired to execute only the software code found in the "Boot ROM." "ROM" is "Read Only Memory," meaning it's physically impossible to modify the contents of the memory once it has been written to at the factory. Apple calls this the "hardware root of trust," and for good reason.
Imagine you are new to the world, and the only thing you 100% believe is that your mother doesn't lie to you. Everything else must be treated with, at least, suspicion. If your mom tells you that everything your school principal says is true, you accept that, because the root of your trust - that your mom doesn't lie - compels you to believe the statement. So now you can believe anything your mom says, and anything your principal says. If your principal says that you can believe everything your teachers say, then you accept that, because your mom said you can believe your principal. And if your math teacher says 1+1=2, you believe that, because your mom said you can believe your principal, and your principal said you can believe your math teacher. Hence you build a "chain of trust" starting with a "root of trust," the "root of trust" being the single, simple, absolutely trustworthy fact that is at the core of your belief system.
By relying on ROM, Apple assures that its root of trust is as reliable as is reasonably possible. The only way I can think of to defeat the ROM would be to physically access the device and replace the CPU with a custom made CPU with a different ROM (needless to say, that's not a convenient hack).
So what's in this ROM? Well, for one thing, the Apple Root CA public key. (The term "CA" stands for "certificate authority.") This is a public key for Apple, certified correct by Apple. Since you bought an Apple phone, often from Apple, and since the public key is hardcoded in such a way that it can't be tampered with, the device can trust that the public key actually does correspond to Apple.
As we've already discussed, if you know someone's public key you can use it in at least two ways. You can encrypt something so that only that person can read it. Or you can authenticate that a message signed with that person's private key actually came from that person.
In this case, the public key is used to verify that the iOS Low-Level Bootloader (LLB), which is signed by Apple, actually came from Apple. The LLB, in turn, verifies the next stage of the boot, iBoot, which in turn verifies and launches the iOS kernel. Since each stage must have the proper signatures, there is little chance that someone could maliciously tamper with the operating system.
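The following toy sketch (in Python, using the same "cryptography" package as the earlier examples) shows the general shape of such a chain: each stage ships with a signature over its code, and nothing runs unless it verifies against a key that ultimately traces back to the trusted root. The stage names, payloads, and single-key arrangement are simplifications for illustration; Apple's actual boot chain and signature formats are not public at this level of detail.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# At the "factory": the vendor signs every boot stage with its private key.
root_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ROOT_PUBLIC_KEY = root_private.public_key()   # imagine this baked into the Boot ROM

stages = [b"<LLB code>", b"<iBoot code>", b"<kernel code>"]   # hypothetical payloads
signed_stages = [(code, root_private.sign(code, pss, hashes.SHA256())) for code in stages]

# At boot: each stage is verified against the trusted root before it is run.
for code, signature in signed_stages:
    ROOT_PUBLIC_KEY.verify(signature, code, pss, hashes.SHA256())  # raises on tampering
    # execute(code)  # reached only if the signature checked out
```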
Apple uses similar techniques for the other processors on the device, including the baseband unit which handles radio communications, and the Secure Enclave.
If something goes wrong, and any step is unable to authenticate the next piece of code, your device displays "Connect to iTunes" and you will have to do a software restore.
OS authorization
Sometimes it's not good enough to prevent running rogue OS code that was never authorized by Apple; sometimes Apple needs to make sure you don't run OS code that, while at one time authorized, is no longer reliable.
For example, imagine iOS 5.3.1 had a horrible bug that lets anyone in the world remotely download all the data off your phone at will. You leave your phone on your desk and walk away, and a bad guy wants access to your phone. All he has to do is "update" your software back to iOS 5.3.1 and he's off to the races. Obviously this is not an ideal result.
The way this works is that during a software "update," your phone (or iTunes, acting on your phone's behalf) contacts Apple's "installation authorization server" and sends the server a list of cryptographic information describing the components being sought, the device's unique ID (another immutable value created at the time of manufacture), and a "nonce."
A "nonce" is a (typically) random value that is used only once. Sometimes the nonce has the current time and date or other information embedded, to ensure that it is "unique." That is, no nonce can be used more than once. To understand how nonces work, consider a situation where I use my computer to place an order from the online Disney store for an Anna doll. Imagine that the information that is sent includes only my credit card number, my name, my shipping address, my credit card security code, and the product number. Imagine, too, that this is all bundled together into one long paragraph of data, which is then encrypted and sent to Disney.
If someone intercepts that encrypted data, they can't do too much with it. It's encrypted, so they can't see my credit card info. They can't try to modify the shipping address to point at their own home. It's pretty useless to them. But they could decide to ruin my day by copying the encrypted message and sending it over and over again to Disney, resulting in the placing of many duplicate orders. That would be bad.
If, however, I add a nonce to the data before it is encrypted, even something as simple as a number in a sequence, then Disney will be able to detect that the orders are all duplicates, because they will all have the same nonce (and a nonce must never repeat).
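A toy sketch of the server side of this idea: remember every nonce you have seen and refuse any request that reuses one. The names and order format below are invented for illustration (and a real system would also expire old nonces or bind them to timestamps).

```python
import secrets

class OrderServer:
    def __init__(self):
        self.seen_nonces = set()

    def handle(self, order: dict) -> str:
        if order["nonce"] in self.seen_nonces:
            return "rejected: replay detected"
        self.seen_nonces.add(order["nonce"])
        return "accepted"

def place_order(item: str) -> dict:
    # The client attaches a fresh random nonce before encrypting and sending
    # the order (the encryption step is omitted here for brevity).
    return {"item": item, "nonce": secrets.token_hex(16)}

server = OrderServer()
order = place_order("Anna doll")
print(server.handle(order))                      # accepted
print(server.handle(order))                      # rejected: the same nonce again
print(server.handle(place_order("Anna doll")))   # accepted: a genuinely new order
```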
When the authorization server receives a request, it checks to see if installation of the particular software version is permitted. If so, it sends a complete set of data, signed by its private key, to the device. Since the data contains the device's unique ID, the installation can only work on that one device. The nonce assures that an "old" authorization cannot be reused to install the OS without checking again with the authorization server.
Secure Enclave
In a previous post we examined the Secure Enclave in great detail. Now Apple has provided us with some more information as to how it works. First, we learn that the Secure Enclave features a hardware random number generator. Random numbers are very useful in cryptography for multiple purposes, including nonces, key generation, and various other things. Random number generation typically involves the use of a linear feedback shift register to generate a pseudo-random sequence of values. Often these values are used as seeds for more complicated algorithms; for example, many encryption algorithms make good random number generators - one starts with a seed, then encrypts it to produce a new value that is very difficult to predict, then uses that to produce the next number in the sequence, ad infinitum. Apple instead gathers entropy from timing variations during boot, from interrupt timing once the device is running, and from multiple ring oscillators, and uses that entropy to seed an algorithm called CTR_DRBG.
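To illustrate the "encrypt a counter to get hard-to-predict output" idea, which is the intuition behind CTR_DRBG, here is a deliberately simplified Python sketch using the "cryptography" package. It is not Apple's generator and is not suitable for real use; a proper DRBG adds reseeding, personalization strings, and backtracking resistance.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class ToyCounterGenerator:
    """Encrypt an incrementing counter under a secret seed to produce output."""

    def __init__(self, seed: bytes):
        assert len(seed) == 32          # 256 bits of entropy used as the AES key
        self.key = seed
        self.counter = 0

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = self.counter.to_bytes(16, "big")
            encryptor = Cipher(algorithms.AES(self.key), modes.ECB()).encryptor()
            out += encryptor.update(block) + encryptor.finalize()
            self.counter += 1
        return out[:n]

# The real seed would come from hardware entropy sources; os.urandom stands in here.
generator = ToyCounterGenerator(seed=os.urandom(32))
print(generator.random_bytes(16).hex())
```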
[Figure: Block diagram showing the "SEP" or "Secure Enclave Processor"]
As we predicted in this article, the Secure Enclave uses a "mailbox" mechanism to isolate itself from other parts of the chip, so that if an app is compromised the "infection" cannot spread to the Secure Enclave.
During fabrication of the chip, the Secure Enclave is given its own, unique, ID. This ID is not accessible to other parts of the system, and is not known to Apple. Each time the device is powered on, a new temporary key is created. This key is combined with the Secure Enclave ID to create, essentially, a "super key." The super key is then used to encrypt the portion of the device's memory which is used by the Secure Enclave. In order for a rogue app to be able to understand the contents of the Secure Enclave's memory (many things would have had to go wrong for the rogue app to even have seen this memory, but...) it would have to guess the Secure Enclave's unique ID. Given enough trials it could eventually do so. However, it would also have to guess the temporary key, which changes all the time, making repeated trials a difficult way to go about it.
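One plausible way to picture this "combining" step is a key derivation function that mixes the enclave's fused ID with the per-boot ephemeral key. The sketch below uses HKDF from the Python "cryptography" package; the actual derivation Apple uses is not disclosed, so treat the construction, labels, and key sizes as assumptions.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

SEP_UID = os.urandom(32)         # stands in for the fused, per-device secret
ephemeral_key = os.urandom(32)   # regenerated every time the device powers on

memory_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=ephemeral_key,
    info=b"enclave-memory-encryption",   # hypothetical label
).derive(SEP_UID)

# memory_key would then be used to encrypt the Enclave's region of memory,
# so its value changes on every boot even though the UID never does.
```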
When Secure Enclave has to save data to the file system, it also adds a nonce to the key, preventing replays (for example, preventing an attack where an old, known key is used to overwrite a new, unknown key).
Communications between the Secure Enclave and the CPU are encrypted and authenticated with a "session key" that is negotiated using the device's shared key. "Negotiation" usually refers to a process where two parties can agree on a shared secret even though they never directly communicate the shared secret from one to the other. This prevents interlopers from detecting the shared secret in transit. A "session key" is a key that is used only during a particular communications "session" and then replaced. The more a key is used, the easier it is to discover the key, so it is good practice to change keys periodically. Doing so each time a communications session starts is one typical way of accomplishing this.
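Here is what "negotiating" a session key without transmitting it can look like, using X25519 Diffie-Hellman from the Python "cryptography" package as the example mechanism. Apple does not disclose the exact protocol used between the Secure Enclave and the application processor, so this is a generic illustration of the concept, not a description of their implementation.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

enclave_private = X25519PrivateKey.generate()
processor_private = X25519PrivateKey.generate()

# Each side combines its own private key with the other side's public key
# and arrives at the same shared secret, which never travels over the wire.
secret_a = enclave_private.exchange(processor_private.public_key())
secret_b = processor_private.exchange(enclave_private.public_key())
assert secret_a == secret_b

# The raw secret is then run through a KDF to produce the actual session key.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"session-key").derive(secret_a)
```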
Touch ID
Apple has also provided more information about the mechanisms by which Touch ID unlocks a device.
If the user disables Touch ID, all the keys associated with logging in, which are held in the Secure Enclave, are discarded. Associated files (such as keychain items corresponding to login information) are inaccessible unless the user enters the correct passcode.
When Touch ID is enabled, the keys are kept in the Secure Enclave even when the device is locked. However, they are encrypted with a key. When a user attempts to unlock the device with his fingerprint, Touch ID tests for a fingerprint match. If the fingerprint matches, Touch ID provides the key necessary to unlock the login keys, which allows the device to unlock. If the device reboots, or if 48 hours pass with no Touch ID login, or if there are five failed Touch ID attempts, the keys are discarded.
Encryption and Data Protection
All iPhones and iPads have a dedicated hardware AES 256 crypto engine built into the memory path between flash storage and the main memory, meaning that it is very efficient to encrypt files as they are being written to flash and to decrypt them as they are being loaded from flash. AES 256 is a symmetric cryptographic algorithm, and is generally considered secure.
There are two special identifiers found in the device that uniquely identify the device. The first is the unique ID (UID). This is "fused" into the CPU during manufacturing. Imagine a fuse box located inside the CPU; once the fuses are "blown" they cannot be repaired (at least not without a chip fab and a lot of work). This is a common way of forcing an ID code into the chip; by breaking some connections while allowing others to remain, a binary code can be embedded.
The other ID is the device group ID (GID), which is compiled into the CPU during manufacturing. This means that there is a specific collection of logic gates used to produce the GID on the silicon, and this collection is determined automatically as the chip is manufactured. All devices using a particular processor - e.g. all devices using the A8 - have the same GID, while the UID is unique to each device.
Neither the GID nor the UID can be accessed by software or firmware, and they cannot even be accessed using the mechanism that is used after manufacture to test the operation of the chip (a technique called "boundary scan"). The UID is used whenever something needs to be locked to a particular device. By including the UID in the collection of keys used to protect something, only the device with the right UID (or an improbably lucky guess) will be able to access those resources.
All of the other keys used in the device are created using the system's hardware random number generator.
Apple uses the term "data protection" to refer to its technology for protecting data stored in flash memory on your device. As of iOS 7, key system apps and all third-party apps automatically use data protection.
The basis of data protection is that each file stored in memory is assigned to a "class." Whether a particular file is accessible depends on whether the keys for that class have been unlocked.
Each time a new file is created, the system creates a new key, just for that file, and supplies it to the hardware encryption engine which uses it to encrypt the file as it is written into flash. The key is, itself, encrypted using a class key, depending on what class it belongs to. The encrypted key is stored in the metadata associated with the file.
When the file is opened, its metadata is decrypted, revealing the wrapped key for the file (still encrypted). If the class key is unlocked, it is used to decrypt the file key, and then the hardware cryptography engine can decrypt the file with that key as it is read from memory.
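A minimal sketch of that two-level arrangement, using AES key wrap and AES-GCM from the Python "cryptography" package. The file layout and metadata structure here are invented for illustration; the point is simply that the file is encrypted with its own key, and that key is itself stored encrypted under a class key.

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class_key = os.urandom(32)       # only available when the class is unlocked

# Creating a file: new random per-file key, encrypt the contents, wrap the key.
file_key = os.urandom(32)
nonce = os.urandom(12)
ciphertext = AESGCM(file_key).encrypt(nonce, b"my secret note", None)
metadata = {"wrapped_key": aes_key_wrap(class_key, file_key), "nonce": nonce}

# Opening the file: unwrap the per-file key using the class key, then decrypt.
recovered_key = aes_key_unwrap(class_key, metadata["wrapped_key"])
assert AESGCM(recovered_key).decrypt(metadata["nonce"], ciphertext, None) == b"my secret note"
```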
The data protection classes that a file may belong to are:
- complete protection - protected with a key derived from your passcode and the UID. 10 seconds after locking the device, if the "Require Password" setting is "Immediately," the decrypted class key is deleted from the device. As a result, accessing files in this class requires you to again unlock the device with a passcode (or Touch ID).
- protected unless open - this is for files that need to be written to even when the device is locked, for example a mail attachment that is downloaded in the background. The device uses asymmetric cryptography to create a public-private key pair. As soon as the file is closed (i.e. writing to the file is complete), the shared secret created from these keys is deleted. To open the file in the future, the shared secret will be re-created.
- protected until first user authentication - this is similar to "complete protection" except the decrypted class key remains in memory when the device is locked. This provides protection against attacks that involve rebooting the device, and is the default class for all third-party app data not otherwise assigned to one of the other classes.
- no protection - even files in the "no protection" class are encrypted; however, the key depends only on the device UID.
Passcodes
Data protection is enabled when you set your device's passcode. (Note: if you are really concerned with security you should use an arbitrary-length alphanumeric passcode rather than the default 4-digit passcode.) The passcode is used both to lock the device and as an input into certain encryption keys used on the device, meaning that even if the data on the device was extracted by cracking open the phone, decryption would still require the passcode.
The device's UID is also an input, meaning that if you want to perform a brute-force attack (as in, try every possible passcode) you would have to do it on the actual device (or somehow determine the device's UID, which generally would require having the device in-hand in any case).
The device intentionally slows down successive tries so that brute forcing is more difficult. Apple says it would take more than 5 years to try all combinations of a 6-character alphanumeric passcode with lowercase letters and numbers. Of course, if you add more characters to your passcode, or also use uppercase letters and punctuation, the time it would take to try all combinations would increase substantially.
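The sketch below shows the two ideas together: a slow, device-entangled key derivation (modeled here with PBKDF2 salted with a stand-in for the UID; the real construction and iteration counts are Apple's and are not public), and a back-of-the-envelope check of the "more than 5 years" figure assuming roughly 80 milliseconds of derivation work per guess (an assumed per-attempt cost, chosen because it reproduces Apple's estimate).

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

DEVICE_UID = os.urandom(32)   # in reality fused into the CPU and unreadable by software

def passcode_key(passcode: str) -> bytes:
    """Derive a key that depends on both the passcode and the device secret."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=DEVICE_UID, iterations=200_000)  # iteration count is illustrative
    return kdf.derive(passcode.encode())

key = passcode_key("p4ssc0de")

# Rough brute-force estimate: 6 characters drawn from lowercase letters and
# digits, at ~0.08 seconds of derivation work per attempt.
combinations = 36 ** 6
years = combinations * 0.08 / (365 * 24 * 3600)
print(f"{combinations:,} combinations, ~{years:.1f} years to try them all")
```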
By the way, this is another advantage of Touch ID - if you don't have to type your passcode in very often, you can afford to make it long and complex.
On A7 or A8 devices, the Secure Enclave also creates a 5-second delay between repeated failed unlocking attempts, which also defends against brute force attacks.
App Security
One of the primary advantages to owning an iPhone, and one that many people take for granted, is that absent extraordinary steps taken by the user to defeat security mechanisms (e.g. jailbreaking), one can take it for granted that if one's iPhone is running an app, that app isn't malware. Apps can't adversely affect the operation of other apps or of the system, apps can't access data they aren't explicitly permitted to access, etc. In order to develop apps, iOS developers must register with Apple and have their real-world identity verified by Apple. Once this happens, Apple issues them a certificate (containing the developer's public key, and signed by Apple with its own private key). This certificate is used by the developers to sign their apps. This enables the real-world identity of the developer of an app to be verified, and prevents, for example, imposters from submitting replacement apps that you think are created by entities you trust.
All third-party apps are "sandboxed." This means they cannot access files stored by other apps or make changes to system files. This prevents, for example, a rogue app from poking its nose into your web history or banking app database.
Each app has a randomly generated directory name for its files. If the sandbox were to fail, rogue apps would not be able to search in hardcoded directory names for data, making life more difficult for malware authors.
The majority of the operating system runs as a non-privileged user ("mobile") so if a flaw is discovered in the system software it is less likely that the flaw can be leveraged to do much damage.
When an app wants to access user information (other than information the app, itself, "owns") it must declare "entitlements." These entitlements are digitally signed to prevent changes not intended by the real (and trusted) developer.
iOS supports address space layout randomization, which protects against memory corruption bugs by making it difficult for malware authors to leverage a memory corruption bug to do anything "useful."
iOS also uses the ARM "Execute Never" feature which marks memory pages as non-executable, providing protection against apps that intentionally (or accidentally) modify their own code. (For more on the hazards of self-modifying code, see this discussion of x86 vs. ARM.)
iMessage
There has been a lot of paranoia about what Apple knows and doesn't know about your iMessages. Well, unless Apple is lying (extremely unlikely), here's the summary:
- Apple does not log messages or attachments.
- Messages are protected by encryption from end-to-end. No one but the sender and receiver can access them.
- Apple cannot decrypt the messages. In other words, Apple doesn't know what's in the messages, and has no capability of providing such information to government agencies.
How does this work?
When you turn on iMessage, two pairs of asymmetric keys are created. One pair, which is an extraordinary 1280 bits long, is used for encryption. The other pair, 256 bits long, is used for signing.
Recall that each pair of keys consists of a private key and a public key. The private keys are stored in the device's keychain, and only the public keys are sent to Apple (where they are stored in a directory of public keys, and are associated with your phone number or email address, and your phone's APN address). An APN is the unique address of your device that is used to handle push notifications.
Each time you register a new email address, device, or phone number for iMessage it gets added to this directory.
Sending messages
When you attempt to send a message, you first specify an email address, name, or phone number for the recipient. Once the recipient is identified, the device contacts Apple's directory service to retrieve the public keys and APN addresses for all devices associated with the recipient. Again, your device can only retrieve the recipient's public keys because that is all Apple has.
Your message is encrypted, on your device, using the public keys of the recipient. Each of the recipient's devices has a separate public key, so your message is encrypted separately for each possible recipient device.
After being encrypted, each version of the message is signed using your own private key.
Attachments are a special case. In the event you are sending an attachment, your device produces a random key (a symmetric key) and uses it to encrypt the attachment. The attachment is then uploaded to iCloud. Apple has no access to the random key used to encrypt the attachment. The URL for the attachment, and the random key, are then included in the message your device sends to the recipient (and are themselves encrypted with the recipient's public key and signed with your private key, like the rest of your message).
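Putting the pieces together, here is a hedged sketch of the flow just described: encrypt the attachment with a fresh random key, "upload" it, then send each recipient device a small message (containing the text, the attachment's location, and its key) encrypted to that device's public key and signed by the sender. Key sizes, padding choices, and the message layout are illustrative, not Apple's wire format, and a real implementation would use a hybrid scheme rather than encrypting the body directly with RSA.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
# One key pair per recipient device; their public keys come from the directory service.
recipient_devices = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
                     for name in ("iphone", "ipad")}

# 1. Encrypt the attachment with a random symmetric key and upload the result.
attachment_key = AESGCM.generate_key(bit_length=256)
attachment_nonce = os.urandom(12)
encrypted_attachment = AESGCM(attachment_key).encrypt(attachment_nonce, b"<photo bytes>", None)
attachment_url = b"icloud://bucket/abc123"        # hypothetical location

# 2. The message body carries only the pointer and the key, not the attachment itself.
body = b"check this out|" + attachment_url + b"|" + attachment_nonce + attachment_key

# 3. Encrypt the body separately for each device, then sign each ciphertext.
outgoing = {}
for name, device_private in recipient_devices.items():
    ct = device_private.public_key().encrypt(body, oaep)
    outgoing[name] = (ct, sender_private.sign(ct, pss, hashes.SHA256()))
```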
Receiving messages
Each of the recipient's devices receives its copy of the message. If necessary, the device also retrieves attachments from iCloud using the URL embedded in the message. The receiving device uses its own private keys, unknown to Apple, to decrypt the message.
FaceTime
When two devices communicate via FaceTime, the communications are end-to-end encrypted, and only the two devices - not Apple and not anyone else - have the keys required for decryption. The two devices verify each other's certificates and establish a shared secret (symmetric encryption) key for one-time use in that session. Each device provides a nonce, and the two nonces are combined and used as salt for the cryptographic keys ("salt" refers to adding random data to cryptographic data to make brute force attacks more time consuming).
iCloud
Depending on your settings, iCloud may be used to store contacts, calendars, photos, documents, backup data, iMessage attachments, and third-party app data. Each file is broken into chunks, and the chunks are encrypted using a key derived from each chunk's contents. The keys, and the file's metadata, are stored by Apple. If third-party storage services are used, no user-identifying information is sent to those parties. Note that while Apple has the keys, the underlying data may itself be encrypted by keys that Apple does not have. For example, encrypted backups, iMessage attachments, and the like are encrypted with private (or shared) keys not known to Apple, and then the encrypted version is encrypted again by Apple.
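A small sketch of the "key derived from the chunk's contents" idea (sometimes called convergent encryption): hashing the chunk yields its key, so identical chunks encrypt identically and can be deduplicated by whoever stores them. The chunking, key derivation, and use of AES-GCM here are illustrative assumptions, not Apple's disclosed format.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_chunk(chunk: bytes):
    key = hashlib.sha256(chunk).digest()     # the key depends only on the contents
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, chunk, None)
    return key, nonce, ciphertext            # key and metadata go to Apple; data to storage

key, nonce, ct = encrypt_chunk(b"first chunk of a Pages document")
assert AESGCM(key).decrypt(nonce, ct, None) == b"first chunk of a Pages document"
```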
iCloud Backups
iCloud backups are stored as a "backup set" at Apple or at Apple's storage providers. The backup set consists of the user's encrypted files, and a set of iCloud Backup keys. These keys are protected by a random key, which is also stored with the backup set. Your password is not used for encryption, so changing your iCloud password doesn't invalidate existing backups.
Even though Apple has access to these keys, it can't read them; they are protected by a UID-entangled key, meaning the keys can only be restored to the device from which they originated. No one else, not even Apple, can read these keys. This renders certain information absent when you restore to a new device, forcing you to reenter things like your iCloud password (which can be used to retrieve and decrypt the relevant prior UID).
When a restore takes place, the files are decrypted using the keys in the set, and re-encrypted on a per-file basis as they are written onto your device (as determined by the data protection classes we referred to earlier).
Conclusion
The source document contains lots more information about lots of other services, including iCloud Keychain, Continuity, Call Relay, and the like. The point of all this is not that Apple's systems are immune to snooping, or that it's impossible for Apple to see your data, but that Apple has provided a ton of information so that those who read and understand it can judge for themselves how safe their data is. Even if the code is all free from bugs, there are some possible avenues of attack, and while Apple has (obviously) not pointed them out, security pros can certainly determine what they are based on the information Apple has provided. And, of course, there very well may be bugs, meaning that some of the well-meaning mechanisms detailed by Apple may simply not be functioning properly.
Nonetheless, it's good to see this level of transparency from Apple, and hopefully it foretells a new era of openness, at least with respect to issues of security and privacy. The surest way to know that Apple's security practices are the best they can be is for Apple to subject them to the prying and suspicious eyes of security and cryptography professionals trained in finding weaknesses and exploits; this can only make our privacy stronger in the long run.