Why?
Because 789.
http://www.usatoday.com/story/tech/2014/09/30/microsoft-windows/16474073/
Tuesday, September 30, 2014
Monday, September 29, 2014
iOS 8.1 beta 1
Apple released iOS 8.1 beta 1 to developers today. I'm surprised that, when confronted with the opportunity, I decided not to install it on my working devices - Apple finally managed to shake my faith with the 8.0.1 debacle.
I'll wait a couple weeks and see what's what.
Saturday, September 27, 2014
And what about the vampires?
http://www.washingtonpost.com/business/technology/2014/09/25/68c4e08e-4344-11e4-9a15-137aa0153527_story.html
Apple: endorsing pedophiles, terrorists, and presumably vampires.
Friday, September 26, 2014
8.0.1-gate: A Postmortem
So what the heck happened?
Apple isn't saying, at least not yet, so we can only speculate. The facts seem to be these:
- Folks who updated their iPhone 6's and iPhone 6+'s to 8.0.1 lost the ability to connect to the cell networks and the use of Touch ID.
- This appears to have affected everyone who updated an iPhone 6 or 6+.
- This appears not to have affected anyone who updated earlier models. (There are some accounts to the contrary, but I'm guessing with every update there are a handful of folks who have issues like these, so let's assume these are outliers).
- This appears not to have affected anyone who updated via iTunes (as opposed to "over the air").
So how did this happen? Well, let's guess.
Thursday, September 25, 2014
Wednesday, September 24, 2014
Wow.
Hard to believe this is the official Apple solution to the 8.0.1-gate.
I did this hours ago, and it worked, but it's shocking that Apple couldn't rush out an over-the-air fix for those who don't have access to iTunes.
http://support.apple.com/kb/HT6487?viewlocale=en_US&locale=en_US
Do not update iPhone 6's to iOS 8.0.1!!!
You will not be able to connect to the cell network, and touch ID will break. Don't do it!
Damn, Apple. Get some quality control!
Tuesday, September 23, 2014
Apple is Doomed: iPhone 6 Sucks
It's that time of year again, when the iSheep and disciples of the cult-of-Steve gather for their annual ritual of putting on fresh underpants and meandering from their parents' basements to wait in line to fork over unreasonable sums of cash for the latest Cupertino-designed status symbol.
No, I don't think Samsung is fabbing any A8s
There have been some reports today that Samsung is producing 40% of the A8 dies (apparently sourced from IHS iSuppli, via Re/code). I've seen no photographic evidence of this, whereas I have seen evidence (from ChipWorks) that the A8 is manufactured by TSMC.
I doubt this Samsung "news."
First, it's not clear to me that IHS iSuppli even has the technical capability to make this determination, so they may simply be reacting to whispers from the magical "supply chain sources."
Second, TSMC and Samsung have different "design rules." "Design Rules" refers to the set of rules that chip designers must follow. For example, a design rule may specify that the minimum surface area of any piece of metal must be "X" or the spacing between two pieces of metal must be "Y" on metal layer 3. Different fabs also use different processes, meaning wire heights are different, transistors have different properties, etc.
It would be extremely difficult to design for two different fabs, each using a different process, simultaneously. The design effort would be around 90% of the design effort of designing two completely different chips. The only way to do it practically would be for Apple to create their own "worst case" design rules - a sort of "least common denominator." This would leave a lot of performance and power savings on the table. This would still entail difficulties as path timings will differ between the fabs, and fixing a critical path on Fab A's chip could cause a hold time violation on Fab B's chip.
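To make the "least common denominator" idea concrete, here is a toy sketch of how a worst-case rule set could be derived from two fabs' rules. The rule names and numbers are hypothetical, not TSMC's or Samsung's actual (and confidential) design rules:

```python
# Toy illustration of deriving "worst case" design rules for a dual-fab design.
# Rule names and values are made up; real rule decks contain thousands of
# entries per metal layer, via layer, and device type.

fab_a = {"m3_min_width_nm": 50, "m3_min_spacing_nm": 56, "via2_enclosure_nm": 5}
fab_b = {"m3_min_width_nm": 56, "m3_min_spacing_nm": 50, "via2_enclosure_nm": 8}

def worst_case(rules_a, rules_b):
    """For every 'minimum' rule, take the larger of the two values so that a
    single layout is legal at both fabs. Each extra nanometer is area,
    performance, and power left on the table."""
    return {rule: max(rules_a[rule], rules_b[rule]) for rule in rules_a}

print(worst_case(fab_a, fab_b))
# {'m3_min_width_nm': 56, 'm3_min_spacing_nm': 56, 'via2_enclosure_nm': 8}
```

And that only covers the geometric rules; the timing problem described above (a fix for a critical path at one fab creating a hold violation at the other) can't be "merged" away at all.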
Anything's possible, but this seems unlikely.
If it's true, it goes a long way toward explaining why A8 didn't make a much bigger leap in performance over A7.
Monday, September 22, 2014
Neat iPhone 6+ Trick
I'm probably late to the party, but I didn't notice this until just now and wasn't aware of it previously. With iPhone 6+, in landscape mode, if you open a springboard folder with multiple pages it shows parts of subsequent and previous pages.
Neat.
Neat.
Why Macs will Get ARM'd - Interstitial 1 - The A8 Datapoint
Marco Arment is concerned that the benchmark results for the A8 may mean the era of rapid performance improvement in mobile is over.
He provides a graph, produced using data from GeekBench Browser, that looks something like this:
Now, even if we give a single benchmark undue weight as accurately indicative of overall performance, this is a bit misleading. First, there is a lot of low-hanging fruit the designers of Apple's processors can pick when market conditions force them to. One of the most obvious is adding more cores. Even though adding more cores doesn't necessarily provide linear improvement, it's pretty linear up to around 6 or 8 cores. So let's normalize the graph to emphasize performance per core to see how technology has advanced:
A bit of a drop-off in rate of improvement, but nothing to get too panicked about yet.
But what about clock speed? The designers may have chosen to simply clock at a lower rate so that the iPhone could be marketed as having better battery life this time around (which it does have). If the battery life were reduced to iPhone 5S-like levels, presumably the chip could be clocked faster.
To get an understanding of this, let's look at performance per core per MHz:
Well that looks worse!
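For reference, the normalizations in these graphs are just arithmetic on the raw multi-core score; a quick sketch (the scores below are placeholders rather than actual Geekbench results, and the core counts and clocks are the commonly reported figures for A7 and A8):

```python
# Sketch of the per-core and per-core-per-MHz normalizations discussed above.
# Replace the placeholder scores with real Geekbench Browser numbers to
# reproduce the graphs.

chips = {
    # name: (multicore_score, cores, clock_mhz)
    "A7": (2500, 2, 1300),  # placeholder score; ~1.3 GHz dual core
    "A8": (2900, 2, 1400),  # placeholder score; ~1.4 GHz dual core
}

for name, (score, cores, mhz) in chips.items():
    per_core = score / cores
    per_core_per_mhz = per_core / mhz
    print(f"{name}: per-core = {per_core:.0f}, per-core-per-MHz = {per_core_per_mhz:.3f}")
```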
So what's going on here? I think two things.
First, remember that unlike every prior iPhone chip, this one is fabbed not by Samsung but by TSMC. I'm sure TSMC is doing a fine job, but any time you switch fabs like that all the "design rules" change, which means your entire chip layout has to be redone. This is a very intensive job, requiring the designers to almost start from scratch.
I suspect that a closer look at A8 would find that its cores are very similar to A7's, with perhaps some bug fixes and fine tuning. Some regular structures may have been increased in size (certainly have, based on the 2 billion transistor count), including things like caches and buffers. This will cause a performance-per-clock-cycle (or, as chip designers call it, IPC) improvement, which means that performance improved both because of the 7.7% increase in clock speed and because the chip can do a little more work during each clock cycle (on average). That, and a battery life improvement (due to decreased device leakage and lower capacitance to charge and discharge each cycle because of the smaller feature sizes), were probably "good enough" for Apple this time around, especially since (1) the A7 is still near the top of the pack in performance in the smartphone market and (2) Apple had some other big things to crow about in selling the iPhone 6 this time around.
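Put another way, delivered performance is roughly IPC times clock frequency, so the two effects compound. With the clock up 7.7%, even a modest IPC gain (the 5% below is purely hypothetical) lands you in the low teens:

$$\text{perf} \approx \text{IPC} \times f_{\text{clk}}, \qquad 1.077 \times 1.05 \approx 1.13$$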
Whether this is a one-time thing or the beginning of an Intel-like "tick-tock" cycle is hard to say. I assume A9 will be fabbed on TSMC's 16nm process (there are rumors to the contrary, however), so we could be looking at a one-time blip. Or we may be at the end of the "free candy" stage, where Apple's design magic (using the techniques used at high-end microprocessor shops like AMD, rather than the ASIC design methodology used by almost everyone else) can keep ratcheting up the heat on its competition.
I suspect, however, that we'll see A9 with around 10-15% per-core-per-clock improvement in performance, and with four cores.
We'll see!
As for what this means about a future laptop or desktop processor: not much. The A-series have a very low thermal envelope and Apple's designers are much more constrained than they would be for a laptop processor.
For more analysis of the Mac's future on ARM, look over here.
Security and Privacy: Apple's Practices and Techniques
Introduction
In a series of posts investigating various Apple patents and patent applications, we happened to touch upon certain aspects of the security and privacy techniques that are built into Apple's products:
On Sept. 17, Apple rolled out an entire section of its website devoted to explaining its privacy policies and technologies. This coincided with several important improvements to the techniques it is using.
The reaction to this was polarized. On the one hand, many applauded Apple not just for taking strong steps to secure and protect users' data, but for being open and transparent about what is protected and how. On the other hand, many doubted Apple's motives, insisting that there must be some sort of loophole, or that Apple is lying, or that this is all some sort of insidious Apple plot. Some mistakenly compared this to Google's disclosures, which do little to explain how things are actually protected (and which, by the way, describe systems in which data is substantively less protected than in Apple's).
In this report we'll take a closer look at Apple's practices, with a careful review of Apple's recently-published iOS Security white paper.
Saturday, September 20, 2014
A Day at the Mall
Valley Fair Mall, San Jose/Santa Clara California
Sept. 20, 2014
Samsung:
200' Away - Microsoft:
Just across from the Microsoft Store, the Apple Store:
Friday, September 19, 2014
Simultaneous voice and data on Verizon with iPhone 6!
All you have to do is go into Settings | Cellular and make sure LTE is enabled for both voice and data. It takes a few minutes for the change to take effect on Verizon's servers (which also seems to add HD Voice).
I tested this and it works.
Wednesday, September 17, 2014
Update on Apple Security Technologies
Apple has posted a lengthy white paper describing the various security and privacy methods it uses in its software and hardware. The paper may be found here:
http://images.apple.com/privacy/docs/iOS_Security_Guide_Sept_2014.pdf
I will have a full report once I've had an opportunity to digest it, but I note that, as I surmised in this post, based on one of Apple's patents, Apple is indeed using a "mailbox" technique to isolate its Secure Enclave from the CPU.
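For readers unfamiliar with the term: a "mailbox" means the application processor never addresses the Secure Enclave's memory at all; it can only deposit a request in a shared slot and later read back a reply. Here's a toy sketch of the general pattern (my illustration of the concept, not Apple's actual implementation):

```python
# Toy sketch of a mailbox interface between an untrusted application
# processor (AP) and an isolated secure processor. The mailbox is the only
# shared state; the AP never reads or writes the secure side's memory.

import queue

class Mailbox:
    def __init__(self):
        self._requests = queue.Queue()
        self._replies = queue.Queue()

    # AP side: may only post requests and read replies.
    def post_request(self, msg):
        self._requests.put(msg)

    def read_reply(self):
        return self._replies.get()

    # Secure side: drains requests, consults private state, posts replies.
    def service(self, handler):
        while not self._requests.empty():
            self._replies.put(handler(self._requests.get()))

SECRET_TEMPLATE = "fingerprint-template"  # exists only on the secure side

def secure_handler(request):
    # The verdict crosses the mailbox; the secret template never does.
    if request["op"] == "verify_fingerprint":
        return {"result": request["scan"] == SECRET_TEMPLATE}
    return {"error": "unsupported"}

mb = Mailbox()
mb.post_request({"op": "verify_fingerprint", "scan": "fingerprint-template"})
mb.service(secure_handler)
print(mb.read_reply())  # {'result': True}
```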
Pay Technology - Use of NFC
Once again we take a peek at the technology behind Apple's latest innovations, again focussing on Pay. This time we get some insight into the use of NFC courtesy of a recent Apple patent application, U.S. Pat. Pub. No. 2014/0019367.
See previous posts in this series:
Pay Technology: The Secure Enclave Processor
Pay - Tokenization for Security
Trouble in HealthKit land?
According to this article at MacRumors, HealthKit apps have been pulled from the App Store as iOS 8.0 is released, apparently because of some flaw in HealthKit itself.
I've been working on a HealthKit app all summer (and have lost 40 pounds and 15 mm Hg off my blood pressure because of it), and will report on my app soon.
In the meantime, based on my (now) extensive experience with the HealthKit API's, which act as a sort of central database for health-related information, I think there are two possibilities:
- Data loss. From time-to-time with various iOS betas I've experienced problems where data has disappeared from my device.
- Leakiness. Two sub-possibilities. Either there is some side channel vector that allows unauthorized access to HealthKit data (which would obviously be problematic) or Apple just discovered some flaw in the app review system that should probably be checking to make sure HealthKit apps don't do things like take HealthKit data and put it in the cloud.
Depending on what the issue is, it sounds like an iOS point update may be required to resolve it, which could result in delay of at least a week.
Update: Looks like it was either data loss or leakiness due to side channel, since Apple is saying it may take a couple of weeks to fix via software update.
Why Macs Will Get ARM'd, Part III
This is Part III of a continuing series of articles that explain why it is likely that Apple will port its Mac line of desktop and laptop computers from x86 to the ARM architecture, and why it would be beneficial, both to customers and to Apple, for it to do so.
Here are quick links to the prior parts. I suggest reading them before reading this part.
Part I: Why Apple is Motivated
Part II: Why ARM is a Better Solution for Apple
In this Part III, I will discuss how this could work from a software point of view. Part IV will discuss hardware options.
Monday, September 15, 2014
Pay Technology: The Secure Enclave Processor
This is part of a continuing series of articles focussing on discrete technologies developed by Apple in support of its products. The first article examined Apple's patent relating to tokenization and extrapolated from this how Pay appears to work.
This time we'll look at the Secure Enclave, basing the analysis on Apple's U.S. Patent No. 8,832,465 invented by, among others, my former colleague Manu Gulati.
iOS 9 "Predictions"
Okay, maybe more like a wishlist.
iOS 8 goes a long way toward making iOS devices more useful for more types of tasks, but there is still a lot of low hanging fruit that can be picked. Here are a few things I hope we see demoed at the next WWDC.
Apple sells 4 million iPhone 6's on first day of preorders
How's the latest iPhone killer - I guess it's Amazon's Fire Phone - doing?
Sunday, September 14, 2014
Wrongness Hall of Fame
Today we induct venerable tech opinion writer and long-time PC Magazine columnist John C. Dvorak. Mr. Dvorak has raised the art of being wrong to a science, and, as only the greats can manage, he carries on with his wild predictions despite being wrong so many times in the past, and continues to spout as fact statements that are easily disproven by even cursory examination. Yes, being inducted into the Wrongness Hall of Fame requires more than being repeatedly wrong; it requires a complete and utter lack of shame and an unwillingness to even try to get the facts straight.
And Mr. Dvorak has what it takes.
Perhaps Mr. Dvorak's proficiency at wrongitude is most easily understood by embarking on an illuminating journey through his wrongness. We'll finish with the cherry on top of this fetid falsity sundae: his latest missive, this time on the Watch.
Saturday, September 13, 2014
Shipping of initial iPhone 6's Beginning
Judging by my own order status, which was set at "Processing" a few hours ago, Apple is getting ready to ship iPhones for deliveries on Sept. 19.
Friday, September 12, 2014
Moto 360 Battery Life
From Engadget:
The official specs state 320mAh, though a recent teardown reveals that the battery actually has "300mAh" printed on it. Motorola's official statement is that the battery offers a range of 300 to 320mAh, but opted to print just "300mAh" due to the lack of space.
Ok.
Pay - Tokenization for Security
Apple's new payment system, Pay, incorporates numerous technologies to improve security. As someone who's had to try and remember all of his credit-card autopay's and go through them one-by-one to update account information - twice in a year - due to theft of credit card information from various retailers, I appreciate this. Contrary to a dumb article at the New York Times (no link to dumbness), part of the cost of security breaches is borne by the consumer, in the time and energy spent dealing with the aftermath.
One of the techniques used in Pay is "one time tokens."
What's that?
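The general idea, as Apple described it at the announcement, is that the merchant never sees your real card number: the device holds a device-only account number, and each transaction carries a single-use code. Here's a toy sketch of that general pattern (my illustration of the concept, with hypothetical names, not Apple's actual scheme):

```python
# Conceptual sketch of payment tokenization with one-time transaction codes.
# Real systems (EMV payment tokenization) involve the card networks, issuer
# keys, and a hardware secure element; this just shows the shape of the idea.

import hmac, hashlib, secrets

class TokenizedCard:
    def __init__(self, real_pan):
        # The real card number is exchanged once, with the issuer, for a
        # device-only account number and a per-device key kept on the device.
        self.device_account_number = "4111" + str(secrets.randbelow(10**12)).zfill(12)
        self._device_key = secrets.token_bytes(32)  # never leaves the device
        self._counter = 0
        del real_pan  # the real number isn't stored on the device at all

    def one_time_code(self, amount_cents, merchant_id):
        # Every payment gets a fresh code; capturing and replaying it is useless.
        self._counter += 1
        msg = f"{self.device_account_number}|{amount_cents}|{merchant_id}|{self._counter}"
        return hmac.new(self._device_key, msg.encode(), hashlib.sha256).hexdigest()[:16]

card = TokenizedCard("4242 4242 4242 4242")
print(card.device_account_number)               # what the merchant's terminal sees
print(card.one_time_code(2599, "GROCERY-123"))  # single-use code for one purchase
```

A merchant breach then yields only device account numbers and already-spent codes - nothing that lets a thief mint new charges or that forces you to update every autopay again.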

Keep it Classy, Walmart and Best Buy
Given the joyous retail experience provided by Wal-Mart and Best Buy, and given how trustworthy they are (I almost made the mistake of not buying the $200 gold-tipped audio cable and the accompanying extended service plan, but Best Buy set me straight), it's shocking that they wouldn't get behind Pay.
Will Stores Warm Up to Apple Pay? - WSJ
Thursday, September 11, 2014
Watch: Design is What You Leave On the Cutting Room Floor
Absence reveals purpose
Looking at what isn't (as far as we know) in the Watch tells us a lot about the problem the device is designed to solve. Unfortunately, unlike in some past keynotes, this time Apple didn't start their presentation by telling us what real-world problems the device is intended to solve. Which is not the same as saying that Apple didn't have such a vision in mind before they started designing; the fact that they left so many "features" on the cutting-room floor tells me they had a clear understanding of what the watch was intended to be. This is not surprising, as it's part of Apple's DNA. Apple ruthlessly cuts features that don't further their products' ability to solve the problems that Apple intends them to solve.
Wednesday, September 10, 2014
On Watch and User Interfaces
The second most interesting thing is the user interface Apple provided for this new device. Apple clearly put a lot of thought into it, in stark contrast to many of its competitors.
I've long had an interest in human machine interfaces and user interface design. It started when I wrote and co-wrote a series of tools at AMD that were used by the chip designers to make them more productive. These were very visual interfaces, designed to provide immediate visual feedback, in contrast to our existing tools which were largely text based or batch-oriented. At the time I learned a lot by trial and error, and began to read a lot of books and articles about designing products for use by humans. I learned what Apple already knows, of course - the design isn't about what it looks like, but about how it works.
As Tim Cook pointed out at the Watch announcement, Apple has a long history of tailoring the interface to the device. iPods had click wheels, not keyboards and buttons, because the most common task was scrolling through long lists of music. Macs use keyboards and mice because long sessions of touching the screen are uncomfortable. iPhones got touch, because the large screen allows one to see the underlying information without a finger getting in the way, and multitouch because it's an incredibly natural way for humans to interact with information.
But some of the innovations in Watch may someday find their way into iPhone or iPad.
The Digital Crown
The Digital Crown is a direct digital equivalent to the crown found on the vast majority of traditional wristwatches.
On traditional wristwatches, the crown was initially a superior alternative to keys that had to be inserted into pocket watches to wind the spring. The crown acted as a permanently available key to allow winding the spring. The crown also allows setting the time, and, where applicable, the date.
The crown thus universally supports rotation, usually free rotation. The crown can also typically be pulled out to different detented positions to select its function - winding, time-setting, date-setting, etc.
Some more-modern watches use a rotating bezel around the face of the crystal as a method of user interface. I don't doubt Apple considered that as well and rejected it due to cost, reliability, or aesthetics. Additionally, rotating the bezel risks picking up false touches on the crystal. Still, it would have been a nice callback to the iPod scroll wheel, and I'm sure Apple thought about it.
In any event, as Tim Cook pointed out, the advantage of the Digital Crown is that it is out of the plane of the screen. Given the small screen, it's very easy to block out everything when touching it. This is different than the iPhone, where you can place your finger on a different part of the screen, someplace you aren't looking, to scroll.
Apple says the Digital Crown can be used to "zoom, scroll, and select." It appears that the choice between these behaviors depends on context. There doesn't seem to be a way to scroll and zoom an image at the same time using just the crown. It appears scrolling is only for lists, whereas anything else, where the information exceeds the screen window in both dimensions, is zoomed. As far as "selecting," I assume this means "moving the selection pointer among list items." This is similar to scrolling, in some sense.
The Digital Crown can also be pressed to return to the home screen. In some sense the Digital Crown is thus like the iPhone home button, if we imagine that Apple added some software subroutines to use the Touch ID sensor to detect swipes (as can be done on some Android phones). Of course, this is a difference between Apple and its competitors. Using a touch sensor for scrolling is infinitely worse than using touch to scroll on a large screen phone. Just because it could be done doesn't mean you have to implement it.
Note that Apple also gave consideration to the 10% of the population that is left-handed. Simply flip the watch upside down, reverse the band, and you can put it in left-handed mode. The Digital Crown's rotation then takes on the reverse meaning. The only difference is that "the button" (more about that soon) is then above, not below, the Digital Crown. This, by the way, is a trick you can't do on traditional watches.
The Button
Like the iPhone and iPad, the Watch provides a "Button." At the moment only two purposes of the button have been revealed. First, pressing the Button brings up a list of "Friends" (apparently favorite contacts) that you can contact.
The second purpose revealed so far involves a double-press, which is used during the payments process. (Interesting that there is no Touch ID sensor, by the way. I suppose it's assumed that your watch is harder to get off of your wrist, and less likely to be left behind, than a phone).
I suspect the Button will have other uses - for example resetting the device, powering it on and off, and the like. It would be unusual for Apple to dedicate an entire button to just bring up Friends.
Presumably there won't be much confusion between buttons, as the "home" function, which is used more often, will be more noticeable to touch as it protrudes further.
Still, if there's one question that jumps out at me it's the decision to locate two buttons so close to each other. It may have made more sense for the button to be on the other side, to eliminate the cognitive burden of having to remember which is which; it's far easier to remember right vs. left. And why not assign that button to bring up the watch app? Wouldn't that be more often useful than the Friends feature? Maybe press-and-hold for Friends if Apple thinks it's an important feature?
The Screen

The screen also can detect force, which is a first for an Apple product. This allows distinguishing a tap from a hard press, and is used by the Watch to bring up context-sensitive information and settings, much like a mouse or trackpad right-click. For example, it allows bringing up action menus in Messages, and changing watch faces (from the watch app, presumably?)
It remains to be seen whether these interactions make more sense than if Apple chose to use the button, and it's not clear how the user will know when hard press is available (or if they have to guess).
This ability to detect force would certainly be useful on iPhone/iPad, both in artistic apps to control brush pressure and ink flow, and in the user interface where it would allow all sorts of new interactions with iOS controls.
It's not currently clear what the resolution of the pressure detection is; are there only two levels of pressure detected, or is the device capable of resolving finer differences in pressure?
Glances

It would appear to have little applicability to other Apple devices, which already support "widgets" of sorts in the notification center. There may be some cognitive burden dealing with notifications vs. "glances," however, as discussed below.
Notifications
Like iOS 8 and OS X Yosemite, Watch supports actionable notifications. Notifications are accompanied by audio cues and haptics ("a gentle tap"). The SDKs will provide app developers the ability to direct actionable notifications from iPhone to the watch. A swipe down from the top pulls up a notification center, similar to the gesture on iOS devices.
Taptic Engine
The Watch provides feedback and requests your attention using a combination of haptic mechanisms and "subtle audio cues," which Apple refers to as the "Taptic Engine." This is used both in the traditional haptic feedback sense - providing feedback when you turn the Digital Crown, for instance - and as an alert mechanism (sort of like a much more subtle version of iPhone's alert vibrations). It appears that a wide variety of haptic responses are possible, and Apple refers to being able to navigate without looking at the screen just by feeling different touches for left and right turns.
This is a form of communication that simply isn't possible on iPhone or any other device that isn't pressed securely against one's bare skin. If it works as advertised, it's a very clever way of overcoming some of the social implications of wearable computing; one doesn't want a device to provide an alert signal noticeable to those around him or her, and it's rude to constantly "poll" to see if anything new has come across the transom. Some other smartwatches have provided vibrating alerts, but this seems much more advanced and capable of conveying information about the nature, and sometimes the contents, of the notification in a way that doesn't require visually checking the device.
MagSafe Inductive Charging
I'm somewhat surprised Apple hasn't come up with a clever name for this - Retina MagSafe? This is a particularly clever solution to the charging problem. While others have done inductive charging, to my knowledge none have combined it with a magnet in this way.
One of the primary problems with inductive charging is waste heat. Inductive charging takes advantage of a high school physics principle to induce an electric current in one coil of wire by creating a changing electromagnetic field in another. Unfortunately, coils of wire have resistance, so some of the energy used to create the alternating current that is used to create the electromagnetic field is lost to heat in the wire. This effect can be worsened if the coil must be bigger than otherwise necessary, say when the system works by putting the object-to-be-charged down on some sort of pad. Since the exact position of the coil in the object-to-be-charged can't be known ahead of time, the electromagnetic field is spatially "bigger" than it has to be, meaning the coils are likely to be longer and have more resistance. This means that a particular power plug will charge the object more slowly than it could otherwise.
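To put a little arithmetic behind that: resistive loss in the coil scales with the square of the charging current and linearly with the wire's resistance, and resistance grows with wire length, so roughly (ignoring coupling efficiency and skin effect):

$$P_{\text{loss}} = I^{2}R, \qquad R = \frac{\rho\,\ell}{A}$$

A physically larger coil means a longer wire, a larger R, and more heat at the same current - a penalty a pad-style charger with an uncertain coil position pays all the time, and a snapped-into-place magnetic coil largely avoids.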
Moreover, there's an inconvenience factor, which is more of an issue for something like an iPhone or iPad than the Watch. If the object-to-be-charged has to be flat on top of a charging pad (or floating pretty closely above) to charge, it's hard to use the object when it charges. Wired chargers don't have this problem.
Apple's MagSafe inductive charger neatly solves both of these problems. The coils are cleanly lined up and minimally-sized. You can play around with the watch while it continues to charge because the wire will be firmly attached at all times.
Because Apple has solved these problems, I would expect iPhones and iPads to adopt the same technology eventually. The primary hold-up is likely to be the higher charging current that must be generated for these devices, particularly the iPad. This may require a bigger connection just to handle the appropriate current levels in the coils.
Home Screen
The Watch home screen is in many ways the spiritual successor to "springboard," the home screen on iPads and iPhones.
It consists of a grid (this time non-Manhattan) of app icons. The icons are round, here, which is all the rage these days (just look at the avatars in your favorite social network). There are several other differences from springboard, however.
First, the grid is, it appears, "infinite." One doesn't swipe between pages (I think) but rather sweeps along a space containing all these icons. The icons are positioned in consecutive rings, with outer rings being smaller than the inner ones. One pans along using one's finger on the screen, and zooms in and out using the Digital Crown.
In some sense the icons can be thought of being arranged on a grid superimposed over a geodesic dome, with the center of the screen being the point on the dome closest to you and the outer ring being furthest away. (This isn't to say that the animations associated with scrolling adopt that paradigm, though. They don't.)
In some sense this may make more sense on iPhone than on iWatch; organization (as by assigning to pages) seems more important on a device with such a small screen. It's not clear whether "folders" exist, but I sense they currently do not. Icons can be arranged using the traditional press-to-jiggle method, it looks like.
I suspect Apple started with the idea of an infinite plane, with the concept that sizes of icons would provide spatial cues, and quickly realized that round icons allow more per-screen than square. Maybe they were inspired by honeycombs, or by hexagonally-shaped icons that have appeared in various devices from time-to-time.
It's interesting to consider whether this system could be adapted for iPhone/iPad (which is probably overdue for improvements to the home screen) and, if so, whether it would provide any advantages over the Manhattan grid.
Conclusion
There are a lot of questions left unanswered about Watch, and time will tell. Pebble, for example, changed its button functionality via software updates along the way (which I found very confusing at the time, by the way), and Apple may change many things between now and release day. And it's hard to judge the effectiveness of a user interface without actually using the device. Still, Watch presents several intriguing possibilities for future iPhone and iPad iterations, and shows possible directions Apple may take with other types of devices in the future. As we learn more, I'll dive deeper into some of these aspects and let you know what I think. (Apple, how about a simulator in Xcode?! I'm waiting...) One day soon I also expect to provide my thoughts on Apple's design decisions - almost as interesting as what the Watch is, is what it's not.
Update
Please see also Watch: Design is What You Leave On the Cutting Room Floor, for my analysis of what features Apple left off and why, what they will add in the future and why, and what the purpose of the Watch really is.
Why Macs will get ARM'd, part II
Update:
Be sure to check out Part III, in which I work through how this might look from a software perspective.
Last time I explained why, fundamentally, there is no reason that an Apple-designed ARM chip destined for laptops or desktops needs to have less performance than an x86 chip. I'm not a writer, and my words were misunderstood by some, so to clarify the point, I was merely explaining why a future ARM chip, designed to be different than all currently existing ARM chips, could fill this role. I was not suggesting that current ARM chips on the market are already good enough.
And, to be clear, I was merely addressing what is philosophically possible. There would certainly be some problems to be addressed, and practical matters like patents, poor fab relationships, or other factors could certainly mean that while possible in a theoretical sense, such a chip might be very difficult to achieve. Personally, I think it could be done, but that's a whole other conversation.
In this part of the analysis, I will discuss why, all else being equal - that is, with equivalent or nearly equivalent fab technology, equivalent design methodologies, similar design goals - an ARM chip would actually have an advantage in performance at any desired power consumption as compared to an x86-based chip.
One last note in response to some comments I received - I'm not biased against x86 or pro ARM. Such an accusation is weird - while I spent 9 years designing x86 chips (and a few years designing various RISC chips including PowerPC), I've never designed an ARM chip. I have no skin in the game, either - I am no longer in that industry.
Tuesday, September 9, 2014
2011 MacBook Pro Fiasco
My first modern-era Mac was a 2007 17" MacBook Pro. (In the 1990's I owned a Mac something-or-other which I inherited at the demise of Exponential Technology). It ran Leopard, and came with the safety net of Boot Camp. As an engineer I was a big UNIX guy (real UNIX, not this new-fangled Linux stuff), and I had been very interested in acquiring a Mac because I understood that its NeXT roots would give me reasonable comfort via the Terminal and related utilities. And I had reached the point where I was tired of my family's plasticky Windows notebooks needing to be constantly regedit'd, OS-reinstalled, rolled back, or replaced for broken bits and doodads.
I loved my 2007 MBP, right up until the screen started acting all crazy and I couldn't get it to boot anymore. I fell victim to the great 2007/2008 MBP GPU screw-up. Apple fixed it under AppleCare and/or their special repair program addressed to this situation (either would have worked), and I kept using the machine up until 2011 when bulging batteries and slow performance convinced me it was time to move on.
In 2011 I purchased an "early-2011" MBP 17".
At least as far as choosing which MBP's to purchase, I apparently have the worst luck.
About a year and a half into my ownership, the machine died with the same symptoms as my 2007 MBP. Apple replaced the motherboard under AppleCare (twice - the first time they gave me a slower CPU than the one I gave them, and luckily I noticed). At the time they told me it otherwise would have cost me $800 to repair. Then, again, 37 months into ownership, it died again. Apple again replaced the motherboard for free, because my AppleCare had ended only a month earlier. This time a paid repair would have cost $310.
Now it's five months later, and it's dying again.
It always starts the same way - some graphics glitches, sometimes rendering the machine unusable and forcing a restart. Over time, the frequency of these problems increases - happened twice to me yesterday. It gets exponentially worse, until the machine simply can't be started.
Although I have my 2013 MacBook Air as backup, I decided to just go ahead and order a new 2014 retina MacBook Pro as a replacement. I can use my wife's education discount, and if I order mine today I get a $100 Apple gift card. I'm hoping it arrives while the old machine still works, to simplify transfer of the system (via Migration Assistant) to the new machine.
In the meantime, I'm using gfxcardstatus to lock the machine into "integrated graphics mode" in the hopes that this will prolong its life a bit.
When the new machine arrives, I'll have to install Yosemite and then use Migration Assistant to copy everything over. I'll also de-authorize the old machine as an iTunes machine (I always forget to do that). I'll review the new machine and compare performance and usability to the old - is the 15" rMBP with scaling set to maximum a decent alternative to the old 17" for those of us who need lots of screen real estate?
Then I'll go fight with Apple about getting the old machine fixed (again) or replaced (unlikely). I suspect they'll fix it for free if I'm polite, point out this is the third failure, and am persistent. But it's about time for Apple to acknowledge that this is a design flaw and do the right thing. These machines are still very fast and should have a lifespan longer than three years, especially given the premium price Apple charges.
If I get the machine fixed, it will likely sit in a closet - can't rely on it to replace the 2009 MBP 15" I use as an iTunes/Apple TV server. If they replace it, I'll probably use the replacement to replace the server.
- http://appleinsider.com/articles/14/05/13/apple-remains-mum-as-complaints-mount-over-2011-macbook-pro-gpu-failures
- https://discussions.apple.com/thread/4766577?start=0&tstart=0
- http://action.mbp2011.com
- http://www.macrumors.com/2014/01/17/2011-macbook-pro-gpu-glitches/
- http://www.change.org/p/timothy-d-cook-replace-or-fix-all-2011-macbook-pro-with-graphics-failure
Apple Mega-Announcement
Today Apple will announce some new stuff (predictions here).
When they make their announcements I'll post my reactions by updating this post.
The event is scheduled to start at 10:00 AM Pacific.
11:46 AM
U2 is performing. Guess we won't learn about the cube.
11:44 AM
One more one more thing? Music.
Maybe we learn what the giant white cube is?
U2. So what.
11:43 AM
Pay works with Watch.
11:39 AM
Walkie Talkie, control Apple TV, viewfinder for iPhone camera....
Works with Handoff.
11:33 AM
I wonder if iPhone springboard will eventually work more like iWatch - endless sea of icons instead of Manhattan grid on pages.
Now talking about health. Fitness App and Workout App.
Works with any iPhone 5 or 6.
Starts at $349. Early 2015.
11:30 AM
WatchKit for developers. Apps, notifications and glances.
Twitter being demo'd. American Airlines checking in by tapping on watch. Starwood Hotels checkin and unlock door by waving watch in front of the door.
City Mapper, reminds you to get off at right stop. Pinterest can remind you of sites to see. MLB sports scores (nice).
11:20 AM
Astronomy watch face is very cool. Katie will love it. Swipe up for "glances" (sort of like iOS notification center - important information). "Taptic" engine notifies you of incoming notifications.
Can act on notifications (Accept invites, etc.) Quick replies to messages.
Can use dictation to reply (or can send audio recording). Animated emoji.
Siri built in.
Maps built in. Calculates directions, etc. While walking, it will give you taptic feedback as to which way to go, so you don't need to look at screen.
New form of communication - digital touch. Press button under digital crown. See friends list. Tap friend. Can choose "digital touch" in addition to email, text, etc. When you tap screen, he feels it. Sort of like tactile SMS. Can send your heart beat so it can be felt on someone else's wrist.
11:16 AM
Live demo time.
Apple watch requires iPhone.
Force-touch to customize watch face.
11:13 AM
Many watch faces. Choice of bands. Nice clasps. Many different buckles and clasps, actually. Some have magnetic closures. Two different sizes of watches.
Three editions: watch, watch sport, and watch edition.
Watch is stainless steel.
Sport is durable and light.
Edition is special hard gold.
11:03 AM
"Digital crown" - infrared LEDs / photodiodes that translate rotary movement. Rotate to zoom or scroll.
Press to go home. Home screen is cute (depicted in the 11:00 AM photo below).
Display turns on when you raise your wrist. Navigation/animations very smooth, from videos anyway.
Shortcut button under the crown allows you to get shortcuts to contacts. Can send heartbeat, "pokes" and doodles to others.
11:00 AM
Watch
Not "iWatch"
Crowd going nuts. It sure is pretty.
Not iPay and iWatch. End of "iNames"?
Accurate to plus/minus 50ms. Very customizable.
Brand new user interface.
10:58 AM
Wow. That is a pretty watch.
10:56 AM
One more thing... (Tim Cook almost cried)
10:53 AM
On-line payments - one touch checkout with Pay. Same advantages. Target, Groupon, Panera, Uber, MLB, Apple Store, OpenTable adding "Buy with Pay" buttons. Will be able to use OpenTable app to pay your check at participating restaurants.
New Pay SDK available to all developers. Launching next month (only iPhone 6 and 6 Plus, but I suspect the on-line stuff will work on 5S as well).
10:49 AM
New chip: "Secure Element" built into iPhone 6. Keeps all your credit card info encrypted. Can view cards in Passbook. Add card from iTunes account. Can also add a new card by taking picture of the card with the phone camera. Adds it to Passbook.
Select card from Passbook, touch the Touch ID and you are done paying.
They don't store the credit card - they create a "device only" account number and use a one-time payment ID and dynamic secure ID. Can suspend all payments from iPhone with Find My iPhone. If phone lost, no need to cancel credit card, because card isn't stored on phone.
Apple doesn't know what you bought, how much you paid, or where you bought it. Cashier doesn't see your name, credit card number or security code.
Starting with AmEx, Visa and MasterCard. 83% of credit card volume in the U.S. covered. Works in 220,000 merchant locations right now. Adding Macy's, Bloomingdales, Walgreens, DuaneReade, Staples, Subway, McDonalds, Whole Foods, Apple Stores, Disney, Toys R Us, Sephora, etc.
10:47 AM
New process is called "Pay" (ApplePay). Just hold your phone near the terminal at the same time your finger is on the Touch ID button, looks like.
10:45 AM
Apple's vision is to "replace the wallet," starting with payments. I now have John Appleseed's credit card number. I'm going shopping.
10:44 AM
Apple plays a funny iPhone ad re: Health app. (Jimmy Fallon and Justin Timberlake)
New category of service: it's about the wallet....
10:42 AM
Sept. 17 official iOS 8 release date. I assume developers will get it sooner. Maybe later today.
10:38 AM
The front-facing selfie camera is better as well. New silicone cases. Whatever. Same colors as before. iPhone 6 starts at $199 for 16GB with contract. $299 gets you 64GB. $399 is 128GB.
iPhone 6 Plus starts at $299. Same memory configurations. iPhone 5S now $99. iPhone 5C now free with contract.
Sept. 19 ship date.
Preorder on Sept. 12. (Hey, Siri, wake me at 11:55PM on Sept. 11).
10:35 AM
Entire Apple website seems to be broken now. Someone's getting fired.
10:31 AM
43-megapixel panoramas. Dedicated face detection hardware. The iPhone 6 Plus has optical image stabilization. Nice. 1080p at 60fps, and slo-mo is now 240fps (up from 120fps).
10:29 AM
New camera - 8MP, 1.5 micron pixels, f/2.2. New sensor has dedicated focus pixels for phase-detection autofocus. Wonder if it combines that with contrast-detect?
10:22 AM
I wonder if the various "wide screen" and "one handed use" stuff will make its way to the iPad or iPad mini? The widescreen keyboard seems needlessly cluttered. It's like they had the extra space and figured they had to use it for something. The A8 has 2 billion transistors. Wonder how much of that is logic and how much is cache. Apple made a big point of talking about thermal envelope and sustained maximum frequency this time. Been saying this for a while - Android chips tend to run well only on benchmarks.
iPhone 6 Plus battery life looks good. Double the iPhone 5s for audio, 40% more for video, 20% more for browsing (50% on 3G), 24 hours of 3G talk (vs. 10 hours on the iPhone 5s).
M8 chip.
"Advanced Wireless" - LTE is faster (150Mbps vs 100Mbps in previous). 20 LTE bands - more than any other smartphone (though that doesn't help any particular customer unless they travel). VoLTE support on AT&T, Tmobile and Verizon (yay! Simultaneous voice and data!)
802.11ac wifi support. Fantastic.
Wifi calling, but only on Tmobile for now.
10:16 AM
Double-tap (not press) the home button to shift the display for one-handed use, sounds like (hard to follow the broken livestream).
10:12 AM
4.7" is 1334x750, and 5.5" is 1920x1080. Latter is unexpected. Seems to me that will cause problems for developers.10:09 AM
Livestream is fubar. Now with Chinese live audio translation. Anyway, iPhone 6 and 6 Plus. They look identical to the leaks:
9:56 AM
[Image: Confidence.]
9:51 AM
I believe that this is the first Apple live stream that provides this sort of control from the browser.
9:45 AM
9:42 AM
Seems to be growing consensus that the white box is an Apple Store. On the one hand it's sort of fitting that Cupertino get its own (beyond the company store, which sells t-shirts and Apple logo swag). On the other hand, that's not where you'd put a store (at least not where a normal retailer would). Maybe some sort of Apple museum/archives/library or something? Steve Jobs wasn't one for looking back, but Tim Cook might be. Though I would think, from the location, that the structure isn't permanent?
9:20 AM
What on earth does Marie Claire have to do with the mysterious white block?
9:00 AM
@Tim_Stevens on Twitter reports that there are DSLRs taking pics on timers pointed at the white cube. This should be interesting.
Also, I note that for the first time ever, Apple is liveblogging itself. Apple is often accused of generating hype. In the past, in my opinion, most of the hype was ginned up by the Apple journalism ecosystem (with baseless speculation, "analyst" guesses, etc.) This time there is no question. Apple is raising expectations to a tremendous degree. They must be supremely confident as to what they're about to announce.
8:40 AM
Daniel Eran Dilger (@DanielEran on Twitter) posted this closeup of the mysterious new De Anza structure. Looks like an Apple Store.
Monday, September 8, 2014
Quick update
Part II of my article about Apple, the Mac, and ARM is about ready to go and will be published in a couple of days. Given the happenings tomorrow, not much point in doing it today. Part II covers some details of the differences between RISC and CISC architectures, and why it matters to Apple.
Predictions for Tomorrow
Tomorrow, of course, is Apple's big announcement at the Flint Center in Cupertino. Apple is expected to reveal two new iPhones, one 4.7" and one 5.5" (as compared to the current 4" design), and some sort of wearable device, probably a watch.
Based on reading the rumors, reviewing the iOS 8 developer's documentation, and gut feeling, here are some predictions.
iWatch
There have been rumors of this for a long time, but it's fairly certain that tomorrow's the day. Gruber has essentially confirmed it, Ive is supposedly joking about how Switzerland is in trouble, and Apple built a ginormous new structure in Cupertino just for the announcement.
I expect this thing to have the build quality, design quality, and general workmanship of a $1000 watch, and to be priced starting at around $400. Apple can afford to charge less than Tag Heuer because the innards will cost less, it can trade volume for the "conspicuous consumption" tax, it can use cheaper labor, and it owns or controls massive parts of its supply chain. It can't afford to charge too much because unlike a nice watch that you might expect to own forever and pass down to your kids, any smartwatch will eventually (i.e. in two or three years) be obsoleted and become not very desirable to any but the most fevered collectors.
I expect the watch to have Touch ID functionality under the screen (I believe it will use a normal strap that can be replaced, and none of the device's smarts or battery will be in the strap). I believe it will have a battery life of at least 24 hours, possibly 48. The crystal will be sapphire, like any good watch. The back side will feature sensors for various health-related things, including at least pulse. It will have NFC, both for making payments and for use as a token with other Apple devices. For example, your Mac will be able to tell when you've gone away, and lock the screen. Or that you left your iPhone in the restaurant. You will be able to use it as a key to unlock devices (maybe only as a secondary factor, in addition to a password or Touch ID).
It will be able to pair with an iPhone for use with HealthKit, and to use the iPhone's data connection. The new "auto hot spot" stuff in Yosemite is an indication of how some of this may work. Much like you can set up a new Apple TV by tapping it with an iPhone, you'll be able to do the same with the watch; you won't have to manually type in anything using an awkward built-in keyboard. This may work even without NFC-equipped iPhones, the same way it does with Apple TV. (Some combination of Bonjour, Bluetooth LE, and magic). Very little data will be stored on the watch itself, but it will be able to use the iPhone's network connection to pull down iCloud data, and can access information from the iPhone itself (like music, photos, etc.) (hence a good time for Apple to increase the iPhone's maximum flash memory to 128GB).
It will support Siri as its primary input method, but you'll be able to control it using your iPhone or other Apple device when need be.
It will support third-party apps, and will report itself using the new idiom system as being small in both dimensions (or extra small).
It will have no ports - no wires plug into this thing. It will charge by induction (unless I'm wrong about the strap, in which case there will be some sort of Lightning connector via the strap. Maybe as a backup for when you're on the road and don't want to bring an inductive charger).
There will be a Mickey Mouse watch face.
It will not have a camera (unless just for FaceTime, but I doubt even that).
It will look really nice.
It will be liquid metal.
In four years, everyone will pretend that this is how Google Gear-based watches looked and operated all along.
iPhone
Not much to add here, other than that I doubt the larger phone will be called the "iPhone 6+", and that I think the device will use the barometer primarily for calculating how many flights of stairs you climb each day and for indoor navigation purposes.
[Update: I like to make predictions]
And now, MSN Windows Live Bing MSN - for Workgroups
I can't really add anything to this:
http://www.theverge.com/2014/9/8/6120669/bing-apps-rebranded-to-msn-with-ios-android-versions
Sunday, September 7, 2014
Apple, NFC and Payments
It's beginning to look like Apple will include NFC and a payment system in its new wearable and iPhones, using Touch ID for authentication and the Passbook app (and system integration) for user experience.
A report earlier this week claimed Apple will be able to store credit card information in a "secure enclave" on the A8 chip (similar to how fingerprint representations are currently stored on the A7), and treat credit card transactions implemented using the phone as a "card present" transaction, which means discounted fees from the credit card networks/banks since there's much less likelihood of fraud if the user authenticates by fingerprint.
Today 9to5mac states the NFC payment system will use one-time-use tokens rather than transmitting sensitive credit card information (such as the magic three-or-four digit code on the back of the card), thus preventing the types of breaches we've seen where bad guys have hacked into retailers' back ends to get lists of credit card numbers and authenticating details:
Apple reportedly planning to implement tokenization in NFC payment system | 9to5Mac
A hacker, at best, could only replay the same transaction (and in reality not even that, since the token is presumably time-coded or tied to a central database so it can be executed only once).
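To make the single-use point concrete, here is a minimal sketch of how a token vault might behave, assuming tokens are tracked in a central database; the names and structure are hypothetical and not Apple's actual scheme.

```swift
import Foundation

// Hypothetical sketch of single-use payment tokens; not Apple's actual design.
struct PaymentToken {
    let id: UUID            // opaque value sent in place of the real card number
    let amountCents: Int
    let expiresAt: Date
}

final class TokenVault {
    private var live: [UUID: PaymentToken] = [:]   // central record of unredeemed tokens

    func issueToken(amountCents: Int, ttl: TimeInterval = 60) -> PaymentToken {
        let token = PaymentToken(id: UUID(),
                                 amountCents: amountCents,
                                 expiresAt: Date().addingTimeInterval(ttl))
        live[token.id] = token
        return token
    }

    // Redemption succeeds at most once, and only before the token expires.
    func redeem(_ id: UUID, at now: Date = Date()) -> Bool {
        guard let token = live.removeValue(forKey: id), now < token.expiresAt else {
            return false   // unknown, already spent, or expired: a replay fails here
        }
        return true
    }
}

let vault = TokenVault()
let token = vault.issueToken(amountCents: 1250)
print(vault.redeem(token.id))   // true  - the first use goes through
print(vault.redeem(token.id))   // false - replaying the captured token is rejected
```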
Assuming all of this is true, presumably you will be able to go to many retailers and, using a combination of NFC and iBeacons, bring your phone close to a widget on the counter. A screen will pop up showing you the total to be charged, and you use your fingerprint to complete the transaction, with funds applied either against your iTunes account or from a linked credit card. (Of course I'm speculating). Presumably there is an app that keeps track of all these transactions (there's got to be a good reason we developers haven't seen a beta in a long time), possibly all linked to your iTunes account. In the Apple Store, Nordstrom, and other retailers using Apple POS equipment, perhaps you won't even have to tap your phone on a widget - roaming employees can direct the charge right to your device via iBeacons and knowledge of who you purport to be.
One could even imagine being able to receive payments in the same way, phone-to-phone, with the money applied to your iTunes account balance.
One wonders if we'll be hearing Tuesday afternoon from eBay, Amazon, Google, Square and the like about how this is a tough business and Apple can't just walk in and figure it out.
Moto 360 Smartwatch
Good rundown at ArsTechnica. A few thoughts.
First, to the extent there are positive reviews, they focus on the looks. But I feel like they're grading on a curve. For one thing, it's 46mm in diameter, which is huge - too big for at least a lot of men, and almost all women, I'd bet. For another, it may look sort of like an ordinary watch (apart from size), but it doesn't look like a nice watch. At least not one I'd want to own.
[Image: Yeah, that looks right.]
Also, what's with shipping a product with that notch on the bottom of the screen and then calling it the Moto 360? I mean, it's like they're drawing a target right on the most obvious visible flaw by naming it that.
And how the heck do you release a smartwatch running a 45nm processor? It's like they wanted the battery to die twice a day. (And, apparently, the visuals stutter and freeze, too. So the best of both worlds - bad battery life and bad performance).
Look, Motorola is at least somewhat on the right track here, but this thing is half-baked. This is the difference between releasing a product when you can vs. releasing a product when it's right.
Why Macs will get ARM'd
This is Part I of a multi-part analysis of Mac on ARM. Be sure to check out Part II, in which I explain why x86 inherently has a small but real disadvantage in optimizing cost, performance and power, and why ARM would offer Apple some real advantages in the desktop and laptop market. In Part III I will examine how the transition would work from a software perspective and what the result would look like, and in Part IV I will investigate what the hardware transition would look like.
Part I: Microprocessor Design
[Image: Exponential x704 microphotograph with overlaid floorplan. One of the chips I helped design.]
Some folks noticed that I recently predicted that within a couple of years Apple will be selling at least some Macs running on an ARM-based architecture (as opposed to the AMD x86-64 architecture they currently use. Yeah, I call it that. Suck it Intel.) People questioned how this can be, given what they foresee as Intel's inherent performance advantage and progress toward reducing power consumption. Some of the doubters also believe that Intel has inherent advantages that cannot be reproduced on ARM. I disagree, but understanding why requires an understanding of where the three key metrics in microprocessor design - performance, power consumption, and cost - can be affected in the design process.
[Image: CPU design is all about tradeoffs]
I've worked on designing MIPS-like CPUs, x86, x86-64, SPARC, and PowerPC processors, as well as some non-commercial architectures that no one ever heard of, and have been involved, at least to some extent, in most of the tasks that are involved in starting from a clean sheet of paper and ending up with functioning silicon. (And, in my case, with functioning gallium arsenide as well). Not every company does it the same way, but based on who Apple has hired and where their design teams come from, I'd bet I have a fairly decent understanding of how they might do things.
In Part I of this blog entry, I will describe the microprocessor design process and explain how the different steps of the process can qualitatively affect the key metrics (performance, power consumption, and cost). I will focus on the methodology I am most familiar with, both because that's the easiest for me to talk about and because I think Apple's design methodology isn't significantly different. In subsequent parts I will explain why I think ARM offers Apple advantages over Intel, and how Apple could make it work.
Step 1: Requirements
[Image: The original Athlon 64/Opteron die and floorplan. Hmm, that Micro-code block looks pretty big, huh?]
A new processor design always starts with an evaluation of the basic requirements. These may come from surveying customers, analyzing market direction, analyzing the competition, or identifying new markets to attack. Typically the requirements, at least at first, are quite general. There's a certain minimum acceptable performance, a certain power budget, and a certain maximum cost.

For example, processors intended for use in server farms need to consume very little power but don't need too much performance, and certainly don't need much floating point performance. Processors for workstations need high graphics performance and good multiprocessing performance, but power consumption and cost may be less important.
The requirements also include other things, some of which often go unsaid. At AMD we knew we had to be compatible with Microsoft operating systems (which meant we either had to be compatible with Intel or we had to convince Microsoft to support our deviation from Intel). At Sun we knew we were going to design a SPARC (I steadfastly refused to have anything to do with the other team, which was designing "Java processors"). At Exponential we were pretty agnostic about what we were designing, but given Apple's investment we were led to focus on PowerPC (for a while).
Of course, much like Sun could feel free to migrate its customers from SPARC to "Java chip" (unsuccessfully) or to x86 (more successfully), Apple is not constrained by such factors from switching to ARM. Much like the transition it made from Motorola 68k to PowerPC and then again to Intel, Apple could switch to ARM (since it controls the OS, the compiler, and has shown it knows how to support fat binaries, emulators, and other technologies for smoothing over the transition period).
Sometimes we knew we had to have certain features - marketing wanted to differentiate by providing encryption functions or support for some new SIMD instructions or the like. Being able to differentiate in this way is something that OEMs who limit themselves to buying the same Intel chips as their competition can't do, of course. Apple does compete on processing specs where it can; in the PowerPC days it advertised the advantages provided by the processors it used (heck, Apple even named some models after the chips they used), and it has begun to do the same thing with the ARM chips it uses in iDevices (secure enclave, 64-bit, performance, etc.). Apple cannot currently do this with Macs because it is using the same chips as everyone else. If Apple wants to introduce Touch ID with a secure enclave to protect fingerprint and other sensitive data, it can't do it unless it convinces Intel to go along (and then, perhaps after a short exclusivity period, all of Apple's competitors have access to the same technology).
[Image: Apple using unique features of its specially-designed A7 chip for iDevices as a point of differentiation in marketing]
The Requirements stage is where the basic contours of performance/power/cost are set. Frequently there will be some back and forth with the engineering team (sometimes the requirements are not possible, or sometimes the engineers come up with alternate proposals that marketing never considered). But if it's a chip for ultralight laptops, one can be sure it won't run at maximum possible clock frequency and with a 100W TDP.
So:
- Apple's reliance on Intel (or, in a broader sense, x86-based chips) prevents differentiation based on features and, more importantly, on capabilities. Apple has shown that it prefers such points of differentiation.
- Apple's reliance on Intel hinders its ability to offer features that it might wish to offer, like Touch ID, regardless of whether it's a point of differentiation.
- Apple has, through two prior instruction set architecture transitions, demonstrated it has the capability to make such transitions smoothly.
- Apple controls the entire stack, from the OS, to the SDKs, to the programming languages (Objective-C and Swift), to the compiler. This gives Apple unique freedom to change its processor architecture.
Step 2: Architecture
[Image: Layout of the RPI F-RISC/G cache controller]
Outside the industry, people often use the term "architecture" to refer to the instruction set (and associated specifications) the processor uses. To differentiate that concept from what I'm describing here, I will call it the Instruction Set Architecture ("ISA" for short).

When I refer to architecture, I refer to the high level description of the operation of the processor. Does it have a cache? How big? One each for data and instructions? How many instructions can it process at once? Does it have multiple cores? How big is the register file (if not determined by the ISA)? Does it have trace caches? How big are the TLBs? Does it support out-of-order issue? Out-of-order retirement? How many instructions can be in flight at any time? How deep are the pipelines? How many cycles to do a 64-bit addition? The list of issues goes on and on.
Sometimes this includes whether to support optional portions of instruction sets (and instruction set extensions).
Some of this is often called "microarchitecture" but I'll include it all as "architecture" because the folks who did this work were universally called "architects" at the places where I worked.
The architecture makes a big difference in our key metrics. Performance, power consumption and cost are all directly affected by architectural decisions.
For example, doubling the size of the L2 cache may increase performance on key benchmarks by 10%. But doing so may double the die size (and hence the cost). And it may increase power consumption of the chip by 15%. It gets more complicated. While power consumption of the chip may increase by 15%, by reducing the frequency of main memory reads and writes the system power may decrease by 2%, which may mean that, overall, the entire system consumes less power. Of course, that's of little value if the power dissipated by the chip per square centimeter is such that the chip can't be properly cooled because the volume of the phone it's going in does not allow a sufficiently sized heatsink.
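As a back-of-the-envelope illustration of that system-level tradeoff (the numbers are made up, not measurements from any real chip): if the CPU is a small slice of total system power, a 15% chip-level increase can still net out to a lower total once the bigger cache trims memory traffic.

```swift
import Foundation

// Toy numbers only: a bigger L2 raises chip power 15%, but fewer DRAM accesses
// shave roughly 2% off the rest of the system. Whether that's a net win depends
// entirely on how the power budget splits between the chip and everything else.
let chipPowerW = 1.0
let restOfSystemW = 9.0

let before = chipPowerW + restOfSystemW
let after = chipPowerW * 1.15 + restOfSystemW * 0.98

print(String(format: "total system power: %.2f W -> %.2f W", before, after))
// total system power: 10.00 W -> 9.97 W
```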
Perhaps, instead of increasing the cache size, the architect decides to double the speed of the CPU clock (assuming the engineers down the line can make this work). Even if doing so could be accomplished without increasing the CPU voltage (unlikely), this doubling of clock frequency will cause lots of wires and transistor gates to charge and discharge twice as fast, which will double power consumption (or, at least, double the portion of the power consumption that derives from switching, which can range from 50% to 80% of the overall power consumption, depending on factors to be described later).
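Here is that arithmetic as a tiny sketch, using the usual rule that dynamic (switching) power scales linearly with clock frequency at constant voltage; the 10 W figure and the 50-80% switching fractions are just the illustrative range mentioned above.

```swift
// Split chip power into a switching portion (which scales with clock frequency)
// and everything else (leakage, etc.), then double the clock at constant voltage.
func powerAfterDoublingClock(totalPowerW: Double, switchingFraction: Double) -> Double {
    let switching = totalPowerW * switchingFraction
    let other = totalPowerW - switching
    return switching * 2.0 + other   // only the switching component doubles
}

for fraction in [0.5, 0.8] {
    let after = powerAfterDoublingClock(totalPowerW: 10.0, switchingFraction: fraction)
    print("switching = \(Int(fraction * 100))% of 10 W -> \(after) W at 2x clock")
}
// switching = 50% of 10 W -> 15.0 W at 2x clock
// switching = 80% of 10 W -> 18.0 W at 2x clock
```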
Further, increasing clock speed may require shorter wires (electrons move through wires at finite speed) with less capacitance (wires take awhile to charge and discharge, and longer wires take longer). To accomplish this, more mask layers may be needed, which increases the price of the part.
The architect may add more registers, which speeds up some benchmarks but slows down others that involve a lot of task switching.
In short, there are many choices to be made, and each of them has a real effect on key processor metrics.
What's important to note here is that the vast majority of the choices available in the architect's toolbox apply whether the chip is an Intel x86 or an ARM-based chip. In each case the architect can choose the number of cores, whether to support hardware multithreading, bus sizes, cache line widths, cache organizations, the dimensions of memory structures like translation lookaside buffers and caches, register renaming techniques, branch prediction strategies, etc.
As process technology advances - transistors and wires decrease in size, and the number of transistors per die increases - the new architectural techniques that become available to Intel architects also become available to ARM architects.
So:
- Nearly all architectural techniques for increasing performance and decreasing power consumption are equally available regardless of the instruction set of the CPU.
- Apple controls its compiler, so it can make sure that code takes full advantage of its architectural decisions.
The output of the architecture step is typically a behavioral model of the design - code (usually Verilog) that describes what each block does. In lower-end designs, the behavioral code contains more detail, because rather than letting a designer determine the structure of these blocks, a synthesis tool (i.e. software - typically from Synopsys) does the heavy lifting. The extra detail in the behavioral model provides guidance to the synthesis tool so it doesn't go too far off the rails. Interestingly, the choice of whether or not to use a synthesis tool is another opportunity to affect our key metrics. In my experience (and we tested this extensively over the course of a decade), using a synthesis tool universally resulted in a 20% worse outcome than allowing trained designers to do the work. You can pick your 20% - either 20% worse performance, 20% worse power consumption, or 20% worse cost (stemming from 20% more space on the die). Or various combinations that add up to 20%.
Step 3: Logic Design (and Circuit Design) (and Physical Design)
[Image: AMD K6-II microphotograph]
The next step, once the overall behavior of the various CPU blocks is determined, is to design the circuitry that produces that behavior. Here the division of labor varies from company to company, but I'll use the broadest definition.
First, it's important to understand that a block, say the "integer execution unit," is designed from smaller basic building blocks. These blocks generally fall into two types: "standard cells" and "macro cells." Standard cells are generic, reusable, circuits that perform basic functions. These cells have a predefined "layout" (i.e. the set of polygons on different mask layers that form the transistors and wires in the circuit) and logical behavior. For example, there are standard cells to perform basic Boolean functions such as NAND, NOR, NOT, XOR, and the like. Moreover, there are different versions of the standard cell depending on the number of inputs. So there's a NOR2 that performs a logical-NOR on 2 inputs, and a NOR3 that does the same for 3 inputs. Then there are different versions of each of these that have different drive strengths; this enables the designer to choose the cell that's just strong enough to drive its output load at the necessary speed to meet the clock frequency goal, but not so strong as to waste power. So there's a NOR3x1, an NOR3x2, etc.
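A toy sketch of how a designer (or a tool) might pick among those drive-strength variants: choose the weakest cell that can still handle the load it has to drive, so nothing stronger - and hungrier - is used than necessary. The library entries and numbers here are invented for illustration.

```swift
// Hypothetical standard-cell variants: same logic function, different drive strengths.
struct CellVariant {
    let name: String
    let driveStrength: Int     // relative drive: x1, x2, x4, ...
    let relativePower: Double  // stronger cells burn more power
}

let nor3Variants = [
    CellVariant(name: "NOR3x1", driveStrength: 1, relativePower: 1.0),
    CellVariant(name: "NOR3x2", driveStrength: 2, relativePower: 1.8),
    CellVariant(name: "NOR3x4", driveStrength: 4, relativePower: 3.2),
]

// Pick the lowest-power variant whose drive strength covers the load.
func pickCell(forLoad load: Int, from variants: [CellVariant]) -> CellVariant? {
    return variants
        .filter { $0.driveStrength >= load }
        .min { $0.relativePower < $1.relativePower }
}

if let cell = pickCell(forLoad: 2, from: nor3Variants) {
    print("load of 2 -> use \(cell.name)")   // NOR3x2: strong enough, nothing wasted
}
```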
Depending on the situation, the standard cell library may be provided, as-is, by the foundry. So, for example, TSMC may provide its customers with a cell library, and leave the customer with little option to deviate. I very strongly suspect Apple is not in this boat; it's a huge customer which has hired a lot of folks who would not be interested in using an as-is standard cell library that isn't optimized for its own needs.
By optimizing the standard cell library in various ways, one can affect performance, power, and cost. For example, one can choose the aspect ratio of the cells - are they tall and skinny, short and squat, or in-between? Are there special cells for certain types of structures? How do the cells connect to the wires? (i.e. are the pins drawn vertically or horizontally, and in what layer?) What cells are in the library? What's the power grid look like? How about the clock grid? Do I use flip-flops or latches? For one of our designs, we eliminated the so-called "positive polarity" cells like AND and OR and forced designers to create AND using a NAND followed by a NOT. This was more efficient because an AND is really just a NAND followed by a NOT anyway, and decoupling them encouraged the designer to move the NOT away from the NAND, where the NOT could perform a power-saving signal-repeating function. This set of choices, however, is independent of whether one is designing an x86 or ARM part. So if there's an optimal solution, it's equally available to everyone.
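That AND-equals-NAND-plus-NOT equivalence is easy to sanity-check, which is why dropping the positive-polarity cells costs nothing functionally; a short check over all input combinations:

```swift
// AND is just a NAND followed by a NOT; verify the truth tables match.
func nand(_ a: Bool, _ b: Bool) -> Bool { return !(a && b) }
func invert(_ a: Bool) -> Bool { return !a }

for a in [false, true] {
    for b in [false, true] {
        assert((a && b) == invert(nand(a, b)))   // identical for all four cases
    }
}
print("AND == NOT(NAND) for every input combination")
```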
The other type of cell, the macro cell, is a customized cell that performs a more complicated function or a function that can't be implemented in a standard cell. For example, in the integer execution unit, the register file is likely to be a macro cell; essentially it's a highly optimized, albeit small, SRAM with a lot of read ports. Circuit designers design this cell on a transistor-by-transistor level and produce a block that can be snapped together with the standard cells to produce the block. While different instruction set architectures may require different macro cells (e.g. an x86 has a small register file while RISC architectures tend to have bigger ones), the circuit design tricks used by the designer to increase speed or reduce power are independent of ISA.
So:
- Choices of standard cell architecture that improve performance and power can be made independently of instruction set choice.
- Circuit design choices are independent of instruction set choice.
Once there is a library of cells to choose from, the next step is to arrange them so they perform the proper Boolean functions. This is the "synthesis" I referred to earlier. We usually did it by hand, though the trend is to do at least some parts of even high-end chips using automated tools (bad idea, but no one listens to me).
The designer also has to physically position the cells on the chip, a process called "placement." Again, this is often automated, but we typically did it by hand. Synthesis and placement must be done in coordination - if two cells are far apart, then they may require a repeater between them in order for the signals not to degrade too much. And the drive strength of cells depends on how far apart the cells are, and which cells are connected to which. An x1 cell shouldn't drive more than x4, whether it be 2 x2's, 1 x4, or 4 x1's. But if there's a long wire between the driver and receivers, it can't drive the full x4 because it must also charge and discharge the long wire. It gets complicated!
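A minimal sketch of that fanout budget (the rule of thumb that an x1 driver can handle up to 4x of load, less whatever the wire itself eats); the specific numbers are illustrative, not real design-rule values.

```swift
// Crude fanout check: a driver of strength `drive` can handle a total sink
// strength of 4x its own, minus an allowance for the wire it must also charge.
func canDrive(drive: Int, sinkStrengths: [Int], wireLoadInX: Int) -> Bool {
    let budget = drive * 4
    let demand = sinkStrengths.reduce(0, +) + wireLoadInX
    return demand <= budget
}

print(canDrive(drive: 1, sinkStrengths: [2, 2], wireLoadInX: 0))  // true:  4 <= 4
print(canDrive(drive: 1, sinkStrengths: [2, 2], wireLoadInX: 1))  // false: the long wire blows the budget
print(canDrive(drive: 2, sinkStrengths: [2, 2], wireLoadInX: 1))  // true after upsizing the driver
```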
Moreover, the wires between the cells (the actual metal) must be designed. This is called "routing." This is almost always automated, albeit we always did some "pre-routing" - i.e. hand routing - of the most critical wires, forcing the less critical wires to work around them. (Wires can't cross on the same layer, so sometimes wires had to move up and down between layers to get around obstructions like pre-routed wires). Synthesis, placement and routing is an iterative process: you do it, find out if you meet all the specifications for speed and other electrical properties, and adjust. Hopefully you converge on a solution that meets your speed and power budget. But it's important to note that there's nothing in this process that's specific to any particular choice of instruction set.
So:
- Logic, circuit and physical design techniques do not provide any particular instruction set choice with any notable advantage.
Step 4: Technology
This isn't really a "step," but it's another important factor in determining performance, power consumption and cost. I lump the electronic package and the semiconductor fabrication process into this category.
The choices here include all sorts of things - process node (i.e. minimum drawn transistor sizes), metallization (alloy, width, thickness), dielectric choices, substrate (SOI? Bulk?), transistor design (3D gates? Number of pillars?), number of metal layers, etc.
These choices have a huge effect on performance, power and cost.
Now, Intel may have the best fab (there's a good argument for that; at least it seems the most reliable), but there's nothing in the choice of instruction set that inherently prevents the use of any of these choices. An ARM chip produced on Intel's best fab would benefit just as much as an x86 would.
So:
- Fabs are instruction-set neutral.
Then Explain This, Mister...
The obvious question, then, is why aren't ARM chips already competing with Intel in the "PC" market? Why have ARM chips always had less performance than x86 chips?
Design techniques
Remember that 20% you lose by doing synthesis? Well, almost all ARM designs use the so-called "ASIC" (or, more recently, the related "SoC") flow, which involves a tremendous amount of software automation of the design process. Part of this is the way ARM is licensed - many licensees receive just synthesizable Verilog or "hard blocks" that have already been synthesized. There have been some notable exceptions (StrongARM at DEC, for sure, and presumably the Apple A7), but as a general rule ARM designs haven't been lovingly hand-crafted the way Intel, AMD, and the like design their processors. This hasn't been much of an issue, though, since these designs were not intended to compete with high-end microprocessors anyway. Of course, this problem is easily overcome...
Fabs
Generally speaking, most ARM processors are not produced on the best fab lines. Apple's A7 is built on Samsung's 28-nm process. Intel's state-of-the-art processors are fabbed using a 14-nm process with 3-D transistor gates. GlobalFoundries' Fab 7 purportedly runs at 13-nm (and I assume may be using SOI wafers). TSMC offers a 16-nm process with 3-D gates. It's hard to compete (in performance, power, or cost) when you are using a technology node that's a generation and a half behind in terms of transistor size and with older transistor structures. Again, this hasn't been too much of an issue since these designs were not intended to compete with high-end microprocessors. And, of course, this too would be easy enough for Apple to overcome. (Arguably Apple couldn't achieve complete fabrication parity with Intel, but it could certainly come close enough).
Goals
Until now, it hasn't been anyone's goal to compete on the desktop (or laptop) and produce an x86-class processor using ARM. Why would it have been? The graveyards of Silicon Valley are filled with the discarded remnants of past instruction set architectures. SPARC never got much further than its Sun and Sun-clone (Fujitsu/HaL) roots. MIPS managed to spread beyond Silicon Graphics to a few handheld devices and some car engines, but so what. PowerPC made a decent run of it - powering IBM RS/6000 workstations, Macs for a few years, and last generation's game consoles - but its run is about over. Even competing with Intel directly by adopting x86 hasn't worked out for anyone (ask Cyrix, Transmeta, Rise, National, NexGen, Exponential). AMD made a good go of it for a brief time with Opteron, forcing Intel to clone it, but those glory days were short-lived. For a long time, if it didn't run Windows, forget about it. If it did run Windows, it had to be cheaper and faster than Intel to even get a nibble from the OEMs, and outrunning Intel while all else is equal is not sustainable in the long run. The solution, of course, is to make sure that all else is not equal, which hasn't been possible until very recently.
Today is different because, among other things, the markets have changed - who cared about battery life 10 years ago? Who would have believed that laptops like the MacBook Air, marketed not for being the most powerful but for having all-day battery life, being thin and light, and for removing common features, would be driving the laptop market? Who would have thought that Windows compatibility would today not be a huge requirement? (If it still is, it won't be for much longer. And it sure isn't an issue Apple cares about - who talks about Boot Camp anymore? And something like Parallels would still be able to run Windows regardless of the underlying ISA.)
Part I Summary
The point of all this was to explain why there's nothing in the chip designer's toolbox of tricks that provides x86 with a particular advantage. The reason Intel performance is better than ARM performance is not because the choice of instruction set provides Intel with an inherent advantage, but rather because of history and market forces.
In the next part I will explain why x86 inherently has a small but real disadvantage in optimizing cost, performance and power and why ARM would offer Apple some real advantages in the desktop and laptop market.