Well, for better or worse, Locationgate is over.
As you may recall, last week, a pair of researchers said that they’d found a secret file in the hard-drive backup of every iPhone and cellular iPad. This file, they said, contained a time-stamped list of everywhere you’d been with your phone since last June (or whenever you installed iOS 4).
There was no indication, they said, that this information was being transmitted to Apple, the government, or the Warren Commission; it was just sitting there on your hard drive, accessible only with Unix commands (or with an app that the researchers wrote). But even so, conspiracy theorists immediately went into hysterics.
On Wednesday, Apple responded with a statement explaining the presence of the file. “Apple is not tracking the location of your iPhone,” it said. “Apple has never done so and has no plans to ever do so.”
The statement goes on to say: “The iPhone is not logging your location. Rather, it’s maintaining a database of Wi-Fi hot spots and cell towers around your current location, some of which may be located more than one hundred miles away from your iPhone, to help your iPhone rapidly and accurately calculate its location when requested. Calculating a phone’s location using just GPS satellite data can take up to several minutes. iPhone can reduce this time to just a few seconds by using Wi-Fi hot spot and cell tower data to quickly find GPS satellites, and even triangulate its location using just Wi-Fi hot spot and cell tower data when GPS is not available (such as indoors or in basements). These calculations are performed live on the iPhone using a crowd-sourced database of Wi-Fi hot spot and cell tower data that is generated by tens of millions of iPhones sending the geo-tagged locations of nearby Wi-Fi hot spots and cell towers in an anonymous and encrypted form to Apple.”
So basically, Apple is saying that the researchers were wrong on both counts. First, the “secret file” contains information about nearby Wi-Fi hot spots and cell towers, not your exact location. And second, your device is in fact sending information to Apple, albeit in an anonymous, encrypted form.
Now, one part of Apple’s response is a little unconvincing. Yeah, O.K., the file includes the locations of Wi-Fi hot spots and cell towers near you. So no, it doesn’t record what park bench you were sitting on. But it’s still tracking what city you were in, and is therefore keeping a record of your travels.
Last week, based on the information available at the time, I wrote about why I didn’t think this was such a big deal. First, you’re the only one with access to your iTunes backup, and presumably you already know where you’ve been. Second, there are so, so many other ways your privacy is compromised these days, some much more urgent than this. (How come, for example, nobody’s in hysterics over the 77 million names, e-mail addresses, passwords and possibly credit card numbers that Sony says were swiped from its PlayStation database?)
I’m always baffled by the hypotheticals that people with the Conspiracy Gene come up with. Dozens of you wrote me to say, “Before you dismiss our privacy concerns out of hand, what about an abusive ex-spouse?”
That’s totally illogical. First, what good does it do the ex-husband to know where you’ve been before your last iTunes backup? He can’t exactly go back to last month and accost you there.
But even more important, to find out where you’ve been, the ex would have to be in your house, at your computer. And listen: if your abusive ex is inside your house, sitting at your computer, you have much bigger problems than the iTunes backup. What you need isn’t a different phone; it’s a locksmith.
Anyway, the whole thing is moot now. In a software update in the next couple of weeks, Apple will (a) stop backing up the location database to your computer, (b) store only a week’s worth of hot spot locations and (c) stop collecting hot spot locations if you turn off Location Services.
(“The reason the iPhone stores so much data is a bug we uncovered and plan to fix shortly,” Apple says, not entirely believably. “We don’t think the iPhone needs to store more than seven days of this data.” You can read more about the response, including a Times interview with Steve Jobs, here.)
In the next iOS update, furthermore, the location list will also be encrypted on the phone itself.
Interestingly, Apple also performed a pre-emptive strike. It revealed something else your iPhone is tracking: traffic data (presumably measuring how fast your iPhone is moving when it’s on roadways). “Apple is now collecting anonymous traffic data to build a crowd-sourced traffic database with the goal of providing iPhone users an improved traffic service in the next couple of years.” That’d be cool.
This is the second time Apple has found itself neck-deep in a P.R. brouhaha where it blamed the problem on a bug. Remember the Death Grip issue, where the iPhone 4 would lose signal bars if your hand covered the lower left corner? And how Apple said that the problem was, in part, a bug in how the phone counted the number of bars to display on the screen?
Both times, Apple wasn’t apologetic, but at least it took prompt action through a software update. In this case, at least, security consultants seem satisfied. Apparently, the Locationgate case is now closed.
Now then. About those PlayStation credit cards…
Worldwide operating system (OS) revenue grew to US$30.3 billion last year, clocking an increase of 7.8 percent from 2009, according to new figures from Gartner.
Pointing to the recovery of the global economy as a significant factor in the market’s healthy showing, the research firm added that client platforms grew faster than the server segment, rising 9.3 percent last year against 5.7 percent for server OSes.
Matthew Cheung, Gartner’s principal research analyst, said in the report: “The long-pending demand for PC refreshment was unleashed as the economy stepped out from the economic turndown, which drove growth of client OSes.”
Within the client OS segment, Apple’s Mac OS saw the fastest growth, driven by strong laptop and desktop sales, albeit from a much smaller base. Microsoft’s Windows client remained the largest client OS segment, registering high single-digit growth driven in particular by the adoption of Windows 7 and the phase-out of Windows XP.
In fact, Microsoft’s Windows client revenue saw higher growth at 9.2 percent, compared to its Windows server business which grew 7.5 percent.
In the server OS segment, Linux proved the fastest-growing sub-segment in 2010 as end users embraced more open-standard systems, explained Alan Dayley, managing vice president at Gartner. Within the Unix OS market, he noted that IBM AIX clocked high single-digit growth but Unix overall experienced “modest or negative growth”.
“The end-of-life threat for Unix OSes such as Tru64 and NetWare pushed the ‘other proprietary Unix’ sub-segment down 39.6 percent in 2010, as some vendors retired their proprietary Unix and moved users to more open systems,” Dayley said.
According to Gartner, Red Hat continued to dominate the commercial Linux server market, where the vendor’s Red Hat Enterprise Linux server license revenue climbed 18.6 percent to US$592 million last year. The platform accounted for 58.2 percent of the overall Linux server market.
Worldwide, the overall OS market was dominated by Microsoft which accounted for 78.6 percent of overall revenue, an 8.8 percent increase from 2009, Gartner revealed. It added that the software giant maintained its pole position with revenue totaling US$23.8 billion last year, compared to US$21.9 billion in 2009.
IBM and Hewlett-Packard ranked second and third, respectively. Big Blue grew its revenue by 5.6 percent to US$2.2 billion last year with a 7.5 percent market share, a slight dip from 7.7 percent in 2009. HP clocked 1.4 percent revenue growth to US$1.1 billion last year with a 3.7 percent market share, down from 3.9 percent in 2009.
Oracle and Red Hat rounded out the worldwide top five OS vendors in fourth and fifth positions, respectively, with market shares of 2.6 percent and 2 percent.
Apple ranked sixth with a market share of 1.7 percent.
Edsger W. Dijkstra wrote an influential paper back in 1988 called On the cruelty of really teaching computing science, which advocated an approach strongly grounded in the study of formal systems.
While I would be the first to admit that I am not fit to carry the late Dr.’s punch cards, I would say that if he really wanted to see cruelty, he might have tried his hand at the undergraduate course I taught this semester: C/C++ Programming in a UNIX Environment. This class was cruelty personified.
How long did it take you to learn C++? Do you think you could squeeze that into a single undergraduate semester, giving it only the 20% of your attention span that is due?
Now throw into the mix the requirement that you have to learn C as well.
And to keep things interesting, all your work has to be done on a UNIX or Linux machine. You’ve used Windows since you were 8, and think that cat is a four-legged pet and grep is some sort of gastrointestinal complaint.
Yep, you are in trouble.
The good news in all this was that I didn’t make any of Andrew Koenig’s egregious errors when teaching the class. The bad news is that every homework assignment I created looked to my students like two impassable mountain ranges instead of one: an incomprehensible C++ problem to be implemented on an inscrutable O/S, using an IDE that was decidedly not Visual Studio.
Just as an example, for a recent assignment, I had the class warm up with a pure C++ implementation of mergesort, reading a string of words from standard input and writing the sorted list to standard output. Using all the facilities at hand in the C++ standard library meant that the mergesort implementation was a breeze; about the only piece of the algorithm that required much thought was merging the two subcontainers after dividing and conquering.
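As an illustration (my own sketch here, not a student solution; `merge_sort` is just an illustrative name), the whole sequential warm-up hinges on one standard-library call, `std::inplace_merge`, which recombines the two sorted halves:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Recursive mergesort on words[lo, hi). The standard library does the
// interesting work: std::inplace_merge merges the two sorted halves.
void merge_sort(std::vector<std::string>& words, std::size_t lo, std::size_t hi) {
    if (hi - lo < 2) return;               // 0 or 1 element: already sorted
    std::size_t mid = lo + (hi - lo) / 2;
    merge_sort(words, lo, mid);            // sort the left half
    merge_sort(words, mid, hi);            // sort the right half
    std::inplace_merge(words.begin() + lo, // merge the halves in place
                       words.begin() + mid,
                       words.begin() + hi);
}
```

Wiring it up to the assignment's I/O is a one-liner on each side: populate the vector with `std::istream_iterator<std::string>` over `std::cin`, then stream the sorted result back out.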
After the warmup part of the assignment came the meatier portion: I asked the students to implement the mergesort algorithm by passing the subproblems to child processes. This meant they needed to solve a few very common problems encountered when programming on *IX:
Using fork() to create child processes
Managing the lifetime of parent and child processes
Managing unnamed pipes for communication between a parent and child process
Serializing and deserializing C++ containers so they can be transmitted through a de-objectifying pipe
Admittedly, this is not a perfect demonstration of how to parcel a problem out to multiple processes. In fact, a straightforward implementation creates a beautiful example of a fork bomb, bringing your system to its knees. But I thought it would be a good way to get the hang of working with child processes in a somewhat realistic way. (And this could actually be a good way to distribute a sorting process, if you only forked a limited number of times at the top of the merge.)
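To give a flavour of the meatier portion, here is my own sketch (not the reference solution; `forked_sort` is an illustrative name) that forks only once, so there is no bomb: the child sorts the back half and ships it through an unnamed pipe, and the parent merges after reaping it.

```cpp
#include <algorithm>
#include <sstream>
#include <string>
#include <sys/wait.h>
#include <unistd.h>
#include <vector>

// One level of the forked mergesort. The child sorts the back half and
// ships it to the parent over an unnamed pipe as newline-delimited words
// (the "de-objectified" wire format); the parent sorts the front half,
// reaps the child, deserializes, and merges. Forking only at this top
// level is what keeps the exercise from becoming a fork bomb.
std::vector<std::string> forked_sort(std::vector<std::string> words) {
    if (words.size() < 2) return words;
    std::size_t mid = words.size() / 2;
    int fd[2];
    if (pipe(fd) != 0) {                         // no pipe: sort sequentially
        std::sort(words.begin(), words.end());
        return words;
    }
    pid_t pid = fork();
    if (pid < 0) {                               // no child: sort sequentially
        close(fd[0]); close(fd[1]);
        std::sort(words.begin(), words.end());
        return words;
    }
    if (pid == 0) {                              // child: sort the back half
        close(fd[0]);
        std::sort(words.begin() + mid, words.end());
        std::string out;
        for (std::size_t i = mid; i < words.size(); ++i)
            out += words[i] + '\n';              // serialize for the pipe
        write(fd[1], out.data(), out.size());    // sketch: assumes one write
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                                // parent: sort the front half
    std::sort(words.begin(), words.begin() + mid);
    std::string in;
    char buf[4096];
    for (ssize_t n; (n = read(fd[0], buf, sizeof buf)) > 0; )
        in.append(buf, n);
    close(fd[0]);
    waitpid(pid, nullptr, 0);                    // manage the child's lifetime
    std::vector<std::string> sorted(words.begin(), words.begin() + mid);
    std::istringstream child_words(in);          // deserialize
    for (std::string w; std::getline(child_words, w); )
        sorted.push_back(w);
    std::inplace_merge(sorted.begin(), sorted.begin() + mid, sorted.end());
    return sorted;
}
```

A production version would also loop on `write()` for payloads larger than the pipe buffer, but the four problem areas are all here in miniature.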
By fooling around a bit with process names, I was even able to do a poor man’s animation of the process using pstree.
When I worked up the assignment, it seemed reasonable to tackle over the course of a week. Alas, at the one-week deadline, my inbox was empty.
This wasn’t the first assignment that turned out to be a semi-disaster. My Scrabble game board manager saw a similar fate, as did my Scrabble word generator assignment.
As part-time non-tenure track faculty, I don’t have a lot of say in curriculum development. But after going through this course, I will definitely be passing along a strong recommendation: If we are going to ask students to learn a new language using new tools with a new O/S, we need to modify the class structure so that at least half of the hours are spent in the lab.
When I worked through these problems in a lab environment, it was easy to provide the gentle nudge to help someone who was stuck trying to get Eclipse or NetBeans to do the expected, instead of whatever perverse path they were on. I could help with the C++ compiler errors, which over a decade after standardization are still a travesty. And I could help coax the debugger into providing usable information when looking at standard C++ objects, which the IDEs will do only grudgingly. In other words, help with the undocumented tips and tricks that experienced C++ programmers take for granted.
At the end of the semester, I do feel pretty good about the class’s mastery of C++; they soaked up as much of this huge language as was humanly possible. UNIX/Linux expertise didn’t seem to get the same level of commitment, which is unfortunate but understandable.
Learning C and C++ in the same class can be a bit tricky. While C is nearly a proper subset of C++, the libraries are radically different, and just learning how to use the C subset of the language is not enough. But learning C++ is so time-consuming that it is easy to shortchange the C side of the curriculum.
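The gap between the two library worlds is easy to show with one task done in both dialects (a toy example of my own; `sort_c` and `sort_cpp` are just illustrative names): sorting words takes a void-pointer comparator and `qsort` on the C side, and a single `std::sort` call on the C++ side.

```cpp
#include <algorithm>
#include <cstdlib>
#include <cstring>
#include <string>
#include <vector>

// C side: qsort knows nothing about types, so it needs a comparator
// that casts its way back from void* to the element type (a C string).
int cmp_cstr(const void* a, const void* b) {
    return std::strcmp(*static_cast<const char* const*>(a),
                       *static_cast<const char* const*>(b));
}

void sort_c(const char** words, std::size_t n) {
    std::qsort(words, n, sizeof *words, cmp_cstr);
}

// C++ side: std::string knows how to compare itself, so std::sort
// needs nothing beyond the range.
void sort_cpp(std::vector<std::string>& words) {
    std::sort(words.begin(), words.end());
}
```

Neither version is hard, but they share almost no vocabulary, which is exactly why the C portion of the course cannot be left to osmosis.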
Of course, there are plenty of C++ haters out there, and they will object to the fact that C++ is even part of the course name. If you believe people like Linus, simply expressing a preference for C++ means you are not capable of producing decent products. But that’s a topic for another post.
Today is the last day that Novell (NASDAQ:NOVL) will exist as a publicly traded company. Trading in Novell’s stock will cease at the close of business, as the company is taken private.
Novell is being acquired by privately held Attachmate in a deal that closes today. The $2.2 billion acquisition was first announced in November of 2010.
Attachmate is paying $6.10 per Novell share and will keep Novell as a wholly owned subsidiary. Novell shareholders approved the deal back in February. The final step that was required for the deal to move forward was the completion of a patent deal.
As part of the Attachmate acquisition, Novell is selling 882 patents to a group called CPTN Holdings for $450 million. CPTN is a group that includes Microsoft, Apple and EMC. The patent sale was contested by open source advocates as well as authorities in Europe. The fear was that Novell’s open source patents could be used to attack open source vendors.
The other concern was about what would happen to Novell’s UNIX copyrights, which were the subject of a court battle with SCO. Novell has publicly stated that Attachmate would be holding onto the UNIX copyrights.
Thanks to the U.S. Department of Justice (DoJ), Attachmate will be holding onto a lot more than just UNIX. The DoJ adjusted the CPTN patent sale in a move designed to prevent attacks against the open source ecosystem.
As part of the adjusted deal, Microsoft will be selling back to Attachmate all of the Novell patents that Microsoft would have otherwise acquired. Attachmate in turn will continue to receive a license for the use of the Novell patents.
“Novell has had many chapters in its 28-year history and today marks the start of a new, exciting chapter,” John Dragoon, the Chief Marketing Officer at Novell, blogged. “As Novell joins forces with The Attachmate Group, the result will be a powerful portfolio of companies united by a common purpose and dedicated team.”
The open systems revolution of the late 1980s and early 1990s, which espoused interoperability between platforms, made the various Unix operating systems the world’s most popular server platforms.
If you consider Linux a kind of Unix, then you can argue that although Windows has spread into small and medium businesses, the Unix platform still dominates the enterprise server landscape.
The openness in Unix and Linux has helped these systems maintain their position in the market, even though Windows has an order of magnitude more software developers.
People do learn. When any new technology comes down the line, thanks to the healthy effects of the open-systems approach, there is soon a call for standards and interoperability through common APIs and open source code.
The open call is now being made for the brains behind computing clouds: what are often called cloud fabrics or cloud controllers.
These are the traffic cops controlling how hypervisors and virtual machines are fired up and shut down on the clusters of servers that are wired together as a giant pool of CPU capacity.
Cloud fabrics are the über operating system, and the high ground from which key software suppliers will try to dominate the virtualisation field. And that means, despite all the talk of cooperation and openness, vendors will be tempted to make their platforms better and maintain some incompatibilities.
We’re in the money
VMware, the server virtualisation industry juggernaut, took an early lead with its ESX bare-metal hypervisor. Although VMware likes to pretend that many of the components in its vSphere stack of server virtualisation tools are add-ons to the hypervisor, most of the advanced functions, such as live migration, backup, failover and other virtual machine features, are coded in the hypervisor and accessed through its vCenter console.
The company’s vCloud Director, on the other hand, is a cloud fabric. It wraps around multiple instances of ESX hypervisors and vCenter controllers and adds capacity management, metering and orchestration functions, as well as a self-service portal.
This last is key because it allows end-users to request computing, storage and networking resources from the IT department in a consistent manner, and enables the IT department to deliver it in a consistent manner.
Importantly for VMware, vCloud Director manages only ESX virtual machines. It is a closed system just as much as Windows is.
There’s some big money to be made here. The first release of vCloud Director spans up to 25 vCenter consoles and up to 10,000 concurrently running virtual machines. vCloud Director costs $150 (£90) per virtual machine under management when it is used with the vSphere stack.
The Request Manager portal costs another $100 (£60) per virtual machine and the Capacity IQ capacity planner costs an additional $75 (£45) per virtual machine. On a cloud with 10,000 virtual machines, the vSphere Enterprise Plus hypervisors and vCenter consoles would cost $3.8m (£2.3m) at list price, and the vCloud extensions would cost another $3.25m (£2m). This is not small change: each piece, hypervisor and cloud fabric, individually rivals the cost of the hefty two-socket servers underlying the cloud, which might run somewhere around $3.5m (£2.1m) for the 450 machines making up that cloud.
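For anyone checking the arithmetic, the per-VM add-on prices quoted above multiply out as follows (a toy calculation of my own with the three list prices hard-coded; `vcloud_extension_cost` is just an illustrative name):

```cpp
// Back-of-the-envelope total for the vCloud add-ons at list price (USD).
long vcloud_extension_cost(long vms) {
    const long vcloud_director = 150;   // per VM under management
    const long request_manager = 100;   // per VM
    const long capacity_iq     = 75;    // per VM
    return vms * (vcloud_director + request_manager + capacity_iq);
}
// On 10,000 virtual machines: 10,000 * $325 = $3,250,000.
```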
Jaunty Red Hat
Unsurprisingly, the open-source community sees this kind of lock-in and these kinds of prices, and declares it has a better idea.
Commercial Linux distributor Red Hat, which has done the best job of turning the open-source Linux and Java middleware stacks into a growing and profitable business, ate Qumranet a few years ago to gain control of the KVM (kernel-based virtual machine) hypervisor.
With the Red Hat Enterprise Linux 6 operating system announced last autumn, the KVM hypervisor is ready for primetime and has been extended with its own cloud fabric, which Red Hat calls the Cloud Foundations stack.
Red Hat is putting forth its own KVM and related tools as all that is needed to create a cloud. But the company is also talking up its Deltacloud management APIs, and promising interoperability with private clouds based on VMware’s ESX or Microsoft’s Hyper-V, as well as public clouds created by Amazon, IBM and others.
Red Hat does not provide a price for its cloud stack, but presumably if you buy enough hypervisors and support contracts, it will be lower than what VMware is trying to charge.
Eucalyptus Systems, which created a mostly open-source cloud fabric that emulated Amazon’s EC2 cloud using a homegrown Xen hypervisor, looked like it might be the cloud fabrics leader to rival VMware. With Canonical embedding Eucalyptus in its own Ubuntu Enterprise Cloud, it certainly looked that way.
But then NASA and Rackspace Hosting decided to work together with a who’s who list of IT vendors to make a cloud fabric that is completely open source. That software, called OpenStack, is evolving fast, has a lot of energy behind it, and aims to create a cloud that can span one million host servers and control up to 60 million virtual machines at once.
And if history is anything to go by, commercial support for OpenStack will be a lot cheaper than VMware’s vCloud Director, once companies start offering support services for it the way Red Hat and Canonical do for Linux. In fact, it probably won’t be long before both Red Hat and Canonical adopt OpenStack as their cloud controllers of choice.
Don’t expect VMware or Microsoft to follow suit any time soon, though. ®
In a recent survey, Gabriel Consulting Group got responses from 450 enterprise customers with opinions on Oracle’s (NSDQ:ORCL) decision to drop development of its software for the Intel (NSDQ:INTC) Itanium processor platform, and by extension Hewlett-Packard’s (NYSE:HPQ) HP-UX Unix platform. Respondents, according to CRN, see the decision as part of an ambitious agenda to put Oracle’s server competitors at a disadvantage and give its own server line the advantage.
“So customers definitely see an agenda here on Oracle’s part, that this isn’t business as usual, but instead is a step in a strategy,” he said. “And this is not necessarily the first step. I’ve seen some customers say that changes in Oracle’s pricing per processor is aimed at putting their servers at an advantage, or others’ servers at a disadvantage.”