Archive for the ‘Technology’ Category


Mankind is not going to the stars

I’m something of a space geek. It helps to have grown up during the space race. I still find the exploration of outer space fascinating. Our knowledge of the universe is growing by leaps and bounds. Hundreds of planets have been discovered, so many that it no longer makes news to announce new ones. Many of these planets look like they might support human life, although it’s hard to say from such a fantastic distance away. Some of us are waiting expectantly for someone to discover the warp drive, so we can go visit and colonize these distant worlds. Maybe it will be just like Star Trek!

It fires my imagination too, and it excites the popular press as well. It all sounds so exotic, fascinating and Buck Rogers-ish. The only problem is that almost certainly none of this will happen. We might be able to put men on Mars, but colonizing the planet looks quite iffy. Even colonizing the moon, as I suggested when Newt Gingrich was promoting the idea during his last presidential campaign, is probably cost prohibitive. This means we need to keep our dreams of visiting strange new worlds in check. It won’t be us, it won’t be our children or grandchildren, and it probably won’t happen at all. To the extent we visit these worlds at all, it will be virtually, using space probes.

I don’t like to be the bearer of bad news, but hear me out. We probably won’t colonize the moon permanently because it will be too costly to sustain a colony there. It’s one thing to land a man on the moon, which America did rather successfully. It’s much more costly to stay there. For a sizeable group of humans, say ten thousand or so, to live self-sufficiently on the moon is probably impossible. If it can be done, it will take a capital investment in the trillions of dollars. My guess is that it would take tens of trillions of dollars, if not hundreds of trillions. It’s unlikely the moon has enough water for us to mine, and even if it does, the water would likely be very inefficient to extract because it is bound up in moon dust. Otherwise water would have to be imported from Earth at ruinous cost. In fact, colonists would have to import pretty much everything. Even if citizens of the moon could grow their own food and recycle their own water, manufacturing is likely to be limited. We might have a small scientific colony on the moon, like we do at the International Space Station. It probably won’t amount to more than a dozen men or women, and it’s likely to cost at least ten times as much as the International Space Station did, since you have to move equipment and crews a much greater distance.

What about man colonizing Mars? It doesn’t seem out of reach. When the orbits align just right, a spacecraft can transit each way in about six months. The cost of getting a pound of matter to Mars, though, is likely ten to a hundred times the cost of getting it to the moon, which is probably cost prohibitive in itself. The journey there and back also looks chancy. It’s not just the possibility that some critical system will fail on the long voyage; it’s the cosmic rays. Our astronauts are going to absorb a heap of radiation, roughly the equivalent of 10,000 chest X-rays. That, though, is probably manageable. What of man living on Mars itself?

The good news is that humans could live on Mars, provided they don’t mind living underground. The atmosphere is much thinner than Earth’s, so it blocks little of the radiation streaming in from space; it is also much colder on Mars than on Earth in general, and you can’t breathe the air and live. It’s true that by essentially burying our structures in Martian soil, humans could be shielded from much of that radiation. Slapping on some SPF-50 sunscreen, though, won’t do the job. Anyone on the surface will have to wear a spacesuit. So far we haven’t found a reliable source of water on Mars either. Colonizing Mars is within the realm of possibility, but the probability is fairly low. Frankly, it’s a very remote, cold and arid place with nothing compelling about it other than a lot of empty mountains and valleys and swirling Martian dust, almost always in a pink or orange haze.

Colonizing distant moons and asteroids presents similar problems: no suitable conditions for sustaining life as we know it, insufficient gravity, toxic radiation, frequently toxic chemicals, and cold of a sort that most of us simply cannot comprehend. Both Venus and Mercury are simply too hot to inhabit, and Venus is probably as close to hell as you will find on a planet in our solar system.

What about colonizing planets around other stars? Here’s where we need to fire up the warp drive of our imaginations, because as much as physicists try to find exceptions to Einstein’s theories of relativity, they can’t. The closer you get to the speed of light, the more energy each additional bit of speed requires. It’s like Sisyphus pushing that rock up the mountain. To get a spacecraft with people in it to even 10% of the speed of light looks impossible with any technology we have or can reasonably infer. The closest star, Proxima Centauri, is about 4.2 light years away, so even if that speed could be achieved, the trip would take more than forty years. But it can’t be achieved. In fact, we’d be lucky to get to 1% of the speed of light, which would make a journey to Proxima Centauri a voyage of more than four hundred years. Moreover, even if successive generations could survive the journey, it is likely that nothing around our closest star is habitable.
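The arithmetic here is easy to check. A quick back-of-the-envelope sketch (the 4.24 light year figure is the commonly cited distance to Proxima Centauri):

```python
# Rough one-way travel time to the nearest star, ignoring acceleration,
# deceleration and relativistic effects.
DISTANCE_LY = 4.24  # light years to Proxima Centauri

for fraction_of_c in (0.10, 0.01):
    years = DISTANCE_LY / fraction_of_c
    print(f"At {fraction_of_c:.0%} of light speed: about {years:.0f} years one way")

# At 10% of light speed: about 42 years one way
# At 1% of light speed: about 424 years one way
```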

Perhaps we could freeze ourselves and wake up centuries or millennia later at our destination. Maybe that would work. Obviously, we don’t have the technology to do anything like this now. And given the laws of entropy, it’s hard to imagine any spacecraft surviving a voyage of that duration intact.

What we need is a warp drive and a starship. But what we will really need is an escape clause from the theories of relativity and the technology that will allow humans to exploit it: a spacecraft that can slip through a wormhole or something. It’s not that this is provably impossible; it’s that with everything we know it looks impossible for us, and likely always will be. In any event, there don’t appear to be any wormholes conveniently near Earth.

In short, we are stuck to the planet Earth. We’d best enjoy what we have and stop squandering the planet on which all human life depends. So far we are doing a terrible job of it.

 

Mt. Gox: more evidence of why BitCoin is best avoided

Dorothy in The Wizard of Oz learned from Glinda that if she clicked her ruby slippers, closed her eyes and kept repeating “there’s no place like home” that she would magically return to Kansas. So simple! BitCoin adherents are a lot like Dorothy. Dorothy at least made it home from her fantastical journey. True believers in BitCoin, the libertarian currency, got a splash of cold water across their faces this week instead. Mt. Gox, the Tokyo-based BitCoin exchange, has gone belly up, along with about $300M in BitCoins. Most likely someone stole those BitCoins, either someone inside the firm or some shadowy hackers. By any standard, this was quite a heist. Looking at history, you’d have a hard time finding any instance of a similar theft inside what amounts to a bank.

In any case, sorry you BitCoin suckers. Real banks and exchanges still have vaults, but they don’t carry much of their assets in cash. Much of it is commercial paper, bonds, mortgage deeds, promissory notes and Federal Reserve Notes. Whether in paper, assets on an electronic register somewhere, or gold bars in a vault, these assets are quite tangible. Someone with a car loan who defaults on their payments is likely to find their car repossessed. Those who defaulted on home loans during the Great Recession found their houses foreclosed and if they had ready cash assets, they were put under legal assault. BitCoin owners with their BitCoins in Mt. Gox now have nothing and the police just aren’t interested in serving them justice.

This was not supposed to happen to this libertarian currency. Freed of its ties to governments, it was supposed to soar above inflation and always retain a finite, empirical value. It was all supposed to be secure through the power of math. After all, exchanging a BitCoin involves keeping a record of who its next owner is. Unless, of course, it just disappears. Undoubtedly these stolen BitCoins were converted into a real currency, just unbeknownst to their owners, perhaps with the help of some money laundering exchange, perhaps Mt. Gox itself. BitCoin is, after all, the preferred currency of drug dealers, at least until the trail of fingerprints fades and they can convert the digital money into something more tangible and fungible, like U.S. dollars.

I keep my cash in a couple of credit unions and a bank. It’s unlikely that a credit union like Pentagon Federal, where I have a couple of accounts, is going to go under like Mt. Gox. In the unlikely event that it does, I’ll get my money back because it is backed up by what amounts to the full faith and credit of the United States. Mt. Gox was backed up by the full faith and credit of, well, Mt. Gox. It’s like asking the fox to guard the henhouse.

And there’s the rub with BitCoin exchanges. When you create a currency detached from a government that will assert and protect its value, there is no one to complain to when your BitCoin bank goes bust. The government of Japan is looking into the event, but it is mostly hands off. It never promised to underwrite Mt. Gox, and Mt. Gox never asked it to. In any event, Japan underwrites its yen, not BitCoins. Japan has a vested interest in keeping its own currency sound. It has no such interest in keeping another currency sound, particularly one it cannot control.

An exchange like Mt. Gox could of course seek out governments to underwrite it. Those BitCoin exchanges and banks that want to remain viable are going to have to do something just like this. Good luck with that. In doing so, though, they defeat the whole purpose of BitCoin. BitCoin is about a libertarian ideal; it’s about money having a value independent of government apron strings. Affiliate the BitCoin currency in a BitCoin exchange with a government, and you tacitly admit that BitCoin is not a libertarian currency after all. In short, you have to give up the notion that money can be decoupled from government control.

It’s unlikely that many governments will be willing to protect BitCoin exchanges. It is reasonable for a government to protect an asset it actually controls: its national currency. For a government to protect BitCoin, it would reasonably expect to control the number of BitCoins in circulation and to set rules for their use and misuse. It can do neither, which means it would be asked to put the full faith and credit of its country behind an erratic currency that could prove digitally worthless at any time. This strikes me as a foolish thing to do, but there may be entrepreneurial countries out there, say, the Cayman Islands, that will take the plunge. The risk might be worth the rewards.

I don’t think you have to worry about governments like Germany, England, Japan, China and the United States doing something this foolish. If there is any organization that might see profit in this, it will probably be the Mafia, or other criminal syndicates, many of whom are already using BitCoins as a mechanism for money laundering.

Doubtless other BitCoin exchanges will work very hard to sell a trust that is now deservedly absent from these exchanges. As I pointed out in an earlier post, it’s going to be a hard sell given that BitCoin’s value rests essentially on faith in its mathematics and algorithms.

Absent from the minds of BitCoin true believers is an understanding that money must be tied to a governmental entity to be real money. It’s tied to governments for many reasons, but primarily because governments are required to govern, and this includes having the ability to enforce their laws and to collect taxes. Money is based on the idea that a government can force everyone to play by the same rules, including using the same currency as a means of exchange within the country for lawful debts. The truth is, there are no rules with BitCoin other than its math. It is a lawless currency. That Mt. Gox’s treasury of BitCoins could be plundered with impunity proves it.

Libertarianism is built on the idea of caveat emptor: let the buyer beware. No warranties are expressed or implied, and even warranties that are expressed depend on the trustworthiness of the seller. No one can force the seller to do squat. The best a buyer can hope for is to track the thief down and take justice with his fists or a gun. That’s no way to run an economy, which is why libertarianism is an ideology that simply does not work in the real world.

Again, a word to the wise: just say no to BitCoins.

 

Bitcoin is libertarian bit nonsense

Are you intrigued by Bitcoin? It’s a digital currency much in the news these days. It even got a hearing on Capitol Hill last month. Surprisingly the foundation overseeing Bitcoin came out relatively unscathed. Some places are accepting Bitcoins as payment for actual goods and services. They do so on the assumption the currency has value. Like any other currency it has value because some people assert it has value.

Which raises the question: what is its value? There are clearly things you can do with Bitcoin that are convenient. It’s a sort of digital cash for our electronic age. Only it’s not really cash. Real cash doesn’t leave fingerprints. Make a Bitcoin transaction and the transaction is recorded in a public ledger, so every coin carries a traceable record of its ownership.

If there is value in Bitcoin, maybe it comes from the faith we place in its math. There is not much we trust anymore, but you can still trust math, and Bitcoin depends on math, not to mention encryption algorithms, to assert its value. The number of Bitcoins has a finite limit because of the power of math and algorithms. Each attempt to mine a new Bitcoin requires lots of computers spending lots of time and using lots of energy. For all its electronic novelty, it’s hardly an environmentally friendly currency. In fact, it’s bad for the environment.
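To make the mining cost concrete, here is a toy sketch of hash-based proof of work in the same spirit. This is an illustration only, not the actual Bitcoin protocol, which repeatedly hashes block headers with SHA-256 against a vastly harder target:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce so SHA-256(block_data + nonce) starts with
    `difficulty` zero hex digits. Expected work grows ~16x per digit."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each added zero multiplies the expected number of hashes by 16,
# which is where all the computer time and electricity go.
print(mine("example block", 4))
```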

You can’t say that about gold. Granted, the process of getting gold out of the ground is often bad for the environment, but once you have it, there it is, probably to sit in highly protected bank vaults, never to be actually moved or, for that matter, seen. A Bitcoin is entirely virtual, but it depends on lots of computer hardware to mint and to assert its value. You won’t be creating one of these with a pad of paper and a slide rule. In fact, a Bitcoin is entirely dependent on computers and high speed networks. No wonder then that it was abruptly devalued last week when China restricted Bitcoin transactions. Keep it from being used in the world’s most populous country and it has a lot less utility. Of course, it’s useless to anyone without a computer or some sort of digital device, not to mention a network over which to trade the currency. So it’s not even universal. You can’t say that about the U.S. dollar.

The larger question is whether a currency built on nothing but math really can have value. It does have value at the moment, as I can actually trade Bitcoins for U.S. dollars, which in my country is what everyone accepts as currency. In the long run though I think Bitcoins are going to be worthless. I don’t plan to own any of them and maybe I can make a case why you shouldn’t either.

First, there is the question of whether counterfeit Bitcoins can be created. New ones can legitimately be mined if you have the computer horsepower, but if someone finds a way to create them with virtually no computer time, those coins would amount to counterfeits. Call me suspicious, but I bet the NSA either has already figured out a way to hack it or will soon. In short, to trust a Bitcoin you must buy into its assumption that it can’t be hacked. Since the dawn of the computer age, hackers have demonstrated their ability to hack anything. They love the challenge. It’s reasonable to believe that Bitcoin is going to be hacked one of these days.

Second, there’s the question of what its value represents. I’ve discussed the value of money before. My conclusion is that money essentially represents faith that the country coining the currency will remain solvent and viable. I based this conclusion on the observation that a currency’s value falls whenever these assumptions are shaken. Having a currency based on the gold standard doesn’t seem to make any difference, as the United States has been off the gold standard since the 1970s. Printing new currency doesn’t seem to be that big a deal either, provided the new currency is used to acquire assets of value. This is what the Federal Reserve has been doing since the Great Recession: creating money (none of it actually printed, apparently) and using it to buy long term securities like mortgage-backed securities. Curiously, just printing money is not inflationary when it is used to buy tangible goods, provided the institution printing the money is trusted, and the Federal Reserve is trusted. In any event, investors can value or devalue a currency by examining its monetary system and the country’s economy. With Bitcoins, you can’t do this. Bitcoin is backed by no country, which is precisely its appeal to its adherents.

What is Bitcoin really about then? It’s about a political idea; more specifically it’s about libertarianism. It’s trying to be a means by which libertarianism becomes institutionalized. If you are not familiar with libertarianism, it’s all about freedom, buyer beware and minimal (and ideally no) government. Libertarians (at least the committed ones) are vesting their wealth in Bitcoins because it’s how they show loyalty to the cause. They want money to be frictionless and outside governmental control. Arguably, Bitcoin does a good job with this, providing buyers and sellers will accept it as having value.

But libertarianism is an idea, not a thing. Libertarianism is really more of a verb than a noun. A currency though has to be based on something real. The U.S. dollar is essentially backed up by the collective wealth of all of us who possess dollars, or assets valued in dollars, or really any property within the United States. It’s based on something tangible. You buy a house in dollars instead of Bitcoins because everyone in the transaction has faith that those dollars mean something. This is because everyone else is trading in dollars too to buy real goods and services. If the U.S. dollar gets too low, there are things we can do about it. We can petition Congress or the White House to take action. There is no one to go to to complain about the sinking value of your Bitcoins. Assuming the currency cannot be counterfeited, its only value is its finiteness, enforced by math and increasingly expensive computational processes to make new coins. That’s it. As those libertarians say, caveat emptor (buyer beware). Bitcoin buyers, caveat emptor!

This tells me something important: Bitcoin is a bogus currency, at least in the long term. Yes, you can buy stuff with it now, but only from a very limited number of sellers: those who have faith in the idea of a libertarian currency. It’s obvious to me that libertarianism is just not workable as a sustainable way of governing. I have no faith in it whatsoever because its philosophical underpinnings do not actually work in the real world.

I would like to see it tried in Glenn Beck’s libertarian community, however, if that ever gets built. One thing is for sure: no one is going to build it for Bitcoins. They are going to demand U.S. dollars.

 

The feminization of Yahoo News

I have nothing against women as CEOs. It’s clear that women make up a tiny minority of corporate CEOs and members of corporate boards, and recent news reports suggest that progress on this front has stalled. But there are a few of them. Few have been more prominent in the news lately than Marissa Mayer, the relatively new 38-year-old CEO of Yahoo. She has been making a splash, not just for leaving a job at Google to take on the troubled Yahoo, but also for her many changes to the relatively staid and unprofitable Yahoo, Inc., something of a great-grandfather of the World Wide Web. These changes included bringing a nursery into the office for her now year-old son and requiring her employees to actually come into the office instead of telecommuting.

Mayer, though, has a track record of success at Google, and she’s proving adept so far at changing the dynamics at Yahoo. Her old employer Google must really miss her, because she successfully led a number of divisions there, including some of its principal products: its search engine and Gmail. Yahoo had been losing revenue and market share, but things are quickly turning around with Mayer in charge. Yahoo now gets more web traffic than Google again, no small feat, and while not quite profitable again, it is making strides toward profitability. She has purchased the blogging site Tumblr, and Yahoo’s stock price is rebounding. It has more than doubled during her brief tenure as CEO.

So she is doing well by stockholders, and with her reputation she can probably turn Yahoo around, which is good, because a World Wide Web mostly overseen by the benevolent Google overlord is not a healthy dynamic. She is getting more eyeballs and more interest from advertisers. Yahoo stockholders should be happy with her performance to date, and hope that they can keep her around.

I was a Yahoo fan from early on. At one time it was the only destination worth going to on the web. It was my home page for many years. It attempted to index the Internet, and actual humans were categorizing content. I’m old enough to remember what Yahoo really stands for (Yet Another Hierarchical Officious Oracle). It was the first web site to do a really good job, in a 1995 kind of way, of helping us find stuff on this new medium called the Internet. For many years I had a Yahoo email account. But Yahoo proved not very agile as it aged, and various ineffective CEOs tended to make things worse.

I go to Yahoo less often than I used to and use Google and its services even more than I used to, although I often feel guilty about it. But I do keep Yahoo News as my principal news page, or did until recently. It was a habit hard to break. The page was edited by actual human beings, unlike Google News, which is edited by a human-programmed computer algorithm. Considering Mayer also ran Google News, I expected Yahoo News might come to look a lot more like Google News. It is taking on some of its characteristics, including more personalization options. It is also, I am sorry to say, loading up the news site with a lot of fluff. This is making me very unhappy.

Stockholders are probably applauding this move to add these “human interest” stories. If you go to Yahoo News, you can’t possibly miss them, as they comprise about one of every three stories on its main page. It’s not quite National Enquirer stuff, but it’s a lot of Good Morning America-like stuff. In fact, Good Morning America (ABC) is one of their featured content providers. What do I mean by fluff? Well, there was the recent live broadcast of The Sound of Music on TV, and Yahoo News was all over it (it was mostly dissed by the critics). Is this really news? It probably gets a lot of clicks so it surely must be interesting to a lot of people. But no, this is not really news, except possibly in the category of entertainment news. It would be fodder for Variety’s web site. It’s not news in my book.

Perhaps it is just me. News to me is what a newspaper like the Wall Street Journal or the Los Angeles Times delivers. I expect to learn not just what is happening right now that could affect my locality, my country, the world and me, but why. I expect some in-depth reporting on an issue so I can understand the dynamics of the many pressing issues of the day. In short, I read news not to be entertained, but to gain knowledge. I need lots of facts and I need unbiased, in-depth analysis of those facts by reporters who sift through these issues and talk with leading authorities. I seek knowledge because to change the world I must understand not just how it behaves but why it behaves the way it does. News should have its finger on the pulse of the planet and should tell citizens like me, who are reasonably informed, more of what we need to know to stay informed.

I’m not getting much of this on Yahoo News anymore, and I hold Marissa Mayer to blame. I get lots of popcorn articles like this Sound of Music piffle, which today includes an ancillary story about the von Trapps’ mountain lodge in Stowe, Vermont. I get Dear Abby, now available only online at Yahoo but linked daily through its “news” page. I get stories about the lottery. And when I do deign to read an article that looks like real news, it is often short when I want depth. Worse, I get articles that aren’t articles at all, but you don’t know that until you click on them. Instead, they are videos that start loading whether you want them to or not, and for which you have to “pay the freight” of an annoying commercial first. Expect more of the same, because one of Marissa Mayer’s recent ideas is to hire Katie Couric as Yahoo’s “global anchor”. I expect lots of little fluff pieces like this and “lite news” interwoven into its news site during the course of the day. It’s all part of the Yahoo experience, or something that Mayer is planning.

It may be successful for Yahoo and Mayer, but it’s not what I’m looking for in a news site, because most of this is not really news. It’s marketing designed to attract eyeballs, perhaps making it a somewhat toned-down version of the Huffington Post, another site run by a female overlord, full of sauce but little relevant news.

I don’t like where this is leading. It will probably lead to profitability for Yahoo, but as far as leaving us citizens better informed, it’s a poor effort at best. There are plenty of other news sites out there, including CNN and all the major networks, but most of these are becoming less newsworthy and saucier as well. Which leaves me looking for a real news site. There is the reliable and local washingtonpost.com site, but I get most of that content from my newspaper subscription. Ironically, I find myself getting most of my news from one of Mayer’s old projects: Google News. For the most part, unless you choose to delve into an area like Entertainment, its news is topical and relevant, and in-depth articles tend to get priority. I find I like the algorithmic approach better than Mayer’s approach at Yahoo. I’m just hoping Google doesn’t try to sauce up its news algorithms.

Marissa, consider that public service may be part of Yahoo’s mission as well as enriching shareholders. How about a version of Yahoo News that is just news, instead of so much fluff, like maybe real.news.yahoo.com? And while I am making suggestions, please get rid of the cutesy Yahoo News animated image in the top left corner of the site. And surely you have noticed that because your top menu bar is pinned to the top of the window, paging down hides some content, which means you have to cursor up or drag the window a bit to read it. And you often have the same article, or a variation of it, on the same page. Can’t these things be cleaned up?

It’s probably moot anyway. I like your old product better, so I’m hanging out now on Google News.

 

How healthcare.gov failed: the technical aspects

(Also read parts 1, 2 and 3.)

A lot of how healthcare.gov works is opaque. This makes it hard to say authoritatively where all the problems lie and even harder to say how they can be solved. Clearly my knowledge is imperfect and thus my critiques are not perfect either. I am left to critique what the press has reported and what has come out in public statements and hearings. I can make reasonable inferences but my judgment will be somewhat off the mark because of the project’s opacity.

It didn’t have to be this way. The site was constructed with the typical approach used in Washington, which is to contract out the mess to a bunch of highly paid beltway bandit firms. Give them lots of money and hope that, with their impressive credentials, something usable will emerge. It was done this way because that’s how it’s always done. Although lots of major software projects follow an open source approach, this thinking hasn’t permeated the government yet, at least not much of it. Open source means the software (the code) is available for anyone to read, critique and suggest improvements to. It’s posted on the web. It can be downloaded, compiled and installed by anyone with the right tools and equipment.

It’s not a given that open sourcing this project was the right way to go. Open source projects work best for projects that are generic and used broadly. For every successful open source project like the Apache Web Server there are many more abandoned open source projects that are proposed but attract little attention. Sites like sourceforge.net are full of these.

In the case of healthcare.gov, an open source approach likely would have worked, and would have resulted in a system that cost orders of magnitude less and was much more usable. It would still have needed an architectural committee, some governance structure and programmers as well, principally to write a first draft of the code. Given its visibility and importance to the nation, it would naturally have attracted many of our most talented programmers and software engineers, almost all of whom would have donated their time. Contractors would still have been needed, but many of them would have been engaged in selecting and integrating code changes submitted by the public.

If this model had been used, there probably would have been a code repository on github.com. Programmers would have submitted changes through its version control system. In general, the open source model works because the more eyes that can critique and suggest changes to code, the better the end result is likely to be. It would have given a sense of national ownership to the project. Programmers like to brag about their genius, and some of our best and brightest doubtless would have tooted their own horns at their contributions to the healthcare.gov code.

It has been suggested that it is not too late to open source the project. I signed a petition on whitehouse.gov asking the president to do just this. Unfortunately, the petition process takes time. Assuming it gets enough signers to get a response from the White House, it is likely to be moot by the time it is actively taken up.

I would also like to have seen the system’s architecture put out for public comment. As I noted in my last post, the architecture is the scaffolding on which the drywall (the code) is hung. It too was largely opaque. We were asked to trust that overpaid beltway bandits had chosen the right solutions. Had these documents been posted early in the process, professionals like me could have added public comments, and maybe a better architecture would have resulted. It was constructed instead inside the comfort of a black box known as a contract.

After its deployment, we can look at what was rolled out and critique it. What programmers can critique is principally its user interface, because we can inspect it in detail. The user interface is important, but its mistakes are also relatively easy to fix, and they have been fixed to some extent. For example, the user interface now allows you to browse insurance plans without first establishing an account. Requiring an account first was one of those mistakes that are obvious to most people. You don’t need to create an account on amazon.com to shop for a book. It was a poorly informed political decision. Ideally someone with user interface credentials would have pushed back on it.

What we saw at the system’s rollout was a nice looking screen full of elements that each had to be fetched through separate calls back to the web server. Every image on a screen has to be fetched as a separate request. Each JavaScript library and cascading style sheet (CSS) file also has to be fetched separately. In general, all of these requests have to complete before the page is usable. So to speed up page load time, the idea is to minimize how many fetches are needed, and to fetch only what is needed and nothing else. Each image does not have to be fetched separately. Rather, a composite image can be sent as one file, and through the magic of CSS each image on the page can be a snippet of that larger image and yet appear as a separate image. JavaScript libraries can be collapsed into one file and compressed using a process called minification.
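As a rough illustration of the fewer-fetches idea, here is a minimal build-step sketch that concatenates several JavaScript files into one bundle and pre-compresses it, so the browser makes one request instead of several. The file names are hypothetical, and real minification would additionally strip whitespace and shorten identifiers:

```python
import gzip
from pathlib import Path

def bundle_js(paths, out="bundle.js"):
    """Concatenate several JavaScript files into one so the browser
    makes a single request instead of one request per library."""
    combined = "\n;\n".join(Path(p).read_text() for p in paths)
    Path(out).write_text(combined)
    # Pre-compress too, so the web server can send the smaller file.
    Path(out + ".gz").write_bytes(gzip.compress(combined.encode()))

bundle_js(["menus.js", "forms.js", "validation.js"])  # hypothetical files
```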

When the rollout first happened there was a lot of uproarious laughter from programmers because of all the obvious mistakes. For example, images can be sent with information that tells the browser “you can keep this in your local storage instead of fetching it every time the user reloads the page”. If you don’t expect the contents of a file to change, don’t keep sending it over and over again! There are all sorts of ways to speed up the presentation of a web page. Google has figured out some tricks, for example, and has even published a PageSpeed module for the Apache web server. Considering these pages will be seen by tens or hundreds of millions of Americans, you would expect a contractor would have thought through these things, but they either didn’t or didn’t have the time to complete them. (These techniques are not difficult, so that should not be an excuse.) It suggests that at least for the user interface portion of the project, a bunch of junior programmers were used. Tsk tsk.
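The caching hint described above is just an HTTP response header. A minimal sketch using only Python’s standard library, serving static files from the current directory:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CachingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Static assets rarely change: tell browsers they may keep
        # them for 30 days instead of re-fetching on every page load.
        if self.path.endswith((".png", ".gif", ".jpg", ".css", ".js")):
            self.send_header("Cache-Control", "public, max-age=2592000")
        super().end_headers()

HTTPServer(("", 8000), CachingHandler).serve_forever()
```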

Until the code for healthcare.gov is published, it’s really hard to assess the technical mistakes of the project, but clearly there were many. What professionals like me can see, though, is pretty alarming, which may explain why the code has not been released. It probably will be in time, as all government software is, in theory, in the public domain. Most likely when all the sloppy code behind the system is revealed at last, programmers will be amazed that the system worked at all. So consider this post preliminary. Many of healthcare.gov’s dirty secrets are still to be revealed.

 

How healthcare.gov failed: the architectural aspects

(Read parts 1, 2 and 4.)

If you were building a house without quite knowing how it would turn out when you started, one strategy would be to build a house made of Lego. Okay, not literally, as it would not be very livable. But you might borrow the idea of Lego. Each Lego part is interchangeable with every other. You press pieces into the shape you want. If you find out halfway through the project that it’s not quite what you want, you can break off some of the Lego and restart that part, while keeping the parts that you liked.

The architects of healthcare.gov had some of this in their architecture: a “data hub” that would be a big and common message broker. You need something like this because to qualify someone for health insurance you have to verify a lot of facts against various external data sources. A common messaging system makes a lot of sense, but it apparently wasn’t built quite right. For one thing, it did not scale very well under peak demand. A messaging system is only as fast as its slowest component. If the pipe is not big enough you install a bigger pipe. Even the biggest pipe won’t be of much use if the response time to an external data source is slow. This is made worse because generally an engineer cannot control aspects of external systems. For example, the system probably needs to check a person’s adjusted gross income from their last tax return to determine their subsidy. However, the IRS system may only support ten queries per second. Throw a thousand queries per second at it and the IRS computer is going to say “too busy!” if it says anything at all and the transaction will fail. From the error messages seen on healthcare.gov, a lot of stuff like this was going on.

There are solutions to problems like these, and they lie in fixing the system’s architecture. The general solution is to replicate the data from these external sources inside the system, where you can control them, and query the replicas instead of querying the external sources directly. Each data source can also be architected so that new instances can be spawned as demand increases. Of course, this implies that you can acquire the information from the source. Since most of these are federal sources, it was possible, providing the Federal Chief Technology Officer used his leverage. Most likely, the currency of these data is not a critical concern. Every new tax filing that came into the IRS would not have to be instantly replicated into a cloned instance. Updating the replica once a day was probably plenty, and once a month might well have sufficed.
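A minimal sketch of this replicate-and-refresh pattern. The names and the daily refresh interval are illustrative assumptions, not the actual healthcare.gov design:

```python
import time

class ReplicatedSource:
    """Answer queries from a local copy of an external data source,
    refreshed on a schedule, instead of hitting the slow, rate-limited
    external system on every request."""

    def __init__(self, fetch_all, max_age_seconds=86400):
        self._fetch_all = fetch_all      # bulk load from the external source
        self._max_age = max_age_seconds  # e.g., refresh daily
        self._data = {}
        self._loaded_at = 0.0

    def lookup(self, key):
        if time.time() - self._loaded_at > self._max_age:
            self._data = self._fetch_all()  # one bulk transfer, off the hot path
            self._loaded_at = time.time()
        return self._data.get(key)

# Queries now scale with local capacity rather than with the
# ten-queries-per-second ceiling of the external system.
```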

The network itself was almost certainly a private and encrypted network given that privacy data traverses it. A good network engineer will plan for traffic ten to a hundred times as large as the maximum anticipated in the requirements, and make sure that redundant circuits with failover detection and automatic switchover are engineered in too. In general, it’s good to keep this kind of architecture as simple as possible, but bells and whistles certainly were possible: for example, using message queues to transfer the data and strict routing rules to handle priority traffic.
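One of those bells and whistles, priority routing over a message queue, is simple to sketch (illustrative only):

```python
import queue

broker = queue.PriorityQueue()

# Lower number = higher priority: interactive eligibility checks
# jump ahead of bulk background traffic.
broker.put((0, "eligibility check, applicant A"))
broker.put((9, "nightly reconciliation record"))
broker.put((0, "eligibility check, applicant B"))

while not broker.empty():
    priority, message = broker.get()
    print(priority, message)  # priority-0 messages come out first
```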

Requirements that arrive late can introduce big problems for software engineers. Based on what you do know, though, it is possible to run simulations of system behavior early in the life cycle of the project. You can create a pretend data source for IRS data that, for example, always returns an “OK” while you test the basic functionality of the system. I have no idea if something like this was done early on, but I doubt it. It should have been if it wasn’t. Once the interaction with these pretend external data sources was simulated, complexity could be added to the information returned by each source, perhaps error messages or messages like “No such tax record exists for this person”, to see how the system itself would behave, with attention to the user experience through the web interface as well. The handshake with these external data sources has to be carefully defined, and using a common protocol is a smart way to go for this kind of messaging. Some sort of message broker on an application server probably holds the business logic that orders the sequence of calls. It too had to be made scalable, so that multiple instances could be spawned based on demand.
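A minimal sketch of such a pretend data source: stub out the IRS call so the rest of the system can be exercised long before the real interface is available. All names here are hypothetical:

```python
class StubIRSSource:
    """Stand-in for the real IRS interface during early system testing."""

    def __init__(self, behavior="ok"):
        self.behavior = behavior

    def get_adjusted_gross_income(self, taxpayer_id):
        if self.behavior == "ok":
            return {"status": "OK", "agi": 40000}  # canned happy-path reply
        if self.behavior == "no_record":
            return {"status": "NO_RECORD"}         # later: exercise error paths
        raise TimeoutError("simulated slow or unavailable external system")

# Wire the system to StubIRSSource("ok") for basic functional tests,
# then swap in "no_record" and timeout behaviors to watch how the rest
# of the system, and the user experience, hold up.
```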

This stuff is admittedly pretty hard to engineer. It is not the sort of systems engineering that is done every day, and probably not by a vendor like CGI Federal. But the firms and the talent are out there to do these things, and they would have been done with the proper kind of systems engineer in charge. This kind of architecture also allows business rule changes to be centralized, allowing for the introduction of different data sources late in the life cycle. Properly architected, this is one way to handle changing requirements, provided a business-rules server running business rules software is part of the design.

None of this is likely to be obvious to a largely non-technical federal staff groomed for management and not systems engineering. So a technology advisory board filled with people who understand these advanced topics certainly was needed from project inception. Any project of sufficient size, scope, cost or of high political significance needs a body with teeth like this.

Today at a Congressional hearing, officials at CGI Federal unsurprisingly declared that they were not at fault: their subsystems all met the specifications. It’s unclear whether those subsystems were also engineered to scale on demand. The crux of the architectural problem, though, was clearly in the message communications between these system components, as that is where the system seems to break down.

A lesson to learn from this debacle is that as much effort needs to go into engineering a flexible system as goes into engineering each component. Testing the system early under simulated conditions, then, as it matured, under more complex conditions and higher loads, would have detected these problems earlier. Presumably there would then have been time to address them before the system went live, because they would have been visible problems. System architecture and system testing are thus vital for complex message-based systems like healthcare.gov, and a top-notch systems engineering plan needed to be at its centerpiece, particularly since the work was split among multiple vendors, each responsible for its own subsystem.

Technical mistakes will be discussed in the last post on this topic.

 

How healthcare.gov failed: the programmatic aspects

(Also read parts 1, 3 and 4.)

I am getting some feedback: healthcare.gov isn’t really a failure. People are using the website to get health insurance, albeit not without considerable hassle at times. I’ll grant you that. I’ll also grant you that this was a heck of a technical challenge, the sort I would have gladly taken a pass on, even for ten times my salary. It’s a failure in that it failed to measure up to expectations. President Obama said there would be “glitches”, but these were far more than glitches. If this were a class project, a very generous professor might give it a D. I’d give it a D-, and then only after a few beers. Since I don’t imbibe, I give it an F.

In the last post, I looked at the political mistakes that were made. Today I’ll look at the programmatic mistakes. I’m talking about how in general the program was managed.

Some of it is probably not the fault of the program or project manager. This is because they were following the law, or at least regulation. And to follow the law you have to follow the FAR, i.e. the Federal Acquisition Regulation. It’s the rulebook for buying stuff in the federal government, including contracted services. Violating the FAR can put you in prison, which is why any project of more than tiny size has a contracting officer assigned to it. In general, the government wants to get good value when it makes a purchase. Value usually, but not always, translates into lowest price. With some exceptions, the government considers having contractors construct a national portal for acquiring health insurance to be the same as building a bridge. Put out the requirements for an open bid and select the cheapest source. Do this and taxpayers will rejoice.

This contract had a lot of uncertainty, which meant it had red flags. The uncertainty was manifested in many areas, but certainly demonstrated in requirements that were not locked down until this year. I’d not want to waste my time coding something that I might have to recode because the requirements changed. This uncertainty was reflected in how the contract was bid. It’s hard to bid it as a fixed price contract when you don’t know exactly what you are building. If you were building a house where every day the owner was directing changes to the design you wouldn’t expect builders to do it using a fixed price contract. Same thing here. It appears the contract was largely solicited as “time and materials”. This accounts in part for total costs, which at the moment are approaching half a billion dollars. This kind of work tends to be expensive by its nature. CGI Federal probably had the lowest cost per hour, which let it win the bid.

There is some flexibility in choosing a contractor based on their experience constructing things a lot like what you want built. CGI Federal is a big, honking contractor that gets a lot of its business in government contracts. Like most of these firms, it has had its share of failures.  A system of the size of healthcare.gov is a special animal. I am not sure that any of the typical prime contractors in the government software space were qualified to build something like this, at least not if you wanted it done right.

There is some flexibility allowed in the statement of work (SOW), generally put together by the program manager with help from a lot of others. I don’t know precisely what rules applied to the contracting process here, but it likely was possible, probably by expending a lot of political capital, to create a SOW that would have properly framed the contracting process so something actually usable could be constructed. A proper SOW should have included criteria for the contractor like:

  • Demonstrated experience successfully creating and managing very large, multi-vendor software projects on time that meet requirements that change late in the system life cycle
  • Demonstrated ability to construct interactive web-based software systems capable of scaling seamlessly on demand and interacting quickly with disparate data sources supplied by third parties

The right SOW would have excluded a lot of vendors, probably including CGI Federal and very possibly some of the big players in this game like Unisys, IBM and Northrop Grumman. Yes, many of these vendors have built pretty big systems, but their records are spotty at best, and their mistakes are often overlooked. Until recently I used a Northrop Grumman system, govtrip.com, for my federal travel. They did build it, but not successfully. For more than a year the system was painfully slow and the user interface truly sucked.

Successfully building a system of this type that was highly usable upon initial deployment should be what qualifies a contractor to bid on work like this. Offhand I don’t know who would qualify. I do know whom I would have wanted to do the work: Amazon.com. They know how to create large, interactive and usable websites that scale on demand. Granted, even Amazon Web Services is not perfect, with occasional outages of its cloud network, but we’re talking a hassle factor of maybe 0.1% compared to what users have experienced with healthcare.gov. They used to do this work for other retailers, but may have gotten out of that business. I would have appealed to their patriotic senses, if they had any, to get them to bid on this work. In any event, if they bid, they did not get the contract. So there was a serious problem either with the SOW or with the “one size fits all” federal contracting regulations that the doubtlessly very serious contracting officer for this project followed.

The size of this project, though, really made building it in-house not an option. So a board consisting of the best in-house web talent and program management talent in the government should have overseen it. Others have noted that the team that built President Obama’s campaign websites, used to win two elections, would have been great in this role. In any event, the project needed this kind of panel from the moment the statement of work was put together through the life of the project, including post-deployment.

What they would have told those in charge was probably things those in charge did not want to hear, but should have heard. The project should be delivered incrementally, not all at once. It should not be deadline driven. Given the constantly changing requirements, risk management strategies should have been employed throughout. When I talk about architectural and technical mistakes in future posts, I’ll get into some of these.

In short, this project was a very different animal: highly visible, highly risky, with requirements hard to lock down and with technical assumptions (like the assumption that most states would build their own exchanges) far off the mark. You cannot build a system like this successfully and meet every rule in the FAR. It needed waivers from senior leaders in the administration to do it in a way that would actually work in the 21st century, rather than following contracting procedures modeled on the thrifty acquisition of commodities like toilet paper. An exception might even have needed to be written into the ACA bill that became law.

Next: architectural mistakes.

 

How healthcare.gov failed: the political aspects

(Also read parts 2, 3 and 4.)

You know a federal IT manager has a problem when the President of the United States is dissing the very web site he was paid to manage. That’s what President Obama was doing today with the healthcare.gov site, the rollout of which was botched by any standard. Also botched was the obscene amount of money paid for the site, obscene even if it had worked. The Canadian contractor CGI Federal got the award, initially $93.7M, but with extra work is now at more than $292M. This is a crazy amount of money to pay for an interactive site and may be the most expensive site of its kind ever purchased with tax dollars.

I wrote a week or so back about my initial critique of the website. It is easy to criticize in hindsight. I can’t claim to know all of the site’s requirements. From news reports it is not too hard to infer a lot of them. There were a number of external data sources such as at the IRS and Social Security Administration that had to be queried to do things like figure out your eligibility for a subsidy, if any. There were many business rules that had to be followed. There were tight security rules to follow because Privacy Act data had to be stored. And there were accessibility rules required of any federal or federally funded website, to ensure access to the visually impaired. All this plus the site had to scale to meet demand.

As a certified software engineer (MS Software Systems Engineering, 1999, George Mason University) and a federal employee with more than twenty-five years’ experience designing, maintaining and managing systems and websites, I can speak with some authority, in part because I have made many of the mistakes I will allude to, just not so spectacularly. I learned from my mistakes. There are many dimensions to engineering a site like this: political, programmatic, architectural and technical. I plan to take each of these in turn in various posts.

Today: the political dimension.

All work for the government is inherently political. This is true even in a science organization like the one where I work. You can’t avoid it because politics is built into the rules and regulations you must follow, such as the Privacy Act and accessibility requirements (Section 508 of the Rehabilitation Act, to be specific). Projects of a certain size, like healthcare.gov, fall into the bucket of a program. A program is basically a set of interrelated projects that, because of their overall size, need to be packaged, managed and sold politically, and that typically continues indefinitely. Managing a program requires a fistful of certifications. Having the certifications, though, is not enough. The effective program manager has to really understand all the power players at work and market to them. It’s probably the toughest job out there, particularly for very large or highly visible programs. I am sure the program manager for this project tried his or her best, but they got the wrong person. Someone with a lot of experience, a proven ability to manage a program this large successfully, and the right political skills was needed.

The right program manager would have spoken truth to power, tactfully of course. There were red flags all over this project. Few things are more controversial than health care. He or she probably reported directly to HHS Secretary Kathleen Sebelius. To start, he or she should have mentioned the triple constraint. It affects all projects and it is basically this: a project is naturally bounded by cost, schedule and scope. What this means in practice is that if the project was deadline driven, then scope would have to be reduced. This means not all the features of the website could be delivered by October 1, 2013. If the minimal scope was too big, it may have been technically impossible to deliver by the deadline. The typical political response is to throw money at the problem, which is probably why CGI Federal has billed more than $200M so far. Unfortunately, at some point throwing more money at a project is counterproductive. It actually makes the project worse. This means there is an upper limit to what money can buy you as far as features for a given deadline. Someone was probably being dishonest to power by not laying these facts on the table, because it was politically inconvenient to do so. It was either that or someone in power refused to listen. If that was what happened, then Secretary Sebelius should resign. If it was the program manager, he or she should resign.

The White House has some blame here too. This is the Obama Administration’s signature initiative. The Chief Technology Officer for the government should have been all over this project. He should have found the best talent inside and outside the government and brought those resources to bear for HHS, which doesn’t often handle projects like this. Instead, it was developed largely hands-off. The CTO should have warned the White House of the high probability of failure, and recommended early on ways to prevent it. Either he did not do this or his warning fell on deaf ears. The Federal CTO wields enormous political capital. It’s hard to imagine that, had he squawked, the White House Chief of Staff would have ignored him.

In any event, those in the chain of command must have largely acted in CEO mode. “Tut, tut, don’t bother me with details. No excuses, just get it done,” was probably their mentality. Given the prominence of this initiative, everyone from the president on down should have been engaged. They were not.

So a good part of the failure of healthcare.gov is simply an absence of the right kind of leadership. This was a problem that required getting out of the ivory tower and getting your hands dirty. Shame on all who acted in this way.

I don’t operate at the program level, but I know enough about it to know I don’t want to. I don’t have the requisite people skills. But if I did, I would not have taken responsibility for this work without written and personal assurances from these stakeholders that they would provide the resources to let the project succeed. I’d also want assurances that they would empower me and support me to the maximum extent possible to make it succeed.

Next: the programmatic missteps.

 

Healthcare.gov and the problems with interactive federal websites

Today’s Washington Post highlights problems with the new healthcare.gov site, the website used by citizens to get insurance under the Affordable Care Act. The article also talks about the problems the federal government is having in general managing information technology (IT). As someone who just happens to manage such a site for the government, I figure I have something unique to contribute to this discussion.

Some of the problems the health care site is experiencing were predictable, but some were embarrassingly unnecessary. Off the top of my head I can see two clear problems: splitting the work between multiple contractors, and the hard deadline of bringing the website up on October 1, 2013, no matter what.

It’s unclear why HHS chose to have the work done by two contractors. The presentation (web side) was done by one contractor and the back end (server side) was done by another. This likely had something to do with federal contracting regulations. It perhaps was seen as a risk mitigation strategy at the contracting level, or a way to keep the overall cost low. It’s never a great idea, though, for two contractors to each do their work mostly mindless of the other’s. Each was doing subsystem development, and as subsystems it’s possible that each worked optimally. But from the public’s perspective it is just one system. What clearly got skipped was serious system testing. System testing is designed to test how the system behaves from a user’s perspective. A subset of system testing is load testing. Load testing sees how the system reacts when it is under a lot of stress. Clearly some of the requirements for initial use of the system wildly underestimated the traffic the site actually experienced. But it also looks like, in the effort to meet an arbitrary deadline, load testing, and correcting the problems it revealed, could not happen in time.
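Load testing of the kind described can start very simply: throw concurrent requests at the system and watch latency and error rates. A minimal sketch with a placeholder URL; real load tests ramp traffic up in stages and exercise realistic user flows:

```python
import concurrent.futures
import time
import urllib.request

URL = "https://example.gov/"  # placeholder, not a real endpoint

def one_request(_):
    start = time.time()
    try:
        urllib.request.urlopen(URL, timeout=10).read()
        return time.time() - start, True
    except Exception:
        return time.time() - start, False

# Simulate 100 concurrent users issuing 1,000 requests in total.
with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(one_request, range(1000)))

errors = sum(1 for _, ok in results if not ok)
latencies = sorted(t for t, ok in results if ok)
if latencies:
    print(f"errors: {errors}/1000, "
          f"median latency: {latencies[len(latencies) // 2]:.2f}s")
```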

It also looks like the use cases, i.e. the user interaction stories that describe how the system would be used, were off the mark. It turned out that most initial users were just shopping around and trying to find basic information. This resulted in a lot of browsing but little in the way of actual buying. Most consumers, particularly when choosing something as complex as health insurance, will want some idea of the actual costs before they sign up. The cost of health care is obviously a lot more than the cost of premiums. Copays can add thousands of dollars a year to the actual cost of insurance. This requires reading, study, in many cases asking questions of actual human beings, and then making an informed decision. It will take days or weeks for the typical consumer to figure out which policy will work best for them, which means a lot of traffic to the web site, even when it is working optimally.

The Post article also mentions something I noticed more in my last job than in my current one: that federal employees who manage web sites really don’t understand what they are managing. This is because most agencies don’t believe federal employees actually need experience developing and maintaining web sites. Instead, this work is seen as something that should be contracted out. I was fortunate enough to bring hands-on skills to my last job, and it was one of the reasons I was hired. In general, the government sees the role of a federal employee as to “manage” the system and of contractors as to “develop and maintain” it. This typically leaves the federal employee deficient in the needed technical skills, and thus he or she can easily make poor decisions. Since my last employer just happened to be HHS, I can state this is how they do things. Thus, it’s not surprising the site is experiencing issues.

Even if you do have a federal staff developing and maintaining the site, as I happen to have in my current job, it’s no guarantee that they will have all the needed skills. Acquiring and maintaining those skills requires an investment in time and training, and adequate training money is frequently in short supply. Moreover, the technology changes incredibly quickly, leading to mistakes. These bite me from time to time.

We recently extended our site to add controls that give the user more powerful ways to view data. One of these is a jQuery table sorter library. It allows long displays of data in tables to be filtered and sorted without going back to the server to refresh the data. It’s a neat feature but it did not come free. The software itself was free, but it added marginally to the time it takes a page to fully load, and it takes time to put the data into structures where this functionality can work. The component also gets slow with large tables, or with multiple tables on the same page. Ideally we would have tested this prior to deployment, but we didn’t. It did not occur to me, to my embarrassment; I like to think that I usually catch stuff like this. It’s not a fatal problem in our case, just a second or two of extra load time for certain web pages, and those who have tried the feature love it. Still, the marginal extra page load time may be so annoying for some that they choose to leave the site, so we’re going to go back and reengineer this work so that we only use the sorter with appropriately sized tables.
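
Here is a minimal sketch of that reengineering idea in jQuery. The 500-row cutoff and the tablesorter call are assumptions for illustration, not our production code; the real threshold would come from measuring page load times.

// Only attach the client-side sorter when the table is small enough
// to stay responsive; larger tables fall back to plain server-rendered HTML.
$(function () {
  var MAX_SORTABLE_ROWS = 500; // hypothetical cutoff; tune by measurement

  $('table.sortable').each(function () {
    var rowCount = $(this).find('tbody tr').length;
    if (rowCount <= MAX_SORTABLE_ROWS) {
      $(this).tablesorter(); // assumes the jQuery tablesorter plugin is loaded
    }
  });
});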

Our site, like healthcare.gov, is highly trafficked. I expect that healthcare.gov will get more traffic than our site, which serves thirty to forty million successful page requests per month. Still, scaling web sites is not easy. The latest theory is to put redundant servers “in the cloud” (commercial hosting sites) to use on demand as needed. Unfortunately, “the cloud” itself is an emerging technology. Its premier provider, Amazon Web Services, regularly has embarrassing issues managing its cloud. Using the cloud should be simple but it is not. There is a substantial learning curve and it all must work automatically and seamlessly. The federal government is pushing use of the cloud for its obvious benefits, including cost savings, but it is really not ready for prime-time, mission-critical use. Despite the hassles, if high availability is an absolute requirement, it’s better to host the servers yourself.

The key nugget from the Post’s article is that the people managing these systems in many cases don’t have the technical expertise to do so. It’s sort of like expecting a guy in the front office of a dealership to disassemble and reassemble a car on the lot. The salesman doesn’t need that knowledge to sell cars, but to competently manage a large federal website you really do need hands-on experience. You need to come up from the technical trenches and then add managerial skills to your talents. In general, I think it’s a mistake for federal agencies to outsource web site development. Many of these problems were preventable, although not all of them were. Successful deployment of these kinds of sites depends to a large extent on having a federal staff that knows the right questions to ask. And to really keep up to date on a technology that changes so quickly, it’s better to have federal employees develop these sites themselves. Contractors might still be needed, but more for advice and coaching.

Each interactive federal web site is its own unique system, as healthcare.gov certainly is. The site shows the perils of placing too much trust in contractors and of having a federal managerial staff with insufficient technical skills. Will we ever learn? Probably not. Given shrinking budgets and the mantra that contracting out is always good, it seems we are doomed to repeat these mistakes in the future.

Don’t say I didn’t warn you.

(Update: 10/28/13. This initial post spawned a series of posts on this topic where I looked at this in more depth. You may want to read them, parts 1, 2, 3 and 4.)

 
The Thinker

Ditch Google Glass and give us Google Car!

Google Glass, the company’s internet-friendly eyewear, has been making news lately. It’s the creepy device that looks sort of like glasses (for one eye) that some nerds with close ties to Google are wearing to keep themselves continuously connected to the Internet. It projects information from the Internet onto the inside of the glass, so the wearer can walk around and see internet content at the same time. It also offers voice recognition, so you can interact with the Internet hands-free. The truly creepy thing is that Google Glass can also record what you see in real-time, both audio and video.

It used to be that if you used a camera, its use was overt. A camera is pretty hard to hide. Google Glass could be on all the time, but because it looks like a pair of glasses, we may not react to it like a camera. Yes, it could be recording everything it sees and hears, and perhaps storing it to your Google cloud permanently, and possibly the NSA’s cloud as well. The City of London, where there are cameras on every street corner and most places in between, might actually want people to use Google Glass: it could be one more tool at their disposal to keep an eye on crime. Here in the United States, the whole thing sounds ultra-Big-Brotherish, kind of like the NSA on steroids. It’s not that the NSA is necessarily able to tap into Google Glass content, at least not yet. Give them time and who knows? Whether or not the NSA can tap into Google Glass feeds, the whole idea is creepy at best and repugnant at worst. I don’t like the idea of anyone having a constant video stream from Google Glass in their cloud. I am imagining its use by perverts, voyeurs, estranged lovers and criminals, among others.

Google Glass strikes me as a tool that will make our already disappearing privacy shrink even further, maybe to the point where it can no longer be found, or is simply meaningless. I don’t want dozens of people recording me walking down the street! Moreover, the eyewear is not in the least bit cute, although Google is working with eyewear manufacturers to sex it up. When people wear Google Glass, I think of the Borg, the evil villains, cyborgs really, half men, half machine, introduced in Star Trek: The Next Generation. It’s one thing to become part of the collective. It’s another thing to become part of it so unconsciously. It’s an Orwellian sort of technology. We’re not that good at getting rid of technology that has some uses once it is commercially available. So I am putting my hopes in the power of shame. I am hoping we will reflexively tell people sporting Google Glass: “Your eyewear is creepy. I wish you would never wear it or use it. And it upsets me that you would use it at all, knowing that it can continuously record what you are seeing!” Shame might work, or at least keep Glass so rarely used that it has no appreciable impact. Its true danger comes when its use becomes commonplace and accepted.

Google, though, racks up enormous profits, so I am not too surprised that they have a research arm looking into technologies like this. A lot of their technologies never get beyond the labs. That may well be the case with Google Glass. On the other hand, sometimes you can see a technology that they are working on and think, “I’ve got to have that! Can I have it now?”

I want a Google Car.

A Google Car is completely cool and extremely useful. The Google Car fleet, right now a dozen or so specially configured Toyota Priuses, an Audi TT and some Lexus RX450h SUVs, consists of driverless cars. You leave the driving to the car and it delivers you to your destination safely. Right now it is being tested in Las Vegas; the state of Nevada has actually issued a license, to a car, not a person, for its use within the state. With its computers, internet access and sensors, it takes you where you need to go in complete safety. Granted, there are not a whole lot of Google Cars working today, and they can be categorized as experimental. But right now they have an accident rate that would delight insurance actuaries everywhere: zero. That’s right; at least so far the Google Car has proven completely flawless, if you measure it by its ability to cause an accident. With its radars it is always aware of the traffic around you, not to mention curbs, speed bumps, potholes and traffic congestion, and how to mitigate them. With reflexes far better and more accurate than the best trained racing driver’s, it can keep you safe getting from Point A to Point B. Can it avoid every accident? Possibly not: it has been involved in a few accidents, caused while in manual mode or when it was hit by other cars. It is possible that some crazy driver will come out of left field so quickly that it cannot react in time, and that driver will hit you. But (knock on wood, recalling issues with Boeing’s 787 fleet) so far at least it has not caused any accidents.

Senior citizens in particular should be rooting for the Google Car, and demanding the right to buy one as soon as possible. As they age, seniors eventually lose cognitive and muscular control, which often means they lose the ability to drive safely, along with the freedom that comes with mobility. Yet to stay alive, they must meet with lots of physicians and need a way to get there. Maybe they can take a bus, but it’s a hassle. Maybe they can take a taxi, but it’s expensive. Get in a Google Car, and by using Google’s voice recognition system it will deliver them safely to their doctor. Safely means getting them into the parking lot and into a parking space all by itself. That’s cool technology; it’s mind-boggling stuff when you think about it.

Actuarial statistics don’t lie: if some accident is going to kill you, it is almost surely going to be when you are moving in a car. That’s because human beings drive cars, and we are obviously not perfect creatures. The only amazing thing about humans driving cars is that there are not more accidents. But, particularly if we reach the point where all vehicular driving is automated, death or injury from auto accidents may become a thing of the past, something that simply doesn’t happen except in very rare cases, like an unexpected and sudden bridge collapse.

There is another, more selfish reason why I want a Google Car. I don’t like to drive. I drive out of necessity but I don’t enjoy it. I never have. It requires sustained concentration. It requires constantly juggling lots of real-time inputs with my already overtaxed brain, which, even while I am driving, is also sifting through lots of stuff, including issues at work, various erotic fantasies that have no chance of actualization, issues in computer science which for some reason my brain prioritizes, and my desire to have a constant source of chocolate. I’d much rather leave the driving to the Google Car and concentrate on this other stuff. Or maybe I’d prefer to lie down in the back seat (in a special restraint, just in case of accident) and sleep. It would be a better use of my time than the tedium of driving.

So Google, give Glass the heave-ho and focus on the Car instead. It’s not just what we want but don’t know it yet; it’s what we need. And it will save millions of lives. I’ll be first in line to buy one.

 
