Archive for the ‘Technology’ Category


Changes to subscription services

Sorry if this is a somewhat geeky post.

I am using the Feedburner feed service. It allows many of you to get this blog through various mechanisms that don’t actually require you to come to the site, a great way to read the blog if you are busy and/or lazy. It either emails my posts to you or, by caching them on the Feedburner site, makes them readily available in your feed reader.

Feedburner was the first to succeed in this market. It hadn’t been in operation too many years before it was acquired by Google and stuffed into its vast holdings. There it has been languishing: still working, but ignored. I can tell it is not being maintained because Google turned off the Feedburner API. Google can’t even be bothered to maintain the documentation on the site. For example, it still references Google Reader and iGoogle, which Google retired a year or so back. This means that Feedburner is becoming untrustworthy. Google will probably get rid of it at some point.

Syndication is an important way for me to distribute my blog posts. Feedburner says I had 118 subscribers on average over the last week. This includes 22 active email subscribers. Given Feedburner’s problematic and untrustworthy status, I need to take some actions.

Those of you who subscribe via email will start receiving posts directly from my blog instead. Mail will come from m...@occams-razor.info. It’s possible your email program will move these messages into spam or trash, so you may need to create a rule or filter to put them in your inbox. Each email should contain a link allowing you to unsubscribe.

Those of you who subscribe via news aggregators like feedly.com may need to change the feed URL. Rather than getting it from Feedburner, you need to get it directly from my site. This generic feed URL should work fine: http://occams-razor.info/feed/.

You can also choose a feed for a specific feed protocol.
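Assuming a standard WordPress setup (which this site appears to use), the protocol-specific variants typically look like this; the generic URL above remains the safest bet:

http://occams-razor.info/feed/rss2/ (RSS 2.0)
http://occams-razor.info/feed/atom/ (Atom)
http://occams-razor.info/feed/rdf/ (RSS 1.0 / RDF)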

Thank you for your patience, and thanks for reading the blog.

 

IBM: The Dilbert of companies

IBM
UBM
We all BM
For IBM

HARLIE (the computer), from When HARLIE Was One, by David Gerrold

I grew up in an IBM town. IBM pretty much owned Endicott, New York when I lived in the area. The exception was the Endicott-Johnson shoe factories, which were in serious decline in the 1960s. In fact, IBM was founded in Endicott, New York in 1911.

Big IBM-white boxy concrete buildings line McKinley Avenue and other Endicott streets. If you didn’t work for IBM, you prospered from mooching off of IBM. IBM guys were cool if white guys in white shirts, black pants, narrow ties and short hair could be cool in the 1960s. In any event they lived well, worked hard and gave their all to the company Thomas J. Watson founded. It sure looked like a cool company to me back then. Not only did they rake in all these billions in revenue, but also their employees were happy with terrific pensions, great salaries (because IBM hired top talent only) and had pretty much a guarantee of lifetime employment. Management actually listened to their employees and encouraged them to be creative and innovative. The guys (and they were almost all guys, except in the clerical or punch card pool) wore THINK buttons on their suits and shirts. It was embedded in their logo — so much so that it was hard not to associate IBM with THINK (in capital letters).

That was then, but it bears no resemblance to the IBM of today. At least that’s my conclusion having finished Robert X. Cringely’s eBook on IBM, The Decline and Fall of IBM: End of an American Icon? Cringely has been a tech journalist since the 1980s, and made a name for himself (under a pseudonym, I am pretty sure) writing for InfoWorld, the tech publication that focuses on information technology in the enterprise. I credit InfoWorld for much of my career success, since it was always topical and ahead of current trends, plus it told me stuff I needed to know to succeed in the workplace of the moment.

InfoWorld is still around, but its print publication is long gone. So, in fact, is Robert X. Cringely. Well, not quite. You see, there are two Robert X. Cringelys. There’s the guy who wrote the original columns over many years, and then there’s the trademark “Robert X. Cringely”, which InfoWorld claims to own. So there is still a reputed tech spy named Cringely on infoworld.com, but not the real Cringely, the tech guy who amused us with likely fictitious anecdotes about his relationship with “Pammy”, a curvy younger woman who ran hot and cold. Reading his column was half neat behind-the-scenes tech news, and half soap opera. It was fun and addictive. Anyhow, the first and legitimate Cringely, now 60+, is still one of the few people doing honest information technology journalism, and can be read on his website. And I assume the model in the picture is “Pammy”.

Cringely has been studying IBM for a long time, having grown up in an IBM town like me. He believes the company is ready to implode. This is because, very sadly, the company has morphed into the Dilbert of companies. It is overrun by pointy-haired bosses who are busy working their employees into early graves, when those employees are not being summarily fired so IBM can hire greatly discounted and frequently incompetent replacements from India, who largely have no idea what they are doing and have not mastered the idioms of American English.

From the perspective of Wall Street, IBM is doing great. Its managers are doing a great job of increasing earnings per share quarter after quarter. It’s a metric they are focused on like a laser beam. The problem with that kind of focus is that it distracts you from everything else. As Cringely’s analysis points out, the things that should matter about IBM are simply being ignored. It’s crazy what its managers are doing to its core assets, not to mention its employees. They are burning the seed corn, to use an analogy from the Civil War. For many years they have been relentlessly firing their best employees, mainly because they cost too much. They cut pensions and eventually did away with them altogether. They outsourced a lot of their work overseas, adding huge communication barriers and dispensable employees, often just cheap contractors, to handle technical interactions with their global services customers. These are very profitable customers that need a long-term relationship with a tech firm to manage their complex systems. Doing this right requires a deep understanding of their technical needs and their business, plus a rigorous, engineered approach to managing their complex technical infrastructure. Done right, these are hugely profitable customers for life. IBM used to do this right; now it’s hard to find a company that does it worse, or charges more for the privilege.

Sadly, the more you read of this relatively short eBook, the more appalling the whole thing becomes. (It’s a quick read and at $3.99, this self-published book that no publisher would otherwise touch is also a bargain. About half of it is an appendix of comments he has received over the years.) It doesn’t take much reading, though, to discover what the real problem is: managers come exclusively from the sales ranks, not the technical ranks. Consequently they have little clue what their customers want, and lack the creativity to direct their employees to give customers what they want, or even to bother asking them. Moreover, IBM has more bureaucracy than the federal government, with incredible layers of hierarchical management, despite implementing a flawed version of the Lean efficiency program.

Managers and employees are often widely separated geographically, causing stilted communication that adds cost and delay. Not that employees have the time to give feedback. They are kept working like slaves: sixty or more hours a week, for wages now below industry par, and they are massively overcommitted, with the grim reaper of outsourcing always at their heels. Their customers are being pickpocketed too: they pay highly inflated prices for crappy services, made worse by contracts based on billable hours that are often inflated. The smarter customers have moved on, which is fine with IBM: it then lays off more employees, which helps increase earnings per share, and Wall Street applauds because it equates this with good management.

Cringely has solutions, but IBM’s leadership has proven both tone deaf and hostile to the changes needed to create growth in the company again. As for listening to their employees, they simply can’t be bothered. Which means that IBM is a shadow of its former self. And this has been going on for a decade or so. I know people who have been laid off from IBM. As I read Cringely, I wonder why they didn’t bail long ago. In many cases, it’s because they are in their late 40s and 50s, and it’s hard to find a job that pays as well, or sometimes any job at all.

IBM is also buying back tons of its own stock, often with borrowed money, simply to prop up its earnings per share. No one seems to be looking at its sales, which have been dropping, or at how many of its largest customers have gone elsewhere. No one, least of all its management, is looking at the quality, innovativeness, or value of its product lines. Management simply isn’t interested.

What is IBM management good at? It’s good at creating Potemkin villages: shells that look good to outsiders, but with hollow or non-existent insides. Its major advantage is a huge legacy of accumulated cash from its glory years, which lets it hide its inefficiencies and which it apparently won’t invest in innovative products and services. Touring Endicott, New York, where only a couple hundred of the thousands it employed in its glory days remain, easily demonstrates its hollowness as a corporation.

Cringely’s analysis, voluminous and filled with insider dope, is unfortunately right. I don’t invest in individual stocks, but if the price of increasing earnings per share is to piss off its customers and stop creating products that lead the market or offer the greatest value, then it’s only a matter of time before IBM’s house of cards collapses. From the looks of things, it shouldn’t be too much longer. It won’t matter to its managers. Much of their pay is based on IBM’s earnings per share, so their prosperity is already assured; in some sense they are betting on failure. By tying pay to earnings per share, IBM embraced a false Wall Street value. Real growth and real value come from companies that innovate, like Apple. IBM is proving to be the stodgiest and most tone deaf of companies. The Davids of the corporate world have already hit this Goliath with a rock on the forehead. Goliath simply hasn’t figured out that he is falling to the ground.

At the start of the book, Cringely relates a real story. As a child in the 1950s he had a great idea that he took to IBM. Thomas J. Watson himself read and forwarded his letter, and he actually got an interview with a group of IBM engineers. To say the least, those days are long gone. Watson should be rolling over in his grave. Most likely, though, IBM executives will remain clueless until Wall Street finally notices, and the company collapses into a bunch of sub-prime parts that get sold off by ticked-off stockholders. Pretty much any company out there could do a better job of managing these parts than IBM.

I hope you will read Cringely’s book. It doesn’t take long, and it should make you cry, particularly if you knew the IBM that used to be. It should also make you very angry.

 

The Internet is already not net neutral

Upset by proposals by the Federal Communications Commission to create “express lanes” on the Internet? If the current proposal now out for public comment becomes a rule, it would allow Internet Service Providers (ISPs) like Verizon and Comcast to charge a fee to those web sites that want faster content delivery.

This is the opposite of net neutrality, which is the principle that an ISP should deliver all web content at the same speed. (Strictly speaking, at the same bandwidth: the signals all travel at essentially the speed of light; what differs is how much data gets through per second.) The argument goes that without net neutrality, companies with deeper pockets, particularly those that are already established, such as Netflix, have an unfair competitive advantage over services or start-ups without such deep pockets. It’s a concern I certainly share, so much so that I first blogged about it in 2006. Bottom line: I am still concerned and I think this proposal must be fought.

What I didn’t write about back in 2006 was that there was no net neutrality back then either. Effectively, bandwidth is already discriminatory because it is based on ability to pay. It’s just based on your ability to pay, not the content provider’s. For example, Verizon has basically four tiers of Internet service, from its “high speed” service (actually its lowest speed service), where content delivery does not exceed 1MB per second, to its “high speed Internet enhanced” service, where you can download at up to 15MB per second. It’s hard to quantify what the 1MB/sec plan costs compared to the 15MB/sec plan, because it depends on many factors, including what bundle you may or may not choose. Suffice to say, if you want 15MB/sec service, you will pay more than for 1MB/sec service. So if streaming Netflix is critical to you, consider the 15MB/sec service. (Of course, this assumes that the peering connection between Verizon and Netflix can handle 15MB/sec. If it can’t, there is no point in paying Verizon the premium.)
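To make the difference concrete, here is a back-of-the-envelope calculation. It’s only a sketch: it takes the advertised rates at face value (megabytes per second) and ignores protocol overhead and congestion.

# Rough download times for a 1 GB video at each advertised rate.
for rate_mb_per_sec in (1, 15):
    minutes = 1024 / rate_mb_per_sec / 60
    print(f"{rate_mb_per_sec} MB/sec: about {minutes:.1f} minutes per GB")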

You can think of the Internet connection from your ISP like a water pipe. If the water pipe is big (and the water pressure is high enough) you can get more water per second through a bigger pipe. What the FCC is proposing is to take this pipe and put two pipes inside it. One is a fat pipe that will serve certain content very quickly, the “fast lane”. The other smaller pipe is for those who can’t afford to pay ISPs these premiums, i.e. the “slow lane”. Since I live in traffic-congested Washington D.C., I think of the “fast lane” as the pricey HOT (High Occupancy Toll) lanes on the beltway, and the “slow lane” as the toll free and usually congested other lanes. It’s not hard to imagine the Internet feeling a lot like it did in 1995, when the hourglass was principally what you saw in your web browser. Pages took forever to load, if they ever did. For those of us who remember those days, revisiting them sounds quite frightful. ISPs would have every incentive to throttle the slow lanes, because it would mean that web content providers would come to them and negotiate to use their fast lanes. In addition, they would have little incentive to increase bandwidth for their customers overall, but plenty of profit to funnel back to stockholders from those that pay for fast lanes. It is the antithesis of what the Internet is about.

So already there is no net neutrality, unless you have an ISP that provides a “one speed for all customers” plan. The issue is not whether content gets delivered; it is how fast particular content moves within the ISP’s network. Which brings up another, less noticed way that the Internet is not equal. It has to do with Content Delivery Networks (CDNs).

If you access my blog with a browser you will notice it takes a while to render a web page. Why is that? It’s because I don’t pay for a content delivery network. I ran a traceroute test from home to my web site. My request had to go through 13 routers (the Internet’s switching points) between my home computer and my web host:

1 wireless_broadband_router (192.168.1.1) 0.525 ms 0.244 ms 0.216 ms
2 l100.washdc-vfttp-93.verizon-gni.net (173.66.179.1) 7.083 ms 7.095 ms 8.161 ms
3 g1-5-3-0.washdc-lcr-21.verizon-gni.net (130.81.216.76) 9.435 ms 12.101 ms 12.305 ms
4 ae7-0.res-bb-rtr1.verizon-gni.net (130.81.174.208) 9.731 ms
   so-12-1-0-0.res-bb-rtr1.verizon-gni.net (130.81.151.230) 27.151 ms
   ae7-0.res-bb-rtr1.verizon-gni.net (130.81.174.208) 8.855 ms
5 0.ae1.xl1.iad8.alter.net (140.222.226.149) 10.166 ms
   0.ae5.xl1.iad8.alter.net (152.63.8.121) 9.396 ms 10.254 ms
6 0.xe-8-3-1.gw12.iad8.iad8.alter.net (152.63.37.14) 9.610 ms
   0.xe-8-0-0.gw12.iad8.alter.net (152.63.35.134) 9.693 ms
   0.xe-10-1-0.gw12.iad8.alter.net (152.63.35.102) 10.872 ms
7 customer.alter.net.customer.alter.net (152.179.50.206) 8.733 ms 10.023 ms 9.717 ms
8 he-2-4-0-0-cr01.ashburn.va.ibone.comcast.net (68.86.83.65) 10.252 ms
   he-2-6-0-0-cr01.ashburn.va.ibone.comcast.net (68.86.83.73) 14.819 ms
   he-2-5-0-0-cr01.ashburn.va.ibone.comcast.net (68.86.83.69) 12.388 ms
9 he-4-3-0-0-cr01.56marietta.ga.ibone.comcast.net (68.86.89.150) 39.468 ms 42.618 ms 37.101 ms
10 he-1-12-0-0-cr01.dallas.tx.ibone.comcast.net (68.86.88.234) 42.852 ms 45.176 ms 44.283 ms
11 be-22-pe01.houston.tx.ibone.comcast.net (68.86.85.174) 50.270 ms 49.438 ms 50.270 ms
12 as8075-1.2001sixthave.wa.ibone.comcast.net (75.149.230.54) 49.692 ms 85.009 ms 50.379 ms
13 216.117.50.142 (216.117.50.142) 49.597 ms

The signals still travel at close to the speed of light, but there are thirteen stoplights between my computer and my web server, at least for me. You can see how long a round trip to each stop took. The last hop is the one that matters: hop 13 took 49.597 milliseconds, which is roughly the round-trip time between my computer and my site. If you do the same thing, the number of hops will probably vary, along with the times. In short, my site is relatively slow to reach, which alone may explain why my traffic is down. People are impatient when they click on a link to my site from a search index. So they go elsewhere, or get an effective CDN by using a subscription service like Feedburner or feedly.com to read the content.
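If you want to repeat the test, here is a minimal sketch, assuming a Unix-like machine with the traceroute utility installed (Windows users would substitute tracert) and Python at hand:

import subprocess

# Run traceroute to the site and print the raw list of hops.
result = subprocess.run(["traceroute", "occams-razor.info"],
                        capture_output=True, text=True)
print(result.stdout)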

This is not much of a problem if I go to google.com. Here is the route:

1 wireless_broadband_router (192.168.1.1) 0.557 ms 0.229 ms 0.202 ms
2 l100.washdc-vfttp-93.verizon-gni.net (173.66.179.1) 6.919 ms 8.588 ms 7.432 ms
3 g1-5-3-0.washdc-lcr-21.verizon-gni.net (130.81.216.76) 12.248 ms 12.530 ms 9.252 ms

So basically Google has figured out a way for its servers to be “close” to me, usually geographically, so I get their content more quickly, or at least with fewer stoplights between their servers and my computer. This magic is done through a content delivery network. I’m pretty sure Google rolled their own, and that takes a lot of money, which Google helpfully has.

You can imagine that if a company wanted to create an amazing new search index, it would be at a significant disadvantage without a content delivery network. It probably wouldn’t roll its own like Google, but would use one of the companies that do this for profit, like Akamai or Level 3. The technology behind this is interesting but I won’t detail it here. The linked Wikipedia article explores it if you are interested. Suffice to say it does not come free, but there are times when it is justified. The U.S. Geological Survey, where I work, uses a commercial content delivery network. Whenever there is a major earthquake, it pushes the content out to the CDN; otherwise its servers would get overloaded, as if under a massive denial of service attack. Using a CDN also gets the data out to the public more quickly, since the typical customer probably only has to traverse three hops instead of thirteen to get the information.
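A crude way to feel the difference a CDN makes is to time a small request to a CDN-backed site and to a single-origin site like mine. This is only a sketch: it measures total response time (server processing included), not just network hops, and the two URLs are simply the examples used above.

import time
import urllib.request

def fetch_ms(url):
    # Time how long it takes to receive the first kilobyte of a page.
    start = time.time()
    urllib.request.urlopen(url, timeout=10).read(1024)
    return (time.time() - start) * 1000

for url in ("http://www.google.com/", "http://occams-razor.info/"):
    print(f"{url}: {fetch_ms(url):.0f} ms")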

We like to think that the Internet is free, but of course it isn’t. We all pay for access to it. Even if we don’t pay directly, we pay indirectly, perhaps for the cup of coffee at Starbucks while we surf on their wireless network, or through taxes if we use Internet kiosks at our local library. Doing away with net neutrality is just another means by which ISPs hope to make gobs of money from having a monopoly on the last mile between the content you want and your computer. This may be due, in part, to our refusal to pay for their pricier tiers of service. The only difference is that this time you are not directly paying for it; other content providers will be. (You would think ISPs might cut you in on the deal and discount your rate, but that assumes they are benevolent, and not the profit-obsessed weasels they actually are.) As we all know, nothing is free, so these costs will certainly be passed on to you if you are a subscriber, and the profit will go to the ISP.

Given that bandwidth to the home is a limited commodity, giving preferential access to web content providers that can afford to pay necessarily means that others will get less. In that sense, the latest FCC proposal is smoke and mirrors, and it is in everyone’s interest to get off our lazy asses and oppose it.

You can leave a short comment to the FCC here or a long comment here.

 

There may be a Chromebook in my future

In principle, I am against getting in bed with any computer company. And yet it is hard to avoid.

Since 2008, I have been principally using Apple computers. I have an iMac where I do most of my work, and an iPad for when I want to read more than interact with the web. I also have, courtesy of my employer, a Windows 7 laptop. I need it for work, but there are also times when I just need Windows. Unfortunately, I’ll have to turn in that machine when I retire on August 1. I don’t like Windows enough to want to buy a Windows computer, or even pay a license to run it virtually on my iMac, particularly now that Windows 8 is the user interface you’re stuck with. In any event, retirement will leave my Android-based smartphone as my only other computing device.

So you basically have to pick your platform. It’s almost always Windows or Mac for the desktop, and Android or iOS for mobile devices. None of them are ideal, even Apple with its shiny computers and snappy user interfaces. There is also no one-size-fits-all device, which is probably good because what you need often depends on your intended use.

For example, I don’t need to run Quicken on my smartphone. I don’t need to edit Microsoft Office documents on my smartphone either, although viewing them there is occasionally useful. When I am doing financial stuff, writing, or banging out code, that’s when I really need a desktop or laptop computer. This kind of work mostly involves entering a lot of numbers or text. The work is primarily assertive computer use.

By the way, this is a term I just made up. It means I need to assert lots of real world facts to a computer, basically translating my thoughts into something that a computer can use. Assertive computer use often involves repetition but it also means expressing structured content and thought. Creating this post, for example, is assertive use. It requires not just a brain dump, but structuring my words carefully so exact meaning is communicated. In theory I can do this with voice recognition software. In practice it is much more efficient to do it with a keyboard.

During my last vacation I brought along just my iPad and a wireless keyboard, basically to see how realistic it was to do assertive computer work on a device that is really optimized for browsing. What I discovered was that it was possible to do assertive work, but it was a hassle. The Microsoft Office suite has now arrived for the iPad, but it doesn’t make assertive work much less challenging. It’s a hassle because an iPad is not a desktop computer; a tablet is basically built for browsing and for simple interactions that can be done by pointing. For assertive work, it’s like expecting a subcompact to haul a trailer. It is technically possible perhaps, but not close to ideal. Moreover, by its size and nature, it never will be ideal for this work.

So there is no one-size-fits-all device. We like to think that it can be done, but it can’t all be done elegantly on one device. But even when a device can do something elegantly, it cannot always do it optimally. That’s what I’m learning about my iMac. Mostly what I am learning is that after six years with the machine, I need to replace it. It’s not because there is something wrong with my machine, it’s that software has evolved a lot in six years. It’s gotten bigger and fatter and is causing my iMac to go into conniptions.

My 2008 iMac has 4GB of memory. That’s no longer close to enough, particularly when I am using Google Chrome as my browser, but also when I am running Dreamweaver or any Microsoft Office product. Chrome is fast, provided you have the memory. I now need 16GB of memory to get good performance and keep all the programs I use regularly handy. Unfortunately, I can’t add more. Once memory is exhausted, starting a new program means waiting, and waiting. The operating system has to create a whole lot of virtual memory on my disk drive, which is much slower to read and write than RAM. It can take a couple of minutes to open Excel for the Mac, particularly if I have Chrome running.
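A quick way to confirm this kind of memory pressure on a Mac is to look at how much swap the system is using. A minimal sketch, assuming macOS and that Python is handy (the same information is in Activity Monitor):

import subprocess

# vm.swapusage reports total, used and free swap space on macOS.
# A large "used" figure while programs stall is the symptom described above.
swap = subprocess.run(["sysctl", "vm.swapusage"],
                      capture_output=True, text=True)
print(swap.stdout.strip())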

Apple would like me to buy a new Mac, and I may have to. Six years is a long time to use any computer. However, the computer still looks like new. There is no reason to replace it other than the general slowness caused by the newer, more bloated programs I am running. I can’t replace the drive with a solid state drive to improve performance. And I can’t reengineer Chrome, Microsoft Office or any of these memory hogs. I can choose less memory-intensive programs, perhaps by using Firefox instead of Chrome. But I moved to Chrome from Firefox because of Firefox’s instabilities.

The general problem is that there is no way to really know how efficiently a program will run until you use it a while alongside other memory-resident programs. Software developers, being lazy, assume you have the latest machines with plenty of memory and super-fast processors. Coding for minimal memory use generally does not occur to them. What I can do is use my iMac just for assertive tasks, like writing documents, coding and email, and stop using it for web browsing in favor of devices better optimized for that, like my iPad. Or I can get a new computer and go through the same cycle again in a few years.

Or I could get a Chromebook. A Chromebook is Google’s version of a laptop computer, optimized exclusively for Google services. It runs Google’s own Chrome OS operating system, which basically requires you to do all your work inside the Chrome browser. To use it effectively you generally need to be on a high speed wireless network. Of course you have access to all the features of Google Drive, so you have word processing, spreadsheets and presentations. Google is also working hard to let it work easily when disconnected from the network, via Chrome Apps.

Why does this help? Well, for one thing, I don’t need to wait a couple of minutes for Excel to load my spreadsheet. The functionality is there in a Google spreadsheet already. It’s true that Google’s spreadsheets are not quite the same as Excel, but they are now close enough. In addition, all the stuff on your Google Drive is readily sharable. Google spreadsheets even have capabilities that Excel does not, perhaps the most useful of which is that they live in the cloud, instead of sitting on your hard disk when you are a thousand miles away. And since my use is minimal, it is essentially free. There is no need to worry about installing the latest version of Google spreadsheets, and no requirement to pay a Microsoft ransom periodically to keep writing or maintaining a spreadsheet. I also don’t need to spend more than a grand to upgrade my iMac. It’s all done in a web browser. If it works as advertised, these hassles of doing a lot of my assertive work largely go away.

Moreover, I don’t need to spend a lot of money to buy a Chromebook. A decent Mac laptop is going to cost well over $1000. Chromebooks start around $200. Even if it only lasts you a few years, your data is in the cloud, hence always backed up. In addition, the device is cheap enough to easily replace. It can be used for most assertive tasks, as well as for browsing. Perhaps most cool of all, there is almost no “boot” time. Your Chromebook is available when you need it in seconds.

Its downside is limited use: if something can’t be done in a browser or one of Google’s apps, it can’t be done at all. But I don’t see a Chromebook as my only computer; rather, it would be my primary computer for everything except the tasks that need the power of a desktop.

In short, it’s a pretty compelling solution as long as you don’t mind getting in bed with Google. If I’m going to have to get into bed with any company however, I might as well save money and time.

 

Mankind is not going to the stars

I’m something of a space geek. It helps to have grown up during the space race. I still find the exploration of outer space fascinating. Our knowledge of the universe is growing by leaps and bounds. Hundreds of planets have been discovered, so many that it no longer makes news to announce new ones. Many of these planets look like they might support human life, although it’s hard to say from such a fantastic distance away. Some of us are waiting expectantly for someone to discover the warp drive, so we can go visit and colonize these distant worlds. Maybe it will be just like Star Trek!

It fires my imagination and excites the popular press as well. It all sounds so exotic, fascinating and Buck Rogers-ish. The only problem is that almost certainly none of this will happen. We might be able to put men on Mars, but colonizing the planet looks quite iffy. Even colonizing the moon, as I suggested when Newt Gingrich was promoting the idea during his last presidential campaign, is probably cost prohibitive. Which means we need to keep our dreams of visiting strange new worlds in check. It won’t be us, it won’t be our children or grandchildren, and it probably won’t happen at all. To the extent we visit these worlds, it will be virtually, using space probes.

I don’t like to be the bearer of bad news, but hear me out. We probably won’t colonize the moon permanently because it will be too costly to sustain. It’s one thing to land a man on the moon, which America did rather successfully. It’s much more costly to stay there. For a sizeable group of humans, say ten thousand or so, to live self-sufficiently on the moon is probably impossible. If it can be done, it will take a capital investment in the trillions of dollars. My guess is that it would take tens of trillions of dollars, if not hundreds of trillions. It’s unlikely the moon has enough water for us to mine, and even if it does, the water is likely to be very inefficient to extract because it is wrapped up in moon dust. Otherwise water would have to be imported from earth at ruinous cost. In fact, colonists would have to import pretty much everything. Even if citizens of the moon could grow their own food and recycle their own water, manufacturing is likely to be limited. We might have a small scientific colony on the moon, like we do at the International Space Station. It probably won’t amount to more than a dozen men or women, and it’s likely to cost at least ten times as much as the International Space Station, since you have to move equipment and people a much greater distance.

What about man colonizing Mars? It doesn’t seem out of reach. When the orbits are working just right, a spacecraft can transit each way in about six months. The cost of getting a pound of matter to Mars is likely ten to a hundred times the cost of getting it to the moon, which is probably cost prohibitive in itself. The journey there and back also looks chancy. It’s not just the possibility that some critical system will fail on the long journey; it’s the cosmic rays. Our astronauts are going to absorb a heap of radiation, the equivalent of 10,000 chest X-rays. It looks, though, like this is probably manageable. What of man living on Mars itself?

The good news is that humans could live on Mars, provided they don’t mind living underground. The atmosphere is much thinner than Earth’s, it is much colder than Earth in general, and you can’t breathe the air and live. It’s true that by essentially burying our houses in Martian soil, humans could be shielded from much of the radiation; slapping on some SPF-50 sunscreen won’t do the job. Anyone on the surface will have to wear a spacesuit. So far we haven’t found a reliable source of water on Mars either. Colonizing Mars is within the realm of possibility, but the odds are fairly low. Frankly, it’s a very remote, cold and arid place with nothing compelling about it other than a lot of empty mountains and valleys and swirling Martian dust, almost always in a pink or orange haze.

Colonizing distant moons and asteroids presents similar problems: no suitable conditions for sustaining life as we know it, insufficient gravity, toxic radiation, frequently toxic chemicals, and cold of a sort that most of us simply cannot comprehend. Both Venus and Mercury are simply too hot to inhabit, and Venus is probably as close to hell as you will find on a planet in our solar system.

What about colonizing planets around other stars? Here’s where we need to fire up the warp drive of our imaginations, because as much as physicists try to find exceptions to Einstein’s theories of relativity, they can’t. The closer you get to the speed of light, the more energy it takes; it’s like Sisyphus pushing that rock up the mountain. Getting a spacecraft with people in it to even 10% of the speed of light looks impossible with any technology we have or can reasonably infer. The closest star system is more than four light-years away, so even if this speed could be achieved, it would take over forty years to get there. But it can’t be achieved. In fact, we’d be lucky to get to 1% of the speed of light, which would make a journey to Proxima Centauri a voyage of more than four centuries. Moreover, even if successive generations could survive the journey, it is likely that our closest star system is not habitable.
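The arithmetic is simple enough to check. A quick sketch, using the roughly 4.2 light-year distance to Proxima Centauri and ignoring acceleration, deceleration and relativistic effects:

# One-way travel time to Proxima Centauri at various fractions of light speed.
DISTANCE_LY = 4.24  # approximate distance in light-years

for fraction_of_c in (0.10, 0.01):
    years = DISTANCE_LY / fraction_of_c
    print(f"At {fraction_of_c:.0%} of c: about {years:.0f} years one way")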

Perhaps we could freeze ourselves and wake up centuries or millennia later at our destination. Maybe that would work. Obviously, we don’t have the technology to do anything like this now. And given the laws of entropy, it’s hard to imagine any spacecraft surviving a voyage of that duration intact.

What we need is a warp drive and a starship. But what we really need is an escape clause from the theories of relativity and the technology that would allow humans to use it: a spacecraft that can slip through a wormhole or something. Maybe it’s not strictly impossible, but with what we know it looks out of reach for us and probably always will be. In any event, there don’t appear to be any wormholes conveniently near Earth.

In short, we are stuck to the planet Earth. We’d best enjoy what we have and stop squandering the planet on which all human life depends. So far we are doing a terrible job of it.

 

Mt. Gox: more evidence of why BitCoin is best avoided

Dorothy in The Wizard of Oz learned from Glinda that if she clicked her ruby slippers, closed her eyes and kept repeating “there’s no place like home” she would magically return to Kansas. So simple! BitCoin adherents are a lot like Dorothy. Dorothy at least made it home from her fantastical journey. True believers in BitCoin, the libertarian currency, got a splash of cold water across their faces this week instead. Mt. Gox, the Tokyo-based BitCoin exchange, has gone belly up, along with about $300M in BitCoins. Most likely someone stole those BitCoins, either someone inside the firm or some shadowy hackers. By any standard, this was quite a heist. Looking at history, you’d have a hard time finding any instance of a similar theft inside what amounts to a bank.

In any case, sorry you BitCoin suckers. Real banks and exchanges still have vaults, but they don’t carry much of their assets in cash. Much of it is commercial paper, bonds, mortgage deeds, promissory notes and Federal Reserve Notes. Whether in paper, assets on an electronic register somewhere, or gold bars in a vault, these assets are quite tangible. Someone with a car loan who defaults on their payments is likely to find their car repossessed. Those who defaulted on home loans during the Great Recession found their houses foreclosed and if they had ready cash assets, they were put under legal assault. BitCoin owners with their BitCoins in Mt. Gox now have nothing and the police just aren’t interested in serving them justice.

This was not supposed to happen to this libertarian currency. Freed of its ties to governments, it was supposed to soar above inflation and always retain a finite, empirical value, all secured through the power of math. After all, exchanging a BitCoin involves keeping a record of who its next owner is. Unless, of course, it just disappears. Undoubtedly these stolen BitCoins were converted into a real currency, just unbeknownst to their owners, perhaps with the help of some money laundering exchange, possibly Mt. Gox itself. BitCoin is, after all, the preferred currency of drug dealers, at least until their fingerprints have disappeared and they can convert the digital money into something more tangible and fungible, like U.S. dollars.

I keep my cash in a couple of credit unions and a bank. It’s unlikely that a credit union like Pentagon Federal, where I have a couple of accounts, is going to go under like Mt. Gox. In the unlikely event that it does, I’ll get my money back because it is backed up by what amounts to the full faith and credit of the United States. Mt. Gox was backed up by the full faith and credit of, well, Mt. Gox. It’s like asking the fox to guard the henhouse.

And there’s the rub with BitCoin exchanges. When you create a currency detached from a government that will assert and protect its value, there is no one to complain to when your BitCoin bank goes bust. The government of Japan is looking into the event, but it is mostly hands off. It never promised to underwrite Mt. Gox, and Mt. Gox never asked it to. In any event, Japan underwrites its Yen, not BitCoins. Japan has a vested interest in keeping its currency solvent. It has no such interest in keeping another currency, particularly one it cannot control, solvent.

An exchange like Mt. Gox could of course ask a local government to underwrite it. Those BitCoin exchanges and banks that want to remain viable are going to have to do something just like this. Good luck with that. In doing so, though, they are of course defeating the whole purpose of BitCoin. BitCoin is about a libertarian ideal; it’s about money having a value independent of government apron strings. Affiliate the BitCoin currency in a BitCoin exchange with a government, and you tacitly admit that BitCoin is not a libertarian currency after all. In short, you have to give up the notion that money can be decoupled from government control.

It’s unlikely that many governments will be willing to protect BitCoin exchanges. It is reasonable to protect assets that you can actually control: your national currency. For a government to protect a BitCoin exchange, it is reasonable to expect that it would also want to control the amount of BitCoins in circulation and set rules for their use and misuse. It can’t do that, which means it would be asked to put the good faith and credit of the country behind an erratic currency that could prove digitally worthless at any time. This strikes me as a foolish thing to do, but there may be entrepreneurial countries out there, say the Cayman Islands, that will take the plunge. The risk might be worth the rewards.

I don’t think you have to worry about governments like Germany, England, Japan, China and the United States doing something this foolish. If there is any organization that might see profit in this, it will probably be the Mafia or other criminal syndicates, many of whom are already using BitCoins as a mechanism for money laundering.

Doubtless other BitCoin exchanges will work very hard to sell a trust that is now deservedly absent from these exchanges. As I pointed out in an earlier post, it’s going to be a hard sell, given that BitCoin’s value is essentially based on faith in its mathematics and algorithms.

Absent from the minds of BitCoin true believers is an understanding that money must be tied to a governmental entity to be real money. It’s tied to governments for many reasons, but primarily because governments are required to govern, and this includes having the ability to enforce its laws and to collect taxes. Money is based on the idea that entities can force everyone to play by the same rules, including using the same currency as a means of exchange within the country for lawful debts. The truth is, there are no rules with BitCoin other than its math. It is a lawless currency. That Mt. Gox’s treasury of BitCoins can be plundered with impunity proves it.

Libertarianism is built on the idea of caveat emptor: let the buyer beware. No warranties are expressed or implied, and even those that are expressed depend on the trustworthiness of the seller. No one can force the seller to do squat. The best a buyer can hope for is to track the thief down and take justice with his fists or a gun. That’s no way to run an economy, which is why libertarianism is an ideology that simply does not work in the real world.

Again, a word to the wise: just say no to BitCoins.

 

Bitcoin is libertarian bit nonsense

Are you intrigued by Bitcoin? It’s a digital currency much in the news these days. It even got a hearing on Capitol Hill last month. Surprisingly the foundation overseeing Bitcoin came out relatively unscathed. Some places are accepting Bitcoins as payment for actual goods and services. They do so on the assumption the currency has value. Like any other currency it has value because some people assert it has value.

Which raises the question: what is its value? There are clearly things you can do with Bitcoin that are convenient. It’s a sort of digital cash for our electronic age. Only it’s not really cash. Real cash doesn’t leave fingerprints. You make a Bitcoin transaction and the transaction is recorded against the coin itself, in a public ledger.

If there is value in Bitcoin, maybe it comes from the faith we place in its math. There is not much we trust anymore, but you can still trust math, and Bitcoin depends on math, not to mention encryption algorithms, to assert its value. The number of Bitcoins has a finite limit because of the power of math and algorithms. Each attempt to mint a new bitcoin requires lots of computers to spend lots of time and use lots of energy. For all its electronic novelty, it’s hardly an environmentally friendly currency. In fact, it’s bad for the environment.
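To give a flavor of why minting coins burns so much computing: mining is essentially a brute-force search for a number whose hash falls below a difficulty target. Here is a toy sketch of that idea only; real Bitcoin mining uses double SHA-256 over block headers and a vastly harder target.

import hashlib

def mine(block_data: str, difficulty_bits: int = 20) -> int:
    # Search for a nonce whose SHA-256 hash of (data + nonce) clears the target.
    # Each extra difficulty bit doubles the expected amount of hashing work.
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce
        nonce += 1

print(mine("example block"))  # noticeable CPU time even at this toy difficulty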

You can’t say that about gold. Granted, the process of getting gold out of the ground is often bad for the environment, but once you have it, there it is, probably to sit in highly protected bank vaults, never to be actually moved or, for that matter, seen. A Bitcoin is entirely virtual, yet it depends on lots of computer hardware to mint and to assert its value. You won’t be creating one of these with a pad of paper and a slide rule. In fact, a Bitcoin is entirely dependent on computers and high speed networks. No wonder then that it was abruptly devalued last week when China blocked Bitcoin transactions. Keep it from being used in the world’s most populous country and it has a lot less utility. Of course, it’s useless to anyone without a computer or some sort of digital device, not to mention a network on which to trade the currency. So it’s not even universal. You can’t say that about the U.S. dollar.

The larger question is whether a currency built on nothing but math really can have value. It does have value at the moment, as I can actually trade Bitcoins for U.S. dollars, which in my country is what everyone accepts as currency. In the long run though I think Bitcoins are going to be worthless. I don’t plan to own any of them and maybe I can make a case why you shouldn’t either.

First, there is the question of whether counterfeit Bitcoins can be created. New ones can be minted if you have the computer horsepower, and these are “legal”; but if coins could be created with virtually no computer time, they would be counterfeit. Call me suspicious, but I bet the NSA either has already figured out a way to hack it or will soon. In short, to trust a Bitcoin you must buy into the assumption that it can’t be hacked. Since the dawn of the computer age, hackers have demonstrated their ability to hack anything. They love the challenge. It’s reasonable to believe that Bitcoin is going to be hacked one of these days.

Second, there’s the question of what its value represents. I’ve discussed the value of money before. My conclusion is that money essentially represents faith that the country coining the currency will remain solvent and viable. I based this conclusion on the observation that a currency’s value falls whenever these assumptions are shaken. Having a currency based on the gold standard doesn’t seem to make any difference, as the United States has been off the gold standard since the 1970s. Printing new currency doesn’t seem to be that big a deal either, provided the new currency is used to acquire assets of value. This is what the Federal Reserve has been doing since the Great Recession: creating money (none of it actually printed, apparently) and using it to buy long-term securities like mortgage-backed securities. Curiously, just printing money is not inflationary when it is used to buy tangible goods, provided the institution printing the money is trusted, and the Federal Reserve is trusted. In any event, investors can value or devalue a currency based on examining its monetary system and the country’s economy. With Bitcoins, you can’t do this. It is backed by no country, which is its appeal to its adherents.

What is Bitcoin really about then? It’s about a political idea; more specifically it’s about libertarianism. It’s trying to be a means by which libertarianism becomes institutionalized. If you are not familiar with libertarianism, it’s all about freedom, buyer beware and minimal (and ideally no) government. Libertarians (at least the committed ones) are vesting their wealth in Bitcoins because it’s how they show loyalty to the cause. They want money to be frictionless and outside governmental control. Arguably, Bitcoin does a good job with this, providing buyers and sellers will accept it as having value.

But libertarianism is an idea, not a thing. Libertarianism is really more of a verb than a noun. A currency though has to be based on something real. The U.S. dollar is essentially backed up by the collective wealth of all of us who possess dollars, or assets valued in dollars, or really any property within the United States. It’s based on something tangible. You buy a house in dollars instead of Bitcoins because everyone in the transaction has faith that those dollars mean something. This is because everyone else is trading in dollars too to buy real goods and services. If the U.S. dollar gets too low, there are things we can do about it. We can petition Congress or the White House to take action. There is no one to go to to complain about the sinking value of your Bitcoins. Assuming the currency cannot be counterfeited, its only value is its finiteness, enforced by math and increasingly expensive computational processes to make new coins. That’s it. As those libertarians say, caveat emptor (buyer beware). Bitcoin buyers, caveat emptor!

This tells me something important: Bitcoin is a bogus currency, at least in the long term. Yes, you can buy stuff with it now, but only from a very limited number of sellers: those who have faith in the idea of a libertarian currency. It’s obvious to me that libertarianism is just not workable as a sustainable way of governing. I have no faith in it whatsoever because its philosophical underpinnings do not actually work in the real world.

I would like to see Bitcoin tried in Glenn Beck’s libertarian community, however, if it ever gets built. One thing is for sure: no one is going to build it for Bitcoins. They are going to demand U.S. dollars.

 

The feminization of Yahoo News

I have nothing against females as CEOs. It’s clear that females make up a tiny minority of corporate CEOs and members of corporate boards, and recent news reports suggest that progress on this front has stalled. But there are a few of them. Few have been more prominent in the news lately than Marissa Mayer, the relatively new 38-year-old CEO of Yahoo. She has been making a splash, not just for leaving a job at Google to take on the troubled Yahoo, but also for her many changes to the relatively staid and unprofitable Yahoo, Inc., something of a great grandfather of the World Wide Web. These included bringing a nursery into the office for her now year-old son and requiring her employees to actually come into the office instead of telecommuting.

Mayer, though, has a track record of success at Google and she’s proving adept so far at changing the dynamics at Yahoo. Her old employer must really miss her, because she successfully led a number of divisions at Google, including some of its principal products: its search engine and Gmail. Yahoo had been losing revenue and market share, but things are quickly turning around with Mayer in charge. Yahoo now gets more web traffic than Google again, no small feat, and while not quite profitable again, it is making strides toward profitability. She has purchased the blogging site Tumblr, and Yahoo’s stock price is rebounding: it has more than doubled during her brief tenure as CEO.

So she is doing well by stockholders, and with her reputation she can probably turn around Yahoo, which is good, because a World Wide Web mostly overseen by the benevolent Google overlord is not a healthy dynamic. She is getting more eyeballs and more interest from advertisers. Yahoo stockholders should be happy with her performance to date, and hope that they can keep her around.

I was a Yahoo fan from early on. At one time it was the only destination worth going to on the web. It was my home page for many years. It attempted to index the Internet, and actual humans were categorizing content. I’m old enough to remember what Yahoo really stands for (Yet Another Hierarchical Officious Oracle). It was the first web site to do a really good job, in a 1995 kind of way, of helping us find stuff on this new medium called the Internet. For many years I had a Yahoo email account. But Yahoo proved not very agile as it aged, and various ineffective CEOs tended to make things worse.

I go to Yahoo less often than I used to, and use Google and its services even more, although I often feel guilty about it. But I kept Yahoo News as my principal news page until recently. It was a habit hard to break. The page was edited by actual human beings, unlike Google News, which is assembled by a human-programmed computer algorithm. Considering Mayer also ran Google News, I expected Yahoo News might come to look a lot more like Google News. It is taking on some of its characteristics, including more personalization options. It is also, I am sorry to say, loading up the news site with a lot of fluff. This is making me very unhappy.

Stockholders are probably applauding this move to add these “human interest” stories. If you go to Yahoo News, you can’t possibly miss them, as they comprise about one of every three stories on its main page. It’s not quite National Enquirer stuff, but it’s a lot of Good Morning America-like stuff. In fact, Good Morning America (ABC) is one of their featured content providers. What do I mean by fluff? Well, there was the recent live broadcast of The Sound of Music on TV, and Yahoo News was all over it (it was mostly dissed by the critics). Is this really news? It probably gets a lot of clicks so it surely must be interesting to a lot of people. But no, this is not really news, except possibly in the category of entertainment news. It would be fodder for Variety’s web site. It’s not news in my book.

Perhaps it is just me. News to me is a newspaper like the Wall Street Journal or the Los Angeles Times. I expect to learn not just what is happening right now that could affect my locality, my country, the world and me; I also expect some in-depth reporting so I can understand the dynamics of the many pressing issues of the day. In short, I read news not to be entertained, but to gain knowledge. I need lots of facts and I need unbiased, in-depth interpretation of those facts by reporters who sift through the issues and talk with leading authorities. I seek knowledge because to change the world I must understand not just how it behaves but why it behaves the way it does. News should have its finger on the pulse of the planet and should tell citizens like me, who are reasonably informed, more of what we need to know to stay informed.

I’m not getting much of this on Yahoo News anymore, and I hold Marissa Mayer to blame. I get lots of popcorn articles like this Sound of Music piffle, which today includes an ancillary story about the von Trapps’ mountain lodge in Stowe, Vermont. I get Dear Abby, now available only online at Yahoo but linked daily through its “news” page. I get stories about the lottery. And when I do deign to read an article that looks like real news, it is often short when I want depth. Worse, I get articles that aren’t articles at all, but you don’t know that until you click on them. Instead, they are videos that start loading whether you want them to or not, and for which you have to “pay the freight” of an annoying commercial first. Expect more of the same, because one of Marissa Mayer’s recent ideas is to hire Katie Couric as Yahoo’s “global anchor”. I expect lots of little fluff pieces like this and “lite-news” interwoven into its news site during the course of the day. It’s all part of the Yahoo experience, or something that Mayer is planning.

It may be successful for Yahoo and Mayer, but it’s not what I’m looking for in a news site because most of this is not really news. It’s marketing designed to attract eyeballs, perhaps making it a somewhat toned down version of Huffington Post, another site designed by a female overlord full of sauce but little relevant news.

I don’t like where this is leading. It will probably lead to profitability for Yahoo, but as far as leaving us citizens better informed, it’s a poor effort at best. There are plenty of other news sites out there, including CNN and all the major networks, but most of these are becoming less newsworthy and saucier as well. Which leaves me looking for a real news site. There is the reliable and local washingtonpost.com site, but I get most of that content from my newspaper subscription. Ironically, I find myself getting most of my news from one of Mayer’s old projects: Google News. For the most part, unless you choose to delve into an area like Entertainment, its news is topical and relevant, and in-depth articles tend to get priority. I find I like the algorithmic approach better than Mayer’s approach on Yahoo. I’m just hoping Google doesn’t try to sauce up its news algorithms.

Marissa, consider that public service may be part of Yahoo’s mission as well as enriching shareholders. How about a version of Yahoo News that is just news, instead of so much fluff, like maybe real.news.yahoo.com? And while I am making suggestions, please get rid of the cutesy Yahoo News animated image in the top left corner of the site. Surely you have also noticed that because your top menu bar is pinned to the top of the window, when you page down it hides some content, which means you have to cursor up or drag the window a bit to read it. And you often have the same article, or a variation of it, on the same page. Can’t these be cleaned up?

It seems moot to me. I like your old product better, so I’m hanging out now on Google News.

 

How healthcare.gov failed: the technical aspects

(Also read parts 1, 2 and 3.)

A lot of how healthcare.gov works is opaque. This makes it hard to say authoritatively where all the problems lie and even harder to say how they can be solved. Clearly my knowledge is imperfect and thus my critiques are not perfect either. I am left to critique what the press has reported and what has come out in public statements and hearings. I can make reasonable inferences but my judgment will be somewhat off the mark because of the project’s opacity.

It didn’t have to be this way. The site was constructed with the typical approach used in Washington, which is to contract out the mess to a bunch of highly paid beltway bandit firms. Give them lots of money and hope that, with their impressive credentials, something usable will emerge. It was done this way because that’s how it’s always done. Although lots of major software projects follow an open source approach, this thinking hasn’t permeated the government yet, at least not much of it. Open source means the software (code) is available for anyone to read, critique and suggest improvements to. It’s posted on the web. It can be downloaded, compiled and installed by anyone with the right tools and equipment.

It’s not a given that open sourcing this project was the right way to go. Open source works best for projects that are generic and broadly used. For every successful open source project like the Apache Web Server, there are many more that are proposed but attract little attention and are eventually abandoned. Sites like sourceforge.net are full of these.

In the case of healthcare.gov, an open source approach likely would have worked, and it would have resulted in a system that cost orders of magnitude less and was much more usable. It would still have needed an architectural committee, some governance structure and programmers, principally to write a first draft of the code. Given its visibility and importance to the nation, it would naturally have attracted many of our most talented programmers and software engineers, almost all of whom would have donated their time. Contractors would still have been needed, but many of them would have been engaged in selecting and integrating code changes submitted by the public.

If this model had been used, there probably would have been a code repository on github.com. Programmers would have submitted changes through its version control system. In general, the open source model works because the more eyes that can critique and suggest changes to code, the better the end result is likely to be. It would have given a sense of national ownership to the project. Programmers like to brag about their genius, and some of our best and brightest doubtless would have tooted their own horns at their contributions to the healthcare.gov code.

It has been suggested that it is not too late to open source the project. I signed a petition on whitehouse.gov asking the president to do just this. Unfortunately, the petition process takes time. Assuming it gets enough signers to get a response from the White House, it is likely to be moot by the time it is actively taken up.

I would also have liked to see the system’s architecture put out for public comment. As I noted in my last post, the architecture is the scaffolding on which the drywall (code) is hung. It too was largely opaque. We were asked to trust that overpaid beltway bandits had chosen the right solutions. Had these documents been posted early in the process, professionals like me could have added public comments, and maybe a better architecture would have resulted. Instead it was constructed inside the comfort of a black box known as a contract.

After its deployment, we can look at what was rolled out and critique it. What programmers can critique is principally its user interface, because we can inspect it in detail. The user interface is important, but its mistakes are also relatively easy to fix, and some have been fixed already. For example, the user interface now allows you to browse insurance plans without first establishing an account. This is one of those mistakes that is obvious to most people: you don’t need to create an account on amazon.com to shop for a book. Requiring one was a poorly informed political decision. Ideally someone with user interface credentials would have pushed back on it.

What we saw with the system’s rollout was a nice looking screen full of elements that each had to be fetched through separate calls back to the web server. Every image on a page has to be fetched as a separate request. Each Javascript library and cascading style sheet (CSS) file also has to be fetched separately. In general, all of these have to arrive before the page is usable. So to speed up page load time, the idea is to minimize how many fetches are needed, and to fetch only what is needed and nothing else. Each image does not have to be fetched separately. Rather, a composite image can be sent as one file, and through the magic of CSS each image can be a snippet of that larger image yet appear as a separate image. Javascript libraries can be collapsed into one file and compressed using a process called minification.
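As a rough illustration of the bundling idea, here is a minimal Python build-step sketch. The file names are invented for illustration; the point is that the browser makes one request instead of several, and a real build would also run the result through a proper minifier.

```python
# Hypothetical build step: combine several Javascript files into one bundle
# to reduce the number of separate fetches the browser must make.
from pathlib import Path

# Invented file names, for illustration only.
SOURCE_FILES = ["form-validation.js", "plan-browser.js", "eligibility-widgets.js"]

def bundle(files, out_path="bundle.js"):
    """Concatenate the files in order and write a single combined file."""
    parts = []
    for name in files:
        parts.append(f"/* --- {name} --- */")
        parts.append(Path(name).read_text())
    Path(out_path).write_text("\n".join(parts))
    # In practice you would then minify bundle.js (strip comments and
    # whitespace, shorten names) before deploying it.

if __name__ == "__main__":
    bundle(SOURCE_FILES)   # one HTTP request instead of three
```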

When the rollout first happened there was a lot of uproarious laughter from programmers because of all the obvious mistakes. For example, images can be sent with information that tells the browser “you can keep this in your local storage instead of fetching it every time the user reloads the page.” If you don’t expect the contents of a file to change, don’t keep sending it over and over again! There are all sorts of ways to speed up the presentation of a web page. Google has figured out some tricks, for example, and has even published a PageSpeed module for the Apache web server. Considering these pages will be seen by tens or hundreds of millions of Americans, you would expect a contractor to have thought these things through, but they either didn’t or didn’t have the time. (These techniques are not difficult, so that should not be an excuse.) It suggests that at least for the user interface portion of the project, a bunch of junior programmers were used. Tsk tsk.
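As a minimal sketch of that “keep this locally” hint, using Python’s built-in http.server purely for illustration (a real site would set these headers in its web server or CDN configuration, not in application code):

```python
# Minimal sketch: tell the browser it may reuse static assets for a week
# instead of re-fetching them on every page load.
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CachingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Only long-cache things that rarely change: images, styles, scripts.
        if self.path.endswith((".png", ".gif", ".jpg", ".css", ".js")):
            self.send_header("Cache-Control", "public, max-age=604800")
        super().end_headers()

if __name__ == "__main__":
    # Serves the current directory on port 8000 with cache headers added.
    HTTPServer(("", 8000), CachingHandler).serve_forever()
```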

Until the code for healthcare.gov is published, it’s hard to assess the project’s technical mistakes in full, but clearly there were many. What professionals like me can see, though, is pretty alarming, which may explain why the code has not been released. It probably will be in time, as all government software is in theory in the public domain. Most likely, when all the sloppy code behind the system is finally revealed, programmers will be amazed that it worked at all. So consider this post preliminary. Many of healthcare.gov’s dirty secrets are still to be revealed.

 
The Thinker

How healthcare.gov failed: the architectural aspects

(Read parts 1, 2 and 4.)

If you were building a house and didn’t quite know how it would turn out when you started, one strategy would be to build a house made of Lego. Okay, not literally, as it would not be very livable, but you might borrow the idea of Lego. Lego pieces are interchangeable with each other. You press pieces into the shape you want. If you find out halfway through the project that it’s not quite what you want, you can break off some of the Lego and redo that part, while keeping the parts you liked.

The architects of healthcare.gov had some of this in their architecture: a “data hub” that would be a big and common message broker. You need something like this because to qualify someone for health insurance you have to verify a lot of facts against various external data sources. A common messaging system makes a lot of sense, but it apparently wasn’t built quite right. For one thing, it did not scale very well under peak demand. A messaging system is only as fast as its slowest component. If the pipe is not big enough you install a bigger pipe. Even the biggest pipe won’t be of much use if the response time to an external data source is slow. This is made worse because generally an engineer cannot control aspects of external systems. For example, the system probably needs to check a person’s adjusted gross income from their last tax return to determine their subsidy. However, the IRS system may only support ten queries per second. Throw a thousand queries per second at it and the IRS computer is going to say “too busy!” if it says anything at all and the transaction will fail. From the error messages seen on healthcare.gov, a lot of stuff like this was going on.
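To make that arithmetic concrete, here is a toy Python sketch. The rate limit and function names are assumptions for illustration, not actual figures from the project; the point is that a source answering ten queries per second turns even a modest burst of traffic into a long backlog of stalled transactions.

```python
# Toy illustration: the hub receives far more requests per second than the
# external source can answer, so work piles up. All numbers are assumptions.
import time

IRS_MAX_QUERIES_PER_SECOND = 10     # assumed limit of the external system
INCOMING_REQUESTS = 100             # a small slice of peak-hour traffic

def check_agi_with_irs(applicant_id):
    """Stand-in for a call to the external IRS service."""
    time.sleep(1.0 / IRS_MAX_QUERIES_PER_SECOND)   # at best ~10 answers/second
    return {"applicant": applicant_id, "agi": 50000}

if __name__ == "__main__":
    start = time.time()
    for i in range(INCOMING_REQUESTS):
        check_agi_with_irs(i)
    # 100 requests at 10/second take about 10 seconds; at launch volumes the
    # backlog grows far faster than it can drain, and requests start failing.
    print(f"Handled {INCOMING_REQUESTS} lookups in {time.time() - start:.1f}s")
```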

There are solutions to problems like these, and they lie in fixing the system’s architecture. The general solution is to replicate the data from these external sources inside the system where you can control them, and to query the replicas instead of querying the external sources directly. For each data source, you can also architect things so that new instances can be spawned as demand increases. Of course, this implies that you can acquire the information from the source in the first place. Since most of these are federal sources, that was possible, provided the Federal Chief Technology Officer used his leverage. Most likely, the currency of these data is not a critical concern. Not every new tax filing that came into the IRS would have to be instantly replicated into a cloned instance. Updating the replica once a day was probably plenty, and once a month likely would have sufficed.
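Here is a minimal sketch of the replica idea, assuming a nightly bulk feed and using an in-memory SQLite table as a stand-in (the table and function names are invented): eligibility checks are answered from the local copy, and the external system never sits in the request path.

```python
# Sketch: refresh a local replica on a schedule and serve queries from it,
# so the external source's rate limits never throttle the user-facing system.
import sqlite3

def refresh_replica(db, fetch_bulk_feed):
    """Bulk-load the latest income data into a local table."""
    db.execute("CREATE TABLE IF NOT EXISTS agi (ssn TEXT PRIMARY KEY, agi INTEGER)")
    db.executemany("INSERT OR REPLACE INTO agi VALUES (?, ?)", fetch_bulk_feed())
    db.commit()

def lookup_agi(db, ssn):
    """Answer subsidy queries from the replica; this scales with local hardware."""
    row = db.execute("SELECT agi FROM agi WHERE ssn = ?", (ssn,)).fetchone()
    return row[0] if row else None

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    # Stand-in for a nightly (or monthly) bulk feed from the IRS.
    refresh_replica(db, lambda: [("000-00-0001", 48000), ("000-00-0002", 62000)])
    print(lookup_agi(db, "000-00-0001"))   # 48000, with no external call at all
```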

The network itself was almost certainly a private and encrypted network given that privacy data traverses it. A good network engineer will plan for traffic ten to a hundred times as large as the maximum anticipated in the requirements, and make sure that redundant circuits with failover detection and automatic switchover are engineered in too. In general, it’s good to keep this kind of architecture as simple as possible, but bells and whistles certainly were possible: for example, using message queues to transfer the data and strict routing rules to handle priority traffic.
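As a sketch of the priority-routing idea (purely illustrative, using an in-process queue where a real system would use a dedicated message broker), higher-priority verification messages can be drained ahead of routine batch traffic:

```python
# Sketch: route urgent verification messages ahead of routine batch work.
import queue

messages = queue.PriorityQueue()

# Lower number = higher priority; tuples compare element by element.
messages.put((5, "nightly reconciliation batch"))
messages.put((0, "verify citizenship for applicant 123"))
messages.put((0, "verify income for applicant 456"))

while not messages.empty():
    priority, body = messages.get()
    print(priority, body)   # both priority-0 messages drain before the batch job
```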

When requirements arrive late, they can introduce big problems for software engineers. Based on what you do know, though, it is possible to run simulations of system behavior early in the project’s life cycle. You can create a pretend data source for IRS data that, for example, always returns an “OK” while you test the basic functionality of the system. I have no idea whether something like this was done early on, but I doubt it. It should have been if it wasn’t. Once the interaction with these pretend external data sources was simulated, complexity could be added to the information returned by each source, perhaps error messages or responses like “No such tax record exists for this person,” to see how the system would behave, with attention to the user experience through the web interface as well. The handshake with these external data sources has to be carefully defined, and using a common protocol is a smart way to go for this kind of messaging. Some sort of message broker on an application server probably holds the business logic that orders the sequence of calls. This too had to be made scalable, so that multiple instances could be spawned based on demand.
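A minimal sketch of such a pretend data source, written as a tiny Python HTTP stub (endpoint and field names are invented): point the data hub at this instead of the real IRS while exercising the rest of the system, then teach it to return error cases later.

```python
# Sketch: a stub that stands in for the external IRS service and always
# answers "OK", so end-to-end behavior can be tested before the real
# interface is ready. Later versions could return error cases on demand.
import json
from http.server import HTTPServer, BaseHTTPRequestHandler

class FakeIRS(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "OK", "agi": 50000}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The data hub would be configured to call http://localhost:8081/
    # instead of the real external source during early testing.
    HTTPServer(("", 8081), FakeIRS).serve_forever()
```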

This stuff is admittedly pretty hard to engineer. It is not the sort of systems engineering that is done every day, and probably not by a vendor like CGI Federal. But the firms and the talent are out there to do these things, and they would have been done with the proper kind of systems engineer in charge. This kind of architecture also allows business rule changes to be centralized, which permits different data sources to be introduced late in the life cycle. Properly architected, this is one way to handle changing requirements, provided a business-rules server running business rules software is used.
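To illustrate the centralization idea in miniature (the rules below are invented, and a real deployment would use a dedicated business-rules server rather than a few lines of code), keeping eligibility rules in one replaceable structure means a late-arriving requirement changes one table instead of a dozen subsystems:

```python
# Sketch: business rules gathered in one place so they can be changed or
# extended late in the project without touching the subsystems that use them.
RULES = [
    # (description, predicate over the applicant record) -- invented examples
    ("citizen or legal resident", lambda a: a["status"] in ("citizen", "legal_resident")),
    ("income above the Medicaid cutoff", lambda a: a["agi"] > 16000),
]

def eligible_for_subsidy(applicant):
    """An applicant qualifies only if every rule in the table passes."""
    return all(predicate(applicant) for _, predicate in RULES)

if __name__ == "__main__":
    print(eligible_for_subsidy({"status": "citizen", "agi": 30000}))   # True
    print(eligible_for_subsidy({"status": "visitor", "agi": 30000}))   # False
```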

None of this is likely to be obvious to a largely non-technical federal staff groomed for management rather than systems engineering. So a technology advisory board filled with people who understand these advanced topics was certainly needed from the project’s inception. Any project of sufficient size, scope, cost or political significance needs a body like this, with teeth.

Today at a Congressional hearing, officials at CGI Federal unsurprisingly declared that they were not at fault: their subsystems all met the specifications. It’s unclear whether those subsystems were also engineered to scale on demand. The crux of the architectural problem, though, was clearly in the message communications between these system components, as that is where the system seems to break down.

A lesson to learn from this debacle is that as much effort needs to go into engineering a flexible overall system as goes into engineering each component. Testing the system early under simulated conditions, then under more complex conditions and higher loads as it matured, would have detected these problems earlier. Presumably there would then have been time to address them before the system went live, because they would have been visible problems. System architecture and system testing are thus vital for complex message-based systems like healthcare.gov, and a top-notch systems engineering plan needed to have been the centerpiece, particularly since the work was split among multiple vendors, each responsible for its own subsystem.

Technical mistakes will be discussed in the last post on this topic.

 
