Healthcare.gov and the problems with interactive federal websites


Today’s Washington Post highlights problems with the new healthcare.gov site, the website used by citizens to get insurance under the Affordable Care Act. The article also talks about the problems the federal government has in general managing information technology (IT). As someone who happens to manage such a site for the government, I figure I have something unique to contribute to this discussion.

Some of the problems the health care site is experiencing were predictable, but some were embarrassingly unnecessary. Off the top of my head I can see two clear problems: splitting the work between multiple contractors, and the hard deadline of bringing the website up on October 1, 2013, no matter what.

It’s unclear why HHS chose to have the work done by two contractors. The presentation (web-side) was done by one contractor and the back end (server-side) was done by another. This likely had something to do with federal contracting regulations. Perhaps it was seen as a risk mitigation strategy at the contracting level, or a way to keep the overall cost low. But it’s never a great idea for two contractors to do their work largely unaware of each other’s work. Each was doing subsystem development, and as subsystems it’s possible that each worked optimally. From the public’s perspective, though, it is just one system. What clearly got skipped was serious system testing. System testing is designed to test how the system behaves from a user’s perspective. A subset of system testing is load testing, which sees how the system reacts when it is under a lot of stress. Clearly some of the requirements for initial use of the system wildly underestimated the traffic the site actually experienced. But it also looks like, in the effort to meet an arbitrary deadline, load testing, and fixing the problems it uncovered, could not happen in time.

It also looks like the use cases, i.e. the user interaction stories that describe how the system would be used, were flawed. It turned out that most initial users were just shopping around and trying to find basic information. That resulted in a lot of browsing but little in the way of actual buying. Most consumers, particularly when choosing something as complex as health insurance, will want to have some idea of the actual costs before they sign up. The cost of health care is obviously a lot more than just the cost of premiums: copays can add thousands of dollars a year to the actual cost of insurance. Sorting that out requires reading, study, in many cases asking questions of actual human beings, and then making an informed decision. It will take days or weeks for the typical consumer to figure out which policy will work best for them, which means a lot of traffic to the web site, even when it is working optimally.

The Post article also mentions something I noticed more in my last job than in my current one: federal employees who manage web sites really don’t understand what they are managing. This is because most agencies don’t believe federal employees actually need experience developing and maintaining web sites. Instead, this work is seen as something that should be contracted out. I was fortunate enough to bring hands-on skills to my last job, and it was one of the reasons I was hired. In general, the government sees the role of the federal employee as “managing” the system and the role of contractors as “developing and maintaining” the system. This typically leaves the federal employee deficient in the needed technical skills, and thus he or she can easily make poor decisions. Since my last employer just happened to be HHS, I can state that this is how they do things. So it’s not surprising the site is experiencing issues.

Even if you do have a federal staff developing and maintaining the site, as I happen to have in my current job, it’s no guarantee that they will have all the needed skills either. Acquiring and maintaining those skills requires an investment in time and training, and adequate training money is frequently in short supply. Moreover, the technology changes incredibly quickly, which leads to mistakes. These bite me from time to time.

We recently extended our site to add controls that give the user more powerful ways to view data. One of these is a jQuery table sorter library. It allows long displays of data in tables to be filtered and sorted without going back to the server to refresh the data. It’s a neat feature, but it did not come free. The software was free, but it added marginally to the time it takes the page to fully load. It also takes time to put the data into structures where this functionality can work, and the component gets slow with large tables or multiple tables on the same page. Ideally we would have tested this prior to deployment, but we didn’t. It did not occur to me, to my embarrassment; I like to think that I usually catch stuff like this. This is not a fatal problem in our case, just a little embarrassing, to the tune of a second or two of extra load time for certain web pages. Still, those who have tried it love the feature. We’re going to go back and reengineer this work so that we only use it with appropriately sized tables, because the marginal extra page load time may be so annoying for some that they choose to leave the site.
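For the curious, here is a minimal sketch of the kind of wiring involved. The post doesn’t name the exact plugin, so this assumes the widely used jQuery tablesorter plugin, already loaded on the page along with jQuery; the table id and the row-count threshold are purely illustrative.

```javascript
// A minimal sketch, not our production code: enable client-side sorting on a
// data table, but only when the table is modestly sized. Assumes jQuery and
// the jQuery tablesorter plugin are already loaded, and that the table has a
// proper <thead>/<tbody> structure (which the plugin requires).
$(function () {
  var table = $("#results");          // illustrative table id
  var rowCount = table.find("tbody tr").length;

  // The threshold is arbitrary here; the point is to skip initialization for
  // the oversized tables where the component bogs down.
  if (rowCount <= 500) {
    table.tablesorter();              // column headers become click-to-sort
  }
});
```

This is roughly the “appropriately sized tables” reengineering mentioned above: keep the feature where it shines and leave the big tables alone.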

Our site, like healthcare.gov, is also highly trafficked. I expect that healthcare.gov will get more traffic than our site, which gets thirty to forty million successful page requests per month. Still, scaling web sites is not easy. The latest theory is to put redundant servers “in the cloud” (commercial hosting sites) to use as needed on demand. Unfortunately, “the cloud” itself is an emerging technology. Its premier provider, Amazon Web Services, regularly has embarrassing issues managing its cloud. Using the cloud should be simple, but it is not. There is a substantial learning curve, and it all must work automatically and seamlessly. The federal government is pushing use of the cloud for obvious benefits, including cost savings, but it is really not ready for prime-time, mission-critical use. Despite the hassles, if high availability is an absolute requirement, it’s better to host the servers yourself.

The key nugget from the Post’s article is that the people managing these systems in many cases don’t have the technical expertise to do so. It’s sort of like expecting a guy in the front office of a dealership to disassemble and reassemble a car on the lot. The salesman doesn’t need that knowledge, but to competently manage a large federal website you really do need that kind of hands-on experience. You need to come up from the technical trenches and then add managerial skills to your talents. In general, I think it’s a mistake for federal agencies to outsource web site development. Many of these problems were preventable, although not all of them were. Successful deployment of these kinds of sites depends to a large extent on having a federal staff that knows the right questions to ask. And to really keep up to date on a technology that changes so quickly, it’s better to have federal employees develop these sites themselves. Contractors might still be needed, but more for advice and coaching.

Each interactive federal web site is its own unique system, as healthcare.gov certainly is. The site shows the perils of placing too much trust in contractors and of having a federal managerial staff with insufficient technical skills. Will we ever learn? Probably not. Given shrinking budgets and the mantra that contracting out is always good, it seems we are doomed to repeat these mistakes in the future.

Don’t say I didn’t warn you.

(Update: 10/28/13. This initial post spawned a series of posts on this topic where I looked at this in more depth. You may want to read them, parts 1, 2, 3 and 4.)

InfoWorld peers ten years out into our technology future


I had no choice but to give up reading InfoWorld when it ended its print publication. It still has a presence online. Although I still think it is far less useful as a pure web publication, I find myself straying over to the website from time to time. I also belatedly signed up for a few email newsletters, figuring half an InfoWorld was better than none. That is how I stumbled across this InfoWorld article: 10 Future Shocks for the next 10 years. InfoWorld, which is celebrating thirty years, is now looking ahead and imagining what information technology (IT) shocks might occur over the next ten years. Here are the predictions and my critiques as someone who also earns his living in IT.

Shock No. 1: Triumph of the cloud (Brian Chee)

My main prediction is that the high cost of power and space is going to force the IT world to look at cloud services, with a shift to computing as a cloud resource occurring in the next five years.

To me this is a “no-duh”. Everything in the IT stack is moving toward commoditization and optimization. Location is becoming irrelevant. Storage and systems are becoming wholly abstracted and the Internet is becoming a reliable enough medium to make a local data center obsolete. All the messy logistics of moving and securing data services and systems will be transparent when they are hosted in the cloud for a fraction of the cost of rolling your own data center. Many standards will have to be developed first, but the market will drive it, so it should happen.

Shock No. 2: Cyborg chic (Bob Lewis)

By 2018, geek chic will look a lot like what today we’d call a cyborg. The human/machine interface will be ubiquitous, with people walking around giving voice/whisper commands and using earbud audio and an eyeglass display that superimposes a machine-enhanced view of the world on ordinary vision.

I think we will get partly there, but we will not choose to go all the way there. Even with electronics miniaturization, such a vision would require multiple devices and plenty of portable power, probably more than we could carry to keep them running all day. This means we cannot be mobile and networked 24/7 because, like our cell phones, these devices would often need time-consuming battery recharges. In addition, the human-machine interface can only be made elegant to a degree. We will look unfashionable walking around so closely tethered to electronic devices. Instead, we will choose the minimal portable technology we need. Most of the time, our portable integrated systems will be turned off or on standby, since most of the time we won’t want to drain their batteries.

Shock No. 3: Everything works (Sean McCown)

The interface is intuitive and sleek. It even changes based off what you’re currently doing so that you can access features of the OS that you need while you’re, say, working with e-mail or editing pics. We’ll call this OS “Windows Sci-Fi” because we’re all dreaming if we think that’ll ever happen.

This one is easy to call too: it’s not going to happen. Expecting that all information technology will work easily, quickly and transparently, and that it will all magically integrate together, is just fantasy, yes, even if you are using Mac OS X. Heck, we’ll still be trying to integrate our existing systems to work with LDAP directories. I guarantee you that ten years from now you will still be in password hell, because the vast majority of your systems will still be stove-piped, just too expensive or specialized to reengineer. Even if it could be done, it will be something like trying to hold a conversation with a man on Mars: the latency from all the system impedance mismatches will be too much.

Shock No. 4: Nothing escapes you (Savio Rodrigues)

Vannevar’s Memex vision will come to fruition through your next-next-next-generation PDA. The device will continuously capture all audio and video from your daily experiences and upload that content to the cloud, where it will be parsed to succinctly recognize your tasks, interesting information, and reminders — all searchable, of course.

Sorry, no. First, there will be so much conversation going on that it will be difficult, if not impossible, to sift through all the voices, transcribe it all, and assign names to the voices. Second, such a device also raises all sorts of privacy issues. Third, there is a point of information overload, and this volume of data is simply too much. Fourth, even if all that information could be captured, it would be virtually impossible to develop software smart enough to automatically and transparently recognize information like the fact that you have a dental appointment next Tuesday at ten o’clock. Human speech is too complex.

Shock No. 5: Smartphones take center stage (Martin Heller)

I see the smartphone evolving into the preferred instrument for constant connectivity, with voice interaction, facial recognition, location awareness, constant video and sound input, and multitouch screens.

This is within the realm of possibility, but it is unlikely we will see all these features within the next ten years. It is doubtful such a device would be widely in demand.

Shock No. 6: Human-free manufacturing (Bob Lewis)

I think the trend will accelerate, but I cannot see manufacturing becoming completely human-free. Will computer-assisted robots be able to unload a truck packed with supplies? Is any machine so perfect that it will be able to work without any maintenance whatsoever, or be completely serviced by another machine?

Shock No. 7: Perfect image recognition (Sean McCown)

One day you’ll be able to see a picture of something or take a picture of something, and load it into a search engine and have it scan the pic, search, and tell you what it is. So you see a flower, stop and take a pic of it, and Google will tell you what kind of flower it is.

One day I think this is likely, but not in the next ten years.

Shock No. 8: Big Brother never sleeps (Bob Lewis)

In the next 10 years, perfect governmental tracking and monitoring of each human being will become reality.

Civil libertarians of course would do their best to make sure this does not happen. Technically, I am not sure it can be done. I do not think we have the storage, network or computer capacity to monitor everyone in real time and apply intelligence to it. Thank goodness! Technology like anything else suffers from the triple constraints of time, scope and cost. It may turn out to be technically possible but cost prohibitive or require generations to develop and deploy. Most likely this is one desire that is simply too complex and costly to create.

Shock No. 9: Unbroken connectivity (Curtis Franklin)

Checking to see if you’re connected to a network will seem as old-fashioned as turning on a device to get information in 10 years. From sports scores to friends’ activities, the idea of interrupting your activities to get the news will be a thing of the past.

While it may be possible in ten years to be always connected to a high-speed data network, I suspect it will be cost prohibitive to do so, particularly in remote locations. I am also skeptical that data networks will ever be as reliable as, say, the phone system, because a data network is far more complex. The phone system is reliable because it is relatively simple and has redundancy built in.

Shock No. 10: Relationship enhancement (Jon Williams)

My 2018 prediction is that we use technology to remember and fortify social connections. You’ll get together socially with a friend, geo-locate, take pictures, Twitter, make notes and videos, and so on, and it all gets automatically filed away. There will be no difference between “online friends” and “real friends”. This will be life-altering.

I think we can make online relationships better with improved information technology, but there will be no substitute for in-person relationships, because meeting someone in person is a much richer and more intimate experience.

Obituary for InfoWorld (1978-2007)


While the publication InfoWorld was certainly not responsible for my success in the information technology (IT) business, it was arguably the jet fuel that pushed my career into the IT stratosphere. I needed InfoWorld, or something similar, to bridge the gap between IT neophyte and IT guru. I still do not consider myself an IT guru, but assuming it could be objectively measured, I am confident that I am in the top 10% of IT talent. InfoWorld was instrumental in helping me get there. That is why I always looked forward to my weekly copy. I never equated reading InfoWorld with a chore; I viewed it as fun. From the irreverent Notes from the Field column by Robert X. Cringely (who is not an actual person, just a trademark) to top-notch columnists like Brian Livingston, for much of the last couple of decades InfoWorld was my essential publication for staying ahead of the IT curve.

I say “was” about InfoWorld, but its staff would say “is”. InfoWorld is still around. What has changed is that it is now available only online. Its last issue, with “Final Print Issue” all over it, arrived in my mailbox early last week. Dated April 2, 2007, it at first struck me as an elaborate April Fool’s joke. After all, InfoWorld has been around since 1978, when it was known as Intelligent Machines Journal. When it was sold to the IDG Group (which offers a plethora of IT publications with “world” in their names) a year later, it became InfoWorld.

I stumbled upon my first copy of InfoWorld around 1987. Someone had one lying around the office. Whenever I came upon a copy, I would read it cover to cover. It was not long before I decided I needed to get a subscription, which was free. There was only one problem: it was free only if you qualified. I was a poor computer programmer making $25,000 a year, an IT nobody. Consequently, InfoWorld was not interested in giving me a subscription. They wanted people who made decisions and exercised budget authority. For years, I tried without success to get a subscription. Then one year I suddenly qualified. I do not know if it was because of the progression in my career or because, like many subscribers, I stretched the truth a bit in order to qualify. InfoWorld became a precious gift that kept on giving. It was the best bargain out there for a knowledge-craving IT person like me.

Aside from its fabulous columnists, what was special about InfoWorld was that it was always just ahead of the curve. It was a gloriously nerdy magazine, full of detailed product test reviews, IT news, and writers firmly grounded in the toughest IT trenches. While it would occasionally flirt with the fanciful, it almost always kept its focus right where it belonged: on the business enterprise. That is where people like me made our living. We had to succeed in the business sphere to advance our IT careers. It was a nuts and bolts sort of publication that told me what I needed to know right now to stay on the pragmatic leading edge of IT. While it could occasionally wax poetic on the virtues of the Mac or the Amiga, most of the time it was focused on wherever the market was, which was typically the Microsoft Windows universe. Consequently, reading columns like Brian Livingston’s in InfoWorld was a rush. Brian is the author of many of the Windows Secrets books. I never needed to buy his books, though; I learned all sorts of secrets about Windows from his weekly column in InfoWorld. But Brian was just one of a cadre of top-tier IT columnists that InfoWorld hosted. These included luminaries like Bob Metcalfe, co-inventor of Ethernet, the frame-based networking technology. If you are reading this, the data probably arrived through an Ethernet card attached to your PC. You can thank Bob for his invention. I can thank him for sharing his wisdom in InfoWorld for many years.

In the 1990s, InfoWorld was in its prime. It was a frantic head rush of a publication, stuffed to the gills with incredibly relevant IT material. It overflowed with IT information from a boots-on-the-ground perspective. I held on to my subscription the way a lamprey holds onto a ship. When I was required to renew, I never dallied; I could not afford to miss an issue. I needed it to stay on the technology edge. There were many computer magazines out there, but InfoWorld was special. I felt it was in a class by itself. I occasionally flipped through other IT journals, but all of them left me feeling they were missing something. InfoWorld, however, was always focused on precisely what I needed to know right now to thrive in my IT career.

Reading InfoWorld was enough to make me feel like I was keeping up with IT. There were only so many hours in a day, and I simply did not have the time to read everything. The virtue of InfoWorld was that I did not have to. InfoWorld was easy and fun to read. I had to don my academic hat to work my way through the dense verbiage and illustrations in publications like IEEE Computer. You did not have to be a rocket scientist to understand InfoWorld, just an IT enthusiast. All you had to do was keep reading it regularly. Eventually you picked up all the acronyms and buzzwords and could put them into an applied context. That was when you felt you had arrived.

Unfortunately, the real world hit InfoWorld. Two events coincided: the Internet and the end of the technology boom. The Internet pushed more content online. The collapse of the tech boom, and the consequent dip in ad revenue, pulled out its financial pillars. I was shocked when they let Brian Livingston go. He was just one of many columnists, like Nicholas Petreley, who were unceremoniously shown the door. Some of the new columnists were very good, but most could not fill the shoes of those they replaced. Readers expressed their disgruntlement by letting their subscriptions lapse. Still, I held on, hoping that the InfoWorld I used to know would return. Yet the size of the magazine kept shrinking along with the advertising revenue. I should have suspected something was amiss when InfoWorld started becoming more of a brochure than a magazine.

Now its print version is gone for good. The same content is online, but I am not sure I will make it a habit. The virtue of the print publication, as one of its columnists pointed out some months ago, is that it is finite. The problem with the Internet is also its virtue: it is infinite, as well as constantly changing. It is not that InfoWorld has a bad web site; it is that (a) like most human beings I have trouble absorbing lengthy articles online, (b) a laptop computer is too big to retire to bed with (which is where I do most of my technical reading), and (c) I need IT distilled down for me. That is why magazines and newspapers exist. That is how they add value. Being required to read InfoWorld online increases its hassle factor substantially. Yes, I can search the site easily enough, but I cannot efficiently browse it. I still need to have a surface understanding of IT issues, but the way the content is formatted online leaves me with zero interest in digging for the details.

I should at least be glad that forests are no longer being cleared to make sure I get my print copy of InfoWorld. However, InfoWorld would have ended up in my recycling bin anyhow. The best InfoWorld columnists moved on long ago, not by choice, but because InfoWorld sent them packing. I still track some of them. For example, I subscribe to Brian Livingston’s Windows Secrets email newsletter. Brian recovered nicely from his firing. Indeed, he got the last laugh: he now has more subscribers to his email newsletter than InfoWorld had when he was writing for the magazine.

I hope that some other company will pick up where InfoWorld left off. I am sure advertisers still want to target me. Give me its equivalent in print form and I will subscribe, as will many others. I suspect that the reason InfoWorld’s subscriber base shrank by half was that, during the last recession, they got penny-wise and pound-foolish. In effect, they chopped the legs off their own publication. It would have made much more sense to spend money to get back the fabulous staff and columnists they used to have and rebuild their base. Instead, they looked at their balance sheet rather than their long-term profitability. As a result, InfoWorld, for many years arguably the premier computer magazine, is a sad, shriveled imitation of itself.

Thanks in part to all those years of reading InfoWorld, I am now precisely the kind of reader that they should crave the most. In effect, they abandoned people like me by discounting my need for a quality printed publication in favor of the cheap production costs of publishing only online. Getting rid of their print publication was a foolish decision. If the InfoWorld website is still around in a year, its content is likely to be of marginal value. Unfortunately, it appears the money managers at IDG are still missing the big picture. I thank InfoWorld for all those years of insight and detail. However, they have lost me as a customer.

The Transformation of the Information System


Like many of us in the information technology field, my career has been about creating and maintaining information systems. The techniques and technologies used have varied. Until now the process has stayed essentially the same. It goes something like this. Get people to put data into a computer. Store it somewhere. Apply business rules to it by writing a lot of customized code. Then spit it out in the forms wanted by various data consumers.

Really, that’s it. It’s doing with a computer what people used to do in their brains. Computers just have the ability to do these things much more quickly and reliably. But of course you have to tell computers precisely what to do and the order in which to do it. This logic is what we call code, or software. While it has not made me rich, it has kept me gainfully employed and enjoying a comfortable lifestyle.

There were classically a couple of ways to get information into a system. The method most often used at the start of my career in the early 80s was to stick someone in front of a terminal and have them enter data into forms on a screen. They then pressed a key and off the data went, through the ether and into a database somewhere. But there are other ways to bring data into a system. In the old data processing days one popular way was to load big reels of tapes from somewhere else and read them into mainframe computers. Since then we have found more efficient ways of recording some information in a computer. Bar code scanning is probably the best-known way.

Once the information is in the system it is scrubbed, processed, compared with other information and placed somewhere else. In other words, it is sort of assembled. An information system is a lot like a factory. Raw material (data) is dumped in at one end. Out the other end comes data on steroids: information. You know much more about something from the output of the system than you know from the relative garbage of facts that fed it. And this information is typically used to add value, such as to gain a strategic or competitive advantage.

That stuff in the middle, between keyboard and printer, was a lot of usually hand-crafted code. At the dawn of my career it was often written in Fortran or COBOL. During the mid to late 1980s it was more likely to be in languages like C, Pascal or PL/I. During the 1990s object-oriented programming languages gained ascendance. Instead of C, it was C++. Businesses that ground out client/server object-oriented applications used development environments like Delphi or PowerBuilder. Data and the software used to manage those data began to merge into something called objects. Under the hood the stuff was still stored separately, but conceptually an object let us programmers get our brains around larger and larger programming problems. As a result we learned the value of abstraction. Our programming languages became more ethereal. It became a rare programmer who could actually write a binary sort routine. Instead we called APIs or invoked methods on software objects to do these things.
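To make the abstraction point concrete, here is a tiny, purely illustrative example in JavaScript (the numbers are made up): rather than hand-coding a sort routine, we hand the language’s built-in sort method a comparison function and let it do the work.

```javascript
// Illustrative only: leaning on a built-in API instead of hand-writing a sort.
var responseTimes = [412, 187, 305, 250];   // made-up numbers

// Array.prototype.sort() does the mechanics; the comparator is all we write.
responseTimes.sort(function (a, b) {
  return a - b;
});
// responseTimes is now [187, 250, 305, 412]
```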

Toward the late 90s critical mass developed around the idea that data should be easy to move around. Businesses needed simpler ways to exchange data with other businesses. This was one of those “no duh” ideas that someone should have successfully run with twenty years earlier. Okay, there were ideas like EDI, but they were expensive to implement. Instead, with the Internet finally ubiquitous enough to use as a common data transmission medium, a data standard for the web emerged: Extensible Markup Language, or XML. In the process data became liberated. Whether a field started in column 8 no longer mattered. Tags describing the data were wrapped around each element of data. An externally referenced XML Schema told you whether the data were valid or not. Instead of writing yet another unique application to process the data, a generic SAX or DOM parser could easily slice and dice its way through the XML data. Using objects and modules built into the programming language of your choice, it became fairly simple to parse and process XML data. As a result there was at least a bit less coding needed to put a system together than in the past.
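Here is a small sketch of what that generic parsing looks like in practice, using the DOM parser built into a web browser’s JavaScript environment. The XML snippet and element names are invented for illustration only.

```javascript
// A minimal sketch of generic DOM parsing; the data and element names are
// made up for illustration.
var xmlText =
  '<orders>' +
  '  <order id="1001"><customer>Acme Corp</customer><total>249.00</total></order>' +
  '  <order id="1002"><customer>Globex</customer><total>87.50</total></order>' +
  '</orders>';

// DOMParser is a generic, off-the-shelf XML parser: no custom record-layout
// code, no "field starts in column 8" logic.
var doc = new DOMParser().parseFromString(xmlText, "text/xml");

var orders = doc.getElementsByTagName("order");
for (var i = 0; i < orders.length; i++) {
  var id = orders[i].getAttribute("id");
  var customer = orders[i].getElementsByTagName("customer")[0].textContent;
  console.log("Order " + id + " is for " + customer);
}
```

The same walk through the document works no matter which system produced the XML, which is exactly the point.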

The newest wave in data processing is called web services. Just as XML became a generic way to carry data along with its meaning over the Internet, web services provide protocols for automated discovery, transmission and retrieval of XML-formatted data. The best way to do this is still being hammered out. Protocols like SOAP are losing favor to simpler URL-based methods like XML-RPC and REST. We’ll figure out what works best in time. But equally as interesting as these web services technologies are the XML transformation engines now widely available. The XSLT (Extensible Stylesheet Language Transformations) specification, for example, allows XML data coming into or going out of a system to be transformed in an infinite variety of ways. It can be something simple, like converting XML data into a web page with the XML data embedded inside it. Or XML can be rendered into something more complex, like a PDF file or an MS Word document.
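As a sketch of what the simpler URL-based style looks like from the client side, here is a plain HTTP GET that retrieves XML and hands it back as an already parsed document. The URL and element names are hypothetical.

```javascript
// A sketch of REST-style retrieval: a plain GET for XML, no SOAP envelope.
// The URL and element names are hypothetical.
var request = new XMLHttpRequest();
request.open("GET", "https://example.com/api/facilities?state=MD", true);
request.onreadystatechange = function () {
  if (request.readyState === 4 && request.status === 200) {
    // responseXML is already a parsed DOM document, so the same generic
    // DOM techniques shown earlier apply to the service's reply.
    var doc = request.responseXML;
    var names = doc.getElementsByTagName("name");
    for (var i = 0; i < names.length; i++) {
      console.log(names[i].textContent);
    }
  }
};
request.send();
```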

But what does all this mean? The light bulb finally went on for me yesterday. I was explaining to a colleague at work why I wanted a system I manage to have web services. My team understood the value. Data and presentation could be wholly separated. With the data in XML, it could fairly easily be transformed with the XSLT engine of our choice into whatever format we chose. The effect of this is to markedly diminish the amount of custom logic needed to set up and maintain an information system. The big payoff? In theory, fewer programmers are needed and it should be faster and easier to manage information. In addition, the system should behave more reliably, since less code is needed to run it.

For example, the system I manage is what we in the computer business call tightly coupled. It works great, but it’s a pain to maintain. The data, of course, is stored in a classical relational database. To get it out and present it to the user we have to turn it into HTML. Right now we do this with a lot of code written in Perl. Naturally we get lots of requests to add this, delete that, and show the data rendered like so. And so, once again, as programmers have done for a generation, we perform major surgery on our code and go through extensive testing until we get the results requested. But since we are a government system in a non-sexy agency, we are grossly underfunded. So most of these requests go into a programming queue, and many of these great ideas will be abandoned because of our tightly coupled system and our limited resources.

So what’s really interesting to me about these XML technologies is that we should be able to put systems together much more quickly once we have the architecture in place. We should also be able to make changes to our systems much more quickly. We could end up with systems that, in the classical sense, require little programming. This example on the W3Schools site shows how incredibly simple it can be to take data from an XML data store and render it as HTML. Once the XML schema is defined and the template is written in XSLT, the rendering can be accomplished in just a few lines of code. Of course this is a very simple example. But when I think about the effort and time that would have been required to produce the same result in those pre-XML, pre-web-services days, I am a little awestruck. The productivity potential is sky high.
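In the same spirit as that W3Schools example, here is roughly what those few lines of code look like when the transformation is done in the browser with its built-in XSLTProcessor. The file names and the target element id are hypothetical.

```javascript
// A sketch of applying an XSLT template to XML data in a few lines of code.
// File names ("catalog.xml", "catalog.xsl") and the "output" element id are
// hypothetical.
function loadDoc(url) {
  var req = new XMLHttpRequest();
  req.open("GET", url, false);   // synchronous, for brevity in this sketch
  req.send();
  return req.responseXML;
}

var xml = loadDoc("catalog.xml");   // the data
var xsl = loadDoc("catalog.xsl");   // the presentation template

var processor = new XSLTProcessor();
processor.importStylesheet(xsl);

// Transform the XML into an HTML fragment and drop it into the page.
var fragment = processor.transformToFragment(xml, document);
document.getElementById("output").appendChild(fragment);
```

Change the stylesheet and the presentation changes; no surgery on application code is required.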

So I’m starting to wonder: do XML technologies mean that information systems will no longer require any crafting by programmers at all, but will instead simply be assembled? If so, this is revolutionary. And the pieces seem to be there. On the output side of the system, XSLT and an XML database work fine together at spitting out information in a useful format. There is little or no coding needed to make that happen. But what about the input side? There is revolutionary news here too. Initiatives like the W3C XForms project are finding standards-based ways to gather form data intelligently. We programmers should not have to struggle much longer with HTML forms embedded with Javascript client logic and server-based scripting logic. XForms will handle the job in an XML way that will minimize coding and markedly reduce maintenance costs.

And so there you have it: all the components needed to construct basic information systems in a generic fashion are nearly in place. Simple data collection and retrieval systems, which are what I have been building my whole career, could potentially be done using open standards and without writing a line of code. With an XForms editor we will draw our forms and push them out to browsers and other web-aware devices. Input interface: done. Web services can be used for the automated data interchanges needed between businesses. Realizing this vision may require putting a SOA (service-oriented architecture) in place first. A good application server will be able to get the data and persistently store it without much coding. And an XML-aware transformation engine, embedded in or talking to the application server, will take our template of choice and render it in the format and medium wanted.

Will programmers no longer be needed to construct information systems? Not quite, or at least not quite yet. Few applications are as simple as the one I suggested. And there are hosts of other variables to be thought through, including quality-of-service requirements that often require programmers. But I suspect that over time information systems will require fewer programmers. The emphasis will instead move toward the system administration side. Database administrators will still capture and manage data, but they will also tune the database for rendering content in XML. Business rules will move into the database or into rules engines attached to the application server. The result should be fewer programmers steeped in the mechanics of languages like Perl, and more time spent tuning databases and maintaining business rules. Form data will be designed in XForms editors. We will use similar tools to render output using XSLT.

Time will determine whether I am seeing the future clearly. But clearly I am not alone since this is the whole larger point of XML technology. Companies like Microsoft have created packages like BizTalk just for this purpose. (Their Visio product is used to diagram the business rules.) It should get easier and become less costly to create and maintain information systems. And over time we can expect that systems will be able to exchange and process data with each other much more simply.