Posts Tagged ‘Internet’


Good news! You are going to be immortal, sort of!

Google recently announced its Inactive Account Manager. If you have a Google account, this new feature essentially lets you tell Google, “Assume I’ve died if I haven’t logged in after X days, and when I hit that number of days, destroy all data about me.” Alternatively, you can tell Google not to destroy its data about you, but to authorize a list of other individuals to access your Google account.

This might be welcome news to the executor of your estate, who has to slog through the odious task of getting your creditors to go away as well as notifying friends, relatives, distant acquaintances and your LinkedIn.com colleagues that you are no more. Assuming Google follows through, if you choose to have its records about you destroyed sometime after your death, you are not only dead in the actual sense, but also dead in the digital sense, at least as far as Google’s data about you goes.

It’s nice of Google to plan for your demise. Other companies out there are likely not to be so willing to delete data about you. The predominant companies in the online world, though, are figuring out ways to handle your electronic data after your physical demise. Facebook is trying out a way to let people memorialize a dead person’s Facebook account. Twitter has a convoluted process for decommissioning an account that, in its current form, will make your executor not even want to bother trying. Doubtless other social media and internet conglomerates will develop their own policies, but it is likely your square Instagram pictures will still be out there somewhere in cyberspace centuries after you are dead. Technology is providing a way for us to become immortal, at least in the electronic sense, long after our bodies have succumbed to their finite limits.

Many of those digital fingerprints you left are also likely to be immortalized. Which ads you clicked on. The dreck you purchased from eBay back in 2003. Your rantings in public forums and comments on Yahoo news articles. Maybe even the porn sites you visited and your account on ashleymadison.com. Also your credit history, your spending patterns as documented on mint.com, your family history as you charted it with Facebook’s family history app and maybe all that stuff you uploaded to your personal cloud. All there for others to pick over. If you think about it, you should feel aghast. I have heard unconfirmed reports that one of my grandfathers snuck out the back door frequently for some booty down the street, presumably unbeknownst to dear old grandma. No one can plausibly confirm or deny it, so I will choose to remember my grandpa as the genial guy who grossed me out when I went fishing with him and he sliced off the fish’s head.

Our generation won’t have plausible deniability. Some enterprising great-granddaughter in 2100 may be sifting through open source big data warehouses and be able to trace that message you sent a lover on ashleymadison.com to your IP address and computer, because five minutes earlier you had sent an email to a friend from the same machine. So that’s the downside, but the real bummer is that it is probably too late to do anything about it. Being human, we’re bound to have moral failings; it’s just that in the past they did not normally come to light, so the living assumed the best about us. The good news is that if you can keep the researchers from putting all these facts together until after you are dead, then it will all be moot. Your ex and children may be shocked when they subsequently learn of your immoral behavior, but it won’t matter to you. I am guessing that an account on reputation.com isn’t going to quite cut it.

So your drunkenness, lecherousness, gambling addiction, wife beating and stash of pornography, or at least some part of it, will be available for those willing to look for it. It is not too hard to envision companies that will do this for profit. In fact, I can see a whole new business model built around electronic blackmail. (The blackmail.com domain, curiously, is owned but parked. I should probably make an offer on the domain.) Something like:

Dear Mr. John Jones,

We are aware that you are seeing two other women on the side, plus you have a gay lover you see on alternate Wednesdays. But no one needs to know because we won’t tell! We guarantee that we will not reveal this information about you for the low price of $1000 a year paid now, or in easy monthly installments of just $100.

Otherwise, on May 1st we will send a summary of the information we have about you to gawker.com and to Pastor Vleek at the United Methodist Church where you tithe, along with proof of the veracity of certain claims we will make, so they are beyond plausible deniability.

We accept Visa, Mastercard and Discover, or you can make payments confidentially with your PayPal account. Please visit my.blackmail.com and enter your special confidential access code 6f7gjk93! to initiate payment.

Sincerely,

Jason Dweeb
Account Executive

What’s the upside? Well, electronic immortality! Because there won’t be just blackmail.com; you will also want to hire memorializedforever.com. In the past you were memorialized with fading photographs and copies of handwritten letters, if that. In the future you will have the ability to let people see you in high fidelity. You will want to buy their high fidelity service, in which you will be recorded in high definition 3-D. The voice quality will be high fidelity too. Your future great, great grandchildren will feel like they really know that guy otherwise known as the carcass planted under the tombstone at Crestview Cemetery. If you want, you can expound about your history, your feelings, your concerns or anything you want future generations to know about you. You can even pay for the three-way backup service, where your high definition memorial is hosted on redundant cloud servers plus immortalized in blocks of digital-friendly material, which can be readily uploaded in the event of a catastrophic failure.

I hope this is what you want, but it’s all sort of moot. It’s happening and there is not much that can be done to stop it. There will probably be federal legislation at some point to at least regulate this business, but as a practical matter the internet is impossible to really police, so it will all be stored somewhere anyhow and available for a price.

As for me, when I die I would prefer to be really dead, just like dear old possibly lecherous grandpa. I won’t have that opportunity, but I will at least take the time to set my Google Inactive Account Manager settings, as a courtesy to my wife, who will probably clean up behind me and really hates paperwork.

 

Anti-government morons

It’s come to this: the anti-government morons are decrying “big government” using the Internet, which would not exist without big government.

Granted, not everyone knows or cares about the history of the Internet. Rest assured it was not spawned as an invention of private industry, or manufactured in someone’s basement. That was sort of tried in the 1980s and failed. Yes, the indispensable Internet that, if you are like me, you are virtually addicted to (and which also keeps me employed) is a product of the systematic application of your tax dollars chasing what any sound financial analyst back in the 1960s would have called a wild goose chase. As an investment of tax dollars its return is incalculable, but it has connected us as never before, made getting information incredibly simple, and has even helped foment revolution in countries like Egypt. It will probably be seen in retrospect as the most brilliant use of government tax money ever and a key enabler of democracy across the globe.

Anyone remember CompuServe? Or AOL? They were private Internet-like networks for subscribers only back in the 1980s and 1990s. CompuServe was bought out by AOL in 1998 and added to its list of “hot” acquisitions like Netscape (cough cough). AOL is no longer in the business of dishing out content only to paid subscribers and sees itself as a “digital media company”. Content equals money, so they are eager to get anyone on the Internet to look at their sites, not just subscribers. In part they do that by not associating their sites with aol.com, which is unsexy, and building sites like this one. AOL still frequently loses money and every six months or so it seems to undergo reorganization.

The Internet you enjoy today is basically a product of the Department of Defense. Back in the 1960s, the Defense Department needed a digital way to connect the department with research arms at educational institutions. It threw research money at the problem through its Advanced Research Projects Agency (ARPA), which takes on great, hard-to-fulfill quests. Under a government contract, a company called BBN manufactured the first router. It provided a common means to move data electronically over a network through this weird idea of packets. Being able to send packets of data reliably between places on the network in turn spawned the first email systems, which also ran over its network. In the early 1990s, Tim Berners-Lee at a multi-national research institution in Switzerland (which most recently found the God Particle) thought email was too cumbersome for his tastes, and created the Hypertext Transfer Protocol (HTTP), which became the web. It was government that created the Internet, and arguably it could only have happened because of government. Private industry was not interested in some decades-long research project to build an open network that it might not control. Where was the profit in that?

Arguably the Internet could not have happened without the space program. Huge amounts of government research money were thrown at developing electronic computers, which needed to be ever smaller and faster, to facilitate the needs of the space program. The space program also developed a whole host of other valuable products we use today and don’t think about, like memory foam, byproducts of government funded research that were turned over to the commercial sector.

Public investments created our interstate highway system, a system we now take for granted but which made it so much easier to move both goods and people across the country. This investment stimulated commerce, built suburbs, and made it easier and faster to see our great country. Public investments created and sustained public schools and universities, which allowed minds with lots of potential to reach actualization and be put to work for the enrichment and betterment of all.

For a couple of dollars per person per year, the National Weather Service provides non-biased, accurate and timely weather forecasts available to anyone. One of our most valuable federal agencies is also one of our least known or appreciated: the National Institute of Standards and Technology, formerly the National Bureau of Standards. Not only does it say how to define an inch or a pound, it also defines standards for more complex things, like data security. Defining it once by engaging the best minds on these subjects keeps everyone from reinventing the wheel. Standards save huge amounts of money and promote competition, but we take them for granted. By promoting open standards and interoperability, NIST and other standards organizations allow the private sector to thrive and we consumers pay lower prices and get more broadly useful products.

Does the government waste money? Most certainly. We waste billions in Medicare fraud every year, and arguably wasted hundreds of billions in recent wars in Iraq and Afghanistan. I can understand why some would infer from these examples that the government simply cannot manage any large problems. However, the government is tasked to manage large problems all the time because lawmakers think those tasks are important. Many times, the tasks are unique, have never been done before, and are inherently risky. For any risky endeavor, there is a likelihood of failure, so it is not surprising that government’s record is so spotty. However, by approving these programs, lawmakers are essentially saying they should move forward in spite of the risks.

Oversight is supposed to be the solution, but it works haphazardly. Congress has the responsibility but seems poor at it. There are other mechanisms in place to audit federal agencies: the Government Accountability Office, inspectors general at every agency, reporting to the Office of Management and Budget and much more. What does not happen often is that a program is held accountable for achieving results, with the penalty that the program goes away if results are not achieved. Some programs have sunset provisions, but these are the exception. (You might want to review my thoughts on how to make a truly accountable government.)

Yes, I can understand that people don’t like to pay taxes. Yes, I can understand that they don’t think the government should be doing lots of things that it does, and want to eliminate huge chunks of the government and pocket the money instead. Doing so may eliminate a lot of waste and fraud by ending a bad program, but it doesn’t eliminate the underlying problems. Eliminate the EPA and pollution is not going to go away. It will get worse. Eliminate the FDA and you run the risk of having unsafe drugs. Eliminate Medicaid, food stamps and welfare and you run the risk of revolution. Eliminate transportation funding and expect more people to die from bridge collapses or find their cars falling into sinkholes.

The real question is whether the costs to society are greater or less because of government, because the costs will get paid either way. They will be paid either through taxes or through costs like lowered life expectancies, greater crime, poorly educated children, fouled water and air, unsafe food and a crappy transportation infrastructure. The private sector cannot rush in to save us from these problems. It might, if it sees some profit in it, but any solution won’t be in your best interest, but in its own.

The really successful governments these days are those that meld the best of the private and public sectors. Look at Germany, with a progressive government and a huge welfare state that still lives within its means, is thrifty and is innovative in producing products the world needs. Thanks to its government, it is leading the way in getting energy from renewable resources. It did not happen in the absence of government, but because of government. It also happened because Germans believe in their government and support it, unlike large portions of Americans, who are trained to be suspicious of government.

Our imperfect government is a result of an imperfect democracy driven largely by unelected special interests. When it does not truly serve the public good, it becomes ineffective and corrupt. When it works with the public good in mind, as it did for the Internet, it can drive the future and make us world leaders, rather than laggards.

Whether you agree with me or not, that you are reading this at all is due to the fact that you, the taxpayer, invested in a risky venture that networked us together. Without this investment, the United States would now almost certainly be a second-rate country, because what would we produce otherwise that the world would want? It values our ability to innovate, and our innovation is predicated in part on massive research, far beyond the ability of the private sector alone to attempt. This kind of research can only be done by the public sector and our educational institutions. If we don’t make these investments, other countries will make them before we do, and we will be a far poorer nation because of it.

 

The Internet is getting too smart

I don’t know if you have had the same experience I have had trolling around the Internet. No matter where I go, the same targeted ads follow me.

From the perspective of web advertisers and sellers this is good news. Why serve me useless ads for places like Popeye’s Chicken when instead they can target me with ads for open source support from OpenLogic? OpenLogic is one of a number of companies that provide support for open source software. It’s a pretty obscure field, which means that if I am being targeted with their ads, someone knows a whole lot about me.

For you see, I do happen to know a bit about OpenLogic. One day at work I was so disgruntled with the price increases Oracle wanted to support its MySQL product that I went hunting for cheaper alternatives. MySQL is a database used everywhere, but mostly on the Internet. It powers many major Internet websites, including photo sites like Flickr. MySQL has gone through a series of acquisitions over the years, first by Sun, and now Oracle, which acquired Sun. It’s pretty much a given that when Oracle acquires a product, it raises prices by about a third, and that’s what I was seeing for Oracle support for MySQL. Larry Ellison needs more yachts. Yes, there is “competition” for MySQL support, but most of these vendors are simply reselling support that really comes from Oracle, which means their prices closely match Oracle’s. OpenLogic was one of the exceptions, provided we installed a community edition of MySQL. It looked like if we went with OpenLogic we could trim our support costs for MySQL by half. Good deal.

In the last month or so I have seen OpenLogic web ads pretty much everywhere I go on the Internet. It’s not a big enough company to target users indiscriminately. Most people have no idea what open source software is, and if they do they probably aren’t someone who might have authority to buy or recommend it, as I have. But clearly somewhere on the web there is some firm or firms keeping track of this stuff. It doesn’t seem to matter whether I am at work or at home on my personal computer. I can be on the road as well. OpenLogic ads will follow me everywhere. Frankly, it creeps me out. I don’t want the Internet to know this much about me. I want to turn this off. I want anonymity again when I surf the web.

What worries me more is that if the commercial world can piece this together about me, perhaps Big Brother is doing the same as well. Maybe we are all being monitored by the NSA or some other government agency, perhaps even when by law we should not be. Who can say? The Patriot Act has been extended way beyond its planned uses, and both the Bush and Obama Administrations think it gives them carte blanche to snoop around on the Internet and put together electronic dossiers on potential terrorists, which theoretically could be any of us. I have a feeling that my NSA file is already much larger than any FBI file accumulated on famous people like Martin Luther King.

I can live in denial about potential government snooping of my private life, but corporations clearly know way too much about me, including stuff I have not divulged online. I am seeing ads for the Three Musketeers candy bar (“Now with more chocolate”) most places I go as well. Someone apparently knows I buy a bar once a week or so. When I do, it’s usually in the snack bar at work. I pay cash. Yet the Internet seems to know somehow, because I see the Three Musketeers ads served nearly as much as the OpenLogic ads.

It used to be that if you felt paranoid about your online privacy you would go into your web browser and remove all your persistent cookies. Web sites would lose their associations with you. Apparently, that is no longer enough. Web cookies are old fashioned. As best I can figure, your Internet Protocol (IP) address is being tracked and matched in real time against targeted ads, and probably associated with your name and buying habits. This means that removing cookies offers little privacy protection. I am really disturbed, though, when I find that some company is relating my work IP address with my personal IP address. This must be happening; otherwise I would not see so many OpenLogic ads when I surf from home.

The Internet also knows I am an old fart. Well, not that old; I live in denial at age 54. But it knows that old farts like me want QWERTY keyboards. So I am being targeted with ads for Cricket smartphones because, presumably, it also knows that I don’t yet own a smartphone. And hey, they have QWERTY keyboards for us old farts! I never mentioned online anywhere that I prefer QWERTY keyboards (well, until now), but someone has figured it out, or has figured out I was likely to want one, being an old fart. I embarrass myself trying to type text messages on my cell phone. It can take minutes. Gimme a keyboard, dammit!

Meanwhile, United Airlines is also targeting me, tempting me with flight deals that don’t seem much of a bargain. This is presumably because I am in their frequent flier club, if flying United three to six times a year for business makes you a frequent flier. JetBlue is tempting me as well, perhaps because I gave them a positive review, but also because I bought some tickets from them online recently. Doubtless I am hardly unique, and I bet you are puzzled by these highly targeted ads as well.

The thing that bugs me is that these advertisers don’t pay me a toll. Oh, I am sure sites like Google charge a toll, but I don’t get any money from it. I’d like to put up a tollbooth on my Internet experience which basically would say, “If you want to serve me a targeted web ad, pay up, buddy!” I know there are browser add-ons like Adblock that do a decent job of killing most ads, but I have found them annoying because they aren’t one hundred percent effective. I’d like advertisers to bid for my attention. I figure if I charged only a penny per targeted ad, I could make between two and five dollars a day. Can someone write software like this? I’d buy it. Maybe I will write it when I retire. I figure at my age and income level, I should be a valuable advertising target for someone. Why just give away the store?

There are anonymizer services out there that would make my web browsing experience less personalized, but you have to pay for anonymity. Running everything through a proxy would also make content appear more slowly. I realize Internet privacy is something of an illusion, but it feels like it has gone way too far in the wrong direction. I want to reclaim some private space online, but it seems impossible at this point.

To start, it would be nice to get rid of all the ads for OpenLogic, Three Musketeers, various airlines and other sites, but I have a feeling there are other targeted ads in the queue waiting for me if I succeeded. It seems that as part of the price to pay for being an online denizen I will have to get used to being watched. I wish it were otherwise.

 

Craigslist Casual Encounters: now officially a complete waste of time

(Warning: This post is rated R.)

Every couple of weeks I log into Google Analytics and check out my blog’s web statistics. A fuller report will come in 2010 but I have noticed a few trends. Visits are down by about a quarter and page views are down by about a fifth. This is not necessarily bad. In the past, my page views were artificially inflated by the less than one percent of my blog entries that discuss pornography, particularly this one and that one. Thankfully, page views for those posts are receding at last.

What is increasing? A simple eulogy I wrote and published when my mother died back in 2005 has received twice as much traffic as the year before (over 4300 page views, averaging twelve page views a day). However, my fastest growing blog entry is one I wrote in late 2005 on the Casual Encounters section of Craigslist. Interest in this topic is up 127% from a year ago and averages more than fifteen page views a day. While I have nothing more to say about pornography, in the interest of getting more traffic I could find something more to say about Craigslist.

So over the long Thanksgiving weekend, I put on my dark glasses and revisited Washington D.C. Craigslist Casual Encounters to see what was new. When I reviewed it in 2005 it was a pretty crass place. I am sad to say that four years later the situation is much worse, which I did not think was possible. If I were Craig Newmark, who founded Craigslist way back in 1995, I would be too embarrassed to host it anymore.

At least Craigslist will take the time to warn you that most of the postings in this area are fraudulent.

SCAM ALERT – scammers posing as potential romantic partners are directing CL users to age and identity verification sites, dating/adult/cam sites (where you can see their “pics” or chat with them), even sites designed to deliver malware — all in hopes of earning affiliate marketing commissions at your expense.

In response to the high volume of spam, Craigslist has taken some steps. It has made it harder to post ads, in that you have to go through the open source reCAPTCHA system first. (I am using it too, to filter comments.) The good news is that this means whoever posts to Craigslist is a human rather than a robot. The bad news is that it does not appear to be deterring spammers in the least. There must be enough money to be made trying to sell sex as a “casual encounter” on Craigslist to make it worth the bother.

Also in response to the high volume of spammers lurking in the Casual Encounters weeds, Craigslist has provided tools to “vote a poster off the island”. If enough people say an ad is spam, it is marked as spam and shortly removed from display. Craigslist then sends the poster an email, which apparently contains a convenient link that lets them repost the message. The result is that Craigslist Casual Encounters now appears to be largely a flame war between people pissed off by the spam and the spammers.

What is getting lost? Well, casual sex connections on the site, which were probably largely an illusion anyhow. However, there are a number of ads that appear to my untrained eye to be wholly legitimate. At least I assume that is true of the many ads posted by “BBWs” (Big Beautiful Women, or judging from their pictures when they post them, morbidly obese women) looking for a good time. Whatever; they are likely to be quickly voted off the island as well. Maybe the BBWers are in reality spammers. Or maybe the Craigslist men just hate fat women. The result appears to be a toxic mess of spam and vindictive people willing to flag everything.

Perhaps you read about the murder in Boston in May of a woman who advertised in Craigslist Erotic Services. Since then Craigslist has tightened up its Erotic Services board, apparently charging anyone a fee to post, and prohibiting ads that suggest you will receive actual sex. The result of this policy seems to be to move the whores into the Casual Encounters area instead. As was true in 2005, there appear to be plenty of “women” whoring over there. Certain words, though, must be getting flagged, because these “women” have developed a whole new vocabulary for asking for money. Mostly they want “roses”. Men are not beyond asking for “roses” either, particularly when they are advertising for their own gender. A typical ad reads like this one:

I could use your help with bills. If you could use a good bj, let’s help each other. 100 roses for bj. I can host. Must be clean/ddf.

There are even people out there selling manufactured group sex. If I were interested in group sex, I suspect I would find a local swingers group where, presumably, you can swing safely and with people who are not psychos. I sure would not expect to pay for the privilege, particularly when multi-partner sex with complete strangers can kill you. Moreover, who is to say that if you go to some stranger’s apartment you will not end up robbed or worse?

hot gangbang 2nite only!!!! 46DDD, big nipples, wet pussy. horny. TIGHT ASS HOLE 5’8 black I CAN HOST TONIGHT ONLY. $

Even if you can find a legitimate poster for a gangbang, do they want you? No, apparently they are into fantasy, which means you must be very well endowed, not Mr. Six Incher. With our African American president, black must be the new “in” color. Well-endowed black men seem to be in great demand, particularly for group sex.

Seeking 4 to 5 more Black males to join our GB grp.. Requirements is as follows.. Must be clean and dd free. Able to perform in a grp setting. 8′ or better. Must be in shape. Must not be camera shy.

8 feet or better? Good luck with that. Okay, well, Craigslist posters are not exactly known for their spelling skills and can’t seem to be bothered to reread their posts before they make them.

In short, if you want to waste your time, want to catch some sort of deadly social disease, want to get robbed, are into hugely obese but possibly horny women, want to have an encounter with a woman who turns out to be a transvestite or love flagging spammers then Craigslist Casual Encounters is your perfect destination.

To the many horny men out there, I am sorry, but if you want to get laid, it’s time to start frequenting bars and clubs again. At least you can see what you are getting in a bar. Good news: most are non-smoking these days, so it’s easier to discern the good looking women from the not so. I cannot see how you can possibly find what you are looking for on Craigslist.

Back in 2005, I said that surfing Craigslist Casual Encounters was like rubbernecking past an awful accident. In 2009, I can say it does not even have the appeal of rubbernecking. It is the definition of a complete waste of time.

 

The synergy of RSS to Email

Four and a half years ago, I wrote about this new cool technology called RSS. Actually, RSS (Really Simple Syndication) was hardly new in December 2003. It was introduced by Netscape in 1999 as “RDF Site Summary”. That original version is now quaintly referred to as RSS 0.90.

The problem in 2003 was that RSS had not caught on. Who really wants to check the same web sites manually for new content when a solution like RSS is available? It took a couple trillion web clicks, but eventually users realized this was stupid and inefficient. Instead, web-savvy people like me were noisily petitioning content providers to create RSS feeds. Eventually web publishers took notice. They realized the cost of implementation was relatively small, the underlying XML was dirt simple to generate, and that it could expand their market at minimal cost. Now it is hard to find any web content provider without news feeds. This blog, for example, is accessible in two RSS formats as well as the Atom 1.0 syndication format. According to FeedBurner, approximately thirty of you access my blog via my RSS feed. Thanks for subscribing, by the way.
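To show just how dirt simple, here is a minimal hand-written RSS 2.0 feed; the blog name, links and date are placeholders rather than my real feed:

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <!-- every RSS 2.0 channel requires a title, link and description -->
    <title>Example Blog</title>
    <link>http://example.com/</link>
    <description>A placeholder feed for illustration</description>
    <item>
      <title>My latest post</title>
      <link>http://example.com/my-latest-post</link>
      <pubDate>Sat, 17 May 2008 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>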

So RSS has caught on to the point where it is widely available, but it is still not as widely used as it should be. Only about 10% of us web surfers regularly fetch web content through news feeds. I can only speculate on why this is so. I know I often prefer the rich content available on a web site to the relatively dry text that comes through with RSS. Both Internet Explorer and Firefox let you subscribe to a site’s news feed with a couple of clicks, provided the site adds the appropriate tags to its HTML.

Syndication formats like RSS and Atom thus serve a different purpose than a browser. We visit web sites for the relative ease of finding in-depth information at a site. We subscribe to news feeds because we want their regular content on a small range of specialized topics. Those of us who are religious about reading content via a newsreader know that it is very efficient at aggregating feeds. Yet it lacks the breadth of information available on the web site. A newsreader does not facilitate curiosity the way a browser does.

Many of us would probably like to subscribe to hundreds of news sources but really do not have time to read all of them, even with the efficiency built into a newsreader. For example, there may be a site that you only want to read quarterly. In addition, these sites may have pertinent information, but much of it may be irrelevant to our needs.

The problems with email are well known. Given the overwhelming amount of spam, it is hard for legitimate email to make it to your inbox. There is never any assurance that you have received all the email sent to you. More email than you think gets lost, and much of it probably ends up in spam folders because spam filters generate too many false positives. As dreadful as missing an important email is to us, many of us fear the alternative even more: having to sift through the dozens or hundreds of spam emails we would get daily if we turned off our spam filters.

I have been wondering if RSS might be an effective solution to broadcasting certain kinds of information. Generally you do not have to worry about an RSS feed containing spam, since you typically verify that the site is legitimate by visiting the site. Once you know it is legitimate, you then can add its RSS feed. However, as I noted, unless you are meticulous about using your newsreader on a daily basis, it is easy to lose these timely notifications.

For those feeds where I need certain information, but only sporadically, it would be nice to get an email with the feed content when the feed changes, or when certain keywords appear in the feed. Moreover, when I no longer need to receive a feed from a particular source, it would be nice to have a fast way of unsubscribing from the feed.

As usual, industry is way ahead of me. A simple Google search eventually led me to the RSSReaderLive site, which I have been testing out. You could also choose one of the many other alternatives out there. Among them are RSSFWD, SendMeRSS, and FeedBlitz. FeedBurner also has a notification service. Using RSSReaderLive, the only thing I had to remember was to program my spam filter to let all emails from it go into my inbox automatically. I just have to hope that the email will not end up dropped in some digital bit bucket on its way to my inbox.

As you might expect these services are not necessarily free. You generally have to either pay a small fee for the service or deal with ads in the email. I hope that email clients will get smarter and start polling RSS feeds for you automatically, and include feed items as emails in your inbox. For those who like to diddle with their PCs, there are programs like rss2email that you can install that will act as an RSS to email proxy for you.
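To make the mechanics concrete, here is a minimal sketch of such a proxy in Python. It assumes the third-party feedparser library and a mail server listening on localhost; the feed URL and mailbox address are placeholders, not real services:

# A minimal sketch of an RSS-to-email proxy, in the spirit of rss2email.
import smtplib
from email.mime.text import MIMEText

import feedparser  # third-party library: pip install feedparser

FEED_URL = "http://example.com/feed/"  # placeholder feed address
MAILBOX = "me@example.com"             # placeholder mailbox

seen = set()  # a real tool would persist this between runs

def poll_and_mail():
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        uid = entry.get("id") or entry.get("link")
        if uid in seen:
            continue  # already mailed this item
        seen.add(uid)
        body = entry.get("link", "") + "\n\n" + entry.get("summary", "")
        msg = MIMEText(body)
        msg["Subject"] = "[feed] " + entry.get("title", "(untitled)")
        msg["From"] = MAILBOX
        msg["To"] = MAILBOX
        with smtplib.SMTP("localhost") as server:  # assumes a local mail relay
            server.send_message(msg)

if __name__ == "__main__":
    poll_and_mail()  # run periodically, e.g. hourly from cron

A real tool like rss2email layers persistence, configuration and HTML handling on top of a loop like this.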

I like it when a confluence of standard web technologies (email, the web and newsfeeds) can be leveraged together to solve a problem like this, minor though it may be. It neatly solves the timely broadcast notification dilemma in a way that works for both content providers and consumers.

 

A bundle of confusion

If you own a horse, you have to let it run regularly. If you own a sports car, you should take it to a racetrack occasionally for the pleasure of being smashed into your seat while you accelerate. Similarly, if you have a high definition television (HDTV), you do not buy it to watch interlaced analog TV signals with only 480 lines of resolution. You want content that will make you appreciate the fact that you just spent $699 on a high definition TV.

That is how much we paid for our HDTV. It is an Olevia 37-inch HDTV that comes with more ports and options than we will ever use. Our TV room is small, and despite the new set’s relatively modest screen size, it still seems enormous to us. The TV it is replacing worked perfectly fine. It is now sitting in our basement queued for a likely donation. While only about seven years old, it was doomed soon after it was bought. The FCC declared that on February 17, 2009, TVs like ours will be obsolete unless we buy a conversion box. Even if we did, our picture quality would not improve. Neighbors would laugh at us for being so 20th century.

Both our cable provider (Cox Communications) and our phone company (Verizon) have spent years tempting us with their all-digital services. We have our Internet and cable TV service with Cox and an old-fashioned POTS line with Verizon. In a typical month, I pay Cox $93 and Verizon $32. Both Cox and Verizon have been luring us with bundled services. If we bundled all our communications needs with one of them, we were told, we could save some money.

Verizon has its fiber optic FiOS service. In addition to providing high-speed Internet access, you can also receive a lot of other content, including their version of movies on demand. Cox offers essentially these same services for roughly the same price. How do I know? Well, it is hard to tell. Masters of voodoo marketing put together their sales brochures. They excel in obfuscation. Yet they refuse to leave me alone. Roughly once a week I get a solicitation from each company. Typically they come in the mail, but now and then they also come attached to my door handle. Verizon has lately been very uppity, sending salespersons to my door to pitch their FiOS service. That was one strike against them; I hate door-to-door salespersons and, by implication, any company that would send me one. Moreover, I have an unlisted phone number. You would think Verizon would take this as a signal not to call me. You would be wrong. They have given me several calls pitching FiOS. Cox at least has neither knocked on my door nor solicited me over the telephone.

Now that we are HDTV owners, it was time to consider their various offerings. As we soon discovered, analog TV on an HDTV looks ridiculous. Either much of the screen is black or, if your TV is fancy like ours, you can put it in a zoom mode. The screen fills up, but suddenly the picture looks fuzzy.

Both Verizon and Cox had mid-tier bundled service packages for $99.99 a month that combined telephone, digital TV and Internet service. At $99.99 a month, either looked like a good deal. Either deal appeared to be about $25 less than we were currently paying. The question became: which one to choose? Which was better?

Naturally, both providers claimed they had a superior network, superior content and lower prices. Both, though, delight in obfuscating the consumer’s real costs. It is almost impossible to determine what you are actually buying and how much the service will cost you. I spent a couple of hours on Verizon’s site trying to pick through the details of their bundles. Eventually I gave up. There is probably no way to know for sure without hiring a lawyer to decipher the fine print. Verizon, though, had three strikes against them. First, they annoyed me by having salespersons knock on my door and call me unsolicited on the phone. Second was their stance on network neutrality. Third, and probably most important, as with their cell phone service, if you select one of their bundles they want to lock you in for a couple of years. I mean, for such a steal as they are giving you, they have to make up the difference somehow! I am old fashioned enough to think that if their service is that great it will be obvious to me, so I should not have to be locked into it.

Cox Communications had a few strikes against them too. About a year ago, I inquired about one of their bundles. I asked many questions and I did not like what I heard. I politely said no thanks, not at this time. A few days later one of their digital receivers arrived on my doorstep anyway. That got my dander up. A phone call confirmed that I had not subscribed to their bundle. However, I still had to take an hour out of my life to return the box they sent me. They would not pick it up.

Nevertheless, between their latest brochure, reading their web site and a long conversation on the phone with their sales office, I was able to get a sense of what my bundle would actually cost me. Still, the devil is in the details. Did their $99.99 a month bundle include the rental cost of their digital receiver? Negatory. That was $4.50 a month, so the bundle was really $104.49. Did it include any HD channels? No, except for the local HD broadcast signals. However, they did offer 31 HD channels. If I wanted them on top of our digital cable, they were $1.44 a month. What is this free digital tier that comes with the bundle? Apparently, the channels listed in the brochure were incorrect, but I could get the equivalent of their Variety Tier. This is what my wife wanted, because she wants to see the latest Torchwood episodes on BBC America. Would there be an installation charge? Not if I installed the digital receiver myself. They have to come out to the house to install the telephone interface, but there is no charge for that. Can I get extended local long distance like I have with Verizon? In other words, can I call my father, who lives across the Potomac River, toll free? No, but you can call the District of Columbia for free. Oh, and to get the bundle you have to choose Cox as your local long distance, long distance and international provider. Long distance rates are fifteen cents a minute, or more than three times what I pay Pioneer Telephone, my current long distance provider. However, this is not much of an issue since we hardly ever call long distance. We email instead. Moreover, to maintain my unpublished telephone number I have to cough up another $1.71 a month. All told, with taxes, my $99.99 a month bundle would cost me $123.09. Hey, but at least I will only have to cut one check.
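To lay out the arithmetic plainly (the taxes-and-fees line is simply the remainder implied by the final figure I was quoted):

Advertised bundle: $99.99
Digital receiver rental: $4.50
31 HD channels: $1.44
Unpublished number: $1.71
Subtotal: $107.64
Taxes and fees (implied): $15.45
Actual monthly total: $123.09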

In short, I may save a few bucks a month, but I will not be supplementing my retirement income with their fabulous bundled savings. On the plus side, we will no longer be stuck with analog TV signals. Digital signals will no longer be interlaced, which will not make the picture on these channels any bigger, but will make it smoother. Their 31 HD channels are expected to double soon at no extra fee. We will get channels we do not get now, but that does not mean we are likely to watch them. In addition, as best I can tell, I am not locked into a two-year contract.

In fact, the differences between Cox and Verizon are rather marginal, but I chose to go with Cox for these reasons. I may end up regretting my choice. Their eight-hour battery will keep my landline working during a power outage, but what if the outage lasts nine hours? While many of our TV channels will soon be in HD, I am still not sure I will watch any more TV. I largely gave up TV years ago. On the other hand, our daughter will be pleased.

Our next purchase will probably be a Blu-ray player. Apparently, regular DVDs are not good enough for a modern HDTV, which means that we will want to buy some of our favorite movies again so we can have a more proper theatrical experience.

Well, someone has to pull this country out of recession.

 

Google hits another home run with Google Analytics

At least a few of the best things in life are actually free. For web site owners like me who want useful statistics on our visitors but do not want to pay for it (in either money, time or advertising) there is a slick solution: Google Analytics.

Until Google Analytics, I had mediocre statistical solutions. I monitor my site with the free versions of SiteMeter and StatCounter. However, both services offer only limited free features. Both allow you to see detailed information on your last hundred page views only. If you want more information, you need to take out your charge card.

On the too-much-information side, my web server of course logs every hit for all of my sites. My web host, like most, provides access to free AWStats reports. AWStats does a nice job of summarizing the data in my web logs. However, the information tends to be about a day old. Moreover, since it logs everything, it provides statistics that, while valid, are not always terribly meaningful. For example, I get many hits on my RSS and Atom feed links. Most of these are just machines polling my server at periodic intervals. It does not necessarily mean that someone is actually reading my content. In addition, I am too lazy to try to figure out how to tune my Apache web server and AWStats configuration files to split my three domains into separate reports. However, the price of AWStats cannot be beat, and it does give me a picture of the total volume of traffic my site is getting.

What I really care about are those who are actively reading content. SiteMeter provided a close approximation. I could look at its statistics, add in a weighting factor for my newsfeed hits and get an overall picture. Still, without paying for it I had no way to ask questions such as, “Which entry was most popular last month?” and “What search words bring the most people to my site?”

Enter Google Analytics, Google’s free web site statistics package. Finally, I have a convenient way to dig down and see the relevant information I am looking for without having to pay for it or maintain it. I also have a way to get detailed statistics beyond the last one hundred page views. Google provides it as a free service to all but the largest web sites. It is designed to work with your Google AdWords account. However, you do not need a Google AdWords account to use Google Analytics.

While not a perfect package, it is slick. First, its drawbacks. It is not as easy to add the metering code to your web pages as it is with SiteMeter or StatCounter. You will need to dig through your web site’s templates, add the appropriate code in the HTML headers and ask Google Analytics to validate each site. Second, by default you do not get up-to-the-minute information. Google Analytics defaults to showing you statistics through the previous day. Current information is there, but you have to change your date range. Third, it cannot track your non-browser related hits. This is good and bad, because much of that traffic you would want to ignore anyhow (search engine robots come to mind), while some of it, like relevant hits on your newsfeeds, would be useful. Fourth, it would be nice if it had an API (application programming interface). I suspect this will come soon. With an API, SiteMeter-like features such as counters that appear on your web pages could be implemented. (Some WordPress plug-in authors have already done some clever things.)
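On the first point, for what it is worth, the metering code is just a small script pasted into each page. Here is a representative sketch of the urchin.js flavor of the snippet from this era; the UA-XXXXXX-X account ID is a placeholder:

<script src="http://www.google-analytics.com/urchin.js" type="text/javascript"></script>
<script type="text/javascript">
_uacct = "UA-XXXXXX-X"; // your Google Analytics account ID (placeholder)
urchinTracker();        // records the page view
</script>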

With these downsides, though, look at what you get. First, it costs no money and requires no advertising. Second, it has a super-slick user interface built on top of Flash technology. It allows easy customization of your Google Analytics reports simply by dragging and dropping widgets. You can customize your dashboard to show your relevant statistics. You can also drill down to get relevant statistics easily, either by clicking on a link or by placing your mouse cursor over the relevant items in the graphics. Mouse-over dialog boxes convey much relevant information without your even needing to click. Move easily from one domain to another by selecting the domain from the selection list. Change the date criteria easily by opening up the date control and highlighting the dates you want.

Google Analytics provides a wealth of analytical information. Some of it, while relevant, can be hard to understand. What is a bounce rate, anyhow? Convenient links provide more details. Data is organized into four major areas: visitors, traffic sources, content and goals. The goals area is most useful if you are using the Google AdWords service. With it, you fine-tune your Google AdWords campaigns to help you bring in more traffic. This is where Google makes its money. If by offering you free analytics it can persuade you to open a Google AdWords account, or use it more frequently or effectively, that is good for their bottom line as well as yours.

I wish Google Analytics had a mode that allowed the public to see my statistics too. If it did, it would more resemble SiteMeter and StatCounter’s features. Perhaps this will come in some future version.

I have a feeling that Google Analytics’ free service is worrying SiteMeter, StatCounter and similar services. I got a recent notice from SiteMeter saying they will be rolling out an upgraded statistics package soon. With Google nipping at its heels, I would not be surprised if it offered expanded free services.

If you have been using SiteMeter and similar services, I think you owe it to yourself to add Google Analytics metering too.

 

Why KML may revolutionize the world

Almost two years ago, I gushed about Google Earth. Two years later, this product from the engineers at Google continues to amaze and astound many of us, particularly those of us who are geography geeks. I thought at the time (and still think) that Google Earth is a revolutionary product, every bit as significant as the web browser. Two years later, I am beginning to understand that its underpinning, something called KML, has the potential to fundamentally change the world as we know it.

Scott McNealy, the chairman of the board of Sun Microsystems, said some ten years ago, "The network is the computer". This is now their corporate motto. Scott was ahead of his time, but in my opinion the network did not become the computer until 2005, when Google Earth was released. Here at last was a killer application wherein the network really was the computer. Google Earth could not work at all without the ubiquity of the Internet. It also required Google’s very big and very fast pipes to the Internet. Nor could it exist on computers in somebody’s basement. The staggering amount of imagery rendered by Google Earth was measured in terabytes. To serve all that imagery to so many clients simultaneously required very big and redundant computer centers. In short, it required the sort of infrastructure that only a few companies such as Google could provide. It also needed software that allowed easy access to geographical data. This was the Google Earth program that you installed on your computer. However, the Google Earth program was useless without the network infrastructure. The network was the computer indeed.

Google assembled and licensed a staggering amount of surface imagery of our planet. Much of the low-resolution imagery was provided free of charge by my employer, the U.S. Geological Survey. Google was also astute enough to realize that people had to have an easy way to describe points on the earth, link those points to URLs, and describe geographical boundaries, features on the earth, and topics of interest. Creating this dataset was too big a job even for Google. However, given the right tools, people could describe these geographical points of interest themselves. The trick was to describe these geographical features in a way that Google Earth could render. Google, rather than reinventing the wheel, looked at what was out there. It settled on KML, or Keyhole Markup Language, as the geographic markup language that Google Earth would render. (In time, Google bought Keyhole, which was in the digital imagery collection business, and which invented KML.)

To a geek like me, KML is just an instance of an XML schema. XML (Extensible Markup Language) is a platform-neutral way of sharing data along with its meaning. HTML (the markup language used to describe web pages like this one), or rather its modern manifestation XHTML, is also an instance of an XML schema.

The important thing to understand about XML and KML is that you do not have to be a rocket scientist to write either of them. You can do it in a text editor if so inclined. You just have to know the schema, which amounts to the rules to be followed to mark up data for a particular kind of use. Thanks to the popularity of Google Earth, KML has become a de facto standard for describing many kinds of geographic data. There is now a very large community of KML enthusiasts out there. Many of them are busy marking up their own unique geographic content in KML. Load someone’s KML file into Google Earth and you too can show your friends the precise location of things that interest you, like Aunt Martha’s grave or your favorite hiking trail.
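To make that concrete, here is a minimal hand-written KML file describing a single placemark; the name and coordinates are made up for illustration:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.1">
  <Placemark>
    <name>Aunt Martha's grave</name>
    <description>A made-up point of interest</description>
    <Point>
      <!-- KML lists coordinates as longitude,latitude,altitude -->
      <coordinates>-77.357,38.850,0</coordinates>
    </Point>
  </Placemark>
</kml>

Save that in a file with a .kml extension, open it in Google Earth, and the placemark appears on the globe.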

Google Earth, then, is really nothing more than a rendering engine for geographical information described in KML syntax. In the same way that HTML describes how web pages should be presented by a web browser, KML describes geographic data in a way applications can render. In addition, just as Mosaic (which quickly morphed into Netscape) became the first popular web browser, the Google Earth software just happened to be the first popular application for rendering geographic data described in KML. Among those now providing competition for the Google Earth program are World Wind and Geoportal.

When you innovate as fast as Google, it is hard to get ahead of them. While you may not have tried Google Earth, you are probably familiar with Google Maps. With Google Maps, you only need a web browser but you still have an amazing ability to intuitively examine the earth and find points of interest. Google Maps of course has competition too, principally from Yahoo Maps and MapQuest.

There is no question that Google Earth is ultra slick. Web browsers are ubiquitous but relatively unsophisticated. Until Web 2.0’s vision is realized, we will continue to need to download and install specialized software for many applications. This places a limitation on KML because to use it effectively you need to install a sophisticated program on your desktop computer.

If the network is the computer, then Google Maps itself is really just a mapping application rendered by a web browser. Mashup sites like Frappr allow you to overlay your points of interest on top of Google Maps. What if a web mapping site like Google Maps could display a user-provided KML data source? Then there would be nothing to install, and you could easily see the location of Aunt Martha’s grave using a browser.

As I discovered yesterday, you can now do this with Google Maps. In its search box, just point it to a web-accessible KML file and it will render those points in Google Maps. (If you know the secret, you can pass the KML file as a URL parameter.) To me this is very exciting. I manage this big web site for the USGS. For years, we have been wanting to add a scalable mapping application to our site. It is not that it cannot be done; it is just that providing an interface like Google Earth is hard to do, particularly when your agency is resource constrained, as ours is. We are still hoping to roll our own scalable mapping interface one of these days.
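By way of illustration, the URL-parameter form looks something like this, where the KML address is a placeholder: http://maps.google.com/maps?q=http://example.com/gauges.kml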

Fortunately, we USGSers in the water business were at least astute enough a year or so back to figure out that we could create KML files that describe the locations of some of our stream gauging stations. You can find some of them here. This was not particularly hard for us to do, because we know the exact latitude and longitude of these stations. Moreover, marking up KML is simple. Now you can use Google Earth to find the locations of our gauging stations. In addition, the clever folks in our WaterWatch area enhanced the KML to show more than just location data: actually useful information. They figured out a way to show how current stream flow conditions compare with historical periods of record. You can get a sense at a glance from color-coded dots in Google Earth of just how much water is flowing. Black dots, for example, mean the stream gauge is at an all-time high for its measured period of record.

All this is great if you have Google Earth, but many will not take the time to download the software. That is why being able to render KML in Google Maps is to me quite exciting. For example, try this link and you can see USGS stream gauges for the state of Virginia where I live. The color-coded dots give an intuitive "at a glance" sense of just how much water is flowing across the state. Moreover, you can zoom in, zoom out, pan and add road and satellite imagery too.

You may find this mildly interesting, but unless you are a hydrologist or a flood forecaster this information is probably only of passing interest. Suffice it to say that USGS is not alone in providing data in KML. The amount of data provided in KML is truly voluminous.

Since it appears that KML can be married ubiquitously to a web browser, what is most amazing is what this says about the potential future of KML. Since KML is just an instance of XML, it is extensible. This means that KML can be married with, and include, all sorts of other kinds of data. Sources of data are everywhere. For example, the U.S. Census Bureau has huge amounts of demographic information about us, and much of it could be marked up with KML. If these data sources published their data in KML, not only could they display their data on web sites like Google Maps, but it could also push the development of platform-independent KML analytic tools. I can see web sites or open source tools that will collect KML from all sorts of locations and do data mining for you, finding interesting and hitherto unseen connections for your consideration. The relevant information could then be exported as KML, displayed, stored and, most importantly, shared.

Therefore, KML has the potential to foster data analysis for the masses, allowing each of us to become a unique assembler of new knowledge by drawing on lots of other sources of data and letting our computers find new and relevant patterns among them.

Whether this vision will be realized remains to be seen. I would be very surprised if others are not already working to turn it into reality. If it can be done, then the simple Google Earth tool may one day be seen as something akin to the Niña, the Pinta and the Santa Maria, bringing us to the shores of a new land of knowledge that for now is hard to fathom, but whose realization may well be within our grasp.

 
The Thinker

.xxx marks the spot

Ever hear of ICANN? Unless you are an Internet geek, you probably have not. ICANN stands for the Internet Corporation for Assigned Names and Numbers. It is not a great acronym, but its obscurity may be something of a blessing, because ICANN’s work can sometimes be controversial.

Its more prosaic work involves establishing and overseeing rules to ensure that two people cannot offer the same domain on the Internet. It also authorizes new “top-level domains”. These are the .com, .net and .org domains on the Internet that we have all come to love.

Most of these top-level domains are actually country codes, like .uk for the United Kingdom. The bulk of web traffic, though, goes to the three-letter top-level domains, .com and the like. Occasionally ICANN will approve a new top-level domain. I am glad it does. Some years back it approved the .info top-level domain, which I grabbed for my domain.

ICANN has proven to be miserly in approving new top-level domains. Maybe the paperwork is too much of a hassle. A few new generic top-level domains have been squeezed out over the last few years. In addition to .info, these include some you may not have heard of, including .travel and .mobi (for mobile products and services). Many others have gone into ICANN’s bit bucket.

For example, there is the .xxx domain, first proposed back in 2000. As the name implies, it would serve as a logical domain for sexually explicit content. If news reports are to be believed, my government twisted arms at ICANN to ensure this latest proposal got canned too. Last Friday, ICANN rejected the proposal for the third time. Once more it found dubious reasons for rejecting the .xxx top-level domain. You can read the surprisingly dry details here.

One of the more curious arguments ICANN gave in rejecting the application by ICM Registry, which wants to become the registrar of all things smutty, is that the application sidestepped ICANN’s “concern for the protection of vulnerable members of the community.” I am speculating here, but I think ICANN was expecting that any .xxx server should have a way to detect whether someone connecting to it was doing so legally. If that was its concern, it is an unreasonably high bar to meet. The Internet is inherently an open medium, and authentication over the Internet is costly, intrusive and technically challenging. This is no way to stimulate Internet commerce, which is what .xxx domains are about more than sex. Besides, if we wanted a proprietary and managed network, we would all be subscribing to AOL.

However, a .xxx top-level domain would tell the average user plenty. It would tell parents of small children, for example, that they could easily block a lot of smut on the Internet with a simple software configuration, as the sketch below suggests.
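For instance, anyone running a filtering proxy such as Squid could wall off the entire top-level domain with two lines of configuration. This is only a sketch, of course, since the .xxx domain does not yet exist:

    # Squid proxy sketch: deny every site under the proposed .xxx domain
    acl adult dstdomain .xxx
    http_access deny adult

There would be no list of millions of individual sites to maintain; the domain name itself would do the labeling.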

Had the .xxx domain gone through, adult web site owners would have been under no compulsion to use it. On the other hand, many adult web site owners would prefer to host under a .xxx domain. Their rationale is not hard to figure out. Using a .xxx domain would shield them from a lot of potential liability. If some child is surfing a .xxx domain, it is quite clear that they are not there by mistake and that Mom and Dad were asleep at the switch. In addition, it sends a clear message to potential customers about what kind of business they are in. Not many people would accidentally surf to a .xxx domain.

Furthermore, what is wrong with consenting adults having their own zone on the Internet for sexually explicit content? That such content is now mixed in with everything else simply adds to the likelihood that someone will inadvertently see pornography on the Internet. There is no way to reliably determine whether a site is an adult web site without viewing it, and no way for a computer to make an accurate judgment on whether an image contains sexual content.

A .xxx top-level domain should be a no-brainer. I suspect the real reason the .xxx domain was rejected for the third time had more to do with certain people’s discomfort with human sexuality than anything else. I see parallels with our War on Drugs. Just as it seems politically impossible to declare the interdiction strategy in the War on Drugs a lost cause, it seems politically impossible to agree that the Internet needs a .xxx domain, because to admit that we need one implies that smut cannot be controlled on the Internet and that most humans enjoy pornography.

The reality, of course, is that smut cannot be controlled on the Internet, except through monitoring by local web hosts, and with millions of domains, it is impractical to monitor every domain out there. A .xxx domain, though, would likely put a lot of peer pressure on the adult industry. Besides giving purveyors of adult materials additional legal cover, moving would be seen as the responsible thing for them to do. “I’m a good Netizen. My sex site is on a .xxx domain. This means everyone knows what kind of content I offer. If they don’t want to see my stuff, I am easily avoided. I am protecting kids too.” I suspect that over time, provided the top-level domain were administered impartially, most adult sites would migrate to a .xxx domain.

Admittedly, if it hit critical mass there would be a temptation to close the top-level domain down and thereby relieve the Internet of pornography. Would the smut problem on the Internet then be solved? Unlikely. It would be a simple matter to move smut back into .com domains again. Just because you can zone a red light district in your town does not mean you can enforce one on the Internet. It is like passing a law that no one may send spam. We have those laws already, and you can see how effective they have been.

Generally, three strikes and you are out, but ICM has not quite thrown in the towel. It is planning to sue the United States government, which it alleges illegally pressured ICANN on the issue. None other than ICANN board member Susan Crawford suggested the same thing in her blog.

Smut is not going away. Now that we have the Internet, smut has simply found a modern means of delivery. Smut predates the written word. We will carry it with us until the moment our species becomes extinct. Just as we cannot win the War on Drugs through interdiction, neither can we eradicate Internet pornography through force of law and the power of public opinion. All we can do is acknowledge what we cannot change and change what we can. A .xxx top-level domain is one of the few tools in our Internet toolbox that can actually scope down the problem.

The only way to truly fix Internet pornography is to get rid of the Internet, which is not going to happen. Therefore, we must live with it. If we cannot stop people from using narcotics illegally, doesn’t it make sense to decriminalize their possession and tax them instead? Then why not use the same strategy with adult web site operators? Let adult domains hang out in their own .xxx top-level domain. Let the registrar collect a modest fee above that for other domains, and use it to fund more enlightening activities.

Perhaps some of the money could be used to bring the Internet to developing countries. If so, then perhaps some good would come from smut after all.

 
The Thinker

SiteMeter vs. StatCounter: a comparison

As you may have noticed, I use SiteMeter to monitor traffic on this blog. I chose SiteMeter about three years ago because it had name recognition and everyone else seemed to be using it. As I mentioned in this entry on SiteMeter, its hit count is imprecise at best. This is because it can really only monitor traffic on your site served as web pages. (That is why I also use Feedreader for those who prefer newsreaders, and offer users the option to subscribe to receive my entries via email.) Moreover, it will not catch all of your traffic served as web pages. A surfer may elect to turn off JavaScript, not to display any images, or to hide details about themselves. There is no guarantee that SiteMeter’s code in your web pages will successfully report back to SiteMeter. We all get “page not found” errors regularly; a similar error can happen when the SiteMeter code executes, except it is less likely to be noticed. Even if the tracking data reaches SiteMeter, there is no guarantee that it will actually be recorded in their log. SiteMeter is not alone here. Any service like it suffers similar limitations.
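To see why, consider how counters of this kind generally work. The snippet below is a generic sketch, not SiteMeter’s actual code, and the URLs are placeholders. Each page references a script on the counter’s server, with a tiny image as the fallback. Turn off JavaScript and only the image reports the visit; block images too and the visit is never counted at all:

    <!-- Hypothetical hit counter snippet; real services supply their own. -->
    <script type="text/javascript"
      src="http://counter.example.com/tracker.js"></script>
    <noscript>
      <img src="http://counter.example.com/hit?site=12345"
        alt="" width="1" height="1" />
    </noscript>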

The basic SiteMeter service is free, and it shows detailed statistics for only the last 100 page views. Nevertheless, it suffices to give you an idea of your site’s traffic. Its reports may not be comprehensive, but at least the information is instantly available and up to the minute. If you have to depend on the log analysis tools that come with your web host (typically AWStats), your information will be up to 24 hours old.

So SiteMeter mostly works, even if it is imprecise and occasionally slow. It satisfies my curiosity about how heavily trafficked my blog is and whether a particular entry is spurring any interest. Lately though, SiteMeter has been failing me. My statistics are collected on its sm1 server, which experienced problems on March 3rd, and SiteMeter is still trying to recover. (It looks like it may lose all my historical data.) As a result, I have not been able to get my daily buzz from examining my metadata.

SiteMeter will probably get its act together in time. In the interim, I decided to try a similar service. With a little Googling, I found StatCounter. I have been running it for a few days and am trying to decide whether I like it better than SiteMeter. Like SiteMeter, its free version limits detailed information to the last 100 page views.

SiteMeter takes you right to the pay dirt. You are instantly shown a statistics page with things like the total number of visits and page views, along with today’s totals for each. StatCounter has the same information, but it makes you dig for it, and you frequently have to log in first. This adds a lot of unnecessary clicks and keystrokes. However, StatCounter’s summary page shows more information and includes both graphical and textual statistics on the same page, including textual page and visit counts. SiteMeter presents this information in graphs only.

SiteMeter has a convenient “who’s on” link that tells you how many visitors you have had over the last X minutes (you get to configure the value of X), as well as some high-level details about each visit. The “who’s on” label is misleading: the World Wide Web is inherently stateless, so there is no way to know whether someone is actually viewing your page at a given moment. StatCounter has essentially three variations of this report, but with more detail than you probably want. Nor is it quite a “who’s on” feature, because you cannot limit the recent visitors or page views to a given time period. Instead, you have to pick one of the “recent” reports.

SiteMeter has a traffic prediction feature. Based on your current traffic, it will project how many page views and visits you will get over the next hour, day, week or month. StatCounter has no such feature.

SiteMeter allows you to view visitors by details, referrals, world map (it places dots on a world map for recent visitors), location, entry pages and exit pages. StatCounter offers similar features but again provides more detail. SiteMeter also offers an out clicks feature, which can be quite useful; unfortunately, StatCounter does not offer it.

SiteMeter offers handy graphics showing traffic by month, week, day and hour. StatCounter has the same information, but it also shows quarterly traffic. In addition, it provides the exact numbers, rather than just a graph. However, it is harder to find these graphics. You have to select the Summary option, and then look for the links.

SiteMeter offers some “navigation trends” like visit depth and daily durations, but only as graphics. StatCounter has nothing similar. SiteMeter can track usage by continent, country, distance and time zone. StatCounter cannot do continents or time zones, but instead offers state/region statistics. (These statistics are likely meaningless, since an IP address often resolves to a network provider’s equipment, perhaps a hosting center in Georgia, while the user is actually sitting in Virginia.) SiteMeter tracks visitors by their language, operating system, domain and organization. StatCounter does not track language but does a better job of tracking by domain. Both can track browser share, JavaScript capability and monitor resolution. Unlike SiteMeter, StatCounter cannot track color depth.

Overall, SiteMeter offers more ease of use, but fewer details and features. StatCounter does offer some unique features. These include reports over date ranges, area graphs, better drill-down features, tracking by search engine, icon hiding, export features and IP labeling. It also offers information on how many visitors are returning, a feature I find quite useful. Its recent visitor map is actually a Google Maps mashup, which is more useful and navigable than SiteMeter’s equivalent. These extra features make it more cumbersome to use and navigate, and for many people it will be TMI (too much information).

I cannot speak to StatCounter’s reliability and accuracy compared with SiteMeter’s. To be fair to SiteMeter, my recent problems have been the first severe ones in three years. Its other past problems were annoying, but considering the price, I could live with them. If I do end up losing all my historical statistics, I will be upset with SiteMeter, since I will have lost the yearly history that shows this blog’s traffic growth.

If you value simplicity, SiteMeter is the better service. Its categorized links make it much easier to navigate to the essential information. If you value depth of information, StatCounter is probably the better choice, even though its screens are often unnecessarily busy. Both are free, with upgrade options if you want to track details for more than the last 100 page views, so it does not hurt to add code for both. Now that I have started using StatCounter, I will continue to use it. However, I will not get rid of SiteMeter either. Both have their uses. Some months of experiencing both side by side will give me a better appreciation for the features of each.

 
