The Skinny on Thin Clients
History has a tendency to repeat itself. Sometimes the space between repetitions is long; other times it seems short. The time between cycles depends a lot on your perspective. Take, for example, the evolution of language. Before the alphabet, man used symbols as a means of written communication. We call those symbols hieroglyphics. Language evolved to the printed word and improved on written communication until we found ourselves using symbols again. Sometimes these symbols are called icons, but other examples exist on things such as road signs.
Thin client computing, known by many other names in past years, is the current iteration of what we used to call host-centric computing. Well, almost. You see, even though we can identify these similarities, there are always evolutionary changes. For example, the thin client has evolved from a dumb terminal with no memory and no ability to process logic to a workstation with a processor and memory that can process logic. The idea is that in this model, some processing can be offloaded from the otherwise overtaxed host.
In between the dumb terminal and the more politically correct thin client we’ve had the politically incorrect fat client. A fat client is always a PC. Oftentimes we see fat clients running nothing but an emulator, turning them into dumb terminals, mighty expensive dumb terminals.
The differences between thin and fat clients seem few but run deeper than they first appear. Physically, the only difference may be the presence of auxiliary storage (disk, diskette, CD-ROM). The real differences lie in how programs are run, where data is stored, and how the network is designed.
In the old days, all data and programs were stored where they ran: on the host system, whether that was a mainframe, a System/34, or an AS/400. As time went on, program storage and processing were moved away from the host to the fat client, sometimes along with the data. In today's thin client model, everything is again stored on a host, or multiple hosts, but some or all of an application is executed on the client. The advantage here is object management and version control. It’s much easier to send part or all of an application down the pipe only when needed than to worry about who may be running which version of an application.
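That version-control advantage can be sketched in a few lines of Python. This is only a toy illustration, with loudly hypothetical names: the `HOST_CATALOG` dictionary, the `order_entry` application, and its `run` function stand in for an application server that publishes the single current copy of a program, which each client fetches and executes locally every time it runs.

```python
# The "host" keeps the one authoritative copy of each program; a dictionary
# stands in here for a real application server.
HOST_CATALOG = {
    # app name -> (published version, source code)
    "order_entry": (2, "def run(qty, price):\n    return qty * price"),
}

def fetch_and_run(app_name, *args):
    """Download the current version from the host, then execute it on the client."""
    version, source = HOST_CATALOG[app_name]
    namespace = {}
    exec(source, namespace)          # client-side execution of host-held code
    return version, namespace["run"](*args)

print(fetch_and_run("order_entry", 3, 10))   # -> (2, 30)
```

Because the client never installs anything, upgrading every user is a one-line change on the host: replace the catalog entry, and the very next fetch runs the new version.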
All that said, the question remains: Where do we go from here? Honestly, I’d love to offer an answer, but I don't really know. What we’ll likely see are continuing improvements in the way languages such as Java perpetuate themselves in the form of downloads. We’ll see more applets and servlets; their functionality will increase while their size decreases. Reusable objects will continue to be used. Network speeds will keep climbing. Memory will continue to grow, and processors will continue to get faster at exponential rates.
Think of the starship Enterprise and all of its on-board computers. They all communicate with one another. Processing seems to happen at light speed through tactile, voice and other interfaces. All of these things are really not that far away.
I cite Captain Picard’s desk monitor. It looks an awful lot like the flat-panel monitors available today. Do you remember the retina scan for security authorization? Are you aware that this is a real possibility for ATMs in the near future? And voice technology: if you haven't seen a demonstration of IBM's ViaVoice, you’re missing a real treat.
Thin or fat client aside, the common denominator is microprocessors. Whether or not there’s locally attached storage media is inconsequential if the programming is good and the network can handle the traffic. So next time you see an episode of one of the more current Star Trek iterations, stop and take a look at the computers. Think thin clients, GHz processors, and ATM-speed networks. Tell me that this is not in our future. Don't believe me? Remember Captain Kirk's communicator. Next time you see a Motorola StarTac, tell me it hasn't come true.