The Power of Peer to Peer
by Paul Sullivan on October 2, 2001 12:00 PM EST
Posted in IT Computing
Trapped In The Past
Back in the day we had Client / Server. We had dumb terminals tied to a big old mainframe that did all the hard work for us. Then we got into the world of the PC, where each computer had some computing power of its own and could work individually, outside of the mainframe environment. However, as these PCs were integrated into the fabric of business networking, many still adhered to the Client / Server model. But as time progressed and PCs got more powerful and more capable, we began to see larger and more complicated processing requests landing on the mainframes. Graphics and 3D modeling became more prevalent thanks to systems from Intergraph and the introduction of AutoCAD, and later 3D Studio, on the PC. With the introduction and eventual acceptance of Microsoft Windows on the PC platform, mainframes became more and more overloaded. The amount of data being transferred over networks was becoming prohibitive, and hosting graphical applications in the Client / Server model became a real drain.
The Shifting Burden
It was during this evolutionary stretch of time that we began to see the proliferation of empowered Peers: individual workstations that executed their applications locally and later synchronized with the mainframe data. Applications like dBase, 1-2-3 and WordStar enabled users to complete much of their work locally, and as personal printers from vendors like Okidata and Epson became less expensive and more widely available, it was a whole lot easier to get your work done without ever touching the mainframe.
While a full-blown Client / Server model has decided advantages in certain areas, it also has some very serious disadvantages. Performance is one of them: the more people you have running applications from a host server, the slower each individual session responds. In addition, if the server crashes, the hosted applications become unavailable, potentially reducing the entire office's productivity to zero. An extreme example, perhaps, but the point stands.
Slowly, we have seen the development of hybrid configurations, where servers no longer host most applications but do still host master data files. A local application can request a copy of all or part of the data hosted on a server, work with it independently, then send it back to the server for synchronization when finished. Since the server only has to deal with data hosting instead of application hosting, it can respond to more requests with fewer processing resources, increasing its rate of transaction processing and, in essence, providing more for less. On the server side, ROI (Return On Investment) goes up and productivity-killing downtime is reduced. Since much of the burden is shifted to individual client machines, you no longer have the "domino effect" to deal with when failures occur. No longer will one hardware failure incapacitate an entire group of workers. Instead, failures are often localized and can be analyzed and repaired without causing downtime for the rest of the workforce.
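The check-out / work-locally / sync-back pattern described above can be sketched in a few lines of Python. This is a minimal illustration only: the `DataServer` class, its `check_out` and `check_in` methods, and the record names are all hypothetical, not any real product's API.

```python
# Hypothetical sketch of a hybrid configuration: the server hosts only
# master data; clients check out copies, work independently, and sync back.
import copy


class DataServer:
    """Hosts master data files instead of hosting applications."""

    def __init__(self):
        self.records = {}  # master copies, keyed by record id

    def check_out(self, record_id):
        # Hand the client its own independent copy; the master stays here.
        return copy.deepcopy(self.records.get(record_id, {}))

    def check_in(self, record_id, local_copy):
        # Synchronize the client's finished work back into the master data.
        master = self.records.setdefault(record_id, {})
        master.update(local_copy)


server = DataServer()
server.records["invoice-42"] = {"status": "draft", "total": 100}

# The client application works on its local copy, off the server...
local = server.check_out("invoice-42")
local["status"] = "approved"

# ...and sends it back for synchronization when finished.
server.check_in("invoice-42", local)
print(server.records["invoice-42"]["status"])  # approved
```

Because the deep copy makes the client's working set independent of the master, a crash on either side leaves the other's data intact, which is exactly why failures in this model stay localized.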