
Architecture Evolution: From Mainframes to Service-oriented Architecture

Written By: Predrag Jakovljevic
Published On: September 18, 2006

Software architecture can be defined simply as the design or blueprint of an application or software package. This blueprint describes the layout of the application's deployment, including the partitioning of its business logic between servers (computers). The architecture thus incorporates protocols and interfaces for interacting with other programs or computers, all of which should provide for future flexibility and expandability. Conversely, while a self-contained, stand-alone program does have some program logic, it certainly does not have a software architecture. This is not to imply that stand-alone programs do not need design, but rather that for the purpose of this article, architecture comes into play when one talks about "systems of systems." Over the last few decades, enterprise application architecture has certainly gone through several waves of evolutionary steps and improvements, so that even our article from the early 2000s now sounds quite archaic (see Essential ERP—Its Underpinning Technology).

Part One of the series Architecture Evolution: From Mainframes to Service-oriented Architecture.

The rapid pace of global business nowadays places a unique set of challenges on all enterprises looking to improve and automate their operations, and at the same time, forces them to remain poised to adapt quickly to change. With increased competition, globalization, and merger and acquisition (M&A) activity, enterprise applications buyers realize that product architecture plays a key role in how quickly vendors (and their partners, and even the users themselves) can implement, maintain, expand or customize, and integrate their products. Product architecture is going to do much more than simply provide the technical functionality, the user interface (UI) and presentation, and the platform support. It is going to determine whether a product is going to endure, whether it will scale to a large number of users, and whether it will be able to incorporate emerging technologies, all in order to accommodate increasingly evolving user requirements.

Evolutionary Stages of Enterprise Applications Architecture

From an architecture standpoint, the first enterprise applications packages, mostly from the 1980s, were originally written in a mainframe or host computer environment. Mainframe refers to a very large and expensive computer capable of supporting hundreds or even thousands of users simultaneously. In the hierarchy that starts with a simple microprocessor (as in watches or calculators, for example) at the bottom and moves to supercomputers at the top, mainframes are just below supercomputers. In some ways, mainframes are even more powerful than supercomputers because they support more simultaneous programs, but supercomputers can execute a single program faster than a mainframe. The distinction between small mainframes and minicomputers is vague, depending really on how the manufacturer wants to market its machines.

In that setup, the mainframe (or an extremely powerful minicomputer) represents the brain, while so-called "dumb" terminals allow the user only to access and input data. In other words, since a dumb terminal is a mere combination of a keyboard and a display, it cannot process information itself. The early era was thus characterized by immediate processing of information by the mainframe, but separate, siloed departmental applications would reside on separate mainframe platforms. Any integration between, say, purchasing and accounts payable applications would thus mean lengthy transfers in batches—not to mention that the systems' only users were a few privileged (if not brave) information technology (IT) staffers who would spend days or weeks to produce some meaningful report (typically late and after the fact).

As personal computers (PCs) became powerful enough to take on some of the processing tasks that used to be performed solely by mainframes or minicomputers, in the late 1980s the client/server era was born. When PCs are combined with larger computers—either mainframes, minicomputers, or PC-based servers—the system is referred to as a client/server platform, where the client/server moniker means that the processing of work is divided between (at least) two computers. The client is the computer on the desktop (the user's machine), which contains the user interface (UI) and performs display (such as the Microsoft Windows or Mac graphical user interface [GUI]) and some logic functions and applications processing, while the server is the more central computer that contains the database and application programs.

In other words, client/server denotes the splitting of an application into tasks performed on separate, network-connected computers, at least one of which is a programmable workstation such as a PC. The client is thus a software program that is used to contact and obtain data from a server program on another computer. Each client program is designed to work with one or more specific kinds of server programs, and each server requires a specific kind of client. Since the client is a desktop computing device (such as a PC) or a program "served" by another networked computing device (in other words, the "server"), a web browser would be one type of client too.

As for the server, it is a computer or software package that provides a specific kind of service to client software running on other computers. The server term can refer to a particular piece of software (a web server application) or to the physical machine on which the software is running. The main difference between the client and server is that the server responds to requests from many different clients, while the clients usually initiate the requests for information from a single server. A single server machine could have several different server software packages running on it, thus providing many different servers (and services) to clients on the network. For instance, file servers, which range in size from PCs to mainframes, store data and programs, and share those files with the clients, in which case the server functions as a remote disk drive to the clients.

To recap, client/server is an architecture in which the user's PC (the client) is the requesting machine, and the server is the supplying machine, both of which are connected via a local area network (LAN) or wide area network (WAN). Throughout the late 1980s and early 1990s, client/server was the hot buzzword as applications were migrated from centralized minicomputers and mainframes to networks of desktop computers. To be a true client/server environment, both client and server must share in the business processing. For example, a database server processes requests from the client to look up or update data in its database, in which case the server is performing a search at its end to respond to the query received from a client. It is not acting as a remote disk drive, but is rather fully participating in the transaction.
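To make that division of labor concrete, the sketch below shows a minimal client/server exchange in Java, assuming a hypothetical inventory-lookup service with made-up item names and a made-up port number (none of which come from the article). The server performs the actual search against its stand-in, in-memory data store and returns only the answer, while the client merely sends the request and displays the reply, rather than treating the server as a remote disk drive.

```java
// A minimal client/server sketch (hypothetical inventory-lookup service).
import java.io.*;
import java.net.*;
import java.util.Map;

public class ClientServerSketch {

    // Server side: listens on a port, performs the lookup itself, and returns only the result.
    static void runServer(int port) throws IOException {
        Map<String, Integer> stock = Map.of("WIDGET", 42, "GADGET", 7); // stand-in for the server's database
        try (ServerSocket listener = new ServerSocket(port);
             Socket conn = listener.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
             PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
            String item = in.readLine();                   // request from the client
            Integer qty = stock.get(item);                 // the server does the search at its end
            out.println(qty == null ? "UNKNOWN ITEM" : item + " on hand: " + qty);
        }
    }

    // Client side: holds only the user-facing concern; all processing happens on the server.
    static void runClient(String host, int port, String item) throws IOException {
        try (Socket conn = new Socket(host, port);
             PrintWriter out = new PrintWriter(conn.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            out.println(item);                                        // send the query
            System.out.println("Server replied: " + in.readLine());   // display the result
        }
    }

    public static void main(String[] args) throws Exception {
        new Thread(() -> { try { runServer(9090); } catch (IOException ignored) {} }).start();
        Thread.sleep(300);                                 // give the server a moment to start
        runClient("localhost", 9090, "WIDGET");
    }
}
```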

Client/server architecture, one of the first applications of distributed computing, became (and remains) the prevalent choice for many companies for several reasons. First, using PCs instead of dumb terminals increases the available computing power. For instance, graphics processing is highly central processing unit (CPU)-intensive and is impractical to perform on a central computer, and hence it became necessary to off-load that processing to PCs. Second, overall system speed is increased by the possibility of also using distributed databases, while hardware costs are considerably lower than those of a mainframe system.

The client/server era was also characterized by immediate reporting possibilities, given that an integrated enterprise resource planning (ERP) system could now run on a single database and operating system (OS) platform, and the users (who were now more numerous and could be found in some corporate-level functions) thus did not have to run to the IT department, time and again, to have a specific report produced. The shift from mainframes to client/server environments occurred because mainframes were perceived as barriers to business process re-engineering (BPR), which was consequently in vogue during the 1990s. Too expensive and too difficult for a wider user population to use, mainframes seemed to be a barrier to the flexible and agile enterprise that the ERP and BPR combination touted at the time.

Client/Server Variants

The primary strategies for implementing client/server have been two-tier, three-tier or n-tier, and Internet- or intranet-based. The concept of tiers provides a convenient way to group different classes of architecture.

In a two-tier approach, the client machine connects to a single server machine; the server usually controls the central database (in other words, data is managed by a dedicated database to improve multi-user performance), the client controls the UI, and the application logic can reside in either the client or the server. Two-tier designs usually locate business logic with the data server to centralize control and management, while the designer decides how much of the processing logic should be implemented at the client and how much at the server. If most of the work is done at the client PC, it is called a fat client application: the user's computer contains its own applications, which run on that machine, and new programs are installed on the local hard disk, which is the typical way people use their computers. Conversely, a thin client application means that most of the work is done at the server. In other words, the user's computer performs no application processing, but rather functions like an input/output (I/O) terminal, processing only keyboard and mouse input and screen output, while all application processing is done on the server. This thin-client approach is typically accomplished using technologies like Microsoft Windows Terminal Server, Citrix Presentation Server (formerly Citrix MetaFrame), or the open source X Window System. Thin clients are logically accompanied by "fat servers" that perform most or all of the application processing, with little or none performed in the client.
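As an illustration of the fat client end of that two-tier spectrum, the following sketch assumes a hypothetical orders table, JDBC connection URL, credentials, and credit-limit rule; none of these come from the article, and a matching JDBC driver would need to be on the classpath. The desktop program holds both the UI and the business rule (the credit check) and talks directly to the database server, which forms the second tier.

```java
// A minimal two-tier "fat client" sketch: business logic runs on the client PC,
// which connects directly to the central database server (tier two).
import java.sql.*;

public class FatClientOrderEntry {

    static final double CREDIT_LIMIT = 10_000.00;   // business rule lives in the client

    public static void main(String[] args) throws SQLException {
        // Tier 2: the central database server (URL and credentials are placeholders)
        try (Connection db = DriverManager.getConnection(
                "jdbc:postgresql://dbserver/erp", "app_user", "secret")) {

            double openBalance = fetchOpenBalance(db, "ACME");
            double newOrder = 2_500.00;

            // Tier 1: the credit check is executed on the client PC
            if (openBalance + newOrder > CREDIT_LIMIT) {
                System.out.println("Order refused: credit limit exceeded");
            } else {
                try (PreparedStatement ins = db.prepareStatement(
                        "INSERT INTO orders (customer, amount) VALUES (?, ?)")) {
                    ins.setString(1, "ACME");
                    ins.setDouble(2, newOrder);
                    ins.executeUpdate();
                    System.out.println("Order accepted");
                }
            }
        }
    }

    // Helper: the database server does the summing; the client only receives the total.
    static double fetchOpenBalance(Connection db, String customer) throws SQLException {
        try (PreparedStatement q = db.prepareStatement(
                "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE customer = ?")) {
            q.setString(1, customer);
            try (ResultSet rs = q.executeQuery()) {
                rs.next();
                return rs.getDouble(1);
            }
        }
    }
}
```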

A three-tier deployment adds a third program to the mix, in addition to a database (in which the server stores its data), so that the business logic can be split across many computers to improve reliability and spread the processing load. In a three-tier approach, the client machine controls the UI and some processing logic, an application server (a notion explained in more detail later in this series) manages the enterprise business application processing, and one or more database servers manage the corporate database. This three-way interaction in a client/server environment, in which the UI resides on the client, the bulk of the business application logic resides on one or more application servers, and the data resides on a database server, aids the management of version releases and enterprise business rules. Most current leading enterprise applications are built with a three-tier architecture.
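The following sketch illustrates that three-tier split under similar assumptions (a hypothetical /price endpoint and pricing rule, with the database tier stubbed out). The client only requests and displays, the application server in the middle applies the business rule, and in a real deployment the stubbed lookup would be a query against the third tier, the database server.

```java
// A minimal three-tier sketch: client (tier 1), application server (tier 2),
// and a stubbed database lookup standing in for the database server (tier 3).
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.*;
import java.nio.charset.StandardCharsets;

public class ThreeTierSketch {

    // Tier 3 stand-in: in a real deployment this would be a query to the database server.
    static double listPrice(String item) {
        return "WIDGET".equals(item) ? 19.95 : 0.0;
    }

    public static void main(String[] args) throws Exception {
        // Tier 2: the application server holds the business rule (a volume discount)
        HttpServer appServer = HttpServer.create(new InetSocketAddress(8080), 0);
        appServer.createContext("/price", exchange -> {
            String item = exchange.getRequestURI().getQuery();           // e.g. "WIDGET"
            double price = listPrice(item) * 0.90;                       // business logic on tier 2
            byte[] body = String.format("%s: %.2f", item, price).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        appServer.start();

        // Tier 1: the client only asks the application server and displays the answer
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(URI.create("http://localhost:8080/price?WIDGET")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());

        appServer.stop(0);
    }
}
```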

Three-tier is the fundamental n-tier architecture, which further splits the processing load for large applications by distributing pieces of the program onto multiple servers. By definition, n-tier applications can be broken into modules on multiple computers. In the term n-tier, n implies any number of distinct tiers used in the user's architecture. Once the modules are put onto different computers, each computer and module can be optimized for a specific use, such as database, business logic, or UI or presentation. The networked computers can then share components with other computers and applications to eliminate redundancy and further optimize performance. In some n-tier scenarios, modules can be relocated to improve network performance for remote locations without compromising application integrity. N-tier application architecture provides a model for developers to create a flexible and reusable application. By breaking up an application into multiple tiers, developers only have to modify or add a specific layer, rather than rewrite the entire application, if they decide to change technologies or to scale up.

N-tier architecture allows a virtually unlimited number of programs to run simultaneously, send information to one another, use different protocols to communicate, and interact concurrently. This allows for a much more powerful and scalable application, providing many different services to many different clients. It also comes with a number of serious caveats that create complexity problems in design, implementation, performance, and load balancing. Many technologies add to this complexity, including Common Object Request Broker Architecture (CORBA), Enterprise JavaBeans (EJB), Distributed Component Object Model (DCOM), and remote method invocation/remote procedure call (RMI/RPC), all of which are predecessors of service-oriented architecture (SOA). Generally, if using a distributed-object architecture allows a customer to write programs that are faster, larger, more powerful, and more robust, then it is definitely worth the effort, despite the inevitable challenge of controlling such a dynamic environment with many moving parts.
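Since RMI is named above as one of the distributed-object predecessors of SOA, here is a minimal sketch of that style using Java RMI; the service name and the quote method are hypothetical. A remote interface defines the contract, the server exports an implementation object into a registry, and the client obtains a stub and invokes the method as if the object were local, even though the call may cross machines on the network.

```java
// A minimal Java RMI sketch of the distributed-object style (hypothetical pricing service).
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiSketch {

    // Contract shared by the client and server tiers
    public interface PricingService extends Remote {
        double quote(String item) throws RemoteException;
    }

    // Server-side implementation of the remote object
    public static class PricingServiceImpl extends UnicastRemoteObject implements PricingService {
        protected PricingServiceImpl() throws RemoteException { super(); }

        @Override
        public double quote(String item) throws RemoteException {
            return "WIDGET".equals(item) ? 19.95 : 0.0;   // stand-in business logic
        }
    }

    public static void main(String[] args) throws Exception {
        // Server tier: export the object and register it under a well-known name
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("PricingService", new PricingServiceImpl());

        // Client tier: look up the stub and call it as if it were a local object
        PricingService pricing = (PricingService) registry.lookup("PricingService");
        System.out.println("Quote for WIDGET: " + pricing.quote("WIDGET"));
    }
}
```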

Since customers have been increasingly realizing that architecture plays a key role in how quickly vendors can implement, maintain, expand or customize, and integrate their products with other vendors' modules, more and more enterprise application products developed or enhanced within the last few years incorporate object-oriented (OO) or componentized development environments, as well as n-tier architectures.

 