
Architecture Evolution: From Web-based to Service-oriented Architecture

Written By: Predrag Jakovljevic
Published On: September 19, 2006

Extending Applications to the Web

Pre-Internet client/server technology relied on healthy communications between the machines involved, and local area networks (LANs) and wide area networks (WANs) became an expense and a management headache for most companies. Moreover, updating software versions, particularly on the numerous distributed personal computers (PCs), became an almost unsolvable problem. Many information technology (IT) departments thus moved toward Internet- or intranet-based technology as a solution. In this approach, communications utilities provide the wide area communications backbone, and PCs simply use uniform resource locators (URLs) to reach the servers they need. Software nuggets coded in Java (or any other web-friendly language) that run on the PC clients get downloaded when needed, ensuring that the client code is always the latest version. With an Internet-only enterprise system in place, client-side software upgrades become unnecessary, while web browser-based applications significantly simplify training. Tying together far-flung locations of an enterprise becomes simpler too.

Part Two of the series Architecture Evolution: From Mainframes to Service-oriented Architecture.

In fact, owing to the Internet phenomenon, terms such as web-based and web-enabled have lately replaced the 1990s client/server buzzword, with client/server now implying a reference to the old, pre-Web way of doing things. However, most client/server systems have meanwhile been modified to include web access, and a web-based application or architecture is in fact a true client/server architecture. Namely, on the server side, the Web uses a multi-tier architecture with interlinked web servers, application servers, database servers, and caching servers, while on the client side, user machines commonly execute scripts embedded in countless web pages. They also execute Java applets, larger Java programs, rich client applications, and so on, which means that client and server cooperate in tandem.

The advantage of an Internet-based architecture is that any computer with secure access to the Internet can reach the application: simply typing in a URL gives access to the system. There are potential savings in terms of deployment, maintenance, support, and upgrades, since changes on the server side are instantly available to all users.

Still, while this approach might have value from an IT perspective, initially it did not necessarily improve the user experience of the application. In fact, many users even complained about the decreased usability and performance of the early applications in pure Internet mode. Users reported a serious decline in application performance due to the numerous hypertext markup language (HTML) roundtrips, cumbersome hyperlink navigation, and slower networks. Even after upgrading the network, bolstering server farms (as larger server hardware and web server software are required), and redesigning the interface to streamline navigation (via shortcut key combinations, for instance), many users were still yearning for the "rich" client/server interface metaphor.

Early thin client Internet architectures also did little to leverage handheld wireless devices, mobile phones, and other forthcoming smart technologies. Thus, many vendors have since—within their suites—delivered richer, more dynamic, and higher-performance user interfaces (UIs), with tight integration to Microsoft desktop office products for nearly always-connected power users, and pure HTML/dynamic HTML (DHTML) UIs for external and casual users of the system. The latest so-called Web 2.0 techniques, which combine asynchronous JavaScript and extensible markup language (XML) in what is known as AJAX, certainly bridge the divide with the best of both worlds (see Software as a Service Is Gaining Ground).
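
To make the idea concrete, here is a minimal sketch (illustrative only, not drawn from any vendor's suite) of the server half of such an AJAX exchange: a hypothetical Java servlet returns a small XML fragment that a browser-side XMLHttpRequest call can use to refresh just one portion of a page, sparing the user a full HTML roundtrip. The servlet name, parameter, and hard-coded status are assumptions made for illustration.

    // Hypothetical servlet: answers an asynchronous browser request with XML.
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class OrderStatusServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            String orderNo = request.getParameter("orderNo"); // sent asynchronously by the page
            response.setContentType("text/xml");
            PrintWriter out = response.getWriter();
            // A real implementation would query the ERP back end; the value is stubbed here.
            out.println("<?xml version=\"1.0\"?>");
            out.println("<orderStatus orderNo=\"" + orderNo + "\">SHIPPED</orderStatus>");
        }
    }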

For more background information, see Architecture Evolution: From Mainframe to SOA.

Challenges of Extending Enterprise Resource Planning to the Internet

Extending back-office applications to the Internet stems from the desire of many user organizations not to reinvent the wheel in their scramble to create e-commerce-ready applications. By extending the existing enterprise resource planning (ERP) system to that end, organizations not only leverage their investment in the ERP solution, but can also speed the development of their e-commerce capabilities.

However, traditional enterprise systems have proven difficult to change and extend. Barricaded behind complex, proprietary application programming interfaces (APIs), and based on complex, nearly indecipherable relational database schemas, traditional ERP systems did not readily take to e-commerce. Thus, transport and communication technologies like Web services, message brokers and queues, electronic data interchange (EDI), and XML have been getting all the attention of late, but the inherent problems of old core code and business logic duplication are often hushed up, or not discussed openly (for more information, see Integrating All Information Assets).

The first stage in ERP's conquest of the Web was to allow browser access through support for hypertext transfer protocol (HTTP), HTML, and Java. This stage of adding a web access layer onto existing old client/server systems has largely been completed by the majority of enterprise vendors. It is, however, only a short-term solution, since only the client piece has been rewritten for access over the Web, and because both the nature of the interaction and the casually visiting, Internet-based users differ from traditional in-house power users and their system interactions.
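
As an illustration of this web access layer stage, the following hedged sketch shows a hypothetical Java servlet that merely fronts the existing, unchanged business logic (represented here by a stubbed LegacyOrderApi class) and renders plain HTML for the browser; only this client-facing piece is new code.

    // Hypothetical thin web layer over untouched client/server business logic.
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Stand-in for the existing, unchanged ERP routine (assumed for illustration).
    class LegacyOrderApi {
        static String lookupOrderSummary(String orderNo) {
            return "Summary for order " + orderNo + " (stubbed)";
        }
    }

    public class OrderInquiryServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            String orderNo = request.getParameter("orderNo");
            String summary = LegacyOrderApi.lookupOrderSummary(orderNo); // delegate to legacy logic
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<html><body>");
            out.println("<h2>Order " + orderNo + "</h2>");
            out.println("<p>" + summary + "</p>");
            out.println("</body></html>");
        }
    }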

The next stage, which began fairly recently, is to extend the enterprise applications themselves to the Web, where they can be accessed and run by outside trading partners and mobile employees. These web-based applications are hybrid in form, bringing together proprietary legacy elements, either host-centric or client/server, with thin-client, browser-based interfaces. The trick is to bring the advanced functionality of ERP systems to the Web, but broken into components and without the need for an additional layer of wrapper architecture. Some vendors have also begun to add mobility hooks into their suites so that ERP sessions can be accessed via wireless devices.

In order for traditional ERP systems to be Internet-ready, they will have to be

  1. fully browser enabled (although depending on the user's role or task, the browser, Microsoft Office, or the Microsoft Windows "rich client" may be the best UI);
  2. redesigned to be available to all corporate users, not just the special few;
  3. redesigned to be available to trading partners (i.e., customers, resellers and suppliers); and
  4. redesigned to use a standard data interchange language (most likely XML), rather than proprietary protocols (as sketched just after this list).
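
Regarding the fourth point, the brief sketch below (an illustrative assumption, not any vendor's actual code) uses the standard Java XML APIs to emit a simple purchase order as an XML document that a trading partner could consume in place of a proprietary wire format.

    // Hypothetical example: emitting a purchase order as standard XML.
    import java.io.StringWriter;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class PurchaseOrderXml {
        public static String toXml(String poNumber, String supplierId, int quantity) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();

            Element po = doc.createElement("purchaseOrder");   // root element
            po.setAttribute("number", poNumber);
            doc.appendChild(po);

            Element supplier = doc.createElement("supplier");
            supplier.setTextContent(supplierId);
            po.appendChild(supplier);

            Element qty = doc.createElement("quantity");
            qty.setTextContent(Integer.toString(quantity));
            po.appendChild(qty);

            // Serialize the document so it can be exchanged with a trading partner.
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.INDENT, "yes");
            StringWriter out = new StringWriter();
            t.transform(new DOMSource(doc), new StreamResult(out));
            return out.toString();
        }
    }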

To do this in an appropriate manner, ERP vendors have to completely redesign their applications for a true e-business environment, on a standards-based Java 2 Enterprise Edition (J2EE), open source, or Microsoft .NET-compliant application server, which takes serious resources and commitment. The resulting application will then have to be designed from scratch to be accessed over the Internet by a web browser, extendable with additional components, and managed by an application server with built-in security and integration features.

Concurrently with processing power being transferred from mainframes and minicomputers to the desktop, a significant effort was made to make systems quite feature-rich, with a huge amount of functionality built into so-called "vanilla" or "out of the box" software packages. However, when the core application had to be upgraded to a newer version, all the development around the system needed to be retested to ensure that the bolt-on system (with industry-specific modifications and extensions) was still functional. In many instances business processes had to be reverse engineered, and database tables had to be updated directly. Often, this type of development nullified software warranties, and the risks associated with this kind of bolt-on situation were extremely high when compared with the potential benefits. Also, integration between disparate ERP systems (either between different divisions in an enterprise or between trading partners) consisted of rigid and complicated point-to-point integration technologies, so-called enterprise application integration (EAI). In the long term, something had to be done about this situation and the resulting long delays and prohibitively high costs associated with updating modifications.

To that end, the phase of n-tier, Internet-based architecture has ushered in the era of immediate trading and of ever-wider application suites (beyond core ERP, with industry-specific capabilities in a single code base) residing on a single platform, while the user base has expanded into all walks of life in the extended enterprise (including operational workers within sales, distribution, field service, and product development departments).

Application Servers Lead to Service-oriented Architecture

Crucial enablers of these traits were application servers, the system software used to host the business logic tier of applications (for example, in three-tier client/server applications, the application server manages the business logic and enables it to be accessed from the UI tier). These are programs that handle all application operations between users and an enterprise's back-end business applications or databases. Application servers are typically used for complex, transaction-based applications, and to support high-end needs, an application server has to provide built-in redundancy, monitoring for high availability, high-performance distributed application services, and support for complex database access.
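
As a hedged illustration of what hosting the business logic tier on an application server can look like, the sketch below defines a hypothetical credit-check component as a stateless session bean (EJB 3.0-style annotations are assumed); the container, rather than the UI tier, supplies pooling, transactions, and security around it.

    // Hypothetical business-logic component hosted by an application server.
    import javax.ejb.Local;
    import javax.ejb.Stateless;

    // Business interface seen by the UI tier (would normally live in its own source file).
    @Local
    interface CreditCheckService {
        boolean approve(String customerId, double orderAmount);
    }

    // The container pools instances and wraps calls in transactions and security checks.
    @Stateless
    public class CreditCheckBean implements CreditCheckService {
        public boolean approve(String customerId, double orderAmount) {
            // Stubbed rule purely for illustration; real logic would consult the ERP database.
            return orderAmount <= 10000.00;
        }
    }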

Furthermore, after the Web exploded in the mid-1990s, application servers became web-based, and the term web (application) server most often refers to software in an intranet or Internet environment that hosts a variety of language systems used to program database queries or general business processing. These scripts and services, such as JavaScript or DHTML, and JavaServer Pages (JSPs) or Microsoft Active Server Pages (ASPs), typically access a database to retrieve up-to-date data that is presented to users via their browsers or client applications.
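
For instance, the simplified, hypothetical sketch below shows the data-access step behind such a page: a helper that a JSP could call to pull current rows from an assumed orders table over JDBC before presenting them to the browser. The connection details are placeholders; a real page would obtain a pooled connection from the application server via JNDI.

    // Hypothetical JDBC helper behind a JSP page; the table and URL are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    public class OpenOrderDao {
        public static List<String> findOpenOrders() throws SQLException {
            // Placeholder connection; production code would use a pooled JNDI DataSource.
            Connection conn = DriverManager.getConnection("jdbc:placeholder", "user", "password");
            try {
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery(
                        "SELECT order_no, status FROM orders WHERE status = 'OPEN'");
                List<String> rows = new ArrayList<String>();
                while (rs.next()) {
                    rows.add(rs.getString("order_no") + ": " + rs.getString("status"));
                }
                return rows; // a JSP scriptlet or tag would render these rows as HTML
            } finally {
                conn.close();
            }
        }
    }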

There is thus overlap between an application server and a web server, as both can perform similar tasks. The web server (also sometimes referred to as an HTTP server) can invoke a variety of scripts and services to query databases and perform business processing, while application servers often come with their own HTTP server, which delivers web pages to the browser. The application server may reside on the same computer as the web server or on a separate one, and large sites use multiple computers for both application servers and web servers. Examples of commercial J2EE-based web application servers include BEA WebLogic Enterprise, Sun Java System Application Server (formerly Sun ONE Application Server), Borland AppServer, and IBM's WebSphere Application Server, while the largest ERP vendors, such as SAP and Oracle, also have their own application servers (see SOA-based Applications and Infrastructure—The Next Frontier?). The Microsoft .NET platform should be mentioned here as an alternative to the J2EE-based counterparts—see Understand J2EE and .NET Environments Before You Choose.

In fact, many businesses embarked (likely not realizing it at the time) on the road to service-oriented architecture (SOA) and Web services several years ago, when they made their first investments in component-based software technologies and, in particular, in application servers. SOA (which is typically equated with Web services, although the two notions should not be used interchangeably) is an umbrella term for a standardized interface between software applications that allows one program to use the functional components (services) of another program. Formerly called a distributed objects architecture, the SOA term was coined at the turn of the century as Web services and Internet standards were evolving (as mentioned earlier in this series, Common Object Request Broker Architecture [CORBA] and Distributed Component Object Model [DCOM] are examples of earlier SOAs).

Gartner defines SOA as "an application topology in which the business logic of the application is organized in modules (services) with clear identity, purpose, and programmatic-access interfaces. Services behave as 'black boxes': Their internal design is independent of the nature and purpose of the requestor. In SOA, data and business logic are encapsulated in modular business components with documented interfaces. This clarifies design and facilitates incremental development and future extensions. An SOA application can also be integrated with heterogeneous, external legacy, and purchased applications more easily than a monolithic, non-SOA application can."

The Organization for the Advancement of Structured Information Standards (OASIS) SOA Reference Model Group defines SOA as "a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. It provides a uniform means to offer, discover, interact with, and use capabilities to produce desired effects consistent with measurable preconditions and expectations."

That is to say, SOA identifies all functions, or services, using a description language, with interfaces that perform business processes; each interaction is independent of every other interaction and of the interconnect protocols of the communicating devices. Since the interfaces are platform-independent, a client should be able to use the service from any device, using any operating system (OS) and any programming language. SOA supports integration and consolidation activities within complex heterogeneous enterprise systems, but it does not specify or provide a methodology or framework for documenting capabilities or services. The concept should, in theory, help businesses respond more quickly and cost-effectively to changing market conditions by reconfiguring business processes. It enables agility, flexibility, visibility, collaboration with trading partners (and between functional and IT departments), and so on, by promoting reuse at the service level rather than at the object level, thereby departing significantly from the model of object-oriented (OO) programming, which binds data and its processing together. In addition, SOA (again, in theory) should simplify interconnection to and usage of existing IT assets, including legacy assets, thereby even giving a new lease on life (if not full-fledged rejuvenation) to mainframes.
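
To picture what such a programmatic-access interface might look like in practice, the sketch below (illustrative only, using the JSR-181/JAX-WS web service annotations rather than any particular vendor's toolkit) publishes one hypothetical inventory function as a service; the hosting container generates the WSDL description from it, so a client on any platform, in any language, can invoke the operation.

    // Hypothetical business function exposed as a platform-independent service.
    import javax.jws.WebMethod;
    import javax.jws.WebService;

    @WebService
    public class InventoryService {
        // Published as a service operation; the container derives the WSDL contract.
        @WebMethod
        public int quantityOnHand(String itemNumber, String warehouse) {
            // Stubbed for illustration; a real implementation would query the ERP back end.
            return 0;
        }
    }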

Still, in addition to commonly accepted standards that are still maturing (and sometimes even conflicting), the challenges facing SOA implementation include managing services metadata and providing appropriate levels of security. Directing and supplying information on the interaction of services can be complicated, since the architecture relies on complex, multiple messaging that opens the door to messy code and broken communication, on top of potential non-compliance with government regulations. SOA's flexibility also poses security threats, since these applications encompass services, especially services outside company firewalls, and are more visible to external parties than traditional applications, which is why businesses must set policies governing exactly who can see what information.

Problems can also arise when users try to connect services that were not developed in exactly the same manner, which is very likely if this is not controlled within a single vendor's ecosystem. And while one of the key goals of SOA is to remove hardwired, purpose-written point-to-point links and replace them with generic links centered around business functions and processes, achieving this requires adding new components, such as orchestration or workflow engines, communication adapters or translators, and service locators, to the already complex architecture.

Needless to say, all vendors are painstakingly seeking ways to address these challenges. While more definitions and explanations of SOA can be found in Understanding SOA, Web Services, BPM, BPEL, and More, the fact remains that the next evolutionary step in enterprise software architecture is that of SOA and Web services, which promises accommodation of almost immediate changes; industry-specific assembled or composite applications using common and consistent services (software components) from repositories; and the (re)creation of ubiquitous business processes and industry-specific platforms as required.

 