mgmt memo


Volume 7, #5                                                         July, 1988


State Of The Company Issue


On May 5, 1988, over 600 senior managers attended Digital’s State of the Company Meeting in Merrimack, N.H. The theme for the day was "One Company, One Strategy, One Message - Leading the Way to Enterprise-Wide Computing." Many of the presentations focused on "The Desktop and Beyond" and "Digital’s Transaction Processing Strategy, Systems and Uses." The following are summaries of the speeches.


The One Company Theme And Digital’s Evolving Strategy by Ken Olsen, president


Financial Update by Jim Osterhoff, vice president, Finance


Digital’s Evolving Product Strategy by Bill Strecker, vice president, Product Strategy and Architecture


Digital’s Desktop Strategy As It Relates To Work Systems by Dom LaCava, manager, Low-End Systems Engineering


DECwindows Solution by Roger Heinen, Corporate Consulting Engineer


OLTP And Database Products - What Are They And Why Are They Important? by Hans Gyllstrom, manager, Database Systems PBU


Our Transaction Processing Strategy by Bob Glorioso, vice president, High Performance Systems


The Role Of High Availability In Transaction Processing by Fernando Colon Osorio, manager, Advanced High-End Systems Development Group


Meeting Customers’ Needs - Digital’s Distributed Production Systems by Bill Steul, vice president, Corporate Systems Group


The One Company Theme And Digital’s Evolving Strategy by Ken Olsen, president


When we started Digital, we picked areas in which we could be different and unique. They were simple ideas. Wall Street didn’t understand them.


We had the unique idea that we wanted to make a profit. In those days you were just supposed to grow, not make a profit, and then sell your company. We said growth wasn’t a goal. We wanted to do a good job and let growth come automatically. That’s still our goal.


At MIT, we got the idea of interactive computing, enabling people to do the things we see so commonly now with personal computers. That was a hard idea to explain. Finally it sneaked up on people (we like to think that it was because of us), and it mushroomed 25 years later.


Next came the idea of timesharing, which was first done on a computer we gave to MIT. We were ridiculed for this idea. We stuck with it; and, in time, it became the most important part of our company and probably still is.


When interactive computing became popular, we realized that making small computers was going to be too easy. It was clear that this was not our place to make a contribution. Our contribution was to do the bigger job — the job of integrating a whole organization around the world, taking advantage of our strengths in VAX, VMS, DECnet and Ethernet products. The press never understood that. We are still criticized because we are still not making personal computers for the home.


Rather than just be one of hundreds of companies making personal computers, or one of dozens of companies making workstations and changing their architecture each time a new chip comes along, I’d rather be different and take all the arrows that come our way.


When my friend, the head of Sony, came up with the idea of the Walkman, he didn’t ask the marketers for advice. Marketers are taught to ask customers what they want, but customers only want what they have seen. That’s not where new ideas come from.


There’s a very important lesson here: market surveys are a serious danger. If you did a market survey a year ago, you’d have found that most personal computers were bought in stores by individual users from companies, that they included all kinds of hardware options, and everybody’s personal computer was different. Such a market survey would guarantee that that was the way it would go forever. But you know very well that can’t be true.


My friend from Sony knew that his job and position were dependent on being right. That’s true of us, too. We have to be right and then be brave.


Today, the strategy for which we are being so roundly criticized is something we have planned very carefully and developed very well. There are a number of details which we have to improve. There are always ways we can do things better. But with regard to our strategy, I couldn’t think of a better position to be in today than the one we’re in now.


A corporation is a collection of people, and the leaders can only do a very limited amount of the creativity and the work. The work has to be done by the people at all levels and at all jobs. And they have to have the freedom, the knowledge, the motivation, the acceptance to do creative things. The job of management is not to do the inventing. Our job is to make sure there are goals, to make sure everybody knows where we are going and is in a position to be creative.


From the start of the company we have taken the part of a scientist — to be rational, analytical, calculating, to do the smart thing. A few years ago we developed the strategy that says — one protocol, one architecture, one software system, one networking system. I had little or nothing to do with that except complain a little bit. But it was clearly my job to say that if this is the strategy, we’re going to follow it. That is the part of the manager — not to do the inventing, but to make sure there is a goal. That is an example of what I mean by being analytical, scientific and rational.


We are academic in that good sense of being very critical of our ideas and always driving for the truth. It’s quite common that people fall in love with their own ideas and defend them like they were the only ideas they ever had. But we want to optimize and improve, change, modify, but always drive for the truth. As long as we do that, we have a good future.


When we said four or five years ago "We are one company with one strategy and one message," we were changing the company from being many product lines competing with each other, to putting all our resources together. We pulled it off better than I ever could have dreamed. The idea of 120,000 people working together was inconceivable. We’ve done a miraculous job. We succeeded because of the quality of the people we have. We should all be very proud of it. We had a simple message, and it worked well.


Now we are being beaten up by the press again — for inconsistent reasons. They criticize us for not doing everything, and then criticize us for not doing one thing only. You can't answer both of those charges at once.


A few years ago, we said a major part of our emphasis was on one architecture and one software system. We then paused and said we would also do Unix*. Now we’ve modified our statement to say the "one message" and the "one strategy" is that we have two software systems. Obviously, our strategy evolves as we strive to improve.


The first VAX-11/780 computer was a completely different machine than the last, when we stopped building that model eight years later. One by one, over the years, we had changed every module. Our VMS operating system has changed drastically over the years and will change much more drastically over the next ten years.


Yes, we change. But we do have one strategy as a company, and we have to be sure we follow it.


We sell Unix and VMS software with equal enthusiasm. That does not mean that they are equal or that they do the same thing. There is a place for one and a place for the other.


You may hear that VMS is an old, obsolete software system. Just the opposite is true. Unix, which we now love, is 19 years old; by the way, over those years, we’ve been the largest seller of software and hardware and service for Unix. The VMS software system — the only other major software system — is just 12 years old. So in years, it’s the newest.


When Unix started, it was never really planned. It was made to be a small, informal, austere system for one user, done on a PDP-7, and later on PDP-11 computers. It was never planned to do anything else. That was the delight of it — it was easy to use. In the early 1980s, Unix got to be very popular because it was free and it was easy to learn. We came out with our ULTRIX software to add formal discipline and quality.


From the start, VMS software was laid out to do the whole job and to last almost forever. It is modular so parts can be improved and the rest of it doesn’t change. It is disciplined, planned and organized. That's been the reason for its success. Therefore, I claim the VMS system is the more modern, both in age and in organization.


You may hear that VMS software is not "transportable" and that Unix is. There are three ways in which software can be transportable — over time, over a range of equipment from very small to very large, and from the equipment of one manufacturer to another.


In terms of time, VMS software that was written ten years ago will play today on modern, better, faster, bigger and less expensive systems; and it will play ten years from now on even better systems.


VMS software is also transportable from very small to very large VAX systems. If you have a branch office that has two people or a branch office that has two thousand people, you pick the appropriate size machine, and the same software is transportable. If your company grows or your taste for computers grows, the software is transportable to bigger computers.


Our software is not readily transportable to other manufacturers’ equipment. This is one of the arguments for Unix. But this argument has problems. Our goal is to make the best Unix; IBM’s goal is the same; so is Hewlett Packard’s, etc. That means everybody’s Unix is different.


But for those customers who want simplicity, who want to take the responsibility themselves, who want to be able to buy different computer systems and put them together, we offer Unix. Those are our traditional customers, our friends. We offer systems, we take care of those customers, and we’re going to be the best supplier.


That is not in conflict with our main strategy. We can do two things. We’ll satisfy that market and also the bulk of the computer market that wants one manufacturer to do the whole job. We have to sell both those ideas.


Our contribution to the major corporations of the world is to be trusted and competent to do the whole job, worldwide. We have the equipment for tying everything together. We have the line of computers that can do almost everything. We have the sales people who know how to sell to banks and insurance companies. They do it with competence, and they are trustworthy. We have the design people for designing the systems. We have the people for installing them. We have the people for servicing them. And we have by far the best software services organization in the world.


We are doing the whole job, the complete system - all the computers and the software systems, the networking. Our combination of computers will do it all.


We did worse than planned last year, but by any measure of management we did very well. The way we were able to adjust and cut back from the volumes that we had budgeted was a real accomplishment.


Budget time is when we have to decide what we want to do in the coming year. Of course, we all want to do an infinite number of things; but they don’t add up, and you can’t sell them all. The customers couldn’t swallow them all, and the sales people couldn’t remember them all. So at budget time we have to make decisions and face what we really want to do.


So, just like every other year, when we add up the budget this time around, there’s more that we want to do than we can possibly sell or pay for. Even though in my heart I know we did great last year, we did make a mistake in budgeting higher than we should have. We have to be absolutely perfect this year. We’ve got to work hard to make sure we don’t budget one more thing than we can sell. There’s no point in having two computers do the same thing, or in having one more computer than the sales people can remember. And, after the budget is done, we know and are in agreement about where we’re going.


We tend to get all wrapped up in the business, which is important. But remember the other things, too: your family, your wife or husband. You owe it to the people you’re responsible for to work hard, but business can’t be everything. So as we push and drive hard for business — which is the excitement, the fun and the responsibility — remember there are other things, too.


Financial Update by Jim Osterhoff, vice president, Finance


Ken recently asked the provocative question: "If we’re doing so well, why do I feel so bad?" By most measures we really are doing well: 20% growth, 13% pre-tax profit margins and $2 billion cash in the bank aren’t all that bad. But that doesn’t change the fact that we feel bad.


To describe our financial situation clearly, we have to look at both the static and the dynamic conditions of the situation. Consider an illustration of an automobile straddling a railroad track just a few feet in front of a freight train. That’s a snapshot of a moment in time. It doesn’t show us what’s happening. We can’t tell whether the car is moving, and if it is moving, how fast. We can’t tell whether the train is stopped or moving or in which direction it is going. Likewise, a snapshot of the present condition of the company — how much we owe, how much we own, how much cash we have in the bank — gives an incomplete picture. We can gain a reasonable understanding of the situation only by looking at where we’ve been and where and how fast we might be going.


In 1985 our static condition was okay. Our asset management as reflected in the balance sheet was no better or worse than it had been historically. Our capital structure was sound, but we had just added $750 million cash from additional borrowings. Our dynamic situation was generally poor. Profit margins were in the single digits, and the volume gains we were achieving weren’t returning much to the bottom line. Good asset controls were being implemented, but we hadn’t seen the results of those yet. We were not living up to our potential, and the low return we were providing to our investors was depressing the market value of our company.


In 1986 our static condition was very good. Results of inventory and receivables management actions had become visible. Cash balances were up to $1.9 billion from $1.1 billion the year before, and we had reduced our debt. Inventory turns had improved by 50% from historical Digital levels. Our dynamic situation was improving, but the distance between revenue and cost was still too narrow, given the rate at which we were growing.


In 1987, just a year ago, our static condition was triple A, with an exclamation mark; and our dynamic condition was very good. We had made outstanding progress. Our return on investment was the highest in recent history; our stockholders were happy; and the investment community was expecting all of that to continue.


Today, our static condition is still triple A. Our dynamic performance on the other hand isn't what we planned it to be, or what the outside world expected it to be. I would assess it as disappointing.


This might provide a way of answering Ken’s question. A snapshot of the company would indicate that we are indeed doing very well, but we feel bad because our dynamic performance recently has been disappointing.


Another reason why we feel bad is clear from a graph of the price of our stock. We’re the same company we were a year ago, only 20% larger. We’ve moved up on the Fortune 500 listing from 44th to 38th. We’re still triple A. But a year ago the stock was selling at about $170 a share and was going up. So you wonder what we’ve done to deserve all this rough treatment that we’re getting on Wall Street these days.


A year ago, when we were making plans for 1988 we were in the midst of a 24% growth year - a strong rebound from 14% growth the year before. The computer market was still not particularly strong, but we were out-performing virtually all of the competition. Our operating profit margin was 17% - a giant leap from 11% in FY86, and well in excess of the budget. Inventory turns not only surpassed the record of the prior year, but were among the best in the industry. And our cash balance was a strong $2.1 billion after spending about $800 million to repurchase our stock. Our product leadership position was gaining wide recognition among computer buyers. We could demonstrate our uniqueness and its value to the customer, and we could charge for it. In the financial community as well as in the computer industry our triple A image was gaining recognition. In brief, we had strong, positive momentum.


In that environment we evaluated alternative planning strategies for FY88 — determining the appropriate balance between minimizing cost growth and maximizing market share. We evaluated such factors as product strengths, competitive advantage, customer satisfaction and loyalty, image, financial and human resources, and profit momentum (i.e., whether our profit margins are rising or falling). An evaluation of our condition clearly indicated an aggressive strategy for FY88, tilting strongly toward market growth. The objective was a combination of revenue growth and budgeted profit margin that would provide an excellent profit result for FY88 and advance our market position for the benefit of future years.


Our plans for FY88 were roughly comparable to the results of FY87, with slightly less margin, but higher growth. Our plan provided for substantial investments, building on momentum that began in FY87. This meant investments in new accounts and a higher level of service for existing accounts. It meant investment in U.S. Industry Marketing, the DECWORLD event, and Application Centers for Technology, as well as Computer Integrated Manufacturing and process technology improvements in our plants. It meant new high-end, mid-range and networking products, and special efforts to penetrate the financial services industry. Many of these projects were building on investments that were begun in 1987 or even before. In other words, instead of just sitting back and reaping the rewards of prior year investments, we planned to continue to make even heavier investments for the future — paid for, of course, by higher revenue growth.


For the first three quarters of FY88, our revenue is up 21%. That isn’t bad. (It’s close to our average since 1981.) But it’s not as good as our plan.


Our expenses in the first three quarters have been less than the budget. The effects of lower-than-planned product sales, higher-than-planned services volume, and changes in the mix of products and services sold account for some of this difference. Compared with assumptions in the budget, the dollar has weakened, increasing the dollar equivalent of costs incurred overseas. Aside from these factors, attention to costs and a cautious attitude, initiated after October 19, have contributed to significant savings from the levels budgeted. We achieved this despite an unexpected increase in the cost of dynamic RAM memory chips that we import from Japan.


Following the pattern for expected revenue growth, we expected that costs would grow by increasing amounts through the first three quarters. We have kept costs under budget in each of the first three quarters, due in part to increased attention to hiring and expense controls. We show the effects of that to a minor degree in Q2, and much more visibly in Q3.


The combination of revenue and cost performance tells the profit story. Despite running below our budgeted cost levels, our rate of cost growth has exceeded revenue growth for Q2 and Q3. Even in Q3, when expense controls helped our cost growth bend downward, we still couldn’t react as quickly as the change in revenue.


Our pre-tax profit margins for Q1 of FY87 and Q1 of FY88 were about the same — close to 13-1/2%. But in Q2 our margins were below last year by about three percentage points, and in Q3 they were below last year by about five points.


In absolute terms, Digital had higher profit margins than such competitors as IBM and Hewlett Packard in the latest quarter. We are still doing well, but don’t forget the illustration of the car and the freight train. Direction and speed of movement are the critical factors in looking beyond the present state of affairs into the future. As an outsider looking at Digital’s recent performance, you would be disappointed in the decline from last year and perhaps somewhat concerned about the future direction.


During our strong period in the market last year, the price of our stock just about kept pace with the market in general. But since October 19th we’ve fallen substantially more than other stocks.


I believe there are two key elements to turning this around, to converting "disappointment" back into optimism. First, we have to tell our story more clearly — to convince the public that what we have is on the leading edge of enterprise-wide computing; that we are a major factor in workstation products; that we are very serious about on-line transaction processing; that we are not going to be stopped by the hot companies in the low end. Second, we have to recover our profit momentum.


To really get costs under control, and still accomplish our new program and our growth objectives, we have to do a better job of re-directing resources to areas where incremental needs are the greatest and manage headcount growth more carefully.


We also have to pay careful attention to capital expenditures. These are investments for the future, and the costs associated with them are spread over future years. They affect future profits much more than they do current profits. The effects of capital expenditures are seen directly in the form of depreciation. But they also drive other costs, such as project expense, operating expense, new product or new plant start-up, material and inventory related costs and the like. This year our capital expenditures took a sharp jump upward to a running rate of about $1.5 billion from roughly $750 million a year ago.


A portion of these expenditures is for the purchase of office space that was previously leased. The balance represents an eventual net addition to our cost base.


There’s a warning signal here: the cost pressures that we are feeling today are likely to be with us for a while longer, and at this level of spending, we may be building in too much cost ahead of revenue.


The challenge is to make sure each investment is supported by a good business plan with a promise of a good return; that each investment makes sense not only on a stand-alone basis, but also in the context of the total investment portfolio of the entire company; and that each investment is backed up by the commitment of everyone whose participation is required to make it successful.


Borrowing a phrase from Abraham Lincoln, it’s not enough that our capital investments be commitments by the company. They must also be commitments of the company and commitments for the company — that is, an integral part of the company’s strategic direction.


Digital’s Evolving Product Strategy by Bill Strecker, vice president, Product Strategy and Architecture


Our past strategy has been very simple and focused. It has produced remarkable products and remarkable market success. It has resulted in:


o a broad range of compatible computer systems;


o a very high quality, highly functional software system which covers all the major usage styles (workstations, timesharing, production computing and real time);


o industry-leading networking with wide-area networking capabilities and local-area networking products and solutions; and


o a very rich base of applications from Digital, third parties and customers.


Our strategy has been so well aligned with customers’ need for distributed computing in the enterprise, that its technical and market success has probably exceeded our original expectations. Our customers want to push computing closer to the business and closer to the end user to get responsiveness in application development, application deployment and the actual use of the applications themselves. They also want to increase organizational effectiveness by connecting all the computing resources of the business — including those of multiple suppliers — together in a single enterprise-wide computer network.


The motivation behind our evolving strategy is simple: to strengthen our position relative to IBM and assume the leadership role in enterprise-wide information systems.


Recently, we’ve been focusing on six areas:


o the applications environment;


o transaction processing and databases;


o desktop systems;


o enterprise-wide networking;


o industry-standard operating systems; and


o the integration of non-Digital architectures into the network.


Applications environment. Over the years, Digital has had the lead in the distributed applications environment based upon its well-architected DECnet, VMS, and ALL-IN-1 components. However, since these components were designed, technology and application requirements have changed. These changes have motivated us to update the Digital application environment, across the board. The program to provide this updating is called the Applications Integration Architecture (AIA). Our graphics-based, windowed-user interface and compound documents are of central interest to AIA. (Compound documents can simultaneously include multi-sized text, graphics, images, spreadsheets and so on.)


Transaction processing and databases. Many enterprise-wide applications require very large, high-performance databases. Transaction processing is a style of computing that facilitates secure and formal changes to a database. Digital has initiated a major program to enhance our capabilities to state-of-the-art levels in this area.
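The "secure and formal changes" described above are what later practice calls atomicity: a group of related updates either all take effect or none do. A minimal modern sketch of the idea (Python's sqlite3 module is used purely for illustration; it postdates this memo and is unrelated to Digital's transaction processing products):

```python
import sqlite3

# Set up a small accounts table in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('savings', 100), ('checking', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds atomically; both updates commit together or roll back together."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
            # Enforce a business rule inside the transaction: no overdrafts.
            (bal,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                  (src,)).fetchone()
            if bal < 0:
                raise ValueError("insufficient funds")
        return True
    except ValueError:
        return False

transfer(conn, "savings", "checking", 60)  # succeeds; both rows updated
transfer(conn, "savings", "checking", 60)  # fails; both updates rolled back
```

The point is that the database is never left half-updated: the failed second transfer leaves both balances exactly as the first transfer left them.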


Desktop systems. Digital has a very strong position on the desktop with our terminal-based desktop systems. Sales of our terminals are rapidly increasing, and terminal-based systems will remain important to Digital for the foreseeable future. In addition, Digital has a major program to create leadership 32-bit desktop systems, which build on our existing VAX, VMS and ULTRIX architecture strengths.


Enterprise-wide networking. To support our leadership role in enterprise-wide networking, we are evolving the DECnet protocols to the international standard OSI protocols. At the current time, IBM’s SNA* is the de facto standard for the limited kind of enterprise networking being done by most customers. However, SNA is a closed, complex, IBM-controlled architecture, with a hierarchical structure that doesn’t meet the real needs of enterprise-wide networking. In contrast, the OSI networking standards are relatively simple, are open for multi-vendor attachment to the network and are peer-to-peer rather than hierarchical. In other words, the OSI standards support the style of computing that we think is appropriate.


We are also developing a wide-area networking infrastructure and product set, so we can truly say that we offer wide-area networking solutions. In addition, we are developing network management tools, to help our customers manage large, complex networks.


We should keep in mind that DECnet networks are fundamentally easier to manage than SNA networks. We don’t need the level of tools, the level of people, or the level of processing to manage a DECnet network that the SNA network requires. IBM has been quite effective in marketing this liability of their network as if it were an advantage. We should be very sensitive to that when we face IBM on the enterprise networking question.


Industry-standard operating systems. In recent years there has been considerable industry and customer attention to computer hardware and software standards. One area that is of particular interest to Digital is industry-standard operating systems. This has shown up as a customer demand for Unix. This demand results, in part, from a customer requirement for a standard, vendor-independent operating system interface. Customers would like to write an application once and then run it on different vendors’ equipment. Customers also want to have access to the many interesting applications that have already been written for the Unix interface.


Digital wants to meet this demand for the Unix interface. But we want to ensure that the interface is a true industry standard — one that has an industry-wide, as opposed to a single-vendor-dominated, process for establishing and evolving the standard.


Digital’s operating system strategy is to aggressively develop both VMS and ULTRIX software within the framework of a common, single-system architecture. Currently this common architecture includes the VAX hardware, the DECnet communication architecture and the Application Integration Architecture (AIA). We also recognize that VMS and ULTRIX software have different strengths and weaknesses in their capabilities and their available applications. Therefore, we will market and sell both VMS and ULTRIX software even-handedly, trying to match real customer requirements with the basic capabilities of the respective systems.


In addition, we are working with multiple vendors to ensure that Unix interfaces are a true industry standard and not the proprietary interface of a single vendor.


We expect to see a large number of common applications across both VMS and ULTRIX software. The fact that Digital has the two most important computing environments, VMS and Unix, and that we have common applications that run on both, will be a very important message that is unique to Digital. There will, of course, also be some applications that are specific to the VMS or ULTRIX environments.


Integration of non-Digital architectures into the network. Customers have major investments in the computing architectures of multiple vendors and need to integrate all these architectures into their enterprise-wide networks. In the past, Digital has had numerous separate architecture integration efforts. We recently expanded these efforts and linked them together into a more formal overall program.


This program has been variously called Network Application Services or Network Application Support (NAS). There are three classes of network services: applications access; business communications; and information and resource sharing. We want to provide the ability to:


o access applications anywhere on the network from any desktop on the network;


o communicate with any other user on the network, using any of the styles of business communication: mail, conferencing, videotex, and electronic document exchange; and


o share printers, documents, files, and databases located anywhere on the network from any appropriate application.


The NAS program provides these services for the Digital architectures and key non-Digital architecture targets. The non-Digital architecture targets, selected on the basis of their importance to our customers, currently include:


o MS-DOS* and OS/2* targets because of their importance in traditional business personal computing;


o the IBM SNA* environment because of corporate data stored in that environment;


o the Cray supercomputing environment used for high-end technical and scientific computing; and


o the Apple*/Macintosh* environment, which is increasingly attractive for personal computing with a modern user interface and ease of use.


Overall, Digital’s mission is to be the leading supplier of enterprise-wide information systems and to assume, as appropriate, full responsibility for that system. That responsibility extends all the way from the basic engineering of the system through integration, testing, installing, maintaining and evolving that system. Not all customers will want Digital to assume all these responsibilities at all times. But Digital’s unique position and competitive advantage will result from its broad capabilities. In addition, we have to ensure that we have the necessary product and support architectures, so that this mission is profitable for Digital and cost-effective for our customers.


Our strategy can be summed up in three points:


o a desktop-to-data center family of computer system products based on a single system architecture, which includes both VMS and ULTRIX operating system components and meets all the functional needs for enterprise-wide computing;


o complete local and wide area networking products built to the emerging OSI standards; and


o a comprehensive set of network application services to build network business applications that include both the Digital and key non-Digital computing architectures.


Digital’s Desktop Strategy As It Relates To Work Systems by Dom LaCava, manager, Low-End Systems Engineering


Digital today has a huge opportunity in computing from the desktop. It is estimated that revenue associated with this style of computing — desktop devices, servers, networking and services — will make up over 40% of worldwide computer revenue in 1991, almost $60 billion.


This market is changing very rapidly, due in large part to our competitors’ efforts to move their low-end products, primarily personal computers, up the capabilities ladder. They’re trying to make their low-end systems handle more of the mainstream computing of the enterprise. This will create problems for their customers and opportunities for Digital.


Our competitors’ low-end technologies are powerful enough to be used for what we would call "real work," but there’s a catch. They require that customers buy new applications, new operating systems and new communication approaches. Customers are locked into expensive and time-consuming conversions, or they can scrap what they have. Customers who choose to convert may be in for an unpleasant surprise. For example, an article in the March 7 issue of COMPUTERWORLD indicates that it may cost as much as $8,000 to convert an IBM* PC-AT* to run OS/2, when it’s available.


Compare that approach with ours. The same messages we’ve used with such success in the mid-range and the high-end apply to the emerging desktop market: one architecture, with compatibility from the desktop to the data center. With this strategy, we’re already positioned to be the winner in the desktop battle of the 1990s.


In the 1990s, we believe most people will need what we call "enterprise-wide" computing. This is an environment in which all of the computing resources and information of a corporation can be easily used by anyone to achieve the goals of the enterprise. Enterprise-wide computing enables people to use large amounts of information and easily communicate and share information with other people, creating large gains in productivity. Enterprise-wide computing is a change in computing style that will be as important and as wide-ranging as the shift from batch to timesharing, and we are positioned to take advantage of it. We’ll be ready with the world’s best networking, the most useful software environment and the best desktop products. We intend that Digital will be the leader in enterprise-wide computing, and the primary vendor of these computing systems during the 1990s.


How will we achieve this ambitious goal? Here is the one-minute version of our desktop strategy:


The large majority of users want to do simple word processing, some electronic mail and perhaps some spreadsheet work. They don't want to be burdened with file backups, changing software or system maintenance. For these customers, we offer a simple terminal. For customers who require graphics capabilities to do presentations or to plot data, we provide graphics terminals. For those professionals, engineers, scientists and architects who require graphics and local computing, we offer the VAXstation family. For those people who already use and need MS-DOS applications, we offer the VAXmate computer. And, for those customers who have already invested in a desktop device from another vendor, we offer the highest quality award-winning integration products in the industry. We integrate these devices into a VAX/VMS environment so users can communicate and share information throughout their organizations. We intend to be the number one integrator and service supplier of the major desktop devices.


Our desktop strategy is simply providing customers with the best computing tools to get their jobs done. We will implement our desktop strategy in two ways.


The first and most important approach is all Digital. We’ll offer premier enterprise-wide computing systems — a pioneering approach to desktop computing built on our strengths. It will be faster, richer and more productive than anything else on the market. And, of course, we'll continue to offer leadership terminal and workstation products.


Our second approach is a recognition of the fact that many of our customers have already made a large investment in PCs and other desktop approaches. We’ll bring their islands of PC computing into our enterprise-wide computing environment by offering Digital added-value networking and application integration, along with ongoing Digital services. This will help our customers become more productive by communicating and sharing information effectively.


We’ll offer the desktop user the same architectural, application and communication advantages we offer today in the mid-range and the high-end. The key to this is the phrase "Digital’s Added Value." Many vendors will be selling PCs and local area networks and some sort of server/client arrangement. But only Digital will make the desktop a full member of the enterprise-wide computing environment.


Our Network Application Support (NAS) strategy is the key element of our effort to bring other vendors’ PCs into our enterprise-wide computing environment. We’ve had a very good year in the integrated personal computing business. VMS Services for MS-DOS is the leading integration product in the market. It has won numerous awards.


Our relationship with Apple has generated a lot of publicity. We will do joint development with Apple. In the context of our strategy, we are simply bringing Apple desktop systems into our enterprise-wide computing environment. We’ve already done it with IBM PCs, and later we'll announce it with a variety of other PC clones. When IBM gets OS/2 organized, we’ll offer support for that as well. The Apple announcement was just one part of our approach to enterprise-wide computing.


For a great many customers, the terminal is still the preferred desktop device. For certain applications it may continue that way forever. It was only several years ago that industry analysts were predicting the death of the terminal market; that terminals would be replaced by personal computers. Well, that didn’t happen. The plain truth is that many people are perfectly satisfied using terminals. Terminals give them all the access to computing they need.


We’ve had a great year in the terminal business, perhaps our best ever. VT320, VT330 and VT340 shipments all exceeded forecasts. We are shipping more than 500,000 terminals this year. The market for our terminals is growing at a phenomenal rate. Our customers want leadership terminals and that’s what we’ll give them. To continue our leadership, we’ll continue our efforts to reduce prices, and we’ll enhance the capabilities of our terminals so they can participate fully in the enterprise-wide computing environment.


We’ve also had a great year with our workstation products. This fiscal year, we became the fastest growing workstation vendor in the industry, surpassing the growth rate of Sun Microsystems. We leaped over Apollo to become number two in the workstation industry; and we’re heading toward the number one position. We’ve expanded our workstation offerings by introducing the lowest-priced workstation in the market and, at the high end, a powerful real-time 3D graphics workstation.


Our workstations have been successful with technical users, but we face a serious challenge if we want to continue our success in the workstation market. This year the vast majority of all the workstations sold run Unix, while 90% of the VAXstation systems we’ve sold run VMS software. We are working very hard now to increase our share of the Unix workstation market.


We face another challenge in the workstation market. There are seven times more commercial computing users than technical users. We’ve done well with the technical users; and, of course, we want to continue our success in the technical markets. Our challenge is to keep our loyal technical users, while at the same time winning commercial desktop business. We will do this by offering a computing environment that merges the power of workstations with a new generation of personal product applications and greatly improves ease of use in system management. We will deliver the Digital Application Integration Architecture to the desktop.


The new desktop computing environment represents a fundamental change in the way we must think about computers. Computer hardware used to be a scarce and expensive resource. We had to allocate it carefully and share it. We had to change the way people worked in order to get the most benefit from our expensive computing resources. Now computing is relatively inexpensive. It’s the human resource that’s scarce. We can put processing power and huge amounts of storage on the desk at a very low price, but you can only stretch that highly trained engineer or commodities trader so far.


How do we solve this problem? We use the availability of computing resources to help stretch the scarce resource — the person. We create a computing environment that is so powerful and so easy to use that the person becomes much, much more productive. We make it possible for the person to share and interact with information easily and instantaneously. The computing environment helps the person to work naturally and efficiently almost without regard for the mechanics of computing. It means that all the work of computing — translations, running applications, exchanging files, loading operating systems, etc. — is going on elsewhere. The users do their jobs, and the computing environment helps them, acting as a silent partner. When you dedicate more power to each user and share important applications through a fast transparent network, the usual obstacles to productive computing disappear.


In this new computing environment, users will do word processing where on-screen fonts look exactly like the printed page. They will work in a completely integrated application environment. They will use applications under VMS, Unix and MS-DOS — the three most popular operating systems. They’ll do this simultaneously, and they won’t need to know which operating system they’re using. Users will do real work. They won’t need to spend any time thinking about operating systems, interfaces or file transfers. Systems management will become a non-issue as far as the user is concerned.


In this computing environment, resources out on the network are exactly like resources at the desktop. The user won’t even know what’s running locally and what’s running remotely.


This computing style is a long range strategy. We’ll continue to enhance the capabilities of the system. As independent software vendors bring more of their applications into this environment, as DECwindows becomes a de facto standard, and as the cost of processing power continues to drop, the environment will become that much more productive.


To conclude, I want to leave you with three key points. First, we have a winning desktop strategy for the 1990s. It builds on our strengths and expands our offerings in ways that will help our customers succeed in their enterprise. Second, our desktop products, most importantly the VAX computing environment, will help customers save money and run their businesses more efficiently. Third, our desktop computing products will be easy to sell, easy to install and easy to use.


DECwindows Solution by Roger Heinen, Corporate Consulting Engineer


DECwindows is a company-wide program to enhance all of our VAX/VMS and VAX/ULTRIX interactive applications, to exploit VAX workstations and to provide the same easy-to-use, visually sophisticated user interface. It is also a good example of the work that the Applications Integration Architecture program is doing for us.


Today’s customers have to deal with a very complicated network and need many computing systems to interact in that network. Our answer to that customer problem is Digital’s Network Application Support (NAS) — the highest quality application environment in the industry. The Application Integration Architecture (AIA) is a set of software standards that help unify that environment.


The primary purpose of the AIA program is to provide a management and technical forum that focuses our efforts and ensures the highest possible quality of our software standards. A quality software architecture tracks and anticipates changes in technology, while at the same time ensuring a level of stability in the Digital product set that customers can depend on as they make their computing system investments.


We want every VMS and ULTRIX workstation to bear a strong resemblance to one another. They should look and feel the same and have identical applications for all day-to-day needs. To achieve this goal we organized an engineering and marketing effort called the DECwindows program and outlined a four-part workstation software strategy:

o provide a common user interface;

o use a common programming interface;

o use industry standards; and

o deliver it fast.


The result of this strategy will be a family of DECwindows workstations.


The first and most important of these is our common user interface, the DECwindows environment, which will give our products the same look and feel. As a result, users will be able to move from one application to another, or from one workstation to another, without being retrained.


To be successful in the workstation market, we need as many applications as possible; thus the second element of our strategy is aimed at attracting application designers. We have to make programming easy for them. Therefore, we provide a single application programming interface for both VMS and ULTRIX software. This programming interface provides all the software tools and mechanisms that are necessary for designers to construct applications which have our common look and feel.


Saying that the DECwindows programming interface is the same for VMS and ULTRIX software is terrific, but saying that it’s based on an acknowledged industry standard is even better. We picked a standard called "X Windows," which emerged from our collaborative work at MIT with other computer vendors at Project Athena. Frankly, we picked the winner. X Windows is clearly the standard in the industry for windowing and graphics on workstations. Today the specifications for X Windows are managed by an MIT-sponsored group called the X Windows Consortium, of which we’re a member, along with 23 other prominent computer vendors. As with the strategy for a common programming interface, this element of the strategy — using industry standards — is aimed directly at attracting application developers.


The fourth element of our strategy is quick delivery. To make sure that our DECwindows plans work to our advantage, we told our customers and application designers our intentions so that they could prepare. Their initial reactions were overwhelmingly positive, but they need quick delivery. We immediately formed a DECwindows program team, with a coordinating manager and a lead technical architect. This team works with all the engineering groups, the marketing groups and Field teams to coordinate the entire program.


Applications are the key to success in the DECwindows program, so we set the highest priority on the needs of the application developers who actually write the software. We decided to make the first release a benchmark for functional capability and performance, and we gave ourselves twelve months to deliver the initial software to the developers. Once the applications were ready, we could ship the product to the end user. We also decided that if workstations were going to be commonplace in Digital networks, then the DECwindows environment had to be a standard, basic component of VMS and ULTRIX software.


The DECwindows programming interface provides a set of software tools, which make it easy for an application to display all the ingredients of our DECwindows look and feel. These tools supply pop-up menus, boxes on the screen to capture user dialogue, clipboards for moving data between applications, scroll bars, text and so on. In addition, our programming interface exploits our workstation hardware to deliver very high performance graphics. An application can use and intermix a variety of programming interfaces, ranging from industry standard graphics libraries, all the way down to some very primitive functions for high speed drawing, text display and color support. Each of these capabilities represents something different in the DECwindows system, which plays an important role in supporting our visual user interface.


The most distinctive aspect of the DECwindows architecture is that it works across a network, and the user’s display need not be on the same computer as the application. The DECwindows approach provides a way for users to separate application execution from application display. A DECwindows application can run on a VMS or ULTRIX application engine using a variety of display devices spread throughout the DECnet/OSI network, including our own DECwindows workstation products or industry standard PCs and workstations. Users most often will choose to run their application on the same computer as their display; and, in fact, the DECwindows environment is optimized for this case. But the user has the freedom to run an application on a machine that has more computing resources than the office workstation without giving up the benefits of the common user interface. For example, a customer who has a graphics application that requires access to a large data base can run the application on the machine with the data base and run the display on a local workstation.


DECwindows applications have to work for all customers in our worldwide marketplace. We’ve designed a very simple and easy way for DECwindows applications to come in different natural languages. We’ve separated the program of an application from the description of what it looks like on the screen. Then we supply a special editor which allows you to replicate that description in various languages, such as English, French, German, Italian, Spanish, Dutch, Japanese and so forth. We ship all of the replicates to the customer. When users log on, they can select the language in which they’d like to see all their applications appear. This is a very powerful and easy way to ensure that we have a completely international product.
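The separation described above, with the program's logic in one place and its per-language screen descriptions in another, can be sketched in a few lines. This is a toy illustration in Python, not the actual DECwindows resource mechanism; the names and table below are invented:

```python
# Toy sketch of separating application logic from its on-screen text.
# The per-language "replicates" are plain data; the program itself never
# hard-codes a natural language.
RESOURCES = {
    "english": {"open": "Open File", "quit": "Quit"},
    "french":  {"open": "Ouvrir le fichier", "quit": "Quitter"},
    "german":  {"open": "Datei öffnen", "quit": "Beenden"},
}

def label(language, key):
    """Look up the display text for one screen element in the chosen language."""
    return RESOURCES[language][key]

# At login the user picks a language; every application then draws its
# menus and dialogues from that replicate, with no change to the program.
print(label("french", "quit"))   # Quitter
```

The point of the design is that adding a new language touches only the data, never the application code.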


When you sit down at a DECwindows workstation, the first thing you’ll notice is that the user interface allows you to simultaneously see the display of many applications. Each window on the screen represents an application at work somewhere in the network. The windows can be positioned to overlap like pieces of paper on your desk, or they can be positioned like tiles in a mosaic. You set up the screen, and you direct the action.


Many people are already using DECwindows software today. We have over 50 internal software projects implementing DECwindows applications and hundreds of active users. Many people are working to convert our workstation and interactive applications, including software development tools, ALL-IN-1, network management tools and electronic publishing software, to the DECwindows environment. We also have a whole new set of DECwindows applications.


In addition, we have an extensive program to support outside application developers. We’ve already trained hundreds of people to use the DECwindows interface and develop programs for it, and we’re training more each month. These designers are now converting their existing applications and developing new visually sophisticated applications to exploit DECwindows workstations. These developers are interested because the DECwindows approach is new and wonderful. But, they’re also interested because it is based on the X Windows industry standard. This means that they can leverage their DECwindows development investment on other platforms that supply the X Windows standard. In fact, we’ve even had interest from competitors in adopting our extensions to the X Windows standards as their software standard.


DECwindows is a product we can all be proud of. It’s an important new Digital software architecture. It’s a design center for world-class interactive applications. It’s a superior implementation of the X Windows standard. In summary, it’s a key opportunity and competitive advantage for Digital.


OLTP And Database Products - What Are They And Why Are They Important? by Hans Gyllstrom, manager, Database Systems PBU


To describe what a transaction processing system is and how it works, I’ll use the analogy of a metropolitan library system, with a central library and a number of branches. The total system includes the libraries themselves, the books, the roads that connect the libraries, vans that go back and forth among the libraries, and people. People work in the various libraries, and people borrow and return books. In addition to those things and people, the relationships among them are also important. For instance, a specific road connects a branch library with the central library or a specific person is now trying to borrow a specific book at a specific branch. Those objects and their relationships constitute the data base, which is an integral part of any TP system.


In addition to the "things" and their relationships, thousands of changes and actions are occurring all the time. For instance, vans are transporting books, and people are coming to various branches to borrow and return books. All those "actions" can be viewed as transactions in a transaction processing system. We should note that most of the actions are isolated from each other.


Let’s zoom in on one of the specific actions: a customer wants to borrow a specific book. The customer in this case is the TP application. The librarian wants to service that request and can do so in one of several ways. If the book is in this library, the librarian simply produces the book. If the book is not in this library, the librarian will issue an inter-library request, send a van to get the book from another library, and give it to the customer. Once the librarian has the book and is about to hand it to the customer, something important happens: the loan is recorded on the library card. Think of that record as a contractual commitment, as the completion of this particular action. In a crude way, this scenario illustrates what constitutes a specific transaction.


We can consider the collection of librarians at each branch as analogous to the transaction monitor or transaction server in computerized transaction processing. The whole picture — the data base, the transaction server and the applications — constitutes the transaction processing system.
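The library analogy maps naturally onto code. As an illustrative sketch (the class and method names below are invented, not any Digital product interface), a transaction stages a set of changes that either all take effect at commit, the "signature on the library card," or are discarded on abort:

```python
class Transaction:
    """Minimal sketch of a transaction: changes are staged privately and
    applied to the shared data base atomically at commit, or discarded
    on abort. Invented for illustration only."""

    def __init__(self, database):
        self.database = database   # shared state, e.g. the card catalog
        self.staged = {}           # pending changes, invisible to others

    def write(self, key, value):
        self.staged[key] = value   # stage a change; nothing is final yet

    def commit(self):
        # The recording on the library card: all changes become real at once.
        self.database.update(self.staged)
        self.staged = {}

    def abort(self):
        self.staged = {}           # discard everything; data base untouched


# A borrower checks out a book; the loan exists only after commit.
catalog = {"Moby Dick": "on shelf"}
txn = Transaction(catalog)
txn.write("Moby Dick", "loaned to patron 1042")
txn.commit()
print(catalog["Moby Dick"])   # loaned to patron 1042
```

Until `commit` runs, other borrowers still see the book on the shelf, which is the isolation between actions the analogy describes.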


Most transaction processing systems in the past and most in existence today use "flat files" rather than data bases for storing information. The reason for this is primarily one of performance. Data base systems have been around for a long time, but they weren’t able to meet the performance requirements of transaction processing systems. We are changing that.


Flat files will remain faster; and, in some situations where the performance needs are extra high, we may still want to offer flat file systems. But there is definitely a trend in the industry toward using data bases for storing the information in transaction processing systems.


We use TP systems in our real-world, day-to-day activities. They become critical components of our lives. For example, transaction processing systems often take care of the reservations and the cargo systems for airlines. They are key parts of the airlines’ operations. If the TP system stops, the planes are grounded. The same thing happens with banks. There have been examples where the TP system failed and the consequences were disastrous. Several years ago, the TP system of a bank in New York was erroneously handing out billions of dollars; and for a period of three days the bank had to borrow money to cover the shortage.


Since these TP systems become part of our real-world activities, they can’t be slow. They have to work as fast as the real world works. There’s heavy demand for fast response times. Typically, about 90-95% of the actions have to be finished in one to two seconds. That means that data bases are in a total state of change. They’re updated in near real time. At any one time, they represent that particular part of the business or part of the real world that they’re implementing. Because they can be a critical part of a company, we refer to them as "bet your business" systems.


For example, look at money. Back in history, gold and silver coins represented value. Later, we abstracted that to paper money. Now, money has become electrical impulses on the data base. It’s very easy to see that if the data base loses its data, you lose money.


Consider your personal checking account. When it comes to your bank balance, you don’t accept 99% accuracy. It has to be 100% accurate all the time.


If there is one common theme in the transaction processing market, it is that mistakes are not accepted. Period! In computer language the term is "data integrity."


Now we know that computers are computers, and software is software. We blow bits. We have bugs. Mistakes do happen. So, what do we do about that?


We translate that into: errors happen, but they can’t go undetected. We’re building better hardware and better software that help us to find the errors when they do occur. And we have a lot of computer solutions to help solve the data integrity problem. You hear about recovery, shadowing, concurrency control, etc. I’m not going to go into those in detail. But I do want to give you a visual image of a TP system at work.


Imagine a data base that has good, accurate information. A transaction comes in at time T1, and some kind of a change happens to the data base. You could say that the data base has changed its state; all we did was to update some information in the data base. Another transaction comes in later at time T2, and the data base gets updated again. Then something bad happens — a big error of some kind. It might be an overflow of a variable in the program, or it could be that lightning struck the system. The question is, what do you do when the error happens?


We can’t let mistakes go through. This is where the transaction processing system has an advantage over real-world situations. We can stop the transaction processing system, rewind time and replay back to the point just prior to where the mistake occurred. So, we go back to a place and time where we knew the data was accurate. We have also put these transactions on something like a log, to remember them. Once we go back, we are able to replay the transactions against the data base and make sure we don’t have that error happen again. Then we can continue on our merry way. Now, obviously, having these errors occur, rewinding time and replaying is bad news, because the real world is halted while you’re doing this. The planes are grounded. The money is stuck on the data base. This replay can take anywhere from seconds to minutes to hours to days, depending on the complexity of the transaction processing system and the error that occurred. This is the basis for the very strong requirement that transaction processing systems must be highly available — with down times on the order of minutes per month or even per year, on systems that are running 24 hours a day, seven days a week.
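The rewind-and-replay idea can be sketched concretely. This is a toy Python illustration (real TP systems add write-ahead logging, checkpoints and concurrency control): record each transaction on a log before applying it, and after a failure restore the last known-good snapshot and replay the log against it.

```python
def apply(state, txn):
    """Apply one transaction (here, just an account delta) to the state."""
    account, delta = txn
    state[account] = state.get(account, 0) + delta


# A known-good snapshot, then a stream of transactions, each logged first.
snapshot = {"alice": 100}
log = []
state = dict(snapshot)
for txn in [("alice", -30), ("bob", +50)]:
    log.append(txn)      # remember the action before acting on it
    apply(state, txn)

# Failure: the working state is lost or corrupted.
# Recovery = rewind to the snapshot, then replay the log in order.
state = dict(snapshot)
for txn in log:
    apply(state, txn)

print(state)   # {'alice': 70, 'bob': 50}
```

The replay reproduces exactly the state the system had before the failure, which is why the log must capture every committed transaction.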


Basically, the picture of a transaction processing system is relatively simple and easy to imagine. The complexity comes in making sure we have the integrity, the performance and the capacity to capture a major share of that market, which, it is estimated, should be worth about $60 billion in 1991.


Our Transaction Processing Strategy by Bob Glorioso, vice president, High Performance Systems


Transaction processing differs from timesharing in style and in metrics. In a timesharing system, there are often many users, but each user typically has his or her own data base and own application. In a transaction processing system there are also many users, but the data are not owned by an individual. The data are common to everyone, and the applications are typically very few — for example, buying and selling stock.


The metrics are different too. We’re familiar with MIPS (millions of instructions per second), VUPS (VAX units of processing — referenced to a VAX-11/780), cost/performance, dollars per MIPS and the number of users that a system can have in timesharing. In transaction processing the key metrics are transactions per second, dollars per transaction per second and response time.
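The dollars-per-TPS metric is simple arithmetic. As a worked example with invented figures (these are not Digital prices):

```python
# Hypothetical figures, invented for illustration: a system priced at
# $1,500,000 that sustains 50 transactions per second costs $30,000
# per transaction per second.
system_price = 1_500_000          # dollars (hypothetical)
throughput = 50                   # transactions per second (hypothetical)

dollars_per_tps = system_price / throughput
print(dollars_per_tps)            # 30000.0
```

The same system compared on MIPS or user counts could look quite different, which is why transaction processing is priced and benchmarked on its own metrics.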


Transaction processing implies "bet-your-business" applications. When you realize that people are betting their livelihood on your system, you take a different approach to the way you design it.


The marketplace is big - about $26 billion - and it’s growing at about 20 to 30% per year. By 1991, we expect it to be about $60 billion.


We approach transaction processing, as we do other complex problems, with an architecture. An architecture defines the interfaces that make things compatible, but not the underlying implementation. It therefore forms a basis for our product development strategy and for the way groups interact to implement the strategy.


The Digital Distributed Transaction Architecture complements and extends our VAX, VMS, DECnet and other current architectures. It consists of two parts — a front end and a back end. The front end provides the interface to the user. Often, in transaction processing (TP) systems, the interface to the user consists of the forms that users fill out to enter data. It also handles interfacing the forms to the rest of the system. The rest of the system consists of a transaction server and the resource manager. The transaction server (sometimes known as a "transaction monitor") deals with the information in the application itself. It queues requests from multiple users so that they will be dealt with in order. It also interfaces to the resource manager — the database system.
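The transaction server's queueing role can be sketched as a FIFO that serializes requests arriving from many front ends before handing them, one at a time, to the resource manager. This is a minimal sketch with invented names, not the actual transaction monitor interface:

```python
from collections import deque


class TransactionServer:
    """Sketch of a transaction monitor: requests from many front ends are
    queued and handed to the resource manager strictly in arrival order.
    Invented for illustration only."""

    def __init__(self, resource_manager):
        self.queue = deque()
        self.resource_manager = resource_manager   # e.g. the database system

    def submit(self, user, request):
        self.queue.append((user, request))         # accept from any front end

    def run(self):
        results = []
        while self.queue:
            user, request = self.queue.popleft()   # FIFO: dealt with in order
            results.append(self.resource_manager(user, request))
        return results


# Two front ends submit concurrently; the server serializes their requests.
server = TransactionServer(lambda user, req: f"{user}:{req} done")
server.submit("teller-1", "debit")
server.submit("teller-2", "credit")
print(server.run())   # ['teller-1:debit done', 'teller-2:credit done']
```

Serializing at this layer is what lets many users share one data base without stepping on each other's updates.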


We’ve defined the TP system architecturally and have defined interfaces between the key pieces. That allows us to split those pieces off and implement them wherever we like. For example, we might split off the front-end pieces and put them on separate processors, such as MicroVAX computers. Or, in an application that involves heavy interaction with the forms system, we can put the forms processing next to the user. That approach does a couple of things for us. It off-loads the main system and makes the communication between the front end and the main system fairly compact. It also cuts down the communication costs.


I’m sure that you have all heard horror stories in the industry about the cost of downtime. The only way we’re going to get a system to be reliable and available is through the use of redundancy.


There are two ways of using redundancy. First, there is redundancy in time. In that case, you roll back to the time before the error and correct things. We do that in VAXcluster systems. If a node goes out, we configure the cluster around the failed node. That solution is hardware and software based, and it takes time. There is also redundancy in space. In that case, you use more space for the computer. You use more parts; so if one of them fails, the others can carry on with the task.


Digital’s Distributed Transaction Architecture provides a number of advantages. It’s fully distributed in the network. It puts processing power where it is needed. If your application requires multiple database activity per transaction, you put your power at the database side. If you have complicated forms processing, you put your power in the forms at the front end. It also minimizes communications overhead.


Let’s take a look at the way TP systems are implemented today and glimpse where we’re going in the future. Today, they’re typically layered on top of the hardware, the operating system and the database. The pieces required to satisfy the most stringent requirements of a TP system typically reside in the TP monitor.


Because transaction processing is a business where customers bet their businesses, it requires additional focus on the kind of support that we provide. There are pieces in place to do that. We have a design consulting group within the TP systems engineering organization to help design large TP systems. We are disseminating information to the Applications Centers for Technology through demos. We’re developing competency centers. We have training programs, such as TP University, under way right now, training people to be real experts who can train other experts.


We have application development environments that fit the customer style of developing applications. We have the lowest application development time. We have the best distributed transaction processing system. We can integrate things that other people don’t even think about. We can integrate transaction processing, office, decision support and timesharing in a single environment. We have the widest range of systems without changes in the software. And we have competitive cost of ownership.


A recent survey done by DATAMATION asked customers what they expect to do in transaction processing over the next year. The survey says that today 1.3% of their systems are from Digital, and by the end of next year they expect 10.4% of their systems will be from Digital. They seem to want us.


In summary, the TP opportunity is now. We have market demand. We have products. We have performance leadership. We have price leadership. We got here with the hard work of a lot of people. Multiple groups have put together what we have in transaction processing in record-breaking time.


The Role Of High Availability In Transaction Processing by Fernando Colon Osorio, manager, Advanced High-End Systems Development Group


I’d like you to leave today with the following message: Digital is a leader in transaction processing with enterprise-wide networking and its family of VAXcluster systems. To support that message, I’ll cover four basic topics. These are:


o the importance of VAXcluster systems in Digital’s Product Strategy,


o attributes of VAXcluster systems,


o the role of VAXcluster systems in our transaction processing strategy and offerings, and


o the evolution of VAXcluster systems and their impact on Digital’s Distributed Transaction Processing Architecture.


VAXcluster systems provide flexibility, availability and expandability. They give the customer the ability to configure a system that has no single point of failure and, hence, to meet the most stringent requirements of their business-critical applications. For example, VAXcluster systems allow the customer to maintain multiple copies of a data base transparently, with updates executed on all copies simultaneously via shadowing. In a similar fashion, the customer can design a system with n + 1 redundancy of processors, each processor executing part of the overall task; in the event of a failure in one of the processing elements, the task can be shifted to the spare processor.


With regard to expandability, a customer purchases a VAX 8700 computer to solve today’s problems, and, as the business grows, can add more computing resources, storage resources, or hierarchical storage controller resources to meet changing application needs. In effect, customers are not restricted to the set of resources purchased on day one, but rather can expand their computing environment, protecting their investment over time. Our customers say that this is an important competitive advantage.


VAXcluster systems are the vehicle that Digital has used in the past and will continue to use in the future to build highly available systems. Today, we have an installed base of about 6600 VAXcluster systems worldwide. About 16% of all VAX nodes are in clusters. About 64% of our top 200 accounts have VAXcluster systems in their installations.


The key attributes that VAXcluster systems provide that are important in transaction processing are high availability and data integrity. In simple terms, high availability means that when you have one opportunity to access a critical application or a critical piece of data, that application or data must be operational. Given that failures are an inevitable fact of nature (Murphy’s Law), a system can be designed to guarantee that applications and data are operational when needed. Typically, such systems are designed so that the time the system is down is minimized. This naturally involves the use of redundancy, checking mechanisms to detect and isolate failures, and protocols that allow for the restart of applications.


One such approach is known as "redundancy in time," which involves the execution of different actions on multiple components. Upon detection of a failure, the execution of the action stops, the problem is isolated, and the work fails over to another component that can run the same application. For example, say you have an airline reservation system consisting of two processors. One is handling flight 44 to San Francisco; the other is handling flight 41 to Miami. If there’s a failure in securing a seat on flight 44 to San Francisco, you detect the failure, isolate it, and fail over the handling of that particular booking to the other computer. Then you restart the application and complete the transaction.
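The reservation example above — detect, isolate, fail over, restart — can be sketched as follows. This is a hypothetical illustration; the processor names and the shape of the booking call are invented for the example.

```python
class Processor:
    """A processing element that can book seats unless it has failed."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def book_seat(self, flight):
        if not self.healthy:
            raise RuntimeError(f"{self.name} failed")   # failure detected
        return f"seat booked on flight {flight} by {self.name}"

def book_with_failover(primary, backup, flight):
    """Detect the failure, isolate the failed processor,
    fail over, and restart the booking on the other processor."""
    try:
        return primary.book_seat(flight)    # normal path
    except RuntimeError:
        return backup.book_seat(flight)     # restart on the surviving node

p1 = Processor("CPU-A", healthy=False)  # was handling flight 44, then failed
p2 = Processor("CPU-B")                 # handling flight 41, picks up the work
print(book_with_failover(p1, p2, 44))
# -> seat booked on flight 44 by CPU-B
```

The essential point is that the booking is restarted as a whole on the surviving processor, so the passenger still ends up with exactly one seat.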


To differentiate Digital’s offerings from those of the competition, I focus on four key attributes: availability, data integrity, service and price/performance.


We offer conventional systems, such as a VAX 8700 standalone computer. The downtime of that system can be measured in tens of hours per year. If you have a failure of a hardware component, you call Field Service, and it may be two hours before you can replace that component.


Our high availability VAXcluster offering reduces that time significantly. With specific configurations, we can reduce the downtime in a VAXcluster system to just minutes per year. We do that with redundant components, such as multiple processors.
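The difference between "tens of hours" and "minutes" of downtime per year is easier to see as an availability percentage. A quick back-of-the-envelope calculation (the specific downtime figures chosen here are illustrative, not quoted from the speech):

```python
HOURS_PER_YEAR = 24 * 365   # 8760

def availability(downtime_hours_per_year):
    """Fraction of the year the system is up, as a percentage."""
    return 100.0 * (1 - downtime_hours_per_year / HOURS_PER_YEAR)

print(round(availability(20), 3))        # ~20 hours down per year: 99.772
print(round(availability(5 / 60), 4))    # ~5 minutes down per year: 99.999
```

Going from hours to minutes of annual downtime is the jump from roughly "three nines" to "five nines" of availability — which is why the redundant-component configurations matter so much for business-critical applications.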


We build our high availability offering around our conventional systems and guarantee that they provide higher levels of data integrity. For example, we provide a reliable interprocess communication mechanism.


In terms of cost, of course, the conventional system is the least expensive. A high availability system, because of the built-in redundancy, always increases the cost.


In the 1990s and beyond, with our Distributed Transaction Processing Architecture, we will separate the basic functions associated with the execution of every transaction, that is, stimulus capture (front end), application execution and data base access (back end), and associate as much computing resource as needed with each element of the transaction. To understand the differences between stimulus capture, application execution and data base access, I will use an example. Consider the typical purchase in a department store of a videocassette recorder (VCR). This simple event (transaction) invokes all of the elements of a transaction processing system. First there is the stimulus capture; this could be "forms" manipulation, that is, the clerk interacting with a video monitor and entering the number and price of the unit on a prescribed form. In other applications, the stimulus capture can be of a different nature, as in credit card verification, where the stimulus is simply running the credit card through a credit verification machine.


The second step in the transaction involves running the application. In the example above, the application is most likely a warehouse and inventory control application that makes sure the desired article (a videocassette recorder of the particular brand and model) is available.


Lastly, the execution of the transaction will involve a data base access, namely modifying the data base to reflect the purchase that just occurred.
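The three steps of the VCR purchase — stimulus capture, application execution, data base access — can be sketched as a simple pipeline. All of the function names and the data layout here are invented for illustration.

```python
def capture_stimulus(form_input):
    """Step 1: stimulus capture -- the clerk's filled-in form."""
    return {"item": form_input["item"], "price": form_input["price"]}

def run_application(inventory, order):
    """Step 2: application execution -- check the warehouse for the article."""
    if inventory.get(order["item"], 0) < 1:
        raise RuntimeError(f"{order['item']} not in stock")
    return order

def access_database(inventory, order):
    """Step 3: data base access -- record the purchase."""
    inventory[order["item"]] -= 1
    return inventory

inventory = {"VCR": 3}
order = capture_stimulus({"item": "VCR", "price": 299})
order = run_application(inventory, order)
print(access_database(inventory, order))   # {'VCR': 2}
```

Because each step takes well-defined input and produces well-defined output, the architecture can place each one on whatever computing resource suits it — which is the distribution of functions the next paragraph describes.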


Every one of the steps in this transaction will be distributed in Digital’s Transaction Processing architecture. This distribution of functions to computing resources will provide us with a competitive advantage.


In 1988, VAXcluster systems provide high availability and data integrity for all functions. That is, the VAXcluster system does the whole job: stimulus capture, application execution and database access. In future years, VAXcluster systems will execute only the back-end functions, providing high availability and data integrity for application execution and database access.


Our ability to distribute stimulus capture, transaction servers and data base access will allow us to cover a much wider range in performance, going from the low end of 16 transactions per second to hundreds of transactions per second.


In summary, high availability systems are important in the transaction processing environment for three reasons:


o They meet the most stringent requirements of our customers - downtime of less than three minutes per year.


o They ensure data integrity.


o And they allow you to have a large range of performance.


Meeting Customers’ Needs - Digital’s Distributed Production Systems by Bill Steul, vice president, Corporate Systems Group


Our customers are trying to manage the same kinds of environments that we are in our industry. They’re trying to manage change, to become more competitive and to do a better job of serving their customers. They’re trying to improve their quality and to reduce their costs. They’re trying to be leaders in their technology. In some cases, they’re going through radical shifts in technology, trying to get high-quality products and services to market much faster. Many of them are dealing with de-regulation or re-regulation.


When organizations recognize they need to change, typically, their first inclination is to change the organization or change the people. The products and the services are more difficult to change. And the information systems that are used to run the business are very difficult to change. And all of these elements are interrelated.


The information systems approach that customers are using today in an IBM environment can help set the stage for describing our advantages.


Mission-critical, transaction processing production systems are "you bet your business" applications. Examples include reservations systems, inventory control systems, funds transfer, many banking applications, order processing and billing. These systems have to be efficient, flexible, reliable and highly available. Historically, batch production systems were primarily focused on accounting. They were used to measure the business after the fact. What our customers want now is on-line transaction processing systems that still take care of the accounting, but that also allow them to run their businesses more effectively.


Some of us remember the IBM environment of the 1960s. We remember standing in line with our card decks trying to get computations done, only to find out, a day or two later, that we made one simple syntax error. We also went into the computer room between 2 and 5 AM to get computer time to do testing and debugging because the rest of the time the system was running production, and there was no way that the MIS department was going to upset that.


The mainframe computing environment hasn’t changed much since then. The punch cards have been replaced by terminals, but the act of writing programs and compiling is still very much a batch process. There are batch compiler queues that always get last priority on a mainframe, after production. The Finance Department, which usually runs the MIS Department, is focused on making overhead people more productive. Hence, the focus is on accounting for the business after the fact.


We know our customers want distributed responsive systems and want to put computing out on the front lines of their organizations, but this 1960s environment is where they’ve come from, and change comes hard.


Their present environment is very costly. It’s a model that worked well in the 1960s, when computers were very expensive and networking was relatively inexpensive. But these mainframes are very complicated to operate and require large staffs of operators and system programmers to keep all of the software working. The communication costs are high and increasing. Growth is very expensive and disruptive. When you want more computing power, you have to swap a CPU or add more CPUs; and the cost of adding them is very high because they have to be added in large increments. It really gets messy when you add the number of data centers that a large company typically has: multiple mainframes, hundreds and even thousands of terminals, and all the standalone PCs that employees have purchased because they got tired of working with that difficult computing environment.


Because the mainframes aren’t easily accessible, users bought PCs or terminals connected to minicomputers. IBM’s TP software doesn’t work on PCs or on workstations. It only works on the mainframe. Therefore, people who want to do end-user computing and transaction processing need two sets of user devices and two networks, at the minimum.


Now, the economics of computing are also changing. The cost of putting the computing resources closer to end users is becoming lower and lower. The costs of networking are becoming higher and higher. So the mainframe terminal-network model is becoming economically as well as technologically obsolete. That’s why the mainframe business is not growing.


For example, consider a manufacturing company that has separate data bases, multiple networks, isolated local area networks, PCs and workstations, limited-function terminals, and a huge investment in old software and training. By separating its computing resources, that company is not tied together; rather, it is split apart. Imagine what happens when such a company wants to make dramatic changes in the way it does business. Its information systems often limit how quickly it can change.


Contrast that picture to Digital’s internal Easynet network, and the way we conduct our business worldwide. We have one network around the globe that supports 445 locations in 32 countries. We have over 28,000 computers and 80,000 users on that network. The same network is used for end-user computing, production computing and software development. We do our order processing, inventory management, accounting, and mail on the same network. We have integrated data bases, so users can get the information they need about orders, shipments, revenues, etc. And it's all one uniform network. The system doesn’t care whether we’re doing TP or end-user computing or software development.


When our customers see what we have, they say that’s the direction they want.


Customers really want to concentrate on their businesses and not on their computers and networks. They’d like systems that change easily as their business conditions change. They’d like highly productive systems that allow them to focus on and change their business environment. They want systems and networking capabilities that will help them tie their organizations together. They want investment protection. They want to know that we’re going to stick with them for the long term. They want a wide range of applications, but they have limited resources for development; so they want to acquire applications that have already been developed. They want to know that service and support are available when they need them.


Other computer companies such as IBM, Hewlett Packard and Tandem also claim to have these capabilities.


IBM has the major market share in on-line and batch production systems. Their "on-line" style means remote terminals connected over telephone lines to mainframes. They have fast batch TP and build large systems that can handle large data bases. They also have the ability to handle very complex systems projects, both inside their company and for their customers. They can provide the customer support that’s required, when it’s required and where it’s required.


Their problem is that they have multiple operating systems, multiple TP systems, multiple data base systems, multiple networks and poor software development tools. The cost of ownership — from the standpoint of people, equipment, software or facilities management — is higher than anybody else in the industry. And IBM does not have any distributed TP products today.


Hewlett Packard has a very loyal customer base that has stuck with them through the years. They do a good job of customer support. They have an image of supporting industry standards. They recently brought out a new line of RISC architecture computers with good price/performance. They are selling those machines into the laboratory and science markets where they’ve always been a strong competitor of ours. And they’ve told the world that they’re going to be a factor in TP as well.


But Hewlett Packard has four operating systems that they are trying to support. They emphasize Unix, but there are no TP applications on Unix today. They have not developed a range of enterprise-wide solutions, and their networking capability is limited. They lack expandability, and they also lack the equivalent of clusters or symmetric multiprocessing.


Tandem, more than any other company, has been recognized as a leader in TP. They’ve done a magnificent job of marketing. They offer low-cost, high-performance, expandable TP systems, and have good distributed relational database technology. They also do well at IBM interconnect; their systems are often used as front-end systems to IBM mainframes.


But Tandem has a proprietary development language and operating system, and is narrowly focused on TP. Except for connections to IBM, they have a poor multi-vendor network approach, and application development tools are also lacking.


How do we win? We will win with our single well-integrated architecture, our networks and network management capability, our distributed TP and data base products, and our efficient application development tools; we will win with the interoperability that we offer among desktop, mid-range, departmental and data center computing. We have the broadest range of applications in the industry. We have a commitment to industry standards and open systems. We’re the only company that can deliver all of this today.


We have a simple goal: to be number one in distributed production systems and to tie those systems together well with our end-user computing environment and our software development environment; to work well with the mainframes and central computing resources customers have today; and, eventually, to replace those mainframes as our systems become better at handling the large jobs.


*  Unix is a trademark of AT&T Bell Laboratories.


*  IBM, PC-AT, OS/2 and SNA are trademarks of International Business Machines Corporation.


*  MS-DOS is a trademark of Microsoft Corporation.


*  Apple and Macintosh are trademarks of Apple Computer, Inc.


A summary of Gary Eichhorn’s State of the Company presentation on "Research as a Market — a Window of Opportunity" will appear in the next issue of MGMT MEMO.