Systems, Experts, and Computers: The Systems Approach in Management and Engineering, World War II and After

Agatha C. Hughes and Thomas P. Hughes

Print publication date: 2000

Print ISBN-13: 9780262082853

Published to MIT Press Scholarship Online: August 2013

DOI: 10.7551/mitpress/9780262082853.001.0001


Introduction

Source:
Systems, Experts, and Computers
Author(s):

Thomas P. Hughes

Agatha C. Hughes

Publisher:
The MIT Press
DOI: 10.7551/mitpress/9780262082853.003.0001

Abstract and Keywords

The systems approach stood foremost in the minds of engineers, scientists, and managers during the early Lyndon Johnson administration. This trajectory of advocacy then declined with the reverses of the Vietnam War and the rise of a counterculture that associated large systems with the military/industry/university complex and with the Vietnam predicament. The decline in the popularity of the systems approach also resulted from the frequent failure of its practitioners to cope with complex urban problems involving political and social factors. Before this downward turn, however, an articulated systems approach unfolded that spawned new academic fields, new “sciences of management,” and new modes of engineering practice. It took a number of forms, including operations research, systems engineering, systems analysis, and system dynamics. Generally, it can be said that the systems approaches had their origins in the military realm and that their application came to be emphasized in the civil realm only after 1960.

Keywords:   systems approach, Lyndon Johnson administration, Vietnam War, counterculture, complex urban problems, operations research, systems engineering, systems analysis, system dynamics, military realm

After World War II, a systems approach to solving complex problems and managing complex systems came into vogue among engineers, scientists, and managers. In 1964, the Engineering Index had no entry for “systems engineering” and only two pages for “operations research,” both variations upon a systems approach. By 1969, the number had jumped to eight pages of citations for “systems engineering” and to ten for “operations research.” Enthusiasm for a systems approach peaked during the early Lyndon Johnson administration (1963–1969), after which the trajectory of advocacy moved downward in step with the reverses of the Vietnam War and the rise of a counterculture. The counterculture associated large systems with the military/industry/university complex and with the Vietnam quagmire. The decline in the popularity of the systems approach also resulted from the frequent failure of its practitioners to cope with complex urban problems involving political and social factors.

Before this downward turn, however, an articulated systems approach unfolded in rich and unprecedented ways during and after World War II. The approach spawned new academic fields, new “sciences of management,” and new modes of engineering practice. It effloresced into a number of forms, including operations research, systems engineering, systems analysis, and system dynamics.

Operations research usually referred to a systematic analysis of operating systems, or operations, such as military bombing raids during World War II; systems engineering mostly designated the management of the design and development of technological systems, such as the intercontinental ballistic missile system in the United States in the 1950s; systems analysis often dealt with the comparison of systems that offered alternative solutions to problems, such as the use of long-range bombers versus intercontinental missiles; and system dynamics offered models that could be used in policy making by predicting and comparing the downstream consequences of outcomes of alternative policies.
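The flavor of these techniques can be made concrete. Below is a minimal sketch, in Python, of a fixed-effectiveness comparison in the spirit of systems analysis; every cost and probability is hypothetical, invented here for illustration rather than drawn from any historical study.

    # Illustrative sketch only: compare two hypothetical alternatives by the
    # cost of achieving the same required effectiveness. All figures invented.
    import math

    def force_cost(required_kills, kill_probability, unit_cost_millions):
        """Cost (in $M) of a force sized to yield the required expected kills."""
        units_needed = math.ceil(required_kills / kill_probability)
        return units_needed * unit_cost_millions

    bombers = force_cost(required_kills=60, kill_probability=0.6, unit_cost_millions=8.0)
    missiles = force_cost(required_kills=60, kill_probability=0.4, unit_cost_millions=5.0)
    print(f"bombers: ${bombers:.0f}M, missiles: ${missiles:.0f}M")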

In general it can be said that the systems approaches had their origins in the military realm in the period 1939–1960. After 1960, proponents of the systems approach increasingly emphasized its possible applications in the civil realm. While physicists, mathematicians, and engineers were its early practitioners, social scientists, including management specialists, started adopting systems techniques after World War II.

Practitioners and proponents embrace a holistic vision. They focus on the interconnections among subsystems and components, taking special note of the interfaces among the various parts. What is significant is that system builders include heterogeneous components, such as mechanical, electrical, and organizational parts, in a single system. Organizational parts might be managerial structures, such as a military command, or political entities, such as a government bureau. Organizational components not only interact with technical ones but often reflect their characteristics. For instance, a management organization for presiding over the development of an intercontinental missile system might be divided into divisions that mirror the parts of the missile being designed.1

The Conference

Participants in the Dibner conference on the systems approach explored the common and contrasting characteristics of the approach as it was developed in the military and civil realms by various disciplines and practitioners. They also considered the ways and the places in which the systems approach was applied, with a particular focus on applications to the social problems of urban areas in the 1960s.

The papers and the subsequent commentaries and discussions made clear the increasing dependence of the systems approach upon the digital computer, especially in the 1950s and 1960s. Conference participants stressed the fact that experts based their claims for decision-making authority in matters both military and civil upon their putative mastery of a specifically computer-based systems approach.

Historians presented papers, and engineers, scientists, and managers who played leading roles in the spread of the systems approach acted as commentators or participated in panel discussions. The conference included a roundtable presentation on the history of the International Institute for Applied Systems Analysis. Those participating in the roundtable had helped make the history they presented. Because the historians presented analytical narratives that sometimes involved their commentators, exchanges during discussion proved lively and enlightening.

Summaries of the papers as well as several of the commentaries can suggest the essential character and spirit of the conference.2 They are organized according to several themes that emerged from the conference.

Origins of the Systems Approach

David A. Mindell, in “Automation’s Finest Hour: Radar and System Integration in World War II,” finds the roots of a systems approach in communities of engineers and scientists who dealt with problems of gunfire control, which has been a prime source of cutting-edge technology in this century. In World War I, Elmer Sperry, an American inventor of feedback controls, sought means to combine optical target-sensing and target-tracking devices, mechanical analogue computers, and manual operators into a gunfire control assemblage for dreadnought battleships. During the interwar years, scientists and engineers at MIT, Bell Telephone Laboratories, and elsewhere used detailed analyses of practice to place control engineering on a rational theoretical basis.

Ivan Getting, a young engineer and scientist with an MIT degree, a Rhodes Scholarship, and membership in the Harvard Society of Fellows, stands out among gunfire control experts of World War II. He was strongly motivated by a holistic vision of systems integration stimulated by his work in controls. He did research and development at the MIT Radiation Laboratory, which had been given responsibility for developing microwave radar during the war. He also worked with Division 7 of the National Defense Research Committee, which presided over the development of new fire-control systems.

Getting advocated viewing fire control as a system rather than as an assemblage of separately designed radar and fire-control sensing and computing devices. Perceiving the human operators as the weak link, or “reverse salient,” in gunfire control, he also pushed for increased automation. He embedded his concepts in the design of the Mark 56 Gunfire Control System.

Getting’s influence helps explain why engineers and scientists experienced in radar and gunfire control became advocates of the systems approach following World War II. Their advocacy reinforced the efforts of others proposing a systems approach on the level of operations research and systems analysis.

The war provided a variety of opportunities for systems approaches to flourish. Erik P. Rau, in “The Adoption of Operations Research in the United States during World War II,” discusses the activities of individuals and organizations nourishing the transfer of the British systems approach, known as operational research, to the United States, where it became known as operations research (OR). This transfer was not without bumps. Vannevar Bush, who mobilized civilian scientists and engineers for the war effort through his Office of Scientific Research and Development (OSRD), did not see operations research as a hard science. Never an advocate of social, or soft, science, he resisted diluting the research and development activities of OSRD divisions with an influx of social scientists. He also sensed that the high-ranking military officers with whom he desired to maintain good relations were ill informed about the successes of British operational research units and distrustful of taking advice from civilians on operational matters. So he tried to situate nascent operations research endeavors in the organizational framework of the nongovernment, civilian National Academy of Sciences.

What Bush failed to comprehend was the extent to which precisely quantified information from operations research personnel concerning the success and nonsuccess of operations involving innovative weapons systems designed by OSRD divisions could assist OSRD engineers and scientists in modifying existing weapons or introducing new ones. Despite his indifference, U.S. operations research groups were established. For example, a U.S. Army Air Force wing stationed in Great Britain used American operations research civilians who had learned British techniques to analyze bombing operations. Plagued by the successes of German U-boats, the U.S. Navy established an effective Antisubmarine Warfare Operations Research Group under the guidance of MIT physicist Philip M. Morse, who later became a major promoter of operations research in the civil sector. Even within Bush’s OSRD, there were some advocates of operations research, including John Burchard, an MIT architect and professor who headed an OSRD division studying the effects of artillery and bombs on structures.

After a Bush-organized study group recommended that the OSRD play a major role in fostering operations research, especially if the research group could be staffed by people deeply informed about scientific method and engineering practice, Bush relented. In 1943 he established an operations research division in OSRD that was designated the Office of Field Service (OFS). It was intended to extend OSRD’s reach into the front lines, especially in the Pacific theater. Rau follows the fortunes of the Office of Field Service and concludes that it failed to fulfill its mission of extending OSRD’s research agenda to the military front. The OFS mistakenly chose problems, for example, that were mathematically intractable or inherently unanswerable because they involved values or computations beyond the state of the art. The reasons for this “utter failure” beg explanation.

Military organizations in the field, including that of Army General Douglas MacArthur, often failed to use—even to cooperate with—the Office of Field Service units. The Army Air Force and the navy preferred their own OR units. Nevertheless, the OSRD venture into operations research plowed ground that would be cultivated during the Cold War by civilian scientists intent upon shaping military strategy.

Organizations and Individuals

Several postwar organizations cultivated the systems approach. The RAND Corporation, founded in 1948 with U.S. Air Force funding, developed systems analysis techniques for evaluating weapons systems; the MITRE Corporation, founded in 1958 with air force funds, concentrated on a systems-engineering approach to designing command and control systems; the Ramo-Wooldridge Corporation, organized in 1953, developed advanced systems-engineering management for large military projects; and the International Institute for Applied Systems Analysis, established in 1972, took a systems approach to the study of civil problems common to advanced industrial societies. RAND, MITRE, and Ramo-Wooldridge also applied the systems approach to civil problems. The Bechtel Corporation and Parsons Brinckerhoff were among leading engineering design and consulting firms deploying systems engineering techniques for managing large construction projects in the 1960s and later.

Not only organizations but also several individuals emerge from the history of the systems approach as influential innovators and practitioners. Philip Morse, as noted, transferred operations research from a military to an industrial context. Time magazine cover stories featured Simon Ramo and Dean Wooldridge, engineer/scientists, and Bernard Schriever, an air force general, after they used systems engineering to manage the Atlas Intercontinental Ballistic Missile Project during the 1950s. Charles Hitch and Alain Enthoven moved from the RAND Corporation to Robert McNamara’s Pentagon to introduce a systems approach to budgeting and strategic planning, which then spread to other government agencies. Jay Forrester, after having led the development of the Whirlwind computer system for the 1950s SAGE air defense project, moved to the Sloan School at MIT and published Industrial Dynamics (1961) and Urban Dynamics (1969), both of which stimulated widespread discussion of the application of computer modeling to the implementation of a systems approach for problem solving and decision making.

Systems and Projects

Developments in the 1950s prepared the ground for the spread of the systems approach in the 1960s. During the latter part of the presidential administration of Harry Truman, as well as during that of Dwight Eisenhower, the military funded three major defense projects—the Intercontinental Ballistic Missile Project, the SAGE Air Defense Project, and the Polaris Intermediate Range Missile Project—that provided rich learning experiences for thousands of engineers and scientists. The Polaris Project, managed by the U.S. Navy Special Projects Office, introduced Program Evaluation Review Technique (PERT), a computer-scheduling program later widely used in conjunction with a systems approach to project management. Academic scientists and engineers analyzed, codified, and rationalized information garnered from these projects in order to develop system sciences, which they disseminated through teaching and publications.

The U.S. Air Force and the Office of the Secretary of Defense (OSD) in the 1960s also rationalized and applied systems-management techniques developed at RAND and during Atlas and other large-scale military-funded projects. Stephen B. Johnson’s paper, “From Concurrency to Phased Planning: An Episode in the History of Systems Management,” explores the efforts of the Defense Department to refine and revise the systems approach by shifting from a concurrency approach to phased planning. This shift followed Defense Secretary Robert McNamara’s decision to emphasize cost reduction rather than rapid deployment of weapons systems as the Cold War attenuated.

Schriever and Ramo had introduced concurrency during the Atlas Project. Straining to deploy the missiles as quickly as possible, they called for simultaneous, or concurrent, development of the missiles, the facilities to manufacture and test them, and the command and communication system for their operational control. As Johnson shows, concurrency often increased costs because design changes in a system component being developed often required costly changes in others developed concurrently and already being manufactured.

The phased-planning approach introduced by the Defense Department required a preliminary design phase that would produce enough details about a proposed weapons system to allow cost estimates to be projected for its development. This would enable the Defense Department to review the design and the cost projection before authorizing a project. Schriever, who had become head of the Air Force Systems Command, strongly objected to this sequential approach because it would lead to time delays and because with such an “overly conservative approach … the timid will replace the bold and we will not be able to provide the advance weapons the future of the nation demands” (p. 106). Others in the air force feared “creeping centralization” under McNamara’s civilians. Nonetheless, phased planning became a cornerstone of the military research and development system.

Glenn Bugos, in “System Reshapes the Corporation: Joint Ventures in the Bay Area Rapid Transit System, 1962–1972,” also explores the management of large projects. Stressing that large construction projects require the use of a systems approach to coordinate and control the activities of numerous semiautonomous contractors, he points out that firms responsible for such a multifaceted task have often resorted to joint ventures in order to pool the needed organizational and technical resources. Joint ventures also spread the risks of the heavy financial liabilities for nonperformance that companies must carry in guaranteeing and fulfilling performance specifications.

Bugos uses the history of the Bay Area Rapid Transit System (BART) to exemplify the joint-venture approach to system building. BART is a seventy-five-mile-long rapid transit system that rings the lowlands around San Francisco Bay. It includes a Bay tunnel connecting Oakland and downtown San Francisco. Begun in 1966, the system was completed in 1975 at a cost of $1.6 billion.

BART serves as an instructive example of joint venturing. Not only did a managerial joint venture (JVCO) parented by Parsons Brinckerhoff Hall & McDonald, an engineering firm specializing in design, and by the Bechtel Group, specializing in managing construction, become BART’s project manager, but also a number of the project’s construction contractors formed joint ventures to pool resources.

Bugos concludes that the BART Project provides an outstanding example of the need in an increasingly complex human-built world for organizational forms such as joint venturing. They provide vast networks, or systems, of diverse organizations able to cooperate for a specified period to carry out a defined project. He calls this a “postmodern” turn in management, one comparable to the spread of scientific management early in the twentieth century.

Experts and Expertise

The systems approach spread in the United States in the 1950s and early 1960s concurrently with the increasing deference of the public, the military, and the civil government to experts and expertise, especially to engineers, scientists, and managers who practiced systems sciences. Even though a few prominent engineers insisted that systems engineering as a commonsense approach had a long history extending back at least to those who organized the building of the Egyptian pyramids, others involved in the application of operations research, systems analysis, and system dynamics often associated their approach with a new variation on the scientific method. This bound the systems approach ever more tightly to expertise.

Imitating the physicists who had garnered enormous prestige as designers and developers of atomic and other weapons, the expert practitioners of system sciences touted their reliance on theory, method, and mathematics. People making these claims are sometimes described as having physics envy. Like so many physicists, they looked rather askance at the soft social sciences, making an exception only for economics as an auxiliary tool for problem solving. As a result, the systems approach as taken by many practitioners became less holistic and more specialized.

Experts rose to positions of influence not only in the United States but also in France. Gabrielle Hecht, in “Planning a Technological Nation: Systems Thinking and the Politics of National Identity in Postwar France,” writes about French engineers, economists, and professional administrators during the French Fourth Republic (1946–1958) and the early years of Charles de Gaulle’s Fifth Republic (1958–1969). Graduates of the prestigious grandes écoles, they cultivated an elitist ideology of public service. They believed that they were the ones who should preside over national reconstruction and modernization.

The experts argued that France needed to cultivate technology and applied science if it hoped to establish itself once again, after the humiliating defeat of World War II, as a great power from which French political influence, cultural traditions, and style could radiate. Many public intellectuals agreed that the country needed to rely on engineering, managerial, and scientific experts, though they disagreed on the extent of such reliance. Should the experts make high-level policy by crossing the boundary presumed to exist between technology and politics, or should they only offer alternatives to the political decision makers who would take into account factors other than the rational, quantifiable ones with which the experts dealt?

Humanistic and social science intellectuals who opposed a policy-making role for the experts pejoratively labeled them “technocrats.” Technocrat to these critics connoted an elite who mistakenly and destructively believed that social problems, as well as technical ones, could be solved by a technically informed, reductionist systems approach based on simplifying assumptions and the discounting of human values. Those engineers, managers, and scientists who believed that the modernization of France and restoration of French prestige rested in their hands countered that well-trained and responsible experts took an all-embracing systematic approach (vue d’ensemble) to solve national problems compounded of technical, political, economic, and social factors. Further, they contended that because of their scientific training, their policy recommendations would be value free.

The experts demonstrated their systems approach through the work of a Planning Commission, a state agency founded after World War II to develop multiyear plans to reconstruct and modernize the nation. Hecht characterizes these plans as producing

a set of models that described the national economy as a heterogeneous system composed of technological and economic artifacts, people, and social relationships, and driven by the interactions of these various components. (p. 151)

She suggests that the technological experts “sought to erode” the boundary between technology and politics, and that “systems thinking provided an ideal means for describing and legitimating this goal” (p. 154).

Hecht describes negative French reactions to the extreme claims of the experts; similar criticism arose in the United States, even within the ranks of systems-approach practitioners. Russell Ackoff, previously a leading academic theorist and consultant in operations research, published critical articles in the 1970s with titles like “The Future of Operations Research Is Past” and “Resurrecting the Future of Operational Research.” He asserted that American operations research “is dead even though it has yet to be buried.”3 Academics without field experience, with only “textbook” acquaintance, he argued, taught most operations research courses, and, as a result, their students had little contact with “real world” problems.4 Because operations research had become mathematically sophisticated but contextually naive and value free, managers pushed OR practitioners down into the bowels of their organizations. In Ackoff’s opinion, most humans seek value-laden goals. For this reason, advisers to policy makers needed to incorporate in their recommendations a humanistic point of view.5 Ackoff spoke less of problem solving and more of “managing mess.”6

Computer Mystique

Experts not only used the systems approach; they also drew on the arcane mystique that surrounded early computers. In the 1960s, manufacturers, especially IBM, introduced mainframe computers into military, academic, and business organizations. Popular imagination envisioned room-sized computers as great brains tended by white-coated specialists. Enthusiasts painted a halcyon picture of computers masterminding society’s future. Programmers developed software supposedly able to take over management functions. Graduates of management schools trained in the systems approach presented themselves as masters of computer tools.

The computer program PERT greatly enhanced the mystique of computer-empowered expertise. Booz, Allen & Hamilton, a management firm with operations research experience, took the lead in developing PERT for the U.S. Navy’s Special Projects Office (SPO), which had the task of managing the development of the submarine-launched Polaris intermediate-range missile. Prepared over a period of several months in 1958, PERT made a large and lasting impression on contemporaries in engineering and management.7

Programmed by PERT, a mainframe computer generated flow charts that portrayed a network of actions and events, thereby improving on the nondynamic bar charts long used for scheduling activities in the construction industry. By mid-1959 the Special Projects Office had applied PERT to much of the Fleet Ballistic Missile Program. Interest in PERT spread through the industrial, military, and management communities. The peak year for the introduction of PERT-like management programs, 1962, became known among aerospace firms as the “Year of Management Systems.”8
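The computation underlying such network charts can be suggested by a minimal sketch; the activity network and time estimates below are hypothetical, and the weighting (optimistic + 4 × most likely + pessimistic)/6 is the textbook PERT estimate, not a reconstruction of the SPO’s original program.

    # A hedged sketch of PERT arithmetic: expected durations from three-point
    # estimates, then project duration via the longest path through the
    # dependency network. Activities and estimates are hypothetical.
    activities = {
        # name: ((optimistic, most likely, pessimistic) weeks, predecessors)
        "design":  ((2, 4, 6),  []),
        "tooling": ((3, 5, 13), ["design"]),
        "build":   ((4, 6, 8),  ["tooling"]),
        "test":    ((1, 2, 9),  ["build", "design"]),
    }

    def expected(o, m, p):
        """Classic PERT three-point estimate of an activity's duration."""
        return (o + 4 * m + p) / 6

    earliest_finish = {}
    def finish(name):
        if name not in earliest_finish:
            (o, m, p), preds = activities[name]
            start = max((finish(q) for q in preds), default=0.0)
            earliest_finish[name] = start + expected(o, m, p)
        return earliest_finish[name]

    print(f"expected project duration: {max(finish(a) for a in activities):.1f} weeks")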

Harvey Sapolsky, a historian of Polaris, finds PERT, as used by the SPO, an effective management tool, but not for the reasons advanced by PERT advocates. Referring to it as “an alchemist combination of whirling computers, brightly colored charts, and fast-talking public relations officers,” he sees PERT as persuading potential critics of Polaris management that its experts used an arcane, high-tech tool of near miraculous capabilities.9

Donald MacKenzie’s essay, “A Worm in the Bud? Computers, Systems, and the Safety-Case Problem,” also raises questions about the efficacy of computer software, as well as about the reliability of large computers. Such lapses in efficacy and reliability create serious problems because computers, he reminds us, are at the heart of “nearly all the large technical systems of the late twentieth century” (p. 161).

The SAGE system designed in the 1950s to provide a defense against bomber attack was one of the first of the large, computer-dependent systems. A few years later IBM installed a complex computerized reservation system for American Airlines. Within a short period computerized systems to manage the financial transactions of large organizations became common. Computer switches for the telephone networks and computer-based information systems followed. The list is long and is still growing.

Even though computers can be programmed to respond to society’s needs, their spread causes considerable unease among knowledgeable people who realize that if crucial computerized systems go awry, social catastrophe may ensue. MacKenzie notes that computer-based command and control systems poised ready to launch intercontinental ballistic missiles are a special cause for concern. In 1986, C. A. R. Hoare, a world-renowned British computer scientist, remarked that “nobody trusts a computer; and the lack of faith is amply justified” (p. 163). Many of his fellow experts expressed similar concerns, thereby deflating computer mystique and eroding the prestige of computer experts.

Responding to the risks posed to large systems by computer software failures, experts developed several methods of assessing the likelihood of “bugs” causing a computer crash. Yet no “silver bullets” have been found. “Program testing,” one expert acknowledges, “can be a very effective way of showing the presence of bugs, but it is hopelessly inadequate for showing their absence” (p. 175). The challenge of program testing brought forth such abstruse techniques as reliability-growth modeling and random testing. Computer experts trained in mathematics attempted the difficult task of proving software reliability by deductive proof, as well as by inductive testing, but with less than satisfactory results.
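The quoted point, that testing reveals the presence of bugs but not their absence, can be illustrated with a minimal sketch of random testing; the planted defect and the input range below are contrived for the purpose.

    # Random testing of a function with a deliberately planted, rare defect.
    # A finite run will very likely pass even though the bug remains.
    import random

    def buggy_abs(x):
        """Meant to return |x|, but planted to fail on one input in two million."""
        if x == 123_456:          # the hidden defect
            return -x
        return x if x >= 0 else -x

    random.seed(0)
    trials, failures = 10_000, 0
    for _ in range(trials):
        x = random.randint(-1_000_000, 1_000_000)
        if buggy_abs(x) != abs(x):   # oracle: compare with the built-in
            failures += 1

    print(f"{failures} failures in {trials} random trials")  # almost surely 0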

Paradoxically, doubts about the safety of computer systems may have kept the number of highly disruptive failures low: “Beliefs that ‘bugs’ are pervasive … that computer systems can be dangerous,” MacKenzie concludes, “have been one of the factors that have kept the death toll in computer-related accidents modest” (pp. 182–183).

Commenting on the MacKenzie essay at the conference, Michael Mahoney stressed the continuation of the software crisis. Drawing an analogy between computer programs and industrial production systems, he observed that the essential process taking place in a programmed computer is a computation that transforms input data to output data. Like production engineers, software designers conceptualize a program as a system composed of many interacting computational units. Data being processed, however, do not move serially—and comprehensibly—from one processing unit to another, as is the case in a mechanical production process. Because the data move in feedback loops among the processing units, tracing the flow of production is not feasible. Software designers have tried various methods of analysis, including object-oriented programming, to control, localize, and fathom the flow, but the programmed computer shares the problems of the complex processes and organizations it simulates. Both the programmed computer and the human-built world defy control and analysis, Mahoney pointed out, because they push complexity to the limit.

Despite “the worm in the bud,” computer systems made their way into government agencies. Atsushi Akera, in “Engineers or Managers? The Systems Analysis of Electronic Data Processing in the Federal Bureaucracy,” analyzes the argument among agencies and experts about who should preside over data processing. His account begins in the 1950s, when a mass of data-processing responsibilities threatened to inundate the work force of the Patent Office, the Census Bureau, and the Old Age Bureau (social security records). These agencies, which employed hundreds of clerical workers using punch cards and other mechanical tabulation methods, were likely candidates for computer processing.

By 1950 the National Bureau of Standards, which presided over a number of governmental research and development activities, numbered among its staff engineers and scientists technically informed about computers. Overburdened government agencies turned to these experts for advice about procuring computers and using them for data processing. Mary Elizabeth Stevens of the Bureau of Standards took the lead in providing these advisory services. Her reports for various agencies, the Patent Office among them, revealed that the introduction of computers usually required changes in management structure. This raised a question that Akera explores in detail: should the technically trained (engineers and scientists) or the managerially trained (professional managers) have prime responsibility for the use of the computer?

Akera describes the struggle among committees of experts and organizations, including the Bureau of Standards and the Bureau of the Budget, to claim the contested high ground of expertise. Would-be advisers insisted that not only the data-processing function but also entire management systems needed reorganization to take advantage of the computer. They made general and vague claims for the then-arcane technique of systems analysis, which had been honed at the RAND Corporation, as the means for computer-grounded managerial reform. (David Hounshell, David Jardini, and Roger Levien discuss the RAND approach in their essays.) Akera astutely observes that the claims of the would-be expert advisers fell mostly on deaf ears and barren soil because experienced bureaucrats presiding over the agencies had no intention of surrendering their political power to advising experts.

Besides serving as data-processing centers for large systems, computers can simulate the worlds in which they function. Paul N. Edwards, in “The World in a Machine: Origins and Impacts of Early Computerized Global Systems Models,” chronicles and analyzes the history of digital computer-based modeling in the thirty years following World War II. He argues that the development of computer models of worldwide phenomena has created among experts and the informed public a concept of the world as a system of interacting, dynamic forces that includes the physical, such as climate, and the social, such as population.

To develop his thesis, Edwards discusses the histories of climate models and of Jay Forrester’s “system dynamics” models of global sociotechnical change. In these models, Edwards finds a dynamic interaction between the computer modelers’ hunger for data and the menu of data supplied to them. He shows that some of the most influential model conceptualizers, including Forrester, emphasize thinking through and designing a model that embraces numerous nonlinear feedback interactions of broad-ranging phenomena rather than the virtually impossible task of obtaining full and accurate data. As a result, their models are long on arbitrary standardized assumptions.

Edwards also explicates a paradox. In the case of climate models involving parameters and long time-series, such as the energy transfer from sun to earth and the behavior of the ocean as a heat sink, scientists using the global models depend heavily on data generated by interpolation of intermediate values from the scarce observed data. Programmed computers do the interpolation as well as the smoothing out of anomalous data, and so computers running and testing the models depend heavily on data generated by other computers.
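A minimal sketch, with invented observations, can suggest the interpolation and smoothing step described here; the numbers are hypothetical, and the methods (linear interpolation, a moving average) stand in for the more elaborate routines actually used.

    # Sparse observations are filled in by interpolation and anomalies damped
    # by a moving average before a model consumes them. All values invented.
    import numpy as np

    obs_years = np.array([1900, 1925, 1950, 1975, 2000])
    obs_values = np.array([13.8, 13.9, 14.6, 14.2, 14.5])  # one anomalous spike

    # Fill in intermediate years by linear interpolation.
    years = np.arange(1900, 2001)
    filled = np.interp(years, obs_years, obs_values)

    # Smooth out anomalies with a simple 11-year moving average.
    kernel = np.ones(11) / 11
    smoothed = np.convolve(filled, kernel, mode="same")

    print(filled[50], smoothed[50])  # the 1950 value before and after smoothing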

Modelers check their representations against empirical data. For example, Forrester tested his model of urban development by checking to see whether a computer model supplied with available data about the condition of a city around 1900 would generate, by iterating sets of nonlinear feedback equations, the condition of the city a half century later.
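A toy iteration can convey the mechanics, though not the content, of such a test; the two feedback equations and all coefficients below are invented and are not Forrester’s.

    # Invented stock-and-flow toy: population and housing linked by nonlinear
    # feedback, iterated year by year from hypothetical circa-1900 conditions.
    population, housing = 100_000.0, 30_000.0

    for year in range(50):
        crowding = population / (3.0 * housing)   # occupancy relative to capacity
        # Growth is attractive while the city is uncrowded; it reverses as
        # crowding rises. Construction, in turn, chases crowding.
        population += population * 0.03 * (1.0 - crowding)
        housing += housing * 0.02 * crowding

    print(f"after 50 years: population {population:,.0f}, housing {housing:,.0f}")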

Edwards characterizes the bold model builder:

Forrester believed that sorting out the structure and dynamics of a system using a computer model was the key to understanding. Data could come later, in part because a systems model could help reveal which data might be most important. (p. 239)

Forrester’s approach was not alone in being criticized. Critics argued that the RAND Corporation’s quantifiable models eliminated too many variables, and they questioned RAND’s use of systems analysis dependent upon these models. David A. Hounshell analyzes these reactions in “The Medium Is the Message, or How Context Matters: The RAND Corporation Builds an Economics of Innovation, 1946–1962.”

In the 1950s RAND broadened its systems approach by including more economists and social scientists among its 400 or so researchers. Ironically, in view of RAND’s former wholehearted commitment to a systems methodology, criticism of the systems-analysis approach began to fester among a small group of RAND economists, Armen A. Alchian and Burton Klein prominent among them. They decided that RAND’s studies for the air force, which were designed by the systems analysis experts, depended on too many assumptions about the future conditions under which projected weapons systems would be deployed.

The air force studies in question also specified the ultimate characteristics of weapons systems before extensive research and development had been undertaken. Making such predictive assumptions led RAND’s systems analysts and the air force to decide prematurely which characteristic of envisioned weapons systems should be optimized. In short, critics believed that an overconfident RAND and air force tended to rely upon the development and deployment of prematurely and rigidly specified weapons systems for an infinitely flexible and unpredictable future.

Alchian, Klein, Kenneth Arrow (a future Nobel laureate from Stanford who served RAND as a consultant), and several other economist critics argued that RAND studies were erroneously recommending procurement decisions before adequate research and development had been carried out for the weapons systems under consideration. The air force might well depend on systems analysis to choose among available weapons systems, but it should not depend on systems analysis to choose among systems that had not passed through a research and experimental development phase.

The essence of research, Klein and others asserted, is asking a variety of questions about as-yet-undiscovered territory, then taking a number of different routes to explore and map the territory. Without the map, decision makers could not soundly decide which sites should be mined.

Reinforced by research and development case studies done by Richard Nelson, a young RAND economist, critics of the way in which systems analysis was being used wrote papers advocating funding many small exploratory research projects instead of just a few linked closely with future procurement and deployment. The Sputnik crisis in 1957 reinforced their criticisms because a panic-prone America decided that faulty military research and development had led to a missile gap.

The RAND economists published their argument in various quarters. They stressed that private industry would not do the exploratory research and experimental development needed because the fruits of such basic, or pure, activity are not easily appropriated as profits on investment, the patent system notwithstanding. It followed, therefore, that the government, including the military, should subsidize diverse research projects in universities and nonprofit research centers, projects that would offer both the military and private industry a multitude of options for mission-oriented development and deployment.

From Swords to Ploughshares

In the 1960s the systems approach spread from RAND and military-funded projects into the government and industrial sectors. Systems enthusiasts, especially management experts and social scientists, successfully promoted the introduction of the systems approach into the administrative hierarchies of the Defense Department and the civil government. The Great Society programs of the Johnson administration adopted the systems approach.10 Such transfer from the military to the civil realm has been common in the United States. The early nineteenth-century “American system of production” and early twentieth-century “scientific management” are outstanding examples.11

Gullible enthusiasts as well as experts asserted in the 1960s that the systems approach would make possible the creation and control of technological and social systems in the vast and complex human world.12 “Man must learn to deal with complexity, organized and disorganized, in some rational way,” argued one pair of advocates.13 So emboldened, the experts and their disciples confidently, even rashly, tackled a multitude of the complex technological and social problems plaguing society in the 1960s.14

Among these problems none seemed more pressing than the social and physical dilemmas of the large cities. Referring to the space program, which depended heavily on the systems approach, Vice President Hubert Humphrey eloquently captured a widespread faith when he said in 1968:

The techniques that are going to put a man on the Moon are going to be exactly the techniques that we are going to need to clean up our cities: the management techniques that are involved, the coordination of government and business, of scientist and engineer … the systems analysis that we have used in our space and aeronautics program—this is the approach that the modern city of America is going to need if it’s going to become a livable social institution. So maybe we’ve been pioneering in space only to save ourselves on Earth. As a matter of fact, maybe the nation that puts a man on the Moon is the nation that will put man on his feet first right here on Earth.15

The RAND Corporation and TRW Inc., a successor to the Ramo-Wooldridge Corporation, took leading roles in the transfer of systems techniques. David R. Jardini, in “Out of the Blue Yonder: The Transfer of Systems Thinking from the Pentagon to the Great Society, 1961–1965,” follows RAND’s move to civil projects. Davis Dyer, in “The Limits of Technology Transfer: Civil Systems at TRW, 1965–1975,” traces the fortunes and misfortunes of TRW as it makes a similar move.

Jardini finds that by the 1960s the relationship between RAND and the air force had begun to turn sour. General Curtis LeMay, a blunt and domineering air force chief of staff, made clear that he expected RAND to do “narrowly purposive” studies and to follow the “Air Force party-line.” This directive undermined the substantial independence and objectivity that RAND researchers highly valued. RAND responded by accepting contracts from other Defense Department agencies, NASA, and the Atomic Energy Commission, a step that further widened the rift.

Threatened by air force control and colonization, some top-level researchers at RAND advocated turning to the civil sector for contracts. This recommendation caused considerable stress among RAND researchers, RAND management, the RAND Research Council, and the RAND Board of Trustees. Reactions clustered in three categories: air force loyalists; those favoring a closer alliance with Robert McNamara’s civilians at the Office of the Secretary of Defense; and others wishing to take contracts from civil government agencies whose budgets had been swollen by Johnson’s Great Society Programs on education, poverty, and the plight of the city.

After Franklin R. Collbohm, an air force loyalist, was replaced in 1965 as RAND head, the organization took a number of civil contracts, including ones from the Office of Economic Opportunity, the Department of Transportation, the National Institutes of Health, the Department of Housing and Urban Development, the Ford Foundation, and the Russell Sage Foundation. In addition, it signed four contracts with New York City to do studies of the police, fire, health, and housing departments. RAND thus moved into a world far messier than that of strategic studies. Newly focused, RAND remained a leading nonprofit research center of its kind, but there were a growing number of competitors that challenged its preeminence. Nevertheless, RAND played a path-breaking role in systems sciences and nurtured a host of eminent scholars who used and spread its approach.

Dyer describes the efforts of TRW in the 1960s to transfer into the civil realm management skills honed by its predecessor, Ramo-Wooldridge, during the Atlas Project. TRW believed its expertise enabled it to solve problems associated with managing large and complex bodies of information, coordinating urban traffic flows, improving the physical environment, providing mass housing, deploying health systems, and raising the efficiency of energy generation and usage. Dyer concludes, however, that despite TRW’s optimistic effort most of its civil ventures “lost money or eked out meager returns during brief lifespans” (p. 360). He also delineates the troubles that TRW, especially its Systems Group, encountered in beating swords into ploughshares. These difficulties, however, did provide positive learning experiences.

Simon Ramo, a founder of Ramo-Wooldridge and a developer and articulator of the systems approach, as TRW executive vice president spurred on the firm’s technology and management transfer efforts. He became, according to Dyer, “a kind of self-monitored radar to anticipate coming perils or opportunities” associated with “the imbalance between accelerating technology and [the nation’s] lagging social maturity” (p. 363). Accelerating technological change brought the computer, space exploration, and the breaking of the genetic code; lagging social maturity resulted in crowded cities, a polluted environment, and depletion of energy sources. The systems approach, Ramo contended, offers a happy combination of solutions for society and profits for corporate enterprise.

California state and local governments awarded contracts to TRW to plan regional development, organize waste management, rationalize transportation systems, and process information about crime. The federal War on Poverty Program brought contracts for the company to train government personnel in systems management techniques. Small by defense contract standards, these civil contracts nevertheless stimulated a bold civil-systems initiative. With defense and NASA contracts on the wane in the 1970s, TRW formed joint ventures with other corporations to do community planning, housing, pollution control, data processing, and financial-investment analysis.

But TRW found civil governments far more difficult to deal with than the Office of the Secretary of Defense. With numerous initiatives showing losses, the company in the late 1970s began to liquidate its ventures in civil systems.

Dyer provides several overarching explanations for TRW’s frustrations and disappointing performances in transferring the systems approach. The approach proved neither nimble nor flexible enough to deal with small projects, especially those involving disputatious and jurisdiction-guarding local governments. Entrenched bureaucracy resisted the deskilling impact of the systems approach. And some TRW managers and engineers experienced in aerospace projects just found the civil realm too messy. Recently the company has drawn upon its systems-approach experience to move once again into the civil sector with its “enormous, complex problems where there’s a highly litigious and emotional set of characters who are influencing the particular problem” (p. 380). These characters inhabit a realm commonly called “politics.”

Other Nations

Even though the conference concentrated on the spread of the systems approach in the United States, the approach transferred to other countries, too. The systems approach took root in Japan immediately after World War II when the U.S. occupation authorities encouraged the transfer of American engineering and managerial techniques. In France, the Commissariat à l’Énergie Atomique and the Électricité de France encouraged the deployment of a systems approach to preside over the construction of nuclear power plants. The influence of the systems approach can also be traced in Germany and Sweden, especially in the design and construction of large-scale projects.

Arne Kaijser and Joar Tiberg chronicle one case of transfer in “From Operations Research to Futures Studies: The Establishment, Diffusion, and Transformation of the Systems Approach in Sweden, 1945–1980.” Immediately after World War II the Swedish military cultivated operations research, depending on physicists and mathematicians, especially those in academia, to serve as consultants. In time, the military established small systems-research units, placing emphasis on systems analysis as a means of weighing the merits of prospective weapons systems. They also decided, with some guidance from visiting RAND researchers, that the quantitative emphasis of the physicists and mathematicians needed broadening to include a social science approach to messy problems—as had been done at RAND. A core of systems people, most of them civilians, located at Sweden’s National Defense Research Institute, acquired so much influence with policy makers that Kaijser and Tiberg call them the “spider” in Sweden’s defense network.

As in the United States, operations research and systems analysis spread from the military to the civil realm. The systems approach took root because of a scientific management tradition in Swedish industry. Engineers and management experts used operations research techniques to rationalize existing production processes and the organizational structure of industrial corporations. The Royal Institute of Engineers (IVA) set up a section for operations research, which promoted the spread of the techniques throughout the industrial sector.

The parallels in development of the systems approach in the United States and Sweden are striking, in part because of the longstanding Swedish reliance on technology transfer. Much as Jardini and Dyer in their essays on RAND and TRW document a move of systems experts into public policy, so Kaijser and Tiberg recount the increasing emphasis by Swedish systems experts in the 1960s and 1970s on civil public-policy issues, such as those arising in the health sector. The systems approach became politicized when the governing Social Democratic party established in the prime minister’s office a Secretariat for Future Studies staffed by systems experts who applied their tools to highly controversial issues, such as the future of nuclear energy. Kaijser and Tiberg suggest that seeming political bias reduced the expert status of the practitioners. They conclude that in Sweden the systems approach has now been incorporated into broader academic disciplines and that systems thinking is taken for granted. “The systems approach community,” they believe, “has dispersed and lost its identity” (p. 407).

In his commentary on the Kaijser and Tiberg essay, John Staudenmaier argued that the systems-approach community lost influence because of overreaching. When the experts began to advise policy makers on broad social issues such as health care, their advice focused on technical and economic factors that did not really reflect the messy complexity of the issues. Staudenmaier drew a parallel between the loss of momentum of the systems approach, both in Sweden and the United States, and the comparable loss of influence of Frederick W. Taylor’s scientific management approach early in this century. Both schools of management proved, according to Staudenmaier, “too rigid, too abstract, too clean.”

International Framework

The systems approach also embedded itself in an international framework. A conference panel on the history of the International Institute for Applied Systems Analysis (IIASA) included members—Harvey Brooks, William C. Clark, Roger E. Levien, Alan McDonald, and Howard Raiffa—who had participated in the establishment and operations of IIASA. The essays by Brooks, McDonald, and Levien summarize the panel’s presentations and enlarge upon their remarks.

Brooks and McDonald, in “The International Institute for Applied Systems Analysis, the TAP Project, and the RAINS Model,” note that scientific organizations from twelve countries, including the United States and the Soviet Union, founded IIASA in 1972. The founders wanted IIASA to apply techniques of systems analysis to solve urban, industrial, and environmental problems that transcend international boundaries. Those who presided over the birth of IIASA also saw it as a “bridge building” initiative to reduce East-West tensions. Located near Vienna, IIASA invited researchers to stay in residence for varying periods, the average being about two years. They came primarily from the countries of the scientific organizations supporting IIASA.

IIASA’s governing council, usually headed by a Russian, and its director, customarily an American, often chose problems associated with population, economic development, global warming, and other global environmental issues. This focus brought social scientists as well as physical scientists to Vienna. Brooks and McDonald believe that IIASA has succeeded in advancing both the theory and application of systems analysis as responses to these problems. This achievement reminds us of the comparable success of the RAND Corporation in cultivating the systems approach, a comparison explored by Levien in his essay.

Brooks and McDonald discuss five hallmarks of the IIASA approach to problem solving, which include an international and interdisciplinary emphasis and the maintenance of credibility with both scientists and decision makers. To illustrate this approach, Brooks and McDonald offer a case history of one of the organization’s most successful ventures, a transboundary air-pollution project using the RAINS (Regional Acidification Information and Simulation) model of the impact of acidification in Europe. This informative case history focuses mostly on the changing nature, validity, and limitations of the RAINS computer model. Brooks and McDonald stress the ways in which the model has been adapted to the realities of policy making without losing its scientific credibility. The case history recalls Paul Edwards’ analysis of global systems models.

Levien, in “RAND, IIASA, and the Conduct of Systems Analysis,” provides a comparison of the systems approaches of these organizations. Having held leadership positions in research and management at RAND from 1956 to 1974 and having been director of IIASA from 1975 to 1981, Levien is well qualified to discuss conditions at RAND conducive to “first class” systems analysis and to draw lessons from its history. He is also well positioned to analyze the reasons why he and others found the challenge of nurturing impressive systems analysis at IIASA so much more difficult than at RAND.

At RAND an interdisciplinary approach impressively met the standards of the academic community while also responding to the need of governmental agencies—especially the air force—for realistic operational and policy recommendations. A major reason for its success was an environment at RAND that supported a fruitful exchange among people with diverse disciplinary commitments, including some who felt most comfortable with traditional scholarly research and others who preferred to focus on problem-oriented projects and interactions with clients.

Levien’s account of RAND provides an enlightening context for the essays by Hounshell and Jardini. He follows the early evolution of operations analysis, systems analysis during the Cold War, and policy analysis during the 1960s. Like Jardini, Levien suggests some of the reasons for the limited success of RAND’s ambitious effort at knowledge transfer. Levien also carries his overview through the rise of an international systems approach and policy analysis. The lessons he draws from RAND’s history help explain its successes. He stresses the advantages resulting from an interdisciplinary approach and from the freedom granted researchers to choose their own problems. Levien contrasts this history with the problems encountered by IIASA leadership, including diverse points of view with regard to problem choice and expectations, as well as variations in competence among transient researchers.

Spread by Analogy

Two books by Norbert Wiener, an MIT mathematician, accelerated the spread of the systems approach to fields other than engineering and project management. Cybernetics, or Control and Communication in the Animal and the Machine (1948) influenced a professional audience, while The Human Use of Human Beings: Cybernetics and Society (1950) reached a general audience. Wiener’s views stimulated scientists in various disciplines to see analogies between the systems they studied and the feedback systems he described. Within a decade, seeing the natural and human-built worlds as communication and control, or information, systems became commonplace among social and natural scientists, especially among those exploring molecular and developmental biology.

The information theory of Claude Shannon, an electrical engineer who worked at Bell Laboratories, complemented Wiener’s ideas. Shannon’s “A Mathematical Theory of Communication,” published in 1948, also encouraged scientists and social scientists to find information and control analogies. During World War II, both Shannon and Wiener had drawn on technological practice, the former in telephone communications and cryptography and the latter in gunfire control, in formulating theoretical concepts of information and control. After the war, increasing understanding of the workings of digital computers reinforced their closely related theories of information, communication, and control.
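
Shannon’s central quantity can be written down in a few lines. A minimal sketch, assuming only the standard definition of entropy as the average information of a symbol source; the probabilities below are invented for illustration:

    from math import log2

    def entropy(probabilities):
        """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
        return -sum(p * log2(p) for p in probabilities if p > 0)

    # A fair coin carries one bit per toss; a heavily biased coin carries less.
    print(entropy([0.5, 0.5]))  # 1.0
    print(entropy([0.9, 0.1]))  # roughly 0.47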

Lily E. Kay, in “How a Genetic Code Became an Information System,” explains the ways in which concepts borrowed from Wiener and Shannon spread by analogy into molecular biology. The transfer took place mainly on the level of theoretical concepts, ones that were incorporated into laboratory practice by 1960. In search of theory with greater power to explain life processes, especially heredity, scientists used a host of metaphors from communications/information theory. Molecular biology began to represent itself as “a communication science, allied to cybernetics, information theory, and computers” (p. 463). Hereditary transmission came to be thought of as behaving in ways analogous to a guidance and control system. Such vocabulary as information, feedback, messages, codes, words, alphabets, and texts soon constituted the discourse of molecular biology.

Nobel laureate Jacques Monod argued that the human organism had a cybernetic, feedback system controlling its chemical processes; he even saw an analogy between gene-enzyme regulation and the automation circuitry of ballistic missiles. Another Nobel laureate, François Jacob, compared the genetic code to a computer program. Heredity functions, he believed, like “the memory of a computer.” In 1952 another advocate of an information-science-based biology offered a scriptural representation of the protein paradigm of heredity: for some scientists, proteins became messages and amino acid residues an alphabet.
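
The force of Jacob’s analogy is easy to see if the genetic code is written out as what a programmer would call a lookup table. In the minimal sketch below, the codon assignments are real, but the function and the translated fragment are our illustrative construction, not anything from the 1950s literature:

    # The "alphabet" is A, U, G, C; the "words" are three-letter codons;
    # the "message" is the protein they spell out.
    CODON_TABLE = {
        "AUG": "Met",   # methionine, also the start signal
        "UUU": "Phe",   # phenylalanine
        "GGC": "Gly",   # glycine
        "UAA": "STOP",  # a stop codon
    }

    def translate(mrna):
        """Read an mRNA string three letters at a time, like a stored program."""
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE.get(mrna[i:i + 3], "?")
            if residue == "STOP":
                break
            protein.append(residue)
        return protein

    print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']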

James Watson and Francis Crick’s 1953 description of DNA structure stimulated scientists cultivating an information-based biology to attempt to break the DNA code by using cryptanalytic techniques and mainframe digital computers. Communication engineers, biologists, mathematicians, and physicists attacked the DNA problem over the next decade. Their persistent and ingenious code-breaking endeavors, however, failed to unravel the enormously complex behavior of DNA. But the decade of metaphoric transformation of molecular biology, Kay concludes, provided a shaping context in which subsequent research in the field and the Human Genome Project developed.

In his commentary on Kay’s paper, Timothy Lenoir argued, like Kay, that information and cybernetic theory as articulated by Shannon and Wiener fell short of explaining the coded messages controlling heredity because information theory is about syntax and not semantics, or meaning. He also agreed with Kay that the early information models representing molecular biology amounted to little more than metaphors, analogies, and tropes. On the other hand, Lenoir faulted Kay’s paper for not coping with the problem of explaining how biology has nonetheless become an information science.16

Conclusion

Taken together, the essays in this volume provide a history of the early decades of an innovative style of management and complex process analysis that took root after World War II. The systems style transferred from the military into the civil realm, where it was often frustrated by political considerations and other factors that can be subsumed under the category of “messy complexity.”

Nevertheless, a more flexible systems approach to management, especially of technological systems, has, as Arne Kaijser and Joar Tiberg suggest, now been incorporated into a general repertoire of managerial practice. The influence of the systems approach is comparable to that of Frederick W. Taylor’s scientific management earlier in this century.

Acknowledgment

We are indebted to Fred Quivik of the University of Pennsylvania for the information on citations.

Notes

(1.) The resulting compromise, called “black-boxing,” allows local research and development contractor teams to fulfill system specifications for components in various ways. When, for example, systems engineers specify the thrust required from a missile propulsion engine, the propulsion design team has leeway in choosing the mechanical, electrical, and chemical means for achieving these specifications. The systems engineers need not look into the “black box” within which the team is working and micromanage the means used to achieve the specified ends. Overarching systems management coordinates and schedules the numerous black-boxed subprojects.
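
In software terms, black-boxing is the familiar discipline of programming to an interface. A minimal sketch, assuming invented names and numbers (PropulsionEngine, the 800 kN figure) that stand in for a real specification:

    from abc import ABC, abstractmethod

    class PropulsionEngine(ABC):
        """The black box: only the specified output is visible from outside."""
        REQUIRED_THRUST_KN = 800.0  # system-level specification (invented figure)

        @abstractmethod
        def thrust_kn(self) -> float:
            ...

    class SolidFuelEngine(PropulsionEngine):
        """One team's choice of means; systems engineers never look inside."""
        def thrust_kn(self) -> float:
            return 820.0  # meets the specification by solid-propellant means

    def verify(engine: PropulsionEngine) -> bool:
        # Overarching systems management checks the spec, not the means.
        return engine.thrust_kn() >= engine.REQUIRED_THRUST_KN

    print(verify(SolidFuelEngine()))  # True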

(2.) Most of the commentators made informal oral presentations that have not been included in this volume.

(3.) Russell L. Ackoff, “The Future of Operations Research Is Past,” Journal of the Operational Research Society 30.2 (1979): 93–104; and “Resurrecting the Future of Operational Research,” Journal of the Operational Research Society 30.3 (1979): 189–199. For Ackoff’s earlier views, see “The Development of Operations Research as a Science,” Operations Research: The Journal of the Operations Research Society of America 4.3 (1956): 265–295.

(4.) Ackoff, “The Future of Operations Research Is Past,” 93.

(5.) Ibid., passim.

(6.) Russell Ackoff, interview, 4 November 1993, in Philadelphia, Pennsylvania. See Russell L. Ackoff, “The Art and Science of Mess Management,” Interfaces 11.1 (1981): 20–26.

(7.) Erik Rau, “Polaris,” unpublished essay, University of Pennsylvania, 1993.

(8.) Harvey Sapolsky, The Polaris System Development (Cambridge: Harvard University Press, 1972), 112.

(9.) Author’s interview with Robert Fuhrman, 25 June 1995, at the headquarters of the Lockheed Martin Corporation in Crystal City, Arlington, Virginia.

(10.) In this section we draw on a chapter entitled “Spread of the Systems Approach” in Thomas P. Hughes, Rescuing Prometheus (New York: Pantheon Books, 1998), 141–195. We are especially indebted to Fred Quivik, Atsushi Akera, and Erik Rau, research assistants on the Mellon Systems Project at the University of Pennsylvania, for research, essays, and insights helpful to us in following the spread of the systems approach. David Foster, formerly an undergraduate student at the University of Pennsylvania, and Elliott Fishman, a graduate student at the University of Pennsylvania, also provided valuable research assistance.

(11.) See the introduction by Merritt Roe Smith and the essays in Military Enterprise and Technological Change: Perspectives on the American Experience (Cambridge: MIT Press, 1985). See also the discussion of the American systems of production in David A. Hounshell, From the American System to Mass Production, 1800–1932: The Development of Manufacturing Technology in the United States (Baltimore: Johns Hopkins University Press, 1984); and David F. Noble, Forces of Production: A Social History of Industrial Automation (New York: Alfred A. Knopf, 1984). For a detailed military-history bibliography emphasizing technology in the military, see Barton C. Hacker, “Military Institutions, Weapons, and Social Change: Toward a New History of Military Technology,” Technology and Culture 35 (October 1994): 768–834. On military influences on science and engineering, see Paul Forman, “Behind Quantum Electronics: National Security as Basis for Physical Research in the United States, 1940–1960,” Historical Studies in the Physical and Biological Sciences 18.1 (1987): 149–229; and Stuart Leslie, The Cold War and American Science (New York: Columbia University Press, 1992). See also the perceptive remarks in Alex Roland, “Science and War,” in Sally Kohlstedt and Margaret Rossiter, eds., Historical Writing on American Science (1985).

(12.) Author’s interview with Alexander Kossiakoff, 3 April 1995, at Johns Hopkins University Applied Physics Laboratory.

(13.) John N. Warfield and J. Douglas Hill, A Unified Systems Engineering Concept, Battelle Monographs, ed. Benjamin Gordon (Columbus, Ohio: Battelle, 1972), 1.

(14.) These remarks on the spread of the systems approach are taken from Hughes, Rescuing Prometheus, 141–195.

(15.) Quoted in Melvin S. Day, “Space Technology for Non-Aerospace Applications,” 1968 Wescon Technical Papers, vol. 12, pt. 5, session 23, paper 4, 1968, 5–6.

(16.) Kay responds that molecular biology became an information science only in the colloquial sense of the word, as when library science, for example, is called an information science. Molecular biology, she continues, never became an information science in the 1950s sense of the word.