# Transforming Finance

# Abstract and Keywords

This chapter examines the significant changes introduced into the teaching of finance at universities across the United States. It finds that financial economics was transformed from a predominantly descriptive and institutional academic discipline into a more analytical, economic, and mathematical model-based subject. These changes were driven by the extensive efforts of prominent economists and financial researchers, and by the random-walk and efficient-market hypotheses. The resulting analytical discipline has helped students of financial economics adopt a more practical approach to analyzing and studying developments within stock markets.

*Keywords:*
financial economics, United States, academic discipline, financial researchers, economists

In the 1950s, finance was a well-established part of the curriculum in the business schools of U.S. universities. A typical course would cover matters such as “institutional arrangements, legal structures, and long-term financing of companies and investment projects” (Whitley 1986a, pp. 154–155). What Weston (1966, p. 4) calls “the chief textbook of finance” had for several decades been *The Financial Policy of Corporations* by Arthur Stone Dewing, a professor of finance at Harvard University. The text had first appeared in 1919, and by 1953 its 1,500 pages required two separate volumes.

Dewing began by presenting corporations as institutional and legal entities. He then discussed the ways in which corporations raise money by issuing securities, and described the different varieties of such securities. One main category was (and, of course, still is) stocks: those who buy a corporation’s stock gain rights of ownership in it, and if circumstances are favorable they receive periodic dividend payments. The other main category was and is bonds, a tradable form of debt. (A corporation’s bonds normally commit it to pay a set capital sum at a given date, and until then to pay set amounts in interest.) Dewing discussed the overall valuations of public utilities and of corporations, the basic techniques of accountancy, the causes and forms of business expansion, and the necessity sometimes to “remold the capital structure of the corporation” (Dewing 1953, p. 1175).

Dewing’s chapters contained many historical asides and some allusions to psychology. His view was that the “motives [that] have led men to expand business enterprises … on the whole … are not economic but rather psychological … the precious legacy of man’s ‘predatory barbarism’” (1953, p. 812). What his book did not contain was mathematics beyond simple arithmetic. His primary focus was on institutions and financial instruments, rather than on markets, and his account of those institutions and instruments was descriptive rather than analytical.

Dewing’s textbook had been criticized by one of his Harvard colleagues even in 1943, but the main thrust of the criticism was that it was no longer fully up to date in the topics it covered, not that it should have been much more analytical or more mathematical (Hunt 1943).^{1} In the 1950s, much research in finance remained descriptive and untheoretical. Peter Bernstein (1992, p. 46) notes that “at most universities, the business school and economics faculties barely greeted each other on the street.” The *Journal of Finance*, which began publication in 1946, was the field’s leading periodical. “Most of the articles the *Journal* published,” Bernstein writes, “had to do with Federal Reserve policy, the impact of money on prices and business activity, taxation, and issues related to corporate finance, insurance, and accounting. The few articles that appeared under the rubric ‘Investments’ dealt with topics like liquidity, dividend policy, and pension funding. In issues up to 1959, I was unable to find more than five articles that could be classified as theoretical rather than descriptive. The rest contain plenty of numbers but no mathematics.” (1992, p. 42)^{2}

The topic of this chapter is the move from this predominantly descriptive and institutional approach to the academic study of finance to the analytical, economic, and increasingly mathematical viewpoint that is the focus of this book. The shift had three main early strands. (A fourth, somewhat later strand is option-pricing theory, to be discussed in chapter 5.)

One strand was the work of the economists Franco Modigliani and Merton Miller. It was the most direct early challenge to the older approach, and, in the words of a scholar whose work straddled the two approaches, it was the most important exemplar of the transformation of “the study of finance from an institutional to an economic orientation” (Weston 1989, p. 29).

A second strand of the transformation of the study of finance was the research of Harry Markowitz, William Sharpe, and others in “portfolio theory”: the theory of the selection of optimal investment portfolios and of the economic consequences of investors behaving rationally in this respect.

The third strand was the random-walk and efficient-market hypotheses. These hypotheses offered iconoclastic accounts of the statistical form taken by stock-price changes and of the way prices incorporate relevant information. They can be traced back into the nineteenth century, but they came to decisive fruition in the United States in the 1950s and the 1960s.

# Modigliani and Miller

The work of Modigliani and Miller emerged from one of the crucial cockpits of the emerging management sciences in the mid-twentieth-century United States: the Graduate School of Industrial Administration at the Carnegie Institute of Technology (later Carnegie Mellon University). In 1948, William L. Mellon, the founder of the Gulf Oil Company, gave Carnegie Tech $6 million to establish the school.

The new business school’s three leaders were Lee Bach, its dean; Bill Cooper, an operations research scholar and economist; and Herbert Simon, an organization theorist who became a pioneer of artificial intelligence.^{3} Bach, Cooper, and Simon saw “American business education at that time as a wasteland of vocationalism that needed to be transformed into science-based professionalism, as medicine and engineering had been transformed a generation or two earlier” (Simon 1996, p. 139).

The ambition to make Carnegie Tech’s business school “science-based” implicitly raised a question, one that as early as 1951 saw the school sharply divided: on what sort of science should research and education in business be based? Herbert Simon helped inspire a “behavioral” account of firms, which was based in empirical studies and in organization theory and which differed radically from the traditional economic portrayal of firms as rational maximizers of profit. He “heckled” his economist colleagues at Carnegie Tech “about their ridiculous assumption of human omniscience, and they increasingly viewed me as the main obstacle to building ‘real’ economics in the school” (1996, p. 144).

Simon’s views could not be ignored. He was the “decisive influence” on the Graduate School of Industrial Administration (Modigliani interview), and he had the ear of its dean. Cooper, the third of the school’s leaders, disagreed sharply with Simon. Cooper even tried to have him step down from chairing the department of industrial management, accusing him of “intimidating” (Simon 1996, p. 144) the economists.

The economists in the Graduate School of Industrial Administration organized to defend themselves institutionally, but also responded intellectually. Thus John Muth, the initial proponent of the theory of rational expectations referred to in chapter 1, formulated the theory at Carnegie Tech and presented it explicitly as a response to Simon’s accusation that economists presumed too much rationality in individuals and firms. The hypothesis of rational expectations “is based on exactly the opposite point of view” to Simon’s, wrote Muth (1961, p. 316): “dynamic economic models do not assume enough rationality.”^{4}

Among Carnegie Tech’s economists were Franco Modigliani (1918–2003), a rising star of the discipline, and Merton Miller (1923–2000), who was to become one of the leading scholars taking the new approach to the study of finance. The two men differed intellectually and politically. Modigliani, a refugee from Italian fascism, was broadly a Keynesian. Miller’s mentor—“a great influence in my life and in bringing me … to serious modern economics” (Miller interview)—was George Stigler, a University of Chicago colleague, ally, and close friend of Milton Friedman. (For example, Stigler and Friedman had traveled together in 1947 to the initial meeting of the Mont Pèlerin Society, crossing the Atlantic by ocean liner and stopping off in London and Paris.)^{5}

Despite their differences, Modigliani and Miller, who had adjoining offices, found they had enough in common to build a productive, albeit temporary, partnership. Its products were not a direct response to Simon’s “behavioral” critique of economics. Their first joint paper “was meant to upset my colleagues in finance,” Modigliani recalled (1995, p. 153), not to upset Simon, whom they both respected even though they were on the opposite side to him in the intellectual dispute that split the Graduate School of Industrial Administration.

Modigliani and Simon were collaborators and remained friends, but Modigliani, Simon recalled (1996, p. 271), “never mistook me for an ally in matters of economic theory.” Miller told me that he saw Simon as “very hostile to economics.” While organizational scholars focused on observable behavior, a pervasive concept in the financial economics to be described in this chapter was the expected rate of return on a stock. Simon frequently pointed out to Miller that this rate was not observable. “He’d say, ‘well, I don’t understand you finance people. How can you hope to build up a science of finance when the basic unit of your field is not observable.’ … I had a lot of that from Herb…. But he was such a towering figure that I guess you put up with it.” (Miller interview)

Modigliani and Miller’s work addressed Carnegie’s divide implicitly rather than explicitly. They tackled central topics in finance, but unlike much existing scholarship in the field they did not do so in an institutional fashion, and they were theoretical rather than descriptive in their approach. Modigliani and Miller argued that economic reasoning showed the essential irrelevance of matters that appeared crucial from an institutional or behavioral perspective on finance.

Their separate routes to their first joint paper are described below and in an endnote.^{6} In the paper, they argued that in what they called a “perfect market” (Modigliani and Miller 1958, p. 268) neither the total market value of a corporation (the sum of the market values of its stocks and bonds) nor the average cost^{7} to it of capital was affected by its “capital structure”—that is, by the extent to which it finances its activities by borrowing (issuing bonds) rather than by issuing stock.

A second paper (Miller and Modigliani 1961) similarly dismissed as irrelevant another apparently major issue: the choice of how much of a corporation’s earnings to distribute as dividends to its stockholders, and how much to retain within the corporation. “In a rational and perfect economic environment” it should not matter “how the fruits of the earning power [of a corporation’s assets] are ‘packaged’ for distribution” to investors, Miller and Modigliani argued (p. 414).

High dividends would reduce investors’ capital gains; low dividends would mean higher capital gains. However, if the firm’s substantive activities were unaltered (as Miller and Modigliani assumed), to change dividend policy would be to affect only “the distribution of the total return in any period as between dividends and capital gains. If investors behave rationally, such a change cannot affect market valuations.” (1961, p. 425)
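A toy numerical sketch (the prices and earnings figures are invented, not taken from Miller and Modigliani) may make the point concrete: whether earnings are paid out or retained, the investor’s total return per share is the same; only its split between dividend and capital gain changes.

```python
# Invented numbers illustrating Miller-Modigliani dividend irrelevance:
# the firm's substantive activities (its earnings) are held fixed.
p0 = 95.0    # share price at the start of the period
eps = 5.0    # earnings per share generated during the period

# Policy A: pay all earnings out as a dividend; the price ends where it began.
div_A, p1_A = eps, p0
return_A = div_A + (p1_A - p0)     # dividend plus capital gain

# Policy B: retain the earnings; the value stays in the firm, so the price rises.
div_B, p1_B = 0.0, p0 + eps
return_B = div_B + (p1_B - p0)

print(return_A, return_B)   # identical totals, differently "packaged"
```

Under the perfect-market assumptions quoted later in the chapter (in particular, no tax differential between dividends and capital gains), a rational investor is indifferent between the two policies.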

Let me concentrate on the first of Modigliani and Miller’s claims: the irrelevance of capital structure. Stocks and bonds have very different characteristics—as noted above, the first is a form of ownership, the second of debt—so the balance between the two looked important. Bonds were seen as a safer investment than stocks (in the 1950s, the reputation of stocks had still not recovered fully from the disasters of the interwar years), yet taking on too much debt made a corporation look risky.

It therefore seemed plausible that there should be an optimum balance between the issuing of stocks and of bonds. One might expect that this optimum balance would depend on investors’ attitudes to risk and on matters such as the “psychological and institutional pressures” (Modigliani and Miller 1958, p. 279) on investors to hold investment portfolios of bonds rather than stocks. Those pressures were still strong in the 1950s.

The argument (outlined in more detail in appendix A) by which Modigliani and Miller sought to sweep aside such behavioral and institutional issues was as follows. Suppose that two investments are “perfect substitutes” (Modigliani and Miller 1959, p. 656), in other words that they are entitlements to identical income streams. If the prices of the two investments are not the same, then any holder of the dearer investment can benefit by selling it and buying the cheaper. Nothing need be assumed about investors’ willingness to take on risk, or about any “psychological” or “institutional” matters, other than that “investors always prefer more wealth to less” (Miller and Modigliani 1961, p. 412). “The exchange” of the dearer investment for the cheaper, Modigliani and Miller wrote, would be “advantageous to the investor quite independently of his attitudes to risk” (1958, p. 269).

Imagine two firms with identical expected earnings, identical levels of risk associated with those earnings, but different capital structures. Modigliani and Miller argued that if the total market values of the two firms differed, then “arbitrage”—the above switch from the dearer to the cheaper investment—“will take place and restore” equality of the two firms’ market values (1959, p. 259). By conducting one or other of the switches of investment described in appendix A, investors could take advantage of any discrepancy in market values while leaving themselves with a future income stream of the same expected size and same level of risk. “What we had shown was, in effect, what equilibrium means,” said Miller. “If … you can make an arbitrage profit then that market is not in equilibrium, and if you can show that there are no arbitrage profits then that market is in equilibrium.” (Miller interview)
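A small numerical sketch in the spirit of this “homemade leverage” argument may help (all figures are invented, and the sketch assumes, as Modigliani and Miller did, that an investor can borrow at the same rate as the firm):

```python
# Invented numbers: two firms with the same expected earnings but different
# capital structures, priced (out of equilibrium) at different total values.
earnings = 1_000_000   # expected annual earnings of both firms
r = 0.05               # interest rate, for the firm and the investor alike

V_U = 10_000_000       # total market value of U, the all-equity firm
D_L = 4_000_000        # market value of levered firm L's bonds
S_L = 6_500_000        # market value of L's stock
V_L = S_L + D_L        # 10,500,000 > V_U: a discrepancy arbitrage can attack

alpha = 0.01           # the investor owns 1% of L's stock

# Income from the stake in L: 1% of earnings after interest on L's debt.
income_L = alpha * (earnings - r * D_L)

# Switch: sell the stake in L, borrow 1% of L's debt personally, and buy
# a 1% stake in U. The income stream is unchanged, but cash is left over.
proceeds = alpha * S_L + alpha * D_L           # sale proceeds plus personal loan
cost_U = alpha * V_U                           # price of a 1% stake in U
income_U = alpha * earnings - r * (alpha * D_L)  # U's dividends minus interest

print(round(income_L, 2), round(income_U, 2))  # the two income streams match
print(round(proceeds - cost_U, 2))             # cash freed up by the switch
```

The leftover cash equals `alpha * (V_L - V_U)`: as long as the two market values differ, the switch is profitable, so in equilibrium the values must be equal.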

As was noted above, Modigliani and Miller’s claim of the irrelevance of capital structure (and also their claim of the irrelevance of dividend policy) rested on the assumptions of a “perfect market”:

… no buyer or seller (or issuer) of securities is large enough for his transactions to have an appreciable impact on the then ruling price. All traders have equal and costless access to information about the ruling price and about all other relevant characteristics of shares…. No brokerage fees, transfer taxes, or other transaction costs are incurred when securities are bought, sold, or issued, and there are no tax differentials either between distributed and undistributed profits or between dividends and capital gains. (Miller and Modigliani 1961, p. 412)

Were any of these, or any of Modigliani and Miller’s other assumptions (for example, that investors can buy stocks on credit) not to hold, then capital structure or dividend policy might no longer be irrelevant.

Modigliani and Miller knew perfectly well that they were assuming a world that did not exist. Taxation was the most obvious difference between their assumed world and empirical reality. American corporations were (and are) allowed to set interest payments on their bonds and other debts against their tax liabilities, but cannot do so for dividends on their stock. Until 2003, almost all individual investors in the United States faced a higher rate of tax on dividend income than on capital gains, and they can postpone the tax on capital gains until they actually sell their stock, so they may have good reasons to receive the benefits of a firm’s earnings as capital gains rather than as dividends. Modigliani and Miller were fully aware that matters such as this could invalidate their “irrelevance” propositions.

Modigliani and Miller’s intellectual strategy was to start with a highly simplified but in consequence analytically tractable world. Miller’s mentor, George Stigler, had been one of three economists whose “helpful comments and criticisms” were acknowledged by Friedman at the start of “The Methodology of Positive Economics” (1953a, p. 3). Stigler agreed with Friedman that “economic theorists, like all theorists, are accustomed (nay, compelled) to deal with simplified and therefore unrealistic ‘models’ and problems” (Stigler 1988, p. 75).

Having shown the irrelevance of capital structure and of dividend policy in their simple assumed world, Modigliani and Miller then investigated the consequences of allowing some more realism (especially in regard to taxes) back in. Just how far to go in adjusting their model to “reality” and the exact consequences of doing so became matters of dispute between Modigliani and Miller. Attuned, as Keynes had been, to the imperfections of markets, Modigliani was the more cautious. Indeed, when first describing the irrelevance of capital structure to a class at Carnegie Tech, he distanced himself: “I announced the theorem and said ‘I don’t believe it.’” (Modigliani interview)

Miller, in contrast, was prepared more radically to set aside the question of the validity of assumptions, in the manner advocated by Friedman (Miller interview). Modigliani and Miller’s published joint work trod a middle path—they explored the consequences of the simple assumptions of a “perfect market,” but also attended carefully to the effects of relaxing those assumptions—but private disagreement between them emerged over the range of conditions under which their propositions would hold, in particular in respect to the tricky question of the effects of corporate and personal taxes (Miller interview; Modigliani interview).

Despite these incipient disagreements, Modigliani and Miller found themselves on the same side with respect to traditional finance scholarship, just as they had with respect to Herbert Simon’s critique of orthodox economic reasoning. Their sharpest dispute was with David Durand, a prominent finance scholar of a more traditional, institutional bent who held a professorship of industrial management at MIT. Durand had himself examined what was in effect Modigliani and Miller’s proposition about the irrelevance of capital structure, and had at least hinted that arbitrage might in principle enforce it. Ultimately, however, he had rejected the proposition.

Institutional restrictions had seemed to Durand to be sufficiently strong to make capital structure relevant, in particular to favor bonds over stocks:

Since many investors in the modern world are seriously circumscribed in their actions, there is an opportunity to increase the total investment value of an enterprise by effective bond financing. Economic theorists are fond of saying that in a perfectly fluid world one function of the market is to equalize risks on all investments. If the yield differential between two securities should be greater than the apparent risk differential, arbitragers would rush into the breach and promptly restore the yield differential to its proper value. But in our world, arbitragers may have insufficient funds to do their job because so many investors are deterred from buying stocks or low-grade bonds, either by law, by personal circumstance, by income taxes, or even by pure prejudice. These restricted investors, including all banks and insurance companies, have to bid for high-grade investments almost without regard to yield differentials or the attractiveness of the lower grade investments. And these restricted investors have sufficient funds to maintain yield differentials well above risk differentials. The result is a sort of super premium for safety; and a corporation management can take advantage of this super premium by issuing as many bonds as it can maintain at a high rating grade. (Durand 1952, pp. 230–231)

Hearing Durand’s paper at a June 1950 conference on “Research in Business Finance” had led Modigliani to his initial interest in the problem. A significant part of the argument of Modigliani and Miller’s 1958 paper had been aimed precisely at showing that the institutional matters of the kind invoked by Durand were not sufficient to make capital structure relevant.

What divided Modigliani and Miller from Durand was at root whether market processes, in particular arbitrage, were strong enough to overcome the effects of institutional restrictions. Modigliani and Miller held that they were. Durand did not believe that, and he published an extensive critique of Modigliani and Miller’s claim of the irrelevance of capital structure. He did not deny the logic of their basic reasoning—as noted above, he had himself explored the path they took—but he claimed that their analysis held only in a “limited theoretical context.” The situation of what Durand called “real corporations” (1959, p. 640) was quite different, and the market they had to interact with was far from Modigliani and Miller’s assumption of perfection.

For example, investors could not buy stock entirely on credit: to do so was prohibited by the Federal Reserve’s famous “Regulation T,” introduced after the credit-fueled stock-market excesses of the 1920s. Regulation T restricted the extent to which brokers could lend investors money to buy stock. From time to time the Federal Reserve altered the percentage of the cost of a stock purchase that could be borrowed, but in the period discussed here it was typically no more than 50 percent, and sometimes much less. The “arbitrage” operation of switching between investments that Modigliani and Miller invoked might therefore not be available to most investors, and was not free of risk: price fluctuations might lead stocks bought on credit no longer to be worth enough to serve as collateral for the loan, leading an investor’s broker forcibly to sell such stock.

Modigliani and Miller’s basic conceptual device, homogeneous “risk classes” (see appendix A), had no empirical referent, argued Durand: “To the practically minded, it is unthinkable to postulate the existence of two or more separate and independent corporations with income streams that can fluctuate at random and yet be perfectly correlated from now until doomsday.” Modigliani and Miller had started “with a perfect market in a perfect world,” wrote Durand. Their examination of the consequences of relaxing their assumptions was inadequate: “… they have taken a few steps in the direction of realism; but they have not made significant progress” (Durand 1959, p. 653).

# Portfolio Selection

Unpalatable as Modigliani and Miller’s assertion of the irrelevance of capital structure might be to a more traditional finance scholar such as Durand, it was undeniably a contribution to economics, published as it was in what was for many the discipline’s premier journal, the *American Economic Review*. The second major strand in the transformation of finance predated Modigliani and Miller’s contributions but was initially far more precarious in its position with respect to economics than theirs.

The initiator of this strand was Harry Markowitz.^{8} Born in 1927, the son of Chicago grocers, Markowitz studied economics at the University of Chicago. He received his M.A. in 1950 and became a student member of the Cowles Commission for Research in Economics, then located in Chicago. The Cowles Commission was one of the crucial sites of the mathematicization of postwar U.S. economics, and one of the places where the quantitative techniques of operations research—which had come to prominence in military applications in World War II—had their greatest effect on economics (Mirowski 2002).

The Cowles Commission was a lively, exciting place, buzzing with ideas, many of them from émigré European economists. Herbert Simon, who attended its seminars while teaching at the Illinois Institute of Technology in the 1940s, later recalled:

A visitor’s first impression of a Cowles seminar was that everyone was talking at once, each in a different language. The impression was not wholly incorrect…. But the accents may have been more a help than a hindrance to understanding. When several speakers tried to proceed simultaneously, by holding tight to the fact that you were trying to listen to, say, the Austrian accent, you could sometimes single it out from the Polish, Italian, Norwegian, Ukrainian, Greek, Dutch, or Middle American. As impressive as the cacophony was the intellectual level of the discussion, and most impressive of all was the fact that everyone, in the midst of the sharpest disagreements, remained warm friends. (Simon 1996, p. 102)

A chance conversation with a broker pointed Markowitz toward the stock market as the subject for his Ph.D. thesis (Markowitz interview). Nowadays, this might seem a natural topic for an ambitious young graduate student, but that was not the case at the start of the 1950s. For example, the elite students of the Harvard Business School shunned Wall Street: in the early 1950s, only 3 percent of them took jobs there. “The new generation considered it unglamorous. Outside its gargoyled stone fortresses, black limousines waited for men with weary memories. Inside, it was masculine, aging, and unchanged by technology.” (Lowenstein 1995, p. 52)

Wall Street’s low prestige spilled over into academic priorities. At the Harvard Business School, the course on investments was unpopular with students and was allocated an undesirable lunchtime slot, earning it the unflattering sobriquet “Darkness at Noon” (Bernstein 1992, p. 110). “[A]cademic suspicion about the stock market as an object of scholarly research” was sufficiently entrenched that in 1959 the statistician Harry Roberts of the University of Chicago could describe it as “traditional” (1959, p. 3).

In 1950 the economists of the Cowles Commission did not specialize in stock-market research—Markowitz was sent for a reading list to Marshall Ketchum, who taught finance in Chicago’s Graduate School of Business—but they were not in a position to dismiss the topic out of hand. The commission’s initial patron, Alfred Cowles III, grandson of the co-founder of the *Chicago Tribune*, had helped in his father’s investment counseling business and in 1928 had started keeping track of the performance of investment advisory services, publishing an analysis showing this performance generally to be poor (Cowles 1933; Bloom 1971, pp. 26–31). More fundamental, however, than this organizational pedigree for Markowitz’s topic was the fact that the approach he took to it was primed by a course on operations research taught by the Cowles economist Tjalling C. Koopmans (Markowitz interview).^{9}

Among the books that Ketchum recommended to Markowitz was one of the few that offered not practical guides to stock-market investment but a systematic account of what stocks ought to be worth. John Burr Williams’s *Theory of Investment Value* (1938) put forward what has become known as the “dividend discount model,” the basic idea of which seems to have come from stock-market practice rather than from academia.

The value of a corporation’s stock is ultimately as an entitlement to the future stream of dividends paid to the stockholders by the corporation, argued both Williams and the market practitioners on whom he drew.^{10} The emphasis on dividends seems to be contradicted by the later Miller-Modigliani assertion of the irrelevance of dividend policy, but Williams anticipated the objection that dividend policy was arbitrary by arguing that if corporate earnings that are not paid out as dividends can profitably be reinvested then they will enhance future dividends, and so be taken account of in his model (Williams 1938, pp. 57–58).

Expected future dividend payments cannot, however, simply be added up in order to reach a value for a corporation’s stock. In part, that is because of the effect of inflation, but even without inflation the value of a dollar received in a year’s time is less than that of a dollar received now, because the latter can be invested and earn interest. To work out the value of a stock, expected future dividends have therefore to be “discounted”: their present value has to be calculated using an appropriate interest rate.^{11} Hence the name “dividend discount model.”
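The arithmetic of discounting can be sketched briefly (the dividend forecasts and the 6 percent discount rate below are invented inputs, not figures from Williams):

```python
# Dividend discount model, in miniature: the value of a stock is the sum of
# its expected future dividends, each discounted back to the present.
rate = 0.06                                   # assumed discount rate
dividends = [2.00, 2.10, 2.21, 2.32, 2.43]    # expected dividends, years 1-5

# Present value of the dividend due in year t is d_t / (1 + rate)**t.
pv = sum(d / (1 + rate) ** t for t, d in enumerate(dividends, start=1))
print(round(pv, 2))
```

A full valuation would also need a terminal assumption about dividends beyond the forecast horizon; the practitioners mentioned below instead ran the model “in reverse,” asking what growth rate a quoted price implied.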

The difficulty of reliably estimating future dividends was one obvious objection to Williams’s model. (The practitioners on whose work he built seem to have used the model “in reverse” to calculate the rate of dividend growth implied by a stock price, so as to check whether that price seemed reasonable.)^{12} Nor was Williams, writing as he was in the aftermath of the huge rise in stock prices in the 1920s and the subsequent calamitous crash, confident that investors used anything approximating to his model. He claimed only that “gradually, as men do become more intelligent and better informed, market prices should draw closer to the values given by our theory” (Williams 1938, p. 189).

When Markowitz read *The Theory of Investment Value* in the library of the University of Chicago’s Graduate School of Business in 1950, he was struck by a different objection to Williams’s dividend discount model, one rooted in the operations research that Koopmans was teaching him. Williams’s response to the obvious uncertainties involved in calculating the theoretical value of a security was to recommend weighting possible values by their probability and calculating the average, thus obtaining what mathematicians call the “expected value” (Williams 1938, p. 67).

What would happen, Markowitz asked himself, if one applied Williams’s way of tackling uncertainty not to a single stock but to an investor’s entire portfolio? “If you’re only interested in the expected value of a stock, you must be only interested in the expected value of a portfolio” (Markowitz interview). Simple reasoning based on the operations-research technique of linear programming quickly convinced Markowitz that an investor who focused only on expected value would put all his or her money into the single stock with the highest expected rate of return.
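Markowitz’s linear-programming intuition can be sketched as follows (the expected returns are invented; this illustrates the corner-solution logic, not any calculation of his):

```python
# If only expected return matters, "optimal" portfolio choice is a linear
# program: maximize sum(w_s * mu_s) subject to the weights summing to 1
# (and being non-negative). A linear objective over that simplex attains
# its maximum at a vertex -- all funds in one stock.
mu = {"A": 0.08, "B": 0.12, "C": 0.05}   # invented expected rates of return

best = max(mu, key=mu.get)               # the single highest-mu stock
weights = {s: (1.0 if s == best else 0.0) for s in mu}
print(weights)
```

The resulting all-in-one-stock “portfolio” is exactly what real diversifying investors did not hold, which is what pushed Markowitz to bring risk into the objective.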

Plainly, however, investors did not put all their money into one stock: they diversified their investments,^{13} and they did so to control risk. Optimal portfolio selection could not be about expected return alone: “You’ve got two things—risk and return.” (Markowitz interview) Risk, Markowitz reasoned, could be thought of as the variability of returns: what statisticians call their “standard deviation,” or the square of that standard deviation, their “variance.” Asked by me why he had conceived of risk in this way, Markowitz simply cited how often he had come across the concept of standard deviation in the statistics courses he had taken (Markowitz interview).

For Markowitz, being trained as he was in operations research as well as in economics, the problem of selecting optimal investment portfolios could then be formulated as being to find those portfolios that were “efficient,” in other words that offered least risk for a given minimum expected rate of return, or greatest return for a given maximum level of risk. The assignment Koopmans had set the students in his course on operations research was to “find some practical problem and say whether it could be formulated as a linear-programming problem” (Markowitz interview).

Markowitz quickly saw that once risk as well as return was brought into the picture the problem of finding efficient portfolios was *not* a linear one. The formula for the standard deviation or variance of returns involves squaring rates of return, so the problem could not be solved using existing linear programming techniques: it fell into what was then the little-explored domain of quadratic programming.^{14} Koopmans liked Markowitz’s term paper, giving it a grade of A, and encouraged him to take the problem further, telling him: “It doesn’t seem that hard. Why don’t you solve it?” (Markowitz interview)

Solving the problem of selecting efficient portfolios was precisely what Markowitz went on to do. He worked on the topic in his remaining months at Chicago and then in time left over from his duties at the Rand Corporation, to which he moved in 1952. The Santa Monica defense think tank was a natural destination for a young scholar in whose work economics and operations research were hybridized.^{15}

Markowitz’s solution to the problem of portfolio selection presumed that estimates of the expected returns, variances of returns, and correlations^{16} of returns of a set of securities could be obtained, for example by administering questionnaires to securities analysts. It was then possible to work out the expected return (*E*) and variance of return (*V*) of any portfolio constructed out of that set of securities. Markowitz provided a simple graphical representation (figure 2.1) of the set of combinations of *E* and *V* that were “attainable” in the sense that out of the given set of securities a portfolio could be constructed that offered that combination of *E* and *V*.^{17}

Of the set of attainable portfolios, a subset (shown by the thicker “southeast” edge of the attainable set) is going to be efficient. From points in this subset one cannot move down (to a portfolio with the same expected return but lower variance) or to the right (to a portfolio with the same variance but greater expected return) without leaving the attainable set. The subset therefore represents the portfolios that offer “minimum *V* for given *E* or more and maximum *E* for given *V* or less” (Markowitz 1952, p. 82): it is what was later to be called the “efficient frontier” of the set of portfolios that can be constructed from the given securities.
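Markowitz’s construction can be illustrated with a short computation. The sketch below, in modern Python with invented figures for three securities (none of them from the text), computes the expected return *E* and variance *V* of candidate portfolios and then retains the undominated ones. This brute-force sampling is only an approximation of the efficient frontier, not Markowitz’s own critical line algorithm.

```python
import random

# Invented estimates for three securities (illustrative only):
mu = [0.08, 0.12, 0.15]                 # expected returns
cov = [[0.04, 0.01, 0.00],              # covariance matrix of returns
       [0.01, 0.09, 0.02],
       [0.00, 0.02, 0.16]]

def portfolio_E_V(w):
    """Markowitz's E and V for a fully invested portfolio with weights w."""
    E = sum(wi * mi for wi, mi in zip(w, mu))
    V = sum(w[i] * w[j] * cov[i][j] for i in range(3) for j in range(3))
    return E, V

# Sample random portfolios (non-negative weights summing to 1) and keep
# those that are not dominated: no other sampled portfolio offers at least
# the same E with no more V.
random.seed(0)
points = []
for _ in range(500):
    x = sorted(random.random() for _ in range(2))
    w = [x[0], x[1] - x[0], 1 - x[1]]
    points.append(portfolio_E_V(w))

efficient = [p for p in points
             if not any(q[0] >= p[0] and q[1] <= p[1] and q != p
                        for q in points)]
```

The `efficient` list traces the thicker “southeast” edge of the attainable set: for each of its members, raising expected return or lowering variance means leaving the set of sampled portfolios.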

Markowitz’s algorithm for computing efficient portfolios, the “critical line algorithm,” was published in the *Naval Research Logistics Quarterly* (Markowitz 1956).

Markowitz’s 1952 paper in the *Journal of Finance* describing his portfolio-selection method (though not the detail of the critical line algorithm) was later (p.50) to be seen as the “harbinger” of the “new” finance, of “the mathematical and model-building revolution.”^{18} It certainly stood out: “No other article in the issue [of the *Journal of Finance*] that carried Markowitz’s paper contains a single equation.” (Bernstein 1992, p. 42) Indeed, Markowitz’s contribution to the *Naval Research Logistics Quarterly*, with its matrix algebra and its careful analysis of the mathematical properties of the critical line algorithm, inhabited an epistemic culture quite different from that traditional in the academic study of finance: essentially the culture of applied mathematics.

The impact in the 1950s of Markowitz’s work was quite limited. Investment practitioners showed effectively no interest in his technique. Nor did the finance scholars active in the 1950s generally seize on it. Modigliani and Miller’s critic David Durand reviewed the 1959 Cowles monograph in which Markowitz systematically presented his technique and its justification. He conceded that Markowitz’s monograph would “appeal to econometricians and to statisticians interested in decision theory,” but he saw “no obvious audience” for Markowitz’s overall approach (Durand 1960, p. 234).

It was wholly unrealistic, Durand suggested, to imagine that real-world portfolio selection proceeded according to Markowitz’s techniques, or perhaps even that it ever *could* proceed in this fashion:

His argument rests on the concept of the Rational Man, who must act consistently with his beliefs. But the history of Wall Street suggests that such consistency may be unwise…. His ideal is a Rational Man equipped with a Perfect Computing Machine…. Of course, he admits that the Rational Man does not exist at all and that the Perfect Computing Machine will not exist in the foreseeable future, but the image of these nonentities seems to have colored his whole work and given it an air of fantasy.^{19}

It was not even clear that what Markowitz had done counted as economics. Milton Friedman was on Markowitz’s Ph.D. board. In 1954, on his way back from Washington to Rand’s Santa Monica headquarters, Markowitz stopped off in Chicago for his thesis defense, thinking to himself “This shouldn’t be hard. I know this stuff…. Not even Milton Friedman will give me a hard time.” (Markowitz interview) “So about two minutes into my defense,” Markowitz continues, “Friedman says, ‘Well, Harry, I’ve read your dissertation and I don’t find any mistakes in the math, but this isn’t a dissertation on economics and we can’t give you a Ph.D. in economics for a dissertation that’s not economics.’” Markowitz did receive the degree, but although Friedman cannot recall making the remark (“You have to trust Harry for that”) he believes it would have been justified: “What he did was a mathematical exercise, not an exercise in economics.” (Friedman interview)

The leading economist who responded most positively to Markowitz was James Tobin. Broadly Keynesian in his approach, Tobin succeeded Tjalling (p.51) Koopmans as head of the Cowles Commission, and was instrumental in its move in 1955 from Chicago to Yale University, where Tobin taught. There had been tensions between the Cowles Commission and Friedman and his colleagues in the University of Chicago’s economics department, and recruitment was becoming more difficult because of worsening conditions in the South Chicago neighborhoods that surrounded the university (Bernstein 1992, p. 67).

Like Markowitz, Tobin was interested in portfolio selection, but from a different viewpoint. “Markowitz’s main interest is prescription of rules of rational behavior for investors,” wrote Tobin. His concern, in contrast, was the macroeconomic implications “that can be derived from assuming that investors do in fact follow such rules” (Tobin 1958, p. 85).^{20}

For Tobin, a crucial issue was the choice investors made between owning risky assets and holding money. This choice was important to Keynesian theory, because the extent to which people prefer holding money (the extent of their “liquidity preference”) will affect interest rates and macroeconomic phenomena such as the levels of economic activity and of unemployment.^{21} Tobin simplified the issue by proving, in a mathematical framework similar to Markowitz’s, what has become known as the “separation theorem,” which asserts the mutual independence of choice among risky assets and choice between such assets and cash. In other words:

… the proportionate composition of the non-cash [risky] assets is independent of their aggregate share of the investment balance…. Breaking down the portfolio selection problem into stages at different levels of aggregation—allocation first among, and then within, asset categories—seems to be a permissible and perhaps even indispensable simplification both for the theorist and for the investor himself. (Tobin 1958, pp. 84–85)

Tobin’s work thus suggested a route by which what Markowitz had done could be connected to mainstream economic concerns. However, after finishing his Cowles monograph on portfolio selection, Markowitz was disinclined to work on the topic. “My book [Markowitz 1959] was really a closed logical piece,” he told a 1971 interviewer. “I’d really said all I wanted to, and it was time to go on to something else.” (Welles 1971, p. 25) His interests had already begun to shift to problems more central to the work being done at the Rand Corporation, notably the applied mathematics of linear programming (Markowitz 1957) and the development of SIMSCRIPT, a programming language designed to facilitate the writing of software for computer simulations.^{22}

# The Underlying Factor

In 1960 a “young man … dropped into” Markowitz’s office at Rand (Markowitz 2002b, p. 383). William Sharpe (born in Cambridge, Massachusetts in 1934) had (p.52) completed his examinations as a graduate student in economics at the University of California at Los Angeles in 1960. His initial idea for a Ph.D. topic had been a study of “transfer prices,” the prices that are set internally for goods moving between different parts of the same corporation, but his proposal had met with an unenthusiastic reaction.

Unusually, however, Sharpe had substituted finance for one of the five “fields” of economics he had been required to study preparatory to his Ph.D. His studies in finance had been guided by UCLA’s J. Fred Weston, who was of the older generation of finance scholars but who was sympathetic to the new work described in this chapter.^{23} Weston introduced Sharpe to what Markowitz had done: “I loved … the elegance of it. The fact it combined economics, statistics [and] operations research.” (Sharpe interview) When his mentor in economics, UCLA professor Armen Alchian, helped Sharpe obtain a junior economist’s post at Rand, Sharpe sought out Markowitz.

Markowitz agreed to become the informal supervisor of Sharpe’s UCLA Ph.D. thesis. Despite Markowitz’s sense that his work was a “closed logical piece,” there was one issue that he had broached but not fully pursued. His full version of portfolio selection required getting securities analysts to estimate the correlation of every pair of securities among which selection was to be made. The number of such correlations increased rapidly with the number of securities being analyzed. Selection among 1,000 securities, for example, would require estimation of 499,500 correlations (Baumol 1966, pp. 98–99).
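The count comes from simple combinatorics: among *n* securities there are n(n − 1)/2 distinct pairs. A one-line check in Python:

```python
def n_correlations(n):
    """Distinct pairwise correlations among n securities: n(n-1)/2."""
    return n * (n - 1) // 2

n_correlations(1000)   # Baumol's figure: 499,500
```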

No stock analyst could plausibly estimate half a million correlations. Statistical analysis of past correlations was not enough, even if the data for it had been easily available, which in the 1950s they were not, because what was needed for portfolio selection were future correlations. Even if estimates of the latter could somehow be produced, the limited sizes of computer memories in the 1950s and the early 1960s put the resultant computation well beyond the bounds of the feasible.

While at Yale (at Tobin’s invitation) in 1955–56, Markowitz found that a 25-security problem was beyond the capacity of the computing resources available to him (Markowitz 2002b, p. 383). Even on an IBM 7090, in the early 1960s a state-of-the-art digital computer, the Rand quadratic programming code implementing portfolio selection could not handle a problem with more than 249 securities.^{24}

Markowitz had realized, however, that the difficulties of estimation and computation when selecting among large numbers of securities would be alleviated if it could be assumed that the correlation between securities arose because they were each correlated with “one underlying factor, the general prosperity of the market as expressed by some (p.53) index” (Markowitz 1959, p. 100). Instead of asking securities analysts to guess a huge array of cross-correlations, they would have to estimate only one correlation per security: its correlation with the index.

It was with Markowitz’s suggested simplification that Sharpe began his Ph.D. work, employing a model in which “the returns of … securities are related only through common relationships with some basic underlying factor” (Sharpe 1963, p. 281). He found that the simplifying assumption did indeed reduce computational demands dramatically: large portfolio selection problems became feasible for the first time.

Sharpe’s development of Markowitz’s “underlying factor” model could still be seen as operations research rather than economics: it was published in *Management Science* (Sharpe 1963) rather than in an economics journal. At root, though, Sharpe was an economist, and Armen Alchian had taught him microeconomics—the foundational part of the discipline that studies how production and consumption decisions by rational firms and individuals shape market prices, giving rise to equilibrium in competitive markets.

Although the concept of “equilibrium” is complex, a simple notion of “equilibrium price” is the price at which the quantity of a commodity that suppliers will sell is equal to the quantity that buyers will purchase. If the market price is below that equilibrium level, there is excess demand: purchasers want to buy more than suppliers will sell, allowing the latter to raise prices. If the price is above equilibrium, there is excess supply: purchasers will not buy the full amount that suppliers want to sell, forcing the latter to lower their prices in order to clear their stocks.

Sharpe went beyond the issue of how a rational investor should select an investment portfolio. “I asked the question that microeconomists are trained to ask. If everyone were to behave optimally (here, follow the prescriptions of Markowitz’s portfolio theory), what prices will securities command once the capital market has reached *equilibrium*?” (Sharpe 1995, pp. 217–218). In order to make the answer to this question “tractable” (ibid., p. 218), he assumed that the underlying-factor model applied and also that all investors had the same estimates of expected returns, of variances, and of correlations with the underlying factor. These simplifying assumptions yielded a model in which, using primarily graphical reasoning, Sharpe could identify a precise mathematical formulation of equilibrium.

In equilibrium, securities prices would be such that there would be a simple straight-line relationship between the expected return on a security and the extent of its covariation with the posited underlying factor. “Following the conventions” of the standard statistical technique of regression analysis, Sharpe used the Greek letter β (beta) to designate the extent of the sensitivity of the (p.54) returns on a stock or other security to changes in the underlying factor. “Thus the result could be succinctly stated: securities with higher betas will have higher expected returns.” (Sharpe 1995, p. 218)

The logic of the diversification of investment portfolios gave an intuitive explanation of the dependence of expected return on beta. The effects on the performance of a portfolio of the idiosyncratic risks of a particular stock can be minimized by diversification, but one cannot in that way eliminate the risk resulting from stocks’ correlation with a factor underlying the performance of all stocks. Thus, in equilibrium, a stock that was highly sensitive to changes in that underlying factor (in other words, a stock with a high beta) would have to have a price low enough (and thus an expected return high enough) to persuade investors to include it in their portfolios. A stock that was not very sensitive to changes in the underlying factor (a stock with a low beta) would in contrast command a higher relative price (and thus a lower expected return).

The linear relationship between beta and expected return was a strikingly elegant result. However, Sharpe knew that by assuming a single common underlying factor he had “put the rabbit in the hat” in the first place, and “then I pull it out again and how interesting is that?” (Sharpe interview). That is, his modeling work might be viewed as trivial. So he set to work (again proceeding to a large extent geometrically, using graphical representations akin to Markowitz’s diagram shown in figure 2.1)^{25} to find out whether he could prove his result for “a general Markowitz world rather than this specific single factor. And happily enough, in a matter of relatively few months, guess what, it turns out you get the same result.” (Sharpe interview)

If investors are guided only by risk and return, and if they all have the same estimates of assets’ expected returns, risks, and mutual correlations,^{26} then, Sharpe’s analysis showed, the prices of assets had to adjust such that in equilibrium there was a straight-line relationship between the expected return on an asset and its beta, the extent of its sensitivity to the return on an optimal portfolio. This result was at the core of what was soon to become known as the Capital Asset Pricing Model (CAPM).

# Assumptions, Implications, and Reality

Sharpe knew perfectly well that his model rested on assumptions that were “highly restrictive and undoubtedly unrealistic” (Sharpe 1964, p. 434). In presenting his work, he encountered the attitude that “this is idiotic because he [Sharpe] is assuming everybody agrees [in their estimates of expected returns, risks, and correlations] and that’s patently false and therefore a result that (p.55) follows from that strong and totally unrealistic presumption isn’t worth [much]” (Sharpe interview).^{27}

In the paper in the *Journal of Finance* in which he laid out the CAPM, Sharpe defended it against the accusation that it was unrealistic with an implicit invocation of Milton Friedman’s methodological views, discussed in chapter 1. “The proper test of a theory,” wrote Sharpe, “is not the realism of its assumptions but the acceptability of its implications” (Sharpe 1964, p. 434). Aside from the likelihood that he would have said “accuracy” rather than “acceptability,” Friedman himself could have written the sentence.

Sharpe’s mentor Alchian shared many of Friedman’s convictions. He too was a member of the Mont Pèlerin Society, and like Friedman he regarded the realism of a model’s assumptions as irrelevant.^{28} Sharpe remembers that he and Alchian’s other graduate students had the view “drilled into” them that “you don’t question the assumptions. You question the implications and you compare them with reality.”^{29}

More than 40 years later, Sharpe can still recall the Darwinian analogy that Alchian used to convey the irrelevance of the verisimilitude of assumptions: “Assume that creatures crawled up from the primordial slime, looked around and decided that for maximum effect they should grow opposable thumbs.” The assumption that anatomical change resulted from conscious judgment of usefulness was plainly absurd, but “the prediction of the model [the development of opposable thumbs] would be correct, even though the actual mechanism (evolution) was very different.”^{30}

The fact that Sharpe was in this respect “very much in the Friedman camp at the time” meant that he did not let the lack of realism of his assumptions disturb him as he practiced what Alchian had taught in respect to modeling. As he puts it, his approach was to “take the problem, try to distil out the two or three most important things, build a logically coherent model … that has those ingredients in it and then see whether or not this can help you understand some real phenomenon” (Sharpe interview).

Sharpe’s choice of a word when discussing “the proper test of a theory”— “*acceptability* of [a theory’s] implications,” not *accuracy*—was, however, not accidental. Although he was not concerned about the lack of verisimilitude of his assumptions, his theory seemed to have a highly unrealistic implication that he worried might lead others to reject the model out of hand. His original mathematical analysis led to the conclusion that “there is only one portfolio of risky securities that’s optimal” (Sharpe interview).

Investors who were averse to risk would wish to hold much of their portfolio in the form of riskless assets (such as government bonds held to (p.56) maturity) and only a small amount of the optimal risky portfolio; risk-seeking investors might even borrow money to increase their holdings of the latter. However, what was in effect Tobin’s separation theorem pertained, and Sharpe’s model seemed to suggest that no investor would hold any combination of risky assets other than the unique optimal portfolio. All investors would, for example, hold exactly the same set of stocks, in exactly the same proportions.

Clearly, this implication of Sharpe’s model was empirically false. Sharpe balked—“I thought, well, nobody will believe this. This can’t be right”—and he tweaked his mathematics to avoid the unpalatable conclusion. “I wanted ever so much for there to be multiple risky portfolios that were efficient … it does not come naturally” out of the mathematics (Sharpe interview). In the analysis in his published paper, multiple optimal portfolios—albeit all perfectly correlated with each other—are indeed possible, and Sharpe told his readers that “the theory does not imply that all investors will hold the same combination” of risky assets (Sharpe 1964, p. 435).

The world was, however, changing, in ways to be discussed in chapter 3, in its receptiveness to the view that all rational investors will hold portfolios of risky assets that are identical in their relative composition. In the mid 1960s—Sharpe cannot date it more precisely than that—he allowed himself to return to the “egregious” (Sharpe interview) implication that all investors would hold the same portfolio. He knew that the way in which he had reached the conclusion to the contrary in his 1964 paper “was really a matter of wanting it” to be so (Sharpe interview). Gradually, he let himself embrace and put forward the counterintuitive conclusion that all investors would hold the same portfolio of risky assets.

If there was a single optimal portfolio, it was clear what that portfolio had to be. Prices would adjust such that no capital asset remained without an owner and every investor would hold every risky capital asset—every stock, for instance—in proportion to its market value. Sharpe: “When I finally broke out of [the view that there had to be more than one optimal portfolio of risky assets] I said ‘If there’s only one, it’s got to be the market portfolio. It’s the only way you can get everything to add up.’” (Sharpe interview^{31})

“The conclusion is inescapable,” Sharpe wrote in 1970. “Under the assumed conditions, the optimal combination of risky securities is that existing in the market…. It is the market portfolio.” (p. 82) Along with the straight-line relationship between expected return and beta, the other essential component of the CAPM was now in place. In equilibrium, all the apparent complication of portfolio selection dissolved. The optimal set of risky investments was simply the market itself.

(p.57) Although Sharpe did not know it at first, a model essentially the same as his Capital Asset Pricing Model was being developed at about the same time by an operations researcher named Jack Treynor. Very similar models were being developed by the Harvard Business School economist John Lintner (Lintner 1965a,b) and by Jan Mossin at the Norwegian School of Economics and Business Administration (Mossin 1966).

With the market itself taken to be the unique optimal portfolio of risky assets (a conclusion on which Sharpe, Treynor, Lintner, and Mossin concurred^{32}), the Capital Asset Pricing Model’s parameter, beta, had a simple interpretation: it was the sensitivity of returns on the asset to overall market fluctuations. That sensitivity was the risk that could not be diversified away, and the extent of that “market risk” or “systematic risk” determined the relative prices of risky capital assets such as stocks.

The reasoning that had led Sharpe, Treynor, Lintner, and Mossin to the Capital Asset Pricing Model was sophisticated. However, the model predicted an equilibrium relationship between an asset’s beta and its expected return that was simplicity itself. (See figure 2.2.) An asset with a beta of zero has no correlation with the market as a whole, so any specific risk in holding such an asset can be eliminated by diversification. Hence, the asset can be expected to yield only the riskless rate of interest: the rate an investor could earn by holding until its maturity an entirely safe asset such as a bond issued by a major government in its own currency. As beta and thus market risk rises, so does expected return, in a direct straight-line relationship.
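The straight line itself is simple enough to write down: under the CAPM, an asset’s expected return is the riskless rate plus beta times the market’s expected excess return. The rates in the sketch below are invented for illustration.

```python
def capm_expected_return(beta, riskless_rate, market_return):
    """The CAPM line: riskless rate plus beta times the market risk premium."""
    return riskless_rate + beta * (market_return - riskless_rate)

# Invented rates: a 4% riskless rate and a 10% expected market return.
rf, rm = 0.04, 0.10
r_zero_beta = capm_expected_return(0.0, rf, rm)   # a zero-beta asset: 4%
r_market = capm_expected_return(1.0, rf, rm)      # a beta of 1: the market's 10%
```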

The theory of investment had been transformed. If the reasoning of Sharpe and of the other developers of the Capital Asset Pricing Model was correct, beneath the apparently bewildering complexity of Wall Street and other capital markets lay a simple tradeoff between systematic risk and return that could be captured by a straightforward, parsimonious, and elegant mathematical model.

# Random Walks and Efficient Markets

The third main strand in the transformation of the study of finance in the United States in the 1950s and the 1960s was the most general. It involved two closely related notions: that prices of stocks and similar securities follow a random walk, and that financial markets—at least the main markets in the United States and similar countries—are efficient.

The idea that the movements in the prices of financial securities are in some sense random—and therefore that the mathematical theory of probability can be applied to them—received its decisive development in the 1950s and the (p.58) 1960s. Its roots, however, lie in the nineteenth century, notably in the work of Jules Regnault, who in 1863 published a pioneering probabilistic analysis of speculation on the Paris Bourse (Regnault 1863).

Regnault argued that short-term price movements were like coin-tossing: upward and downward movements will tend to have equal probabilities (of 1/2), and subsequent movements will be statistically independent of previous movements (Regnault 1863, pp. 34–38). That was a fair game, but even in a fair game a player with finite resources playing an opponent with unlimited resources will eventually lose the entirety of those resources, and if that happens in a financial market the game is over from the player’s viewpoint.

(p.59) The market, “that invisible, mysterious adversary,” had effectively infinite resources. Thus, even if “chances were strictly equal” those who indulged in frequent short-term speculation faced “an absolute certainty of ruin.” Furthermore, brokers’ commissions and other transaction costs meant that in practice the game was less than fair.^{33}
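Regnault’s “absolute certainty of ruin” is what probability theory now calls the gambler’s-ruin result: in a fair game, a player with finite capital facing an adversary with unlimited resources is ruined with probability 1. The small simulation below (necessarily truncated at a finite horizon, so it can only approach that limit) illustrates the point; the figures are invented.

```python
import random

def ruined(capital, max_steps, rng):
    """Play a fair coin game until capital hits zero or the horizon ends."""
    for _ in range(max_steps):
        capital += 1 if rng.random() < 0.5 else -1
        if capital == 0:
            return True
    return False

# A speculator with 5 units facing the market's effectively infinite purse:
rng = random.Random(42)
trials = 400
ruin_frac = sum(ruined(5, 10_000, rng) for _ in range(trials)) / trials
# ruin_frac is typically above 0.9 here; with a longer horizon it
# approaches 1, as Regnault argued.
```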

As a follower of the “social physics” of the pioneering statistician Adolphe Quetelet, Regnault sought regularities underlying the market’s apparent randomness. What was from his viewpoint the central regularity could have been derived mathematically from his model, but Regnault seems to have found it empirically (and he certainly tested it empirically).

The regularity was that the average extent of price deviations is directly proportional to the square root of the length of the time period in question (Regnault 1863, pp. 49–50). Modern random-walk theory leads to the same conclusion, and there are passages in Regnault’s work (especially Regnault 1863, pp. 22–24) that bring to mind the efficient-market hypothesis. However, Regnault’s comments on his square-root law remind us that he inhabited a different intellectual world. For him, the regularity was an ultimately theological warning to “earthly princes … kings of finance” to be humble in the face of what was, at root, Providential order.^{34}
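Regnault’s square-root regularity is a direct consequence of independent price moves, and a short simulation recovers it: quadrupling the time horizon roughly doubles the typical deviation. The coin-toss walk below is a sketch of the general principle, not Regnault’s own procedure.

```python
import random
import statistics

def deviation_after(steps, n_paths, rng):
    """Standard deviation of the cumulative change after `steps` fair moves."""
    finals = [sum(1 if rng.random() < 0.5 else -1 for _ in range(steps))
              for _ in range(n_paths)]
    return statistics.pstdev(finals)

rng = random.Random(1)
d_short = deviation_after(100, 4000, rng)   # close to sqrt(100) = 10
d_long = deviation_after(400, 4000, rng)    # close to sqrt(400) = 20
ratio = d_long / d_short                    # close to 2: the square-root law
```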

Regnault’s work has only recently been rediscovered. More celebrated as a precursor of modern random-walk theory has been Louis Bachelier, a student of the leading French mathematician and mathematical physicist Henri Poincaré (on whom see, for example, Galison 2003). While Regnault drew on existing and relatively elementary probability theory, Bachelier developed a model of a random or “stochastic” process in continuous time. In Bachelier’s model, the price of a security can change probabilistically in any time interval, however short. (The coin-tossing model is, in contrast, a stochastic process in discrete time: at least implicitly, the model is of a coin tossed only at specific moments, and not between those moments.)

In his Sorbonne thesis, defended in March 1900, Bachelier sought to “establish the law of probability of price changes consistent with the market” in French bonds.^{35} He constructed an integral equation that a stochastic process in continuous time had to satisfy, and showed that the equation was satisfied by a process in which, in any time interval, the probability of a given change of price followed the normal or Gaussian distribution, the familiar “bell-shaped” curve of statistical theory.^{36} Although Bachelier had not demonstrated that his stochastic process was the only solution of the integral equation (and we now know it is not), he claimed that “evidently the probability is governed by the Gaussian law, already famous in the calculus of probabilities” (Bachelier 1900, p. 37).

(p.60) We would now call Bachelier’s stochastic process a “Brownian motion,” because the same process was later used by physicists as a model of the path followed by a minute particle suspended in a gas or liquid and subjected to random collisions with the gas or liquid’s molecules. Bachelier, however, applied it not to physics but to finance, in particular to various problems in the theory of options.

In 1900, despite the currency of the popular “science of investing,” the financial markets were an unusual topic for an aspirant academic mathematician. “Too much on finance!” was the private comment on Bachelier’s thesis by the leading French probability theorist, Paul Lévy (quoted in Courtault et al. 2000, p. 346). Bachelier’s contemporaries doubted his rigor, and his career in mathematics was modest: he was 57 before he achieved a full professorship, at Besançon rather than Paris.

Though knowledge of Bachelier’s work never vanished entirely, even in the Anglo-Saxon world (Jovanovic 2003), there was undoubtedly a rupture. Successive shocks—the two world wars, the 1929 crash and the subsequent depression, the rise of communism and of fascism—swept away or marginalized many of the surprisingly sophisticated and at least partially globalized nineteenth-century financial markets studied by Regnault, Bachelier, and their contemporaries.

When the view that price changes are random was revived in Anglo-Saxon academia later in the twentieth century, it was at first largely in ignorance of what had gone before. One of the earliest of the twentieth-century Anglo-Saxon writers to formulate what became known as the random-walk thesis was the statistician and econometrician Holbrook Working of the Food Research Institute at Stanford University.^{37} Working was not in fact a proponent of the thesis: he wrote that “few if any time series [observations of a variable, such as a price, at successive points in time] will be encountered that reflect in pure form the condition of strictly random changes” (Working 1934, p. 12). However, he thought it worth exemplifying what a random walk (he called it “a random-difference series”) might actually look like.

Working took a table of “random sampling numbers” produced by an assistant of the biostatistician and eugenicist Karl Pearson (who in 1905 had coined the term “random walk”)^{38} and applied to it a transformation Pearson had suggested, making the frequencies of the random numbers correspond to a normal distribution.

Working constructed his random-difference series by starting with a number from the random-number table, adding to it the next number in the table to get the second number in the series, and so on. The differences between (p.61) successive terms in the series therefore simulated random “draws” from a normal distribution. Working suggested that the series that he constructed in this way could be compared with real examples of time series, such as the successive prices of stocks or of agricultural commodities such as wheat. (Wheat was of particular interest to the Food Research Institute.) Such comparison could, Working (1934) hoped, help to distinguish non-random structures in time series from the results of random processes.
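Working’s construction is, in modern terms, a cumulative sum of independent normal draws. A sketch, using Python’s pseudorandom generator where Working used Pearson’s printed table of random sampling numbers:

```python
import random

rng = random.Random(7)
draws = [rng.gauss(0, 1) for _ in range(500)]   # stand-in for Pearson's table

series = [100.0]                    # an arbitrary starting level
for d in draws:
    series.append(series[-1] + d)   # each term adds the next random number

# By construction the *differences* between successive terms are the
# independent normal draws: a "random-difference series."
```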

Working clearly wanted to find non-random structure. That there might be no such structure in economically important time series does not seem to have been an attractive conclusion to him or to others in the 1930s and the 1940s. That, at least, is what is suggested by the reaction that met the statistician Maurice Kendall when he presented the random-walk thesis to a 1952 meeting of Britain’s Royal Statistical Society.

Kendall analyzed indices of stock prices in particular sectors in the United Kingdom, wheat prices from the Stanford Food Research Institute,^{39} and cotton prices from the U.S. Department of Agriculture. Using one of the early digital computers (at the U.K. National Physical Laboratory), Kendall looked for “serial correlation” or “autocorrelation” in these time series, for example computing for each price series the correlation between each week’s price change and that of the previous week.
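Kendall's lag-one serial-correlation test can be sketched in a few lines; the simulated price series below stands in for his stock, wheat, and cotton data:

```python
import numpy as np

def lag1_autocorrelation(changes):
    """Correlation between each period's change and the previous period's."""
    return np.corrcoef(changes[:-1], changes[1:])[0, 1]

rng = np.random.default_rng(seed=1953)

# A simulated weekly price series whose changes are pure noise,
# in the spirit of Kendall's "Demon of Chance".
changes = rng.normal(size=2000)
prices = 100 + np.cumsum(changes)

# For a true random walk the lag-1 autocorrelation of the *changes*
# should be close to zero (though never exactly zero in a finite sample).
r = lag1_autocorrelation(np.diff(prices))
print(round(r, 3))
```

A substantial value of `r` for a real price series would be evidence of exploitable structure; Kendall found almost none.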

Almost without exception, Kendall found only very small levels of correlation. For instance, “such serial correlation as is present” in the stock index series “is so weak as to dispose at once of any possibility of being able to use them for prediction” (Kendall 1953, p. 18). That was the case also for wheat prices, where “the change in price from one week to the next is practically independent of the change from that week to the week after.”^{40}

In the absence of correlation, Kendall saw evidence of the workings of what he called “the Demon of Chance”: “The series looks like a ‘wandering’ one, almost as if once a week the Demon of Chance drew a random number from a symmetrical population of fixed dispersion and added it to the current price to determine the next week’s price.” (Kendall 1953, p. 13) If it were indeed the case that “what looks like a purposive movement over a long period” in an economic time series was “merely a kind of economic Brownian motion,” then any “trends or cycles” apparently observed in such series would be “illusory” (ibid., pp. 13, 18).

“That is a very depressing kind of conclusion to the economist,” commented the British economist and economic statistician R. G. D. Allen, the proposer of a distinctly lukewarm vote of thanks to Kendall after his talk. “This paper must be regarded as the first dividend on a notable enterprise,” Allen (p.62) continued. “Some ‘shareholders’ may feel disappointed that the dividend is not larger than it is, but we hope to hear more from Professor Kendall and to have further, and larger, declarations of dividends.” (Allen 1953, p. 26)

S. J. Prais of the (U.K.) National Institute of Economic and Social Research was more explicit in his criticism. He argued that Kendall’s variable-by-variable serial correlation tests “cannot in principle throw any light on the possibility of estimating the kind of dynamic economic relationships in which economists are usually interested” (Prais 1953, p. 29).

At MIT, Paul Samuelson learned from a participant in the meeting, the economist Hendrik Houthakker, that the “outsider” statistician Kendall was “invading the economists’ turf” (Samuelson interview). Houthakker had not been impressed by Kendall’s paper. “Can there be any doubt,” he asked, “that the movements of share prices are connected with changes in dividends and the rate of interest? To ignore these determinants is no more sensible or rewarding than to investigate the statistical properties of the entries in a railway time-table without recognizing that they refer to the arrivals and departures of trains.” (Houthakker 1953)

Samuelson suspected that his fellow economists’ negative reaction to Kendall’s invocation of the “Demon of Chance” was misplaced. He recalls saying to Houthakker “We should work the other side of the street” (Samuelson interview)—in other words, seek to develop Kendall’s viewpoint. Perhaps price changes were random because any systematic patterns would be detected by speculators, exploited in their trading, and thus eliminated?

Later, Samuelson put it this way: “If one could be sure that a price will rise, it would have already risen.” (1965b, p. 41) If it is known that the price of a stock will go up tomorrow, it would already have gone up today, for who would sell it today without taking into account tomorrow’s rise? Thus, Kendall’s results could indicate that “speculation is doing its best because it leaves everybody with white noise”—in other words, with randomness (Samuelson interview).

Samuelson’s explanation of Kendall’s findings had in fact also struck at least one member of Kendall’s audience: S. J. Prais, whose criticism I have already quoted. Said Prais:

… the markets investigated [by Kendall] … are share and commodity markets [in which] any expected future changes in the demand or supply conditions are already taken into account by the price ruling in the market as a result of the activities of hedgers and speculators. There is, therefore, no reason to expect changes in prices this week to be correlated with changes next week; the only reason why prices ever change is in response to unexpected changes in the rest of the economy. (1953, p. 29)

However, the conclusions drawn by Prais and by Samuelson from their shared analysis of Kendall’s findings differed starkly. For Prais, Kendall had (p.63) made a mistake in focusing on the stock and commodity markets. Prais put it this way: “… from the point of view of investigating the dynamic properties of the [economic] system it is therefore particularly unfortunate that Professor Kendall found it necessary to choose these markets for his investigation” (1953, p. 29).

For Samuelson, in contrast, Kendall’s findings suggested that it might be worthwhile devoting at least part of his professional attention to the financial markets. Another incident sparking Samuelson’s interest was the receipt of a “round-robin letter” sent to “a number of mathematical economists” by the University of Chicago statistical theorist L. J. Savage, asking whether any of them knew of Bachelier.^{41} Samuelson did. He had heard Bachelier’s name previously from the mathematician Stanislaw Ulam (Samuelson 2000, p. 2). It was, however, Savage’s letter that prompted Samuelson to search out Bachelier’s thesis.

Samuelson was already interested in options, the central topic to which Bachelier applied his random-walk model, because he believed (wrongly, as he now acknowledges)^{42} that study of the options market might yield “a set of empirical data that would give one insight on what the belief in the market was about the future” (Samuelson interview). Samuelson had a Ph.D. student, Richard J. Kruizenga, who was finishing his thesis on options, so he pointed Kruizenga to Bachelier’s work. Kruizenga had been employing as his asset price model a discrete random walk analogous to Regnault’s, rather than Bachelier’s continuous-time random walk, and must have been discomfited to discover “after most of my analysis had been completed” (Kruizenga 1956, p. 180) that someone else had analyzed the same topic, half a century before, in a mathematically more sophisticated way.

Bachelier’s model was an “arithmetic” Brownian motion: it had the symmetry of the Gaussian or normal distribution. If the price of a bond was, for example, 100 francs, the probability on Bachelier’s model of a one-franc upward movement was the same as that of a one-franc downward movement. Given the pervasiveness of the normal distribution, employing such a model was a natural step for someone with a background in mathematics or physics. It was, for example, how Norbert Wiener—the famous MIT mathematical physicist, theorist of prediction in the presence of “noise” (random fluctuations), and pioneer of cybernetics—had modeled Brownian motion. In a physical context, the normal distribution’s symmetry seemed entirely appropriate: “… positive and negative displacements of the same size will, for physical considerations, be equally likely” (Wiener 1923, p. 134).

The main practical context in which Wiener’s cybernetics developed was anti-aircraft fire control.^{43} Samuelson worked on fire control in MIT’s (p.64) Radiation Laboratory in 1944–45. He did not enjoy the experience (“We worked … unproductively long hours because we were setting examples”), but he did “learn something about Wiener-like stochastic forecasting” (Samuelson interview).

However, Samuelson realized that he could not simply import Bachelier’s or Wiener’s arithmetic Brownian motion into finance, for it would imply a nonzero probability of a stock price becoming negative. “I knew immediately that couldn’t be right for finance because it didn’t respect limited liability.” (Samuelson interview^{44}) A bankrupt corporation’s creditors could not get their money back by suing the owners of its stock, so ownership of a stock was never a liability.

Samuelson therefore turned the physicists’ arithmetic Brownian motion into a “geometric” Brownian motion—“a law of percentage effect that maybe stocks had the same probability of doubling as of halving” (Samuelson interview), even though doubling would be a larger dollar move than halving. Bachelier’s “normal” random walk thus became a “log-normal” random walk: what followed the normal distribution was changes in the *logarithms* of prices. A price following a log-normal random walk would never become negative; limited liability was respected.^{45}
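The contrast between the two models can be sketched directly; the shock sizes, step counts, and starting price below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n_steps = 10_000

# Arithmetic (Bachelier/Wiener) random walk: add normal shocks to
# the price itself. Nothing stops the price from crossing zero.
shocks = rng.normal(loc=0.0, scale=5.0, size=n_steps)
arithmetic = 100 + np.cumsum(shocks)

# Geometric (log-normal) random walk: add normal shocks to the
# *logarithm* of the price, so a doubling is as likely as a halving
# and the price can never become negative.
log_shocks = rng.normal(loc=0.0, scale=0.05, size=n_steps)
geometric = 100 * np.exp(np.cumsum(log_shocks))

# Whether this particular arithmetic path dips below zero depends on
# the draw; the geometric path never can, since exp() is always positive.
print(arithmetic.min() < 0)
print((geometric > 0).all())
```

The positivity of the geometric walk is what Samuelson meant by the model “respecting limited liability.”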

Samuelson did not publish his work on the random-walk model in finance until 1965, but he lectured on it in the late 1950s and the early 1960s at MIT, at Yale, at the Carnegie Institute of Technology, and elsewhere.^{46} In a random-walk model, he pointed out, the average return on stocks probably had to be “sweeter” than the rate of interest that could be earned from “riskless securities,” for otherwise risk-averse investors could not be persuaded to hold stocks at all (Samuelson n.d., p. 11). However, after allowing for that “fair return,” stock-price changes had to be a “fair game” or what mathematicians call a “martingale,” because “if everyone could ‘know’ that a stock would rise in price, it would *already* be bid up in price to make that impossible.”^{47} In consequence, “it is not easy to get rich in Las Vegas, at Churchill Downs, or at the local Merrill Lynch office.”^{48}

Although Samuelson was by far the best-known economist to embrace the random-walk model, other analysts did so too. They included Harry V. Roberts, a statistician at the University of Chicago Graduate School of Business, and M. F. M. Osborne, an astrophysicist employed by the U.S. Naval Research Laboratory. Roberts knew of Working’s and Kendall’s papers, but Osborne reached the conclusion that there was “Brownian motion in the stock market” empirically, apparently without knowing of Bachelier, Working, or Kendall.^{49} Like Samuelson, Osborne concluded that a log-normal random (p.65) walk was a better model of stock-market prices than the physicists’ arithmetic Brownian motion.^{50}

By 1964, there was a sufficient body of work on the random character of stock-market prices to fill a 500-page collection of readings; it was edited by Paul Cootner of MIT. The definitive statement of the view of financial markets that increasingly underlay this research came in a 1970 article in the *Journal of Finance* by Eugene Fama of the University of Chicago. That view was that financial markets were “efficient”—in other words, that “prices always ‘fully reflect’ available information” (Fama 1970, p. 383). Drawing on a distinction suggested by his colleague Harry Roberts, Fama distinguished three meanings of “available information,” the first two of which correspond roughly to successive phases in the history of the relevant research.

The first meaning of “available information” is the record of previous prices. If the information contained in that record is fully reflected in today’s prices, it will be impossible systematically to make excess profits by use of the record. A market in which it is impossible to do that—for example, because prices follow a random walk—is, in Fama’s terminology, “weak form” efficient, and much of the work on the random-walk hypothesis in the 1950s and the 1960s was an attempt to formulate and to test this form of efficiency.

Different authors meant somewhat different things by “random walk,” but the most common meaning was that “successive price changes are independent, identically distributed random variables. Most simply, this implies that the series of price changes has no memory, that is, the past cannot be used to predict the future in any meaningful way.” (Fama 1965, p. 34) That property implied “weak form” efficiency. However, the latter was also implied by martingale (“fair game”) models that were more general than the original conception of random walks. After these martingale formulations were introduced in the mid 1960s by Samuelson (and also by Benoit Mandelbrot) they tended to replace random walks as the preferred mathematical formulations of weak-form efficiency.^{51}

The second meaning of “available information,” and the subject of increasing research attention from the late 1960s on, was “other information that is obviously publicly available (e.g., announcements of annual earnings, stock splits, etc.)” (Fama 1970, p. 383). If excess profits could not be made using this class of information because prices adjusted to take it into account effectively instantaneously on its arrival, a market was “semi-strong form” efficient. By 1970, “event studies,” such as Ball and Brown 1968 and Fama, Fisher, Jensen, and Roll 1969, were beginning to accumulate evidence that the U.S. stock market was efficient in this second sense too.

(p.66) The third category of “available information” was that to which only certain groups, such as corporate insiders, were privy. If this kind of information was always reflected in prices, and insiders could not make excess profits on the basis of it, a market was “strong form” efficient. “We would not, of course, expect this [strong form] model to be an exact description of reality,” wrote Fama (1970, p. 409), and indeed there was already evidence against it.

Fama’s Ph.D. student Myron Scholes had examined the effects on prices of large sales of stock. While Scholes’s findings were broadly consistent with “semi-strong” efficiency, they also suggested that a corporate insider could have “higher expected trading profits than others because he has monopolistic access to some information.”^{52}

The “specialists” on the floor of the New York Stock Exchange also enjoyed “monopolistic access” to information. (As was noted in chapter 1, a specialist matches and executes buy and sell orders and is expected to trade on his or her firm’s own account if there is an imbalance.) Specialists, Fama suggested (1970, pp. 409–410), could make excess profits because of their private access to the “book” of unfilled orders.^{53}

Corporate insiders and New York’s “specialists” were, however, limited and probably exceptional cases. Their genuine advantages in respect to information were not shared by ordinary investors, nor, probably, by the vast bulk of investment professionals, such as fund managers. “There is,” Fama asserted, “no evidence that deviations from the strong form of the efficient markets model permeate down any further through the investment community.” (1970, pp. 415–416)

# Conclusion

The laying out of the efficient-market hypothesis by Fama was the capstone of the transformation of the academic study of finance that had occurred in the United States in the 1950s and the 1960s. What had started as separate streams—the Modigliani-Miller “irrelevance” propositions, portfolio theory and the Capital Asset Pricing Model, the random-walk model—were by 1970 seen as parts of a largely coherent view of financial markets.

For instance, the hypothesis of market efficiency explained the counterintuitive claim that changes in the prices of stocks followed a random walk. The hypothesis posited that prices reflect all publicly available existing information, including anticipation of those future events that are predictable. (To take a hypothetical example, the prices in winter of the stocks of ice cream manufacturers will reflect the knowledge that demand for their product will rise in the summer.) Thus, only genuinely new information—for example, events that (p.67) could not have been anticipated—can move prices. By definition, however, that information was unpredictable and thus “random.”

The Modigliani-Miller propositions, the Capital Asset Pricing Model, and the efficient-market hypothesis were also interwoven in more detailed ways. For example, the CAPM’s co-developer, Jack Treynor, showed that the model implied the Modigliani-Miller proposition that capital structure was irrelevant to total market value (Treynor 1962). Similarly, the CAPM gave a systematic account of how much “sweeter” than the rate of interest on riskless investments stocks’ expected returns had to be, given their levels of risk, for rational risk-averse investors to include those stocks in their portfolios.

The efficient-market hypothesis (and at least the more sophisticated versions of the random-walk model) did not rule out stock returns that were on average positive, indeed “sweeter” than the riskless rate of interest. Efficient-market theorists insisted, however, that higher expected returns were always accompanied, as they were in the CAPM, by higher levels of risk.

Indeed, as was noted in chapter 1, the Capital Asset Pricing Model was incorporated into tests of the efficient-market hypothesis. The systematic “excess” returns that the hypothesis ruled out had to be “excess” relative to some benchmark, and typically the CAPM was used as the benchmark: an “excess” return on an investment was one systematically greater than the return implied by the model as appropriate to the investment’s level of market risk.
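The benchmark calculation described here is the standard CAPM formula; the figures below are invented purely for illustration:

```python
def capm_expected_return(riskless_rate, market_return, beta):
    """Return implied by the CAPM for a given level of market risk (beta)."""
    return riskless_rate + beta * (market_return - riskless_rate)

# Invented figures: a 4% riskless rate, a 10% expected market return,
# and a stock with a beta of 1.5 that actually returned 15%.
benchmark = capm_expected_return(0.04, 0.10, beta=1.5)
excess = 0.15 - benchmark

# The "excess" return is what remains after subtracting the return
# appropriate to the stock's level of market risk.
print(round(benchmark, 4), round(excess, 4))
```

A return systematically above `benchmark` is the kind of anomaly that efficient-market tests of this period were designed to detect.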

By the late 1960s, the descriptive, institutional study of finance had in the United States been eclipsed by the new, analytical, mathematical approaches discussed in this chapter. The financial markets had been captured for economics, so to speak. Other disciplines—Dewing’s invocations of history and psychology, “behavioral” studies of organizations in the tradition of Herbert Simon—were left on the margins. Modigliani, Miller, and the authors of the Capital Asset Pricing Model and the efficient-market hypothesis shared the view that “securities prices are determined by the interaction of self-interested rational agents” (LeRoy 1989, p. 1613). They were, likewise, all convinced that the processes involved had to be analyzed, not simply described, and that the necessary analytical tools were those of economics, not those of other disciplines.

It was a considerable intellectual transformation, but as we shall see in the next chapter it was also more than that. It had institutional foundations; it represented a change in focus, not just in style; it met fierce opposition; and in the 1970s it began to affect its object of study: the financial markets.

## Notes:

(1.) Hunt did criticize Dewing for insufficient attention to some aspects of economics, especially theories of the business cycle (Hunt 1943, p. 309).

(2.)
See also Kavesh, Weston, and Sauvain 1970, p. 5. These authors note that the *Journal of Finance*’s “early years were filled with descriptive articles, with a heavy ‘institutional’ flavor—largely reflecting the type of research being carried out in those years.” Occasional symposia and debates published in the *Journal of Finance* are a useful way of tracking the field’s preoccupations. See, for example, the 1950 discussion of “Materials and Methods of Teaching Business Finance” (volume 5), the 1955 discussion of “Theory (p.308) of Business Finance” (volume 10), and the 1967–68 discussion of the “State of the Finance Field” (volumes 22 and 23).

(4.) On the influence of Carnegie’s divide on the transaction-cost economics of Oliver Williamson, who was a Ph.D. student there between 1960 and 1963, see Klaes 2001. On the divide more generally and its possible connections to rational-expectations theory, see Sent 1998, pp. 8–9.

(5.) Friedman and Friedman 1998, pp. 158–159.

(6.) As noted below, for Modigliani the spark was attendance at a 1950 conference organized by the National Bureau of Economic Research, at which he heard David Durand present the paper that became Durand 1952. As discussed in the text, Durand explored, but ultimately dismissed, the idea that capital structure might be irrelevant. Modigliani felt the proposition might hold despite Durand’s belief that it did not. (Modigliani’s own contribution to the meeting—Modigliani and Zeman 1952—was further from his eventual position than Durand’s paper.) It took Modigliani several years of occasional reflection on the problem to come up with “a really convincing proof” that the proposition held (Modigliani interview). When he did, he described the proposition that capital structure was irrelevant to a class he was teaching at Carnegie that Miller was also attending. “Merton Miller said ‘I have the evidence for you.’” (Modigliani interview) Miller had set his graduate students the task of finding empirically the best capital structure from the viewpoint of minimizing a corporation’s cost of capital, and they had been unable to do so: “They couldn’t find any optimum.” (Miller interview)

(7.) Modigliani and Miller (1958, p. 268) defined a firm’s “average cost of capital” as the “ratio of its expected return to the [total] market value of all its securities” (bonds and stocks).

(8.) A broadly similar theoretical account of portfolio selection was developed independently by the British economist A.D. Roy (see Roy 1952), but Roy did not develop a fleshed-out “operations research” analysis analogous to Markowitz’s. For Roy’s work and other antecedents of Markowitz’s analysis, see Markowitz 1999 and Pradier 2000.

(10.)
Thus Robert G. Wiese of Scudder, Stevens & Clark wrote in 1930: “Theoretically, *the proper price of any security, whether a stock or a bond, is the sum of all future income payments discounted at the current rate of interest in order to arrive at the present value*” (Wiese 1930, p. 5). Williams quoted Wiese’s article to this effect (Williams 1938, p. 55).

(11.) If the interest rate is 5 percent per annum, for instance, a dollar received now can be turned into $1.05 in a year’s time. The present value of $21 to be received in a year’s time is thus $21/1.05 or $20. (To check this, note that adding a year’s interest at 5 percent to $20 produces $21.)
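The arithmetic in this note can be checked directly (the 5 percent rate and $21 payment are the note's own example):

```python
import math

interest_rate = 0.05    # 5 percent per annum, the note's example rate
future_payment = 21.0   # dollars, to be received in one year

# Present value: discount next year's payment by one year's interest.
present_value = future_payment / (1 + interest_rate)

# Check, as the note suggests: adding a year's interest at 5 percent
# to the present value recovers the future payment.
assert math.isclose(present_value, 20.0)
assert math.isclose(present_value * (1 + interest_rate), future_payment)
```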

(p.309) (13.) Another source recommended to Markowitz by Ketchum was Arthur Wiesenberger’s annual survey of investment companies (see, e.g., Wiesenberger 1942). The attraction of such companies, if well run, was precisely the “intelligent diversification of investment” permitted by their wide-ranging holdings of different stocks (Wiesenberger 1942, p. 5).

(14.) Depending on how the portfolio selection problem was formulated, either a constraint (the maximum level of risk) was a quadratic function or the function to be minimized was quadratic.

(16.)
It is more elegant to present Markowitz’s method, as he did, in terms of covariances of return, but I use the term “correlation” because it will be more familiar to a wider audience. The correlation and the covariance of two variables are related by a simple formula. If the two variables have standard deviations σ_{1} and σ_{2}, their correlation is ρ_{12}, and their covariance is σ_{12}, then σ_{12} = ρ_{12}σ_{1}σ_{2}. See, e.g., Markowitz 1952, p. 80.

(17.)
As Markowitz worked on the problem further he realized that the shape of the “inefficient” boundary—“the side which *maximizes* variance for given return”—was not correctly drawn in this, his first published graphical representation of the attainable set (letter to D. MacKenzie, February 10, 2004). He corrected its shape in the equivalent figure in Markowitz 1956 (p. 111). The shape of the attainable set is discussed at length in Markowitz and Todd 2000 (pp. 225–239).

(19.) Durand 1960, pp. 235–236. “Rational Man” and “Perfect Computing Machine” are Markowitz’s phrases and capitalizations; see Markowitz 1959, p. 206.

(25.) “I was doing things mostly graphically,” says Sharpe (interview).

(26.) Sharpe also assumed that investors could borrow or lend at the riskless rate of interest, and in the 1964 version of his model short sales were not permitted (Sharpe 1964, p. 433).

(27.)
I am extremely grateful to John P. Shelton, who refereed Sharpe’s paper for the *Journal of Finance*, for a most helpful account of practitioner responses to Sharpe’s work (letter to D. MacKenzie, June 2, 2004).

(p.310) (28.) On Alchian’s membership of the Mont Pèlerin Society, see Mirowski 2002, p. 318. For Alchian’s methodological views, see Alchian 1950. This paper was cited by Friedman (1953a, p. 19) as similar in “spirit” and “approach” to one of the main strands of Friedman’s argument.

(29.) William F. Sharpe, email to D. MacKenzie, January 29, 2004.

(33.) Regnault 1863, pp. 94–95, my translation, capitalization in original deleted. I am grateful to Franck Jovanovic for a copy of Regnault 1863.

(35.) Bachelier 1900, p. 21. In this and the subsequent quotation my translation follows that of A. James Boness (Bachelier 1964).

(36.)
Denoting the probability that the price of the bond at time *t* will be between *x* and *x* + *dx* by *p*_{x,t}*dx*, Bachelier showed that his integral equation was satisfied by *p*_{x,t} = (*H*/√*t*)exp[−π*H*^{2}*x*^{2}/*t*], where *H* is a constant. See Bachelier 1900, p. 38.
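Read with a single minus sign in the exponent, Bachelier's density is a Gaussian in *x* with variance *t*/(2π*H*²), and it integrates to one for any positive *H* and *t*. A quick numerical check (the values of *t*, and the default *H* = 1, are arbitrary):

```python
import numpy as np

def bachelier_density(x, t, H=1.0):
    """Bachelier's p_{x,t} = (H / sqrt(t)) * exp(-pi * H**2 * x**2 / t)."""
    return (H / np.sqrt(t)) * np.exp(-np.pi * H**2 * x**2 / t)

# The density should integrate to 1 for any t > 0. Check by Riemann
# sum over a grid wide enough that the Gaussian tails are negligible.
x = np.linspace(-50.0, 50.0, 200_001)
dx = x[1] - x[0]
for t in (0.5, 1.0, 4.0):
    total = bachelier_density(x, t).sum() * dx
    assert abs(total - 1.0) < 1e-6
```

Note that the variance of the density grows linearly with *t*, the signature property of Brownian motion.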

(37.) The Moscow State University econometrician Evgeny Evgenievich Slutsky also investigated the possibility of random fluctuations being the source of “cyclic processes” such as business cycles. Working knew of Slutsky’s work (Working 1934, p. 11), but probably second-hand: it did not appear in English until 1937 (Slutsky 1937).

(38.) See Pearson 1905. On Pearson’s interests in the application of statistical methods to biology and to eugenics, see MacKenzie 1981 and Porter 2004. Readers of Galison 1997 will know that there are deep issues as to what “random” means in contexts such as “random number” tables, but those can be set aside for current purposes.

(39.) Given this use of the Stanford data, it might be assumed that Kendall knew of Working’s analyses, but Kendall (1953) does not cite him.

(40.) Cotton was the exception. The first-order (month and immediately previous month) serial correlations were substantial, ranging from 0.2 to 0.4 (Kendall 1953, p. 23).

(41.) Samuelson n.d., p. 2. This unpublished typescript titled “The Economic Brownian Motion” is in Samuelson’s personal files, and at Samuelson’s request was kindly retrieved for me by his assistant Janice Murray. It probably dates from around 1960 or shortly thereafter, because Samuelson’s “acquaintanceship” with Bachelier’s work is described as dating back “only half a dozen years” (n.d., p. 2). As noted in the text, Kruizenga (and hence Samuelson) certainly knew of Bachelier’s work by 1956, and possibly somewhat earlier.

(42.) In modern option theory, the price of options is seen as determined by considerations of arbitrage, not by beliefs about whether stock prices will rise or fall.

(44.) Samuelson’s attention may first have been drawn to the need to revise Bachelier’s arithmetic Brownian motion by the fact that on that assumption the price of an option is proportional to the square root of time to expiration, and so the price of an option of long enough duration can rise above the price of the stock on which it is an option. A geometric Brownian motion “get[s] rid of the paradox” (Samuelson n.d., pp. 3–4 and 13–14).

(45.) Whereas the normal distribution is symmetrical, the log-normal is skewed, with a long positive “tail,” suggesting it might be useful in the analysis of phenomena such as the distribution of income, and it was relatively well-known to economists. Among the pioneers of the use of the log-normal distribution was the French engineer and economist Robert Gibrat (Armatte 1998). Samuelson had “long known” of Gibrat’s work (letter to D. MacKenzie, January 28, 2004). In 1957, J. Aitchison and J. A. C. Brown of the University of Cambridge published what was in effect a textbook of the log-normal distribution and its economic applications (Aitchison and Brown 1957).

(46.) Samuelson n.d. is most likely the text of one or more of these lectures, and Samuelson recalls using its opening lines in a 1960 talk to the American Philosophical Society.

(48.) Samuelson 1973, p. 5. Churchill Downs is the Louisville racetrack on which the Kentucky Derby is run.