
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA

Table of Contents

Papers from a National Academy of Sciences Colloquium on Science, Technology, and the Economy

Science, technology, and economic growth
Ariel Pakes and Kenneth L. Sokoloff
12655–12657

Trends and patterns in research and development expenditures in the United States
Adam B. Jaffe
12658–12663

Measuring science: An exploration
James Adams and Zvi Griliches
12664–12670

Flows of knowledge from universities and federal laboratories: Modeling the flow of patent citations over time and across institutional and geographic boundaries
Adam B. Jaffe and Manuel Trajtenberg
12671–12677

The future of the national laboratories
Linda R. Cohen and Roger G. Noll
12678–12685

Long-term change in the organization of inventive activity
Naomi R. Lamoreaux and Kenneth L. Sokoloff
12686–12692

National policies for technical change: Where are the increasing returns to economic research?
Keith Pavitt
12693–12700

Are the returns to technological change in health care declining?
Mark McClellan
12701–12708

Star scientists and institutional transformation: Patterns of invention and innovation in the formation of the biotechnology industry
Lynne G. Zucker and Michael R. Darby
12709–12716

Evaluating the federal role in financing health-related research
Alan M. Garber and Paul M. Romer
12717–12724

Public-private interaction in pharmaceutical research
Iain Cockburn and Rebecca Henderson
12725–12730

Environmental change and hedonic cost functions for automobiles
Steven Berry, Samuel Kortum, and Ariel Pakes
12731–12738

Sematech: Purpose and Performance
Douglas A. Irwin and Peter J. Klenow
12739–12742

The challenge of contracting for technological information
Richard Zeckhauser
12743–12748

An economic analysis of unilateral refusals to license intellectual property
Richard J. Gilbert and Carl Shapiro
12749–12755


This paper serves as an introduction to the following papers, which were presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

Science, technology, and economic growth

ARIEL PAKES* AND KENNETH L. SOKOLOFF†

*Department of Economics, Yale University, New Haven, CT 06520; and †Department of Economics, University of California, Los Angeles, CA 90095

Systematic study of technology change by economists and other social scientists began largely during the 1950s, emerging out of a concern with improving our quantitative knowledge of the sources of economic growth. The early work was directed at identifying the importance of different factors in generating growth and relied on highly aggregated data. However, the finding that increases in the stocks of conventional factors of production (capital and labor) accounted for only a modest share of economic growth stimulated more detailed research on the processes underlying technological progress, and led to major advances in conceptualization, data collection, and measurement. It also focused attention on theoretical research, which was clarifying why market mechanisms were not as well suited to allocate resources for the production and transmission of knowledge as they were for more traditional goods and services. The intellectual impetus that these studies provided contributed to an increased appreciation by policy-makers of the economic significance of science and technology, and a more intensive investigation of its role in phenomena as diverse as: the slowdown of productivity advance in the West, the extreme variation in rates of growth across the world, and the increased costs of health care.

In organizing the National Academy of Sciences colloquium on “Science, Technology, and the Economy,” we sought to showcase the broad range of research programs now being conducted in the general area of the economics of technology, as well as to bring together a group of scholars who would benefit from dialogues with others whose subjects of specialization were somewhat different from their own. While the majority of participants were economists, there was also representation from a number of other disciplines, including political science, medicine, history, law, sociology, physics, and operations research. The papers presented at this colloquium have been shortened and revised for publication here.

Expenditure on research and development (R&D) is typically considered to be the best single measure of the commitment of resources to inventive activity on the improvement of technology. Accordingly, the colloquium began with a background paper by Adam Jaffe (1), which provided an overview of trends and patterns in R&D activity since the early 1950s, as well as some international comparisons. He discussed how federal spending on R&D is roughly the same today in real terms as it was in the late 1960s, but that expenditures by industry have nearly tripled over that period—raising its share of all funding for R&D from roughly 40% to 60%. Basic research has fared relatively well and increased its share of the total funds for R&D, with universities being the primary beneficiary of the marked shift of federal spending in this direction. From an international perspective, what stands out is that the historic pattern of United States leadership in R&D expenditures as a share of gross domestic product has been eroding in recent years; and that the United States devotes a much higher proportion of its R&D expenditures to defense and to life sciences than do counterparts like Germany, Japan, France, and the United Kingdom.

Following Jaffe’s overview were two talks on projects aimed at improving on our measures of the quantity and value of contributions to knowledge. The first, by James Adams and Zvi Griliches (2), examined how the relationship between academic research expenditures and scientific publications, unweighted or weighted by citations, has varied across disciplines and over time. As they noted, if the returns to academic science are to be estimated, we need good measures of the principal outputs—new ideas and new scientists. Although economists have worked extensively on methods to value the latter, much less effort has been devoted to developing useable measures of the former. The Adams-Griliches paper also provides a more general discussion of the quality of the measures of output that can be derived from data on paper and citation counts.

Adam Jaffe and Manuel Trajtenberg (3) reported on their development of a methodology for the use of patent citations to investigate the diffusion of technological information over geographic space and time. In illustrating the opportunities for linking inventions and inventors that the computerization of patent citation data provides, they found: substantial localization in citations, lower rates of citation for federal patents than for corporate, a higher fertility or value of university patents, and citation patterns across technological fields that conform to prior beliefs about the pace of innovation and the significance of gestation lags.

National laboratories have come under increasing scrutiny in recent years. Although they perform a much smaller share of United States R&D than they did a generation ago and have been the target of several “restructuring” programs, these laboratories continue to claim nearly one-third of the federal R&D budget. In their paper, Linda Cohen and Roger Noll (4) reviewed the historic evolution of the national laboratories, and explored whether there is an economic and political basis for sustaining them at their current size. They are deeply pessimistic about the future of the laboratories in this era of declining support for defense-related R&D, portraying them as lacking potential for cooperative enterprises with industry, as well as political support.

Scholars and policymakers often ask about the significance and effects of trade in intellectual capital. Naomi Lamoreaux and Kenneth Sokoloff (5) offered some historical perspective on this issue, presenting research on the evolution of trade in patented technologies over the late nineteenth and early twentieth centuries. Employing samples of both patents and assignments (contracts transferring rights to patents), they found evidence that a class of individuals specialized in inventive activity emerged long before the rise of industrial research laboratories. This rise of specialized inventors was

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.

Abbreviation: R&D, research and development.


related to the increasing opportunities for extracting the returns to discoveries by selling or licensing off the rights, as opposed to having to exploit them directly. They also found that intermediaries and markets, supportive of such trade in technological information by reducing transaction costs, appear to have evolved first in geographic areas with a record of high rates of patenting, and that the existence of these and like institutions may in turn have contributed to the persistence over time of geographic pockets of high rates of inventive activity through self-reinforcing processes.

The paper by Keith Pavitt (6) was perhaps more explicitly focused on the design of technology policy than any other presented at the colloquium. Making reference both to the weak association across nations between investment in R&D and economic performance, and to the paucity of evidence for a direct technological benefit to the information provided by basic research, he argued that the major value of such activity is not in the provision of codified information, but in the enhancement of capacity to solve technological problems. This capacity involves tacit research skills, techniques and instrumentation, and membership in national and international research networks. In his view, the exaggerated emphasis on the significance of codified information has encouraged misunderstanding about the importance of the international “free-rider” problem and a lack of appreciation for institutional and labor policies that would promote the demand for skills and institutional arrangements to solve complex technological problems.

One afternoon of the colloquium was devoted to papers on economic issues in medical technology. Many economists have long been concerned that the structures of incentives in the systems of health care coverage used in the United States have encouraged the development of medical technologies whose value on the margin is small, especially relative to their cost. The paper by Mark McClellan (7) presented new evidence on the marginal effects of intensive medical practices on outcomes and expenditures over time, using data on the treatment of acute myocardial infarction in the elderly from 1984 through 1991 from a number of hospitals. In general, McClellan found little evidence that the marginal returns to technological change in heart attack treatment (catheterization is the focus here) have declined substantially; indeed, on the surface, the data suggest better outcomes and zero net expenditure effects. Because a substantial fraction of the long-term improvement in mortality at catheterization hospitals is evident within 1 day of acute myocardial infarction, however, McClellan suggests that procedures other than catheterization, but whose adoption at hospitals was related to that of catheterization, may have accounted for some of the better outcomes.

Lynne Zucker and Michael Darby (8) followed with a discussion of their studies of the processes by which scientific knowledge comes to be commercially exploited, and of the importance of academic researchers to the development of the biotechnology industry. Employing a massive new data set matching detailed information about the performance of firms with the research productivity of scientists (as measured by publications and citations), they found a very strong association between the success of firms and the extent of direct collaboration between firm scientists and highly productive academic scientists. The evidence is consistent with the view that “star” bioscientists were highly protective of their techniques, ideas, and discoveries in the early years of the revolution in genetic sequencing, and of the significance of bench-level working ties for the transmission of technological information in this field. Zucker and Darby also suggest that the research productivity of the academic scientists may have been raised by their relationships with the firms because of both the opportunities for commercialization and the additional resources made available for research.

The paper by Alan Garber and Paul Romer (9) begins by reviewing the arguments that lead economists and policy makers to worry that market allocation mechanisms, if left alone, may not allocate an optimal amount of funds to research activity. They then consider the likely costs and benefits of various ways of changing the institutional structures that determine the returns to research, including strengthening property rights for innovative output and tax subsidy schemes. The discussion, which is weighted to medical research, points out alternative ways of implementing these schemes and considers how their relative efficacies are likely to differ with the research environment.

Iain Cockburn and Rebecca Henderson (10) followed with an empirical investigation of the interaction between publicly and privately funded research in pharmaceuticals. Using a confidential data set that they gathered, they begin by showing that for their sample of 15 important new drugs there was a long and variable lag between the date of the key enabling scientific discovery and the market introduction of the resultant new chemical entity (between 11 and 67 years). In at least 11 of the 14 cases the basic discoveries were done by public institutions, but in 12 of those same cases the major compound was synthesized at a private firm, suggesting a “downstream” relationship between the two types of research institutions. They stress, however, that private sector research scientists often publish their results and frequently co-author with scientists from public sector institutions, suggesting that there are important two-way flows of information. There is also some tentative evidence that the research departments of firms that have stronger ties to the public research institutes are more productive.

Steve Berry, Sam Kortum, and Ariel Pakes (11) analyze the impact of the lowering of emission standards and the increase in gas prices on the characteristics and the costs of producing automobiles in the 1970s. Using their construct of a “hedonic” cost function, a function that relates the costs of producing an automobile to its characteristics, they find that the catalytic converter technology that was introduced after the lowering of emissions standards in 1975 did not increase the costs of producing an auto (though it may have hurt unmeasured performance characteristics). However, the more sophisticated three-way and closed-loop catalysts and the fuel injection technologies, introduced following the further lowering of emissions standards in 1980, increased costs significantly. They also show that the miles per gallon rating of the new car fleet increased significantly over this period, with the increases occurring primarily as a result of the introduction of new car models. Though the new models tended to be smaller than the old, there was also an increase in the miles per gallon in given horsepower-weight classes. This, together with striking increases in patenting in patent classes that deal with combustion engines following the 1973 and 1979 gas price hikes, suggests a significant technological response, which allowed us to produce more fuel-efficient cars at little extra cost.
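To fix ideas, a hedonic cost function of the kind just described can be written, in one common log-linear form (a generic sketch for orientation only, not the specification Berry, Kortum, and Pakes actually estimate), as

\ln c_{jt} = x_{jt}' \beta_t + \omega_{jt},

where c_{jt} is the cost of producing car model j in year t, x_{jt} is a vector of its characteristics (for example size, horsepower, miles per gallon, and emissions equipment), \beta_t converts characteristics into costs, and \omega_{jt} is an unobserved cost shock. Movements in \beta_t over time are then what summarize how regulation and technical change shifted the cost of delivering a given bundle of characteristics.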

Since the founding of Sematech in 1987, there has been much interest in whether this consortium of United States semiconductor producers has been effective in achieving the goal of promoting the advances of United States semiconductor manufacturing technology. The original argument for the consortium, which has received substantial support from the federal government, was based on the ideas that it would raise the return to, and thus boost, spending on investment in process R&D by increasing the extent to which new knowledge would be internalized by the firms making the investments, and increase the social efficiency of the R&D conducted by enabling firms to pool their R&D resources, share results, and reduce duplication. Douglas Irwin and Peter Klenow (12) have been studying whether these expectations were fulfilled, and here review their findings that: there are steep learning curves in production of both memory chips and microprocessors;


there exist efficiency gains from joint ventures; and that Sematech seems to have induced member firms to lower their expenditures on R&D. This evidence is consistent with the notion that Sematech facilitates more sharing and less duplication of research, and helps to explain why member firms have indicated that they would fully fund the consortium in the absence of the government financing. It is difficult to reconcile this, however, with the view that Sematech induces firms to do more semiconductor research.

In his presentation, Richard Zeckhauser (13) suggested that economists and analysts of technology policy often overestimate the degree to which technological information is truly a public good, and that this misunderstanding has led them to devote inadequate attention to the challenges of contracting for such information. Economists have long noted the problems in contracting, or agency, that arise from the costs of verifying states of the world, or from the fact that potential outcomes are so numerous that it is not possible to prespecify contingent payments. All of these problems are relevant in contracting for technological information, and constitute impediments to the effectiveness of invention and technological diffusion. Zeckhauser discusses how government, in its role as enforcer and definer of property rights in intellectual capital as well as in its tax, trade, and antitrust policies, has a major impact on the magnitude of contracting difficulties and the way in which they are resolved. United States policies toward intellectual capital were developed for an era of predominantly physical products, and it is perhaps time for them to be reexamined and refashioned to meet current technological realities.

As long as authorities have acted to stimulate invention by granting property rights to intellectual capital, they have been plagued by the questions of when exploitation of such property rights comes to constitute abuse of monopoly power or an antitrust violation, and what their policies should be about such cases. The final paper presented at the colloquium offered an economic analysis of a contemporary policy problem emanating from this general issue—whether or not to require holders of intellectual property to offer licenses. As Richard Gilbert and Carl Shapiro (14) make clear, the effects of compulsory licensing on economic efficiency are ambiguous—for any kind of capital. They show that an obligation to offer licenses does not necessarily increase economic welfare even in the short run. Moreover, as is well recognized, obligations to deal can have profound adverse consequences for investment and for the creation of intellectual property in the long run. Equal access (compulsory licensing in the case of intellectual property) is an efficient remedy only if the benefits of equal access outweigh the regulatory costs and the long run disincentives for investment and innovation. This is a high threshold, particularly in the case of intellectual property.

1. Jaffe, A. (1996) Proc. Natl. Acad. Sci. USA 93, 12658–12663.
2. Adams, J. & Griliches, Z. (1996) Proc. Natl. Acad. Sci. USA 93, 12664–12670.
3. Jaffe, A. & Trajtenberg, M. (1996) Proc. Natl. Acad. Sci. USA 93, 12671–12677.
4. Cohen, L. & Noll, R. (1996) Proc. Natl. Acad. Sci. USA 93, 12678–12685.
5. Lamoreaux, N.R. & Sokoloff, K.L. (1996) Proc. Natl. Acad. Sci. USA 93, 12686–12692.
6. Pavitt, K. (1996) Proc. Natl. Acad. Sci. USA 93, 12693–12700.
7. McClellan, M. (1996) Proc. Natl. Acad. Sci. USA 93, 12701–12708.
8. Zucker, L. & Darby, M. (1996) Proc. Natl. Acad. Sci. USA 93, 12709–12716.
9. Garber, A. & Romer, P. (1996) Proc. Natl. Acad. Sci. USA 93, 12717–12724.
10. Cockburn, I. & Henderson, R. (1996) Proc. Natl. Acad. Sci. USA 93, 12725–12730.
11. Berry, S., Kortum, S. & Pakes, A. (1996) Proc. Natl. Acad. Sci. USA 93, 12731–12738.
12. Irwin, D. & Klenow, P. (1996) Proc. Natl. Acad. Sci. USA 93, 12739–12742.
13. Zeckhauser, R. (1996) Proc. Natl. Acad. Sci. USA 93, 12743–12748.
14. Gilbert, R. & Shapiro, C. (1996) Proc. Natl. Acad. Sci. USA 93, 12749–12755.


This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

Trends and patterns in research and development expenditures in the United States

ADAM B. JAFFE*
Department of Economics, Brandeis University and National Bureau of Economic Research, Waltham, MA 02254–9110

ABSTRACT This paper is a review of recent trends in United States expenditures on research and development (R&D). Real expenditures by both the government and the private sector increased rapidly between the mid-1970s and the mid-1980s, and have since leveled off. This is true of both overall expenditures and expenditures on basic research, as well as funding of academic research. Preliminary estimates indicate that about $170 billion was spent on R&D in the United States in 1995, with ≈60% of that funding coming from the private sector and about 35% from the federal government. In comparison to other countries, we have historically spent more on R&D relative to our economy than other advanced economies, but this advantage appears to be disappearing. If defense-related R&D is excluded, our expenditures relative to the size of the economy are considerably smaller than those of other similar economies.

This paper is an overview of historic trends and current patterns of research and development (R&D) activity in the United States. Most of the information contained herein comes from the National Science Foundation (NSF) (1). (I am indebted to Alan Rappaport and John Jankowski of NSF for sharing with me preliminary, unpublished statistics from the 1996 edition of Science and Engineering Indicators, which had not been released when this paper was prepared.) The background is divided into three sections: (i) overall spending; (ii) basic and academic research; and (iii) international comparisons.

OVERALL R&D SPENDING

Total spending on R&D in the United States in 1994 was $169.6 billion, and is estimated to be $171 billion in 1995 (all numbers provided herein for 1994 are preliminary and for 1995 are preliminary estimates). The 1994 number is about 2.5% of Gross Domestic Product (GDP). For comparison, 1994 expenditure on gross private domestic investment was $1038 billion, of which $515 billion was new producers’ durable equipment; state and local government spending on education was approximately $400 billion. Thus, among the major forms of social investment, R&D is the smallest; however, it is a nontrivial fraction of the total.

There are myriad ways to decompose this total spending, including: by source of funding; by performer of the research or development; by basic research, applied research and development; and by field of science and engineering.

All possible decompositions are beyond the scope of this paper; however, all can be found in some form in ref. 1. Fig. 1 represents an attempt to summarize the current data along the first two dimensions. The horizontal bars correspond to the four major performers of research: (i) private firms (“industry”), (ii) federal labs, including Federally Funded Research and Development Centers (FFRDCs), (iii) universities and colleges, and (iv) other nonprofits. The vertical divisions correspond to the three major sources of funding for R&D, with industry funds on the left, federal funds in the middle, and other funds (including state and local governments) on the right.

Overall, industry provides about 60% of all R&D funds, and the federal government provides about 35%. Industry performs about 70% of the R&D, federal labs and universities each perform about 13%, and other nonprofits perform about 3%. By far the biggest source-performer combination, with just shy of $100 billion, is industry-funded, industry-performed research. Federally funded research at private firms and the federal labs each account for about $22 billion.† Universities performed about another $22 billion; of this amount, about 60% was funded by the federal government, about a third was funded by universities’ own funds, state and local governments, or other sources, and about 7% came from industry. Other nonprofits performed a total of about $6 billion, with the funding breakdown roughly similar to universities.
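For orientation, the approximate source-by-performer figures quoted above can be arranged as a small matrix and checked against the roughly $171 billion total. The short Python sketch below uses rounded values read off the text, not the NSF's published table, so it is an illustrative cross-check only:

# Approximate 1995 U.S. R&D flows in billions of dollars, rounded from the
# percentages and dollar amounts quoted in the text (illustrative only).
flows = {
    ("industry", "industry"): 100,           # industry-funded, industry-performed
    ("federal", "industry"): 22,             # federally funded research at private firms
    ("federal", "federal labs"): 22,
    ("federal", "universities"): 13,         # ~60% of the ~$22 billion performed at universities
    ("other", "universities"): 7,            # universities' own funds, state and local, other
    ("industry", "universities"): 2,         # ~7% of university-performed R&D
    ("all sources", "other nonprofits"): 6,
}

total = sum(flows.values())
performed = {}
for (source, performer), billions in flows.items():
    performed[performer] = performed.get(performer, 0) + billions

print(f"implied total: about ${total} billion (text: roughly $171 billion for 1995)")
for performer, billions in sorted(performed.items(), key=lambda kv: -kv[1]):
    print(f"{performer}: about ${billions} billion, {100 * billions / total:.0f}% of performance")

Running this reproduces the shares in the text to within rounding: industry performs about 70% of the total, federal labs and universities about 13% each, and other nonprofits about 3%.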

Fig. 2 provides the same breakdown for 1970 (the picture for 1953 is very similar to that for 1970). It shows a striking contrast, with a much larger share of funding provided by the federal government, both for the total and for each performer. In 1970, the federal government provided 57% of total funding, including 43% of industry-performed research. The biggest difference in the performance shares is between federal labs and universities; whereas the two now have about equal shares, in 1970 the labs performed about twice as much R&D as universities.

These changes in shares occurred in the context of large changes in the totals. These changes over time are shown in Fig. 3 (performers) and Fig. 4 (sources of funds). There is an overall reduction in total spending in the late 1960s, followed by very rapid increases in real spending between 1975 and 1985; this increase decelerated in the late 1980s, and total real spending has fallen slightly since 1991. Fig. 3 shows that the 1975–1985 increases occurred mostly in industry; universities then enjoyed a significant increase in performance share that still continues, with real university-performed R&D continuing to increase as the total pie shrank in the early 1990s.

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.

Abbreviations: R&D, research and development; GDP, Gross Domestic Product; FFRDC, Federally Funded Research andDevelopment Center; NSF, National Science Foundation.

*e-mail: [email protected].

†The preliminary 1995 data that I was able to get classify industry-operated FFRDCs (such as the Oak Ridge Lab in Tennessee) with federally funded industry research. Based on a break-out for this category in the 1993 Science Indicators, such facilities account for about $2 billion. Thus, a more realistic accounting would put federal labs at about $24 billion and federally funded industry research at about $20 billion.


FIG. 1. United States R&D funding by performer and funding source; preliminary estimates for 1995 (in billions). “Federal Labs” includes intramural federal research and university-operated FFRDCs. Industry-operated FFRDCs are included under federal industry research. “Other” funding sources are state and local governments and institutions’ own funds. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

Fig. 4 shows that movements in the total over time have been driven by cycles in real federal funding combined with a rapid buildup in industry spending between 1975 and 1991. Real federal spending peaked at about $60 billion (in 1994 dollars) in 1967, fell to about $47 billion in 1975, rose to about $73 billion in 1987, and then fell back to about $61 billion in 1995. Hence, federal spending today is essentially the same as in 1967. (We will see below that the composition of this spending is different today than it was in 1967.) Industry funding increased steadily to about $36 billion in 1968, was essentially flat until 1975, and then increased dramatically, surpassing federal funding for the first time in 1981, increasing to about $80 billion in 1985–1986, and then increasing again to about $100 billion in 1991, where it has leveled off. One of the most interesting questions in the economics of R&D is exactly why industry went on an R&D spending “spree” (2) between 1975 and 1990, and whether or not the economy has yet or will ever enjoy the benefits thereof. [For an analysis of the effects of this large increase in spending on the private returns to R&D, see Hall (3).]

FIG. 2. United States R&D funding by performer and funding source for 1970 (in billions of 1994 dollars). Performers and funding sources are as in Fig. 1. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

FIG. 3. Total United States R&D by performer, 1953–1995 (in billions of 1994 dollars). The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

BASIC, ACADEMIC, AND FEDERAL LAB RESEARCH

With respect to economic growth, the most important effect of R&D is that it generates “spillovers,” i.e., economic benefits not captured by the party that funds or undertakes the research. Although there is relatively little concrete evidence regarding the relative potency of different forms of R&D in generating spillovers, theory suggests that the nature of the research and the research organization are likely to affect the extent of spillovers. Specifically, basic research, whose output is inherently intangible, unpredictable, and therefore difficult for the researcher to appropriate, and research performed at universities and federal labs, governed by social and cultural norms of wide dissemination of results, are likely to generate large spillovers. In my paper with Manuel Trajtenberg for this Colloquium (4), we provide evidence that universities and federal labs are, in fact, quite different on this score, with universities apparently creating more spillovers per unit of research output. In this section, I examine trends in basic research and in academic and federal lab research.

FIG. 4. United States R&D by source of funds, 1953–1995 (in billions of 1994 dollars). The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

Figs. 5 and 6 are analogous to Figs. 3 and 4, but they refer to that portion of total R&D considered basic by NSF. They show a very rapid buildup in basic research in the Sputnik era of 1958 to 1968, mostly funded by the federal government. Like total federal R&D spending, federal basic research funding peaked in 1968 and declined through the mid-1970s. It then


began a period of rapid increase, rising from about $8.5 billion in 1973 to $12.3 billion in 1985 and to about $17 billion today. Universities have been a prime beneficiary of the increase in federal basic research spending; basic research spending at universities increased about 50% in real terms between 1985 and 1995 (from about $9 billion to about $14 billion). Although industry does fund a small amount of basic research at universities and receives a small amount of federal funding for basic research, industry performance of basic research tracks industry spending on basic research very closely, increasing from just under $4 billion in 1985 to about $8 billion in 1993, and decreasing thereafter. Overall, basic research has fared relatively well in the 1990s, increasing its overall share of R&D spending (all sources, all performers) from 15% in 1990 to 17% in 1995.

FIG. 5. United States basic research by performer, 1953–1995 (in billions of 1994 dollars). The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

Fig. 7 examines the distribution of academic R&D (for all sources of funding, and including basic and applied research and development) by science and engineering field. There have not been dramatic shifts over this period in the overall field composition of academic research. Life sciences account for about 55% of the total, with medical research accounting for about half of life sciences. This apparently reflects a combination of the high cost of medical research and a general social consensus as to the social value of improvements in health. (We will see below, however, that the United States is unique in devoting this large a share of public support of academic research to life sciences.) All of these major categories saw significant real increases in the last 15 years, although at a finer level of detail there has been more variation.

FIG. 6. United States basic research by source of funds, 1953–1995 (in billions of 1994 dollars). The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

FIG. 7. Expenditures for academic R&D by discipline, 1981–1993 (in billions of 1994 dollars). Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

FIG. 8. Federal lab and federal university funding by funding agency. The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

Fig. 8 suggests that this relative constancy by discipline masks some underlying changes in the funding from the federal government. Fig. 8 Lower shows that while all agencies have increased their funding of academic research over this period, the fraction of federal support of academic research accounted for by the National Institutes of Health increased from 37% in 1971 (data not shown) to 47% in 1980 and 53% in 1995. In the last few years, increases in National Institutes of Health funding (and smaller increases in NSF funding) have allowed total federal funding of academic research to continue to rise (albeit slowly) despite declines in funding from the Departments of Defense and Energy. The relatively small share of these two agencies in academic


research funding explains why universities have fared relatively better than the federal labs in the last few years. Fig. 8 Upper shows that declines in funding from the Departments of Energy and Defense have led to reductions in the total level of real research spending at the federal labs since 1990. Note that the scales of the two graphs are quite different; the federal government still spends almost twice as much at the labs as it does at universities, and the Department of Defense is still the largest overall funder of research in the combined lab-university sector.

FIG. 9. International R&D expenditures as percentage of GDP, 1981–1995. Germany’s data for 1981–1990 are for West Germany. The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

INTERNATIONAL COMPARISONS

It is very difficult to know in any absolute sense whether society should be spending more or less than we do on R&D, in total or for any particular component. We generally believe that R&D is a good thing, but many other good things compete for society’s scarce resources, and a belief that the average product of these investments is high does not necessarily mean that the marginal product is high, in general or with respect to specific categories of investments. While other countries in the world are not necessarily any better than we are at making these choices, it is interesting to see how we compare, and to note in particular ways in which our activities in these areas differ from those of other countries.

FIG. 10. International nondefense R&D expenditures as percentage of GDP, 1981–1995. Germany’s data for 1981–1990 are for West Germany. The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

Fig. 9 shows overall R&D expenditures, as a percent of GDP, for the G-5 countries (United States, Japan, Germany, France, and the United Kingdom). In general, R&D as a percent of GDP rose in the G-5 over the 1980s and has declined somewhat since. The United States is near the top of the group,


exceeded only by Japan (since 1989) and by Germany (between 1987 and 1990). While we do not have estimates for the other countries in the last 2 years, the trend would indicate that our recent and apparently continuing reductions in the R&D/GDP ratio may be moving us to the “middle of the pack” from our historic position near the top. A different view of these comparisons is provided by Fig. 10, which excludes defense-related R&D from the R&D/GDP ratio. The argument in support of this alternative formulation is that defense R&D is likely to have fewer economic benefits, direct and indirect, than nondefense research, so our relatively high position in Fig. 9 could be misleading. Excluding defense R&D, our R&D/GDP ratio is very similar to that of France and the United Kingdom, but is consistently exceeded by Japan and Germany. On the other hand, since much of the recent decrease has been in the defense area, the downward trend is less pronounced when defense is excluded.

FIG. 11. Government-funded academic research as a fraction of GDP for G-5 nations. Source: Ref. 7.

Of course, even if we accept that defense R&D has less economic benefit, Fig. 10 is not the right picture either, unless defense R&D is economically useless. The right picture is presumably somewhere between Figs. 9 and 10, suggesting that historically our investment in economically relevant R&D has been comparable to other countries as a fraction of GDP, but that we appear to be on a downward trend, while other nations have not, as yet at least, evidenced such a trend.

One could argue that the absolute level of R&D, rather than the R&D/GDP ratio, is the right measure of the scale of our investment; from this perspective, the United States would have far and away the strongest research position. This would be right if R&D were a pure public good, whose benefits or impact were freely reproducible and hence applicable to any amount of economic activity. [See Griliches (5). For evidence that the ratio of R&D to economic activity is a better indicator of the significance of spillovers, see Adams and Jaffe (6).]

The defense/nondefense split is an extremely coarse way of distinguishing forms of R&D that might have the most important spillover effects. An alternative approach is to look at academic research. This is much harder to do, because the nature of academic-like institutions varies greatly across countries. Irvine et al. (7) attempted to make overall comparisons of government support for academic research in a number of countries. Fig. 11 shows their numbers for 1975–1987. Here the United States is again near the bottom of the pack, exceeding only Japan in its support for academic research as a fraction of GDP. To the extent that academic R&D comes closer to being a “pure” public good than private research, however, the view that it is the total and not the ratio that counts may apply. If so, then Fig. 11 is irrelevant, and what matters is that we spend far more on academic research than any other country. [Of course, if academic research is a pure public good, then it is not clear why it matters which country does it; we can all benefit. Hence the relevant questions are how far, in geographic, technological, and institutional space, R&D can be spread. See Adams and Jaffe (6) and Jaffe and Trajtenberg (4).]

FIG. 12. Distribution of government-funded academic research by field in 1987 for the G-5 nations. Source: Ref. 7.

Finally, Irvine and his colleagues (7) tabulated government support for academic research by academic field. The


proportions are shown in Fig. 12. What stands out is that while the United States spends about half of its government support of academic research on life sciences, the other countries all spend more like one-third. Interestingly, the other countries differ in where else the money is spent. Relative to the United States, Japan spends more in engineering, and professional and vocational fields; Germany and France spend more on physical sciences, and the United Kingdom spends more on everything but life sciences (all as shares of the country totals).

I gratefully acknowledge research support from the National Science Foundation Grants SBR–9320973 and SBR–9413099.

1. National Science Foundation (1993) Science and Engineering Indicators (Natl. Sci. Found., Arlington, VA).
2. Jensen, M. (1991) J. Finance 48, 831–880.
3. Hall, B.H. (1993) Industrial Research During the 1980s: Did the Rate of Return Fall?, Brookings Papers on Economic Activity (Microeconomics) (Brookings Inst., Washington, DC), Vol. 2, pp. 289–330.
4. Jaffe, A.B. & Trajtenberg, M. (1996) Proc. Natl. Acad. Sci. USA 93, 12671–12677.
5. Griliches, Z. (1979) Bell J. Econ. 10, 92–116.
6. Adams, J.D. & Jaffe, A.B. (1996) Rand J. Econ., in press.
7. Irvine, J., Martin, B.R. & Isard, P.A. (1990) Investing in the Future (Edward Elgar, Brookfield, VT).


This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

Measuring science: An exploration

JAMES ADAMS* AND ZVI GRILICHES†

*Department of Economics, University of Florida, Gainesville, FL 32611–7140; and †Department of Economics, Harvard University, National Bureau of Economic Research, Cambridge, MA 02138

ABSTRACT This paper examines the available United States data on academic research and development (R&D) expenditures and the number of papers published and the number of citations to these papers as possible measures of “output” of this enterprise. We look at these numbers for science and engineering as a whole, for five selected major fields, and at the individual university field level. The published data in Science and Engineering Indicators imply sharply diminishing returns to academic R&D using published papers as an “output” measure. These data are quite problematic. Using a newer set of data on papers and citations, based on an “expanding” set of journals and the newly released Bureau of Economic Analysis R&D deflators, changes the picture drastically, eliminating the appearance of diminishing returns but raising the question of why the input prices of academic R&D are rising so much faster than either the gross domestic product deflator or the implicit R&D deflator in industry. A production function analysis of such data at the individual field level follows. It indicates significant diminishing returns to “own” R&D, with the R&D coefficients hovering around 0.5 for estimates with paper numbers as the dependent variable and around 0.6 if total citations are used as the dependent variable. When we substitute scientists and engineers in place of R&D as the right-hand side variables, the coefficient on papers rises from 0.5 to 0.8, and the coefficient on citations rises from 0.6 to 0.9, indicating systematic measurement problems with R&D as the sole input into the production of scientific output. But allowing for individual university field effects drives these numbers down significantly below unity. Because in the aggregate both paper numbers and citations are growing as fast or faster than R&D, this finding can be interpreted as leaving a major, yet unmeasured, role for the contribution of spillovers from other fields, other universities, and other countries.

While the definition of science and of its borders is ambiguous, it is clearly a major sector of our economy and the source of much past and future economic growth. In this paper we look primarily at “academic” research [as defined by the National Science Foundation (NSF)] and its locus, the research universities. It is a major sector of the total United States “research” enterprise, accounting (in terms of performance) for only 13% of the total research and development (R&D) dollars spent in the United States in 1993 but 51% of all basic research expenditures and 36% of all doctoral scientists and engineers (S&Es) primarily employed in R&D (1). Other major R&D performing sectors, such as industry, have been studied rather extensively in recent years, but quantitative studies of science by economists are relatively few and far between. [See J. Adams for an earlier attempt (2) and P. E. Stephan for a recent survey and additional citations (3)].

The limited question, which we would like to address in this exploratory paper, is posed by the numbers that appear in the latest issue of Science and Engineering Indicators (S&EI) (1993; ref. 1): during 1981–1991 total R&D performed in the United States academic sector grew at 5.5% per year in “real” terms, whereas the total number of scientific articles attributable to this sector grew by only 1.0% per year (1). Is this discrepancy in growth rates an indication of sharply diminishing returns to investments in science? Or is there something wrong with the basic data or with our interpretation of them? [For a discussion of similar issues in the analysis of industrial R&D data, see Griliches (4).] These official measures of “activity” in United States science are plotted in Fig. 1 on a logarithmic scale. We shall try to examine this puzzle by using detailed recent (1981–1993) data on R&D expenditures, papers published, and citations to these papers, by major fields of science, for more than 50 of the major research universities. But before we turn to these calculations, a more general discussion of the measurement issues involved may be in order.
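As a back-of-the-envelope reading of those two growth rates (our own illustration, not a calculation taken from the paper), the elasticity of papers with respect to academic R&D implied by the aggregate path is roughly the ratio of the growth rates,

\frac{d \ln P / dt}{d \ln R / dt} \approx \frac{0.010}{0.055} \approx 0.18,

so that over 1981–1991 real R&D cumulates to about (1.055)^{10} \approx 1.7 times its starting level while papers rise only to about (1.010)^{10} \approx 1.1 times theirs. An implied elasticity that far below one is what “sharply diminishing returns” means here, and it is why the rest of the paper asks whether the fault lies with the output measure (the constant journal set), the deflator, or the underlying data rather than with science itself.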

The two major outputs of academic science are new ideas and new scientists. The latter is relatively easy to count, and its private value can be computed by capitalizing the lifetime income differentials that result from such training (5). Ideas are much more elusive (6). As far as direct (internal) measures of scientific output are concerned, the best that can be done at the moment is to count papers and patents and adjust them for the wide dispersion in their quality by using measures of citation frequency. That is what we will be doing below. [For an analysis of university patenting see Henderson et al. (7). For an analysis of citations in industrial patents to the scientific literature see F. Narin (unpublished work)‡ and Katz et al. (8).]

Indirect measures of the impact of science on industrial invention and productivity are based either on survey data (9–11) asking firms about the importance of academic science to their success, case studies of individual inventions (12–15), or various regression analyses where a measure of field or regional productivity (primarily in agriculture) is taken to be a function of past public R&D expenditures or the number of relevant scientific papers (2, 16–18). All of these studies are subject to a variety of methodological-econometric problems, some of which are discussed by Griliches (4, 19). Moreover, none of them can capture the full externalities of science and thus provide only lower-bound estimates for its contributions.

Direct measures of scientific output such as papers and the associated citation measures have generated a whole research field of bibliometrics in which economists have been only minor participants. [See Van Raan (20) and Elkana et al. (21) for surveys and additional references.]

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked“advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.

Abbreviations: R&D, research and development; BEA, Bureau of Economic Analysis; ISI, Institute for Scientific Information; S&EI, Science and Engineering Indicators; CHI, Computer Horizons, Inc.; S&Es, scientists and engineers; NSF, National Science Foundation.

‡Narin, F., National Institutes of Health Economics Roundtable on Biomedical Research, Oct. 18–19, 1995, Bethesda, MD.

Most of this work has focused on the measurement of the contribution of individual scientists or departments within specific fields [see Stigler (22) and Stephan (3) in economics and Cole and Cole (23) in science more generally]. Very few have ventured to use bibliometrics as a measure of output for a field as a whole. [Price (24) and Adams (25) at the world level and Pardey (26) for agricultural research are some of the exceptions.] The latter is bedeviled by changing patterns of scientific production and field boundaries and the substantive problems of interpretation implied by the growing size of scientific literature, some of which we will discuss below.

FIG. 1. Research input and output indicators I. All United States academic institutions (1980–93, log scale) (1). R&D is given in 1987 dollars. Paper numbers are based on more than 3500 journals, interpolated for even years.

THE AGGREGATE STORY

Returning to the aggregate story depicted in Fig. 1, we note that the number of scientific papers originating in United States universities given in (S&EI) grew significantly more slowly during 1981–1991 than the associated R&D numbers. But reading the footnote in (S&EI) raises a clear warning signal. The paper numbers given in this source are for a constant set of journals! If science expands but the number of journals is kept constant, the total number of papers cannot really change much (unless they get shorter). United States academic papers could also expand in numbers if they “crowded out” other paper sources, such as industry and foreign research establishments. But in fact the quality and quantity of foreign science was rising over time, leading to another source of downward pressure on the visible tip of the science output iceberg, the number of published papers. If this is true then the average published paper has gotten better, or at least more expensive, in the sense that the resources required to achieve a certain threshold of results must have been rising in the face of the increase in competition for scarce journal space. Another response has been to expand the set of relevant journals, a process that has been happening in most fields of science but is not directly reflected in the published numbers. (The published numbers do have the virtue of keeping a dimension of the average paper quality constant, by holding constant the base period set of journals. This issue of the unknown and changing quality of papers will continue to haunt us throughout this exercise.)

We have been fortunate in being able to acquire a new set of data (INST100) assembled by ISI (Institute for Scientific Information), the producers of the Science Citations Index, based on a more or less “complete” and growing number of journals, though the number of indexed journals did not grow as fast as one might think (Fig. 2). The INST100 data set gives the number of papers published by researchers from 110 major United States research universities, by major field of science and by university, for the years 1981–1993. (See Appendix A for a somewhat more detailed description of these and related data.) It also gives total citation numbers to these papers for the period as a whole and for a moving 5-year window (i.e., total citations during 1981–1985 to all papers published during this same period). This is not exactly the measure we would want, especially since there may have been inflation in their numbers over time due to improvements in the technology of citing and expansion in the numbers of those doing the citing; but it is the best we have.

FIG. 2. Publications and Citations, Growth of Components, 1980–1994, all “science” fields; 1980=1.0 (30).

There are also a number of other problems with these data. In particular, papers are double counted if authors are in different universities, and the number of journals is not kept constant, raising questions about the changing quality of citations as measures of paper quality. The first problem we can adjust for at the aggregate and field level (but not university); the second will be discussed further below. Table 2 shows that when we use the new, “expanding journals set” numbers, they grow at about 2.2% per year faster, in the aggregate. Hence, if one accepts these numbers as relevant, they dispose of about one-half of the puzzle.

Another major unknown is the price index that should be used in deflating academic R&D expenditures. NSF has used the gross domestic product implicit deflator in the Science and Engineering Indicators and its other publications. Recently, the Bureau of Economic Analysis (BEA) produced a new set of “satellite accounts” for R&D (27), and a new implicit deflator (actually deflators) for academic R&D (separately for private and state and local universities).§ This deflator grew significantly faster than the implicit gross domestic product deflator during 1981–1991, 6.6% per year versus 4.1%. It grew even faster relative to the BEA implicit deflator for R&D performed in industry, which grew at only 3.6% per year during this period. It implies that doing R&D in universities rather than in industry became more expensive at the rate of 3% per year! This is a very large discrepancy, presumably produced by rising fringe benefits and overhead rates, but it is not fully believable, especially since one’s impression is that there has been only modest growth in real compensation per researcher in the academy during the last 2 decades. But that is what the published numbers say! They imply that if we switch to counting papers in the “expanding set” of journals and allow for the rising relative cost of doing R&D in universities, there is no puzzle left. The two series grow roughly in parallel.

§See also National Institutes of Health Biomedical Research and Development Price Index (1993) (unpublished report) and Jankowski (28).

But it does leave the question, to be pursued further on another occasion: why are the costs of doing academic R&D rising this fast? Is that a different manifestation of diminishing returns (rising costs) to it?

Table 1. United States academic science by major field in 1989

Field                          Total R&D,        No. of papers,   No. of papers (UD),   Citations, 5 years   Citations per paper,
                               millions of dollars   S&EI         INST100               (DU)                 5 years*
Biology                        2,638             29,862           28,271                536,453              16.4
Chemistry                      608               9,025            10,276                117,403              10.6
Mathematics                    214               3,367            3,013                 11,231               3.1
Medicine                       3,828             34,938           25,885                399,983              13.4
Physics                        775               11,392           12,447                150,927              9.8
Subtotal                       8,063             88,584           79,892                1,215,997
All sciences and engineering   15,016            99,215           107,674               1,404,993

UD, unduplicated; DU, duplicated paper and citation counts.
*Duplicated citations per duplicated paper.

Fig. 3 adds these new measures to Fig. 1 and shows that the concern about diminishing returns at the aggregate level was an artifact of the “fixed journals” aspect of the official data and the use of the implicit gross domestic product deflator to deflate academic R&D. In the aggregate, the new measure of the total number of papers still grows more slowly than the NSF-deflator-based “real” R&D expenditures but is now close to the growth rate of the BEA-deflator-based R&D numbers. On the other hand, total citations, which one is tempted to interpret as “quality” weighted paper numbers, grow at about the same rate as appropriately lagged and weighted NSF-based R&D numbers and significantly faster than the similar BEA-based numbers. (The citation numbers were adjusted for the growing double-counting of multi-authored papers across universities.)

Of course, these new numbers must also be interpreted with care. There are both factual and conceptual questions that need further investigation. To what extent does the time profile in the growth of papers and citations in the INST100 data set represent actual growth in the size of the relevant scientific literatures, or does it just reflect the “coverage” expansion by ISI of an already existing body of literature? A more difficult question, given the public-good nature of scientific papers, is raised by the growing number of citations that come from an expansion in the size of the interconnecting literatures and also from changes in citation practices. If Russians are suddenly allowed to read Western science and publish in Western journals, and if their journals are now indexed by ISI, should that be counted as an increase in the output of United States science? Is science in 1990 better than in 1980 just because it reaches more scientists today? Yes, in its public-good effect. Not necessarily so, if we want a pure production concept. But before we continue this discussion we shall first turn to consider some of these issues at the more “micro” field-by-university level.

FIG. 3. United States Academic Science: Alternative Views, 1981–1993, log scale. Citations: 5-year moving sum to papers in t to t–4, adjusted for duplication in interuniversity paper counts. See text for more detail. Authors’ calculations from databases and sources described in Appendix.

FIELDS

Table 1 shows the levels of our major variables in 1989, for five different fields of science: biology, chemistry, mathematics, medicine, and physics (we have excluded the more amorphous field of engineering and technology, the social sciences, and several other smaller fields, such as astronomy). The first two columns are based on data from (S&EI) for all United States academic institutions (1). The second half of this table is based on a new unpublished data set from ISI and refers to the top 110 research universities. (See the Data Appendix for more detail.) The five fields that we shall examine accounted for about 54% of total academic R&D in 1989 and 74% of all scientific papers (in the INST100 data set). Within these fields biology and medicine clearly dominate, accounting for 80% of total R&D in this subset of fields and 50% of all papers.

Table 2 gives similar detail by major field of science. If one uses the NSF-(S&EI)-based R&D and paper numbers, all of the examined fields have done badly. Switching to the INST100 population, to R&D deflated by the BEA implicit indexes, and to the unduplicated number of papers in the ISI “expanding journals” set, biology, chemistry, and physics are now doing fine, but medicine and especially mathematics still seem to be subject to diminishing returns. The numbers look better if one uses total citations as one’s “output” measure, but after adjusting them for growing duplication (we can make this adjustment only at the total field level) the story of mathematics is still a puzzle, and adding computer sciences does not solve it.¶

FIELDS BY UNIVERSITIES

To try to get a better understanding of what is happening to research productivity we turn to the less aggregated and more relevant level of fields in individual universities. We say more relevant because with more disaggregated data we are likely to match research outputs with research inputs better. In principle, data on the individual research project would improve the match, but these data are not available.

We have reasonable data on approximately 50 universities, 5 science fields, and 21 years (see Appendix).

¶The parallel numbers (in Table 2) for mathematics and computer sciences combined are: 7.8, 5.6, NA (not available), 1.7, 1.4, 1.2, (0.9).

In reality we have two distinct time series on numbers of scientific papers attributed to a particular university, one from Computer Horizons (CHI) covering 1973–1984, the other from ISI covering 1981–1993. In addition we have citation data from ISI for the second period only, which appear in the form of 5-year moving sums or “windows,” and are thus overlapping from year to year. Therefore, for the analysis of citations we use just three effectively non-overlapping windows ending in 1985, 1989, and 1993, but appropriately recentered on 1982, 1986, and 1990 because of the timing of citations, which are concentrated on the earlier years of any window.

Table 2. United States academic science annual growth rates by selected field and total (all fields)

                 Total R&D 1979–91        Papers, S&EI     Papers, INST100 1981–91    Citations, 1981–85 to 1989–93
Field            S&EI,* %     BEA,† %     1981–91, %       DU, %        UD, %         DU, %        UDA, %
Biology          5.3          3.1         –1.0             3.7          3.2           7.2          (6.7)
Chemistry        5.0          2.8         2.1              3.6          3.5           4.4          (4.3)
Mathematics      4.2          2.0         –2.3             0.6          0.2           0.5          (0.1)
Medicine         6.1          3.9         1.0              3.2          2.4           5.3          (4.7)
Physics          4.3          2.2         3.9              6.4          5.6           5.9          (5.1)
Total            5.1          2.9         1.0              3.6          2.8           5.7          (4.9)

UD, unduplicated; DU, duplicated counts; UDA, duplicate counts adjusted by the estimated rate of duplication in paper counts.
*From S&EI, deflated by the gross domestic product deflator.
†Deflated by the BEA R&D deflator.

Table 3 shows that the universities in our sample accounted for about two-thirds of the R&D, papers, and citations in the full INST100 data from ISI covering the top 110 research universities in the year 1989.

We estimate several versions of a “production function” of the form

y = α + βW(r) + γX + λt + u,

where y is the logarithm of one of our measures of output (papers or citations), W(r) is the logarithm of a distributed lag function of past R&D expenditures, or the number of S&Es, or both, X is a set of other “control” variables such as type of school, and t is a time trend or a set of year or period dummy variables, whereas u represents all other unaccounted forces determining this particular measure of output. Our primary interest centers on the parameters β and λ. The first would measure the returns to the scale of the individual (or rather, university) research effort level, if everything else were correctly specified in this equation, while the second will indicate the changing general level of “technology” used to convert research dollars into papers or citations.

Table 4 summarizes our estimates of this relationship. The first two columns report the estimated coefficients of the logarithm of lagged R&D with weights 0.25, 0.5, and 0.25, respectively, for R&D lagged one, two, and three years, and the coefficients of a linear time trend, based on two different paper series and different time periods. The estimated R&D coefficients hover around 0.5, indicating rather sharply diminishing returns to the individual university effort, with medicine having a somewhat higher coefficient and mathematics an even lower one.|| Again, except for mathematics, the trend coefficients are positive and significant, indicating that this tendency to diminishing returns at the individual university level is counteracted to a significant extent by the external contribution of the advances in knowledge in the field (and in science) as a whole, arising from the R&D efforts in other universities, other institutions (such as the National Institutes of Health), and other countries. Other variables included in the list of Xs, such as indicators of whether a university was listed among the top 10 research universities, whether it was private, and the size of its doctoral program, were significant and contributed positively to research “productivity” but did not change the estimated β and λ coefficients significantly.
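To make this specification concrete, a minimal sketch of such a regression in Python follows. The code is ours, not the authors’; the data frame layout and the column and control names (university, field, year, papers, rd, top10, private) are hypothetical, and W(r) is read here as the log of the 0.25/0.5/0.25 weighted average of R&D lagged one to three years.

import numpy as np
import pandas as pd

def field_regression(df, field):
    # df: one row per university-field-year with hypothetical columns
    # 'university', 'field', 'year', 'papers', 'rd', 'top10', 'private'
    d = df[df['field'] == field].sort_values(['university', 'year']).copy()
    rd = d.groupby('university')['rd']
    # W(r): log of the weighted 3-year lag of deflated R&D (weights 0.25, 0.5, 0.25)
    d['w_rd'] = np.log(0.25 * rd.shift(1) + 0.50 * rd.shift(2) + 0.25 * rd.shift(3))
    d['lpapers'] = np.log(d['papers'])
    d = d.dropna(subset=['w_rd', 'lpapers'])
    X = np.column_stack([np.ones(len(d)),               # constant
                         d['w_rd'],                     # beta: returns to "own" R&D
                         d['year'] - d['year'].min(),   # lambda: linear time trend
                         d['top10'], d['private']])     # "type of school" controls
    coef, *_ = np.linalg.lstsq(X, d['lpapers'].to_numpy(), rcond=None)
    return dict(zip(['const', 'beta_rd', 'trend', 'top10', 'private'], coef))

Running such a regression once per field, and again with paper counts replaced by citation totals, mirrors the structure (though of course not the exact numbers) of the estimates reported in Table 4.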

Columns 3 and 4 of Table 4 use 5-year sums of papers and citations centered on 1982, 1986, and 1990 as their dependent variables. They pool the three non-overlapping cross-sections, allowing for different year constants and including the above-mentioned “type of school” control variables. For the citation regressions we redefine our R&D variable to reflect the fact that the dependent variable includes citations to 5 years’ worth of papers, but in different proportions. We assume, and it is consistent with the available evidence, that each of the 5-year windows of citations refers only to 4 years of lagged papers in 1, 2, 3, and 4 proportions.** Combined with our assumed 3-year lag of papers behind R&D, this gives a relatively long distributed lag of R&D relevant for the 5-year citations (CWRD: 0.025, 0.100, 0.200, 0.300, 0.275, 0.100).
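These composite CWRD weights follow mechanically from convolving the two assumed lag distributions; a quick check (our sketch, in Python) reproduces the vector quoted in the text:

import numpy as np

# Citations in a 5-year window are attributed to papers lagged 1-4 years, in
# proportions 1:2:3:4; papers in turn lag R&D by 1-3 years with weights 0.25/0.5/0.25.
cite_to_paper = np.array([1, 2, 3, 4]) / 10.0      # 0.1, 0.2, 0.3, 0.4
paper_to_rd = np.array([0.25, 0.50, 0.25])
print(np.convolve(cite_to_paper, paper_to_rd))     # [0.025 0.1 0.2 0.3 0.275 0.1]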

The results of using 5-year sums of papers (column 3) are essentially the same as those using annual numbers (columns 1 and 2). The estimated R&D coefficients in the citations regressions (column 4) are significantly higher, however, in all fields, by about 0.1+.

Table 3. Regression sample as a fraction of total INST100 population in 1989 and growth rates of major variables

                                      Fraction of INST100 total              Growth rates, %
Field          No. of universities    Total R&D    Papers    Citations       Total R&D,* 1979–91    Papers, 1981–91, DU    5-year citations, (81–85)–(89–93), DU
Biology        54                     0.78         0.69      0.58            2.5                    3.7                    7.0
Chemistry      55                     0.83         0.67      0.68            2.0                    3.7                    4.7
Mathematics    53                     0.66         0.69      0.73            2.3                    0.6                    0.5
Medicine       47                     0.69         0.58      0.58            2.4                    3.2                    5.1
Physics        52                     0.69         0.63      0.63            0.9                    5.5                    5.8

DU, duplicated counts.
*BEA deflator.

||This is still true if computer sciences are included in the definition of “mathematics.”
**This is about right. The number of current (i.e., lag zero) citations is only 1 to 1.5% of the total.

Since these are basically cross-sectional results, they are not an artifact of the expanding journal universe and indicate that additional R&D investments produce not only more papers but also higher quality papers (at least as measured by the average number of citations that they receive). When we run regressions of citation numbers on both paper numbers and R&D, both variables are “significant,” with the papers coefficient at about 1.1, indicating some increasing returns in terms of citations to the size of the research output unit (perhaps a larger opportunity for self-citation?), and a consistently significant R&D coefficient of about 0.1+. Still, the “total” R&D coefficients in column 4 are far below unity.

Table 4. “Output” regressions: Coefficient of lagged 3-year average R&D and trend

R&D coefficients
                Papers (annual)                          Pooled cross-sections centered on 1982, 1986, 1990    Eight-year difference 1982–1990
Field           CHI 1976–1984     INST100 1981–1993      Papers (5 year)     Citations (5 year)                Papers      Citations
Biology         0.625             0.517                  0.553               0.682                             0.063       0.170
Chemistry       0.434             0.510                  0.475               0.687                             0.187       0.318
Mathematics     0.365             0.419                  0.408               0.543                             0.171       0.179
Medicine        0.717             0.582                  0.625               0.711                             0.015*      –0.058*
Physics         0.478             0.511                  0.511               0.643                             0.173       0.263

Trend and year constants
                Papers (annual), trend                   Papers (5 year)     Citations (5 year)                Eight-year difference, 1985–1993
Field           CHI               INST100                1989      1993      1989      1993                    Papers      Citations
Biology         0.024             0.025                  0.09*     0.21      0.17      0.41                    0.04        0.06
Chemistry       0.015             0.022                  0.07*     0.17      0.06*     0.20                    0.03        0.04
Mathematics     –0.023            –0.002*                –0.00     –0.01*    –0.03*    –0.11*                  0.01*       0.00*
Medicine        0.032             0.024                  0.15      0.23      0.19      0.42                    0.03        0.06
Physics         0.025             0.050*                 0.23      0.41      0.21      0.39                    0.07        0.06

*Not “significantly” different from zero at conventional statistical test levels.

Formal R&D expenditures may not measure correctly the total research resource input, especially in smaller institutions. The only other resource measure available to us is total S&Es (within the field and university). Table 5 looks at the effect of varying the measure of science input on the estimated elasticities of science output. We compare the distributed lag function of real R&D of Table 4 with a similar distributed lag function of S&Es, and for good measure we report a third specification that includes both S&Es and real R&D per S&E. The output measures are 5-year windows of papers and citations. However, data on scientists and engineers by field and university were not collected after 1985, so that we can use only two cross sections of papers and citations, centered on 1982 and 1986, not the three reported in Table 4. All variables are in logarithms.

The results of this switch are interesting. The elasticities reported in Table 5 are all highly significant by conventional standards, but the elasticities calculated using S&Es are on average 0.26 higher: the paper elasticity clusters around 0.8 rather than 0.5, whereas the citation elasticity is 0.9 on average rather than 0.6. When we add R&D per S&E as a separate variable, the main effect of S&Es is about the same, but there is an additional effect, generally somewhat smaller yet still significant, of per capita R&D. These findings suggest that not all research is financed by grants, but that departments with more generous support per researcher are more productive. More of the research in the smaller programs is being supported by teaching funds, because the S&E input measure is larger in these programs relative to real R&D. This interpretation is borne out by the comparison of biology, medicine, and chemistry, where a larger fraction of researchers earn grants, with mathematics and physics, where grants are less common. The jump in the elasticity when S&Es are substituted for R&D is only 0.1 for chemistry, biology, and medicine, but it is 0.5 for mathematics and physics. Of course, in all of the fields we are counting total S&Es, not research S&Es. Fewer of these are researchers in the smaller programs, so that to some extent the human resources used in research are being overstated, more so in the smaller programs than in the larger ones.

The last column of Table 4 reports parallel results using an 8-year difference in these moving average variables, allowing thereby for the possible influence of unmeasured individual university effects on research productivity. (The same is also true for the 4-year difference-based results, not shown, using the S&E variables reported in Table 5.) The estimated R&D coefficients are now much smaller, though still “significant,” except for medicine, where they effectively vanish, indicating that there is a large university effect that dominates this result and that there is little information in the changes in R&D or S&E numbers during this period.
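The logic of this long difference can be spelled out explicitly (the notation here is ours, not the authors’; j indexes a university-field unit). Adding an unmeasured fixed effect a(j) to the production function gives y(j, t) = a(j) + βW(r)(j, t) + λ(t) + u(j, t), so the 8-year difference y(j, 1990) – y(j, 1982) = β[W(r)(j, 1990) – W(r)(j, 1982)] + [λ(1990) – λ(1982)] + [u(j, 1990) – u(j, 1982)] no longer contains a(j). The coefficient β is then identified only from within-unit changes in research inputs, which is why the estimates fall sharply when a large university effect dominates the cross-sectional variation.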

There may also be problems arising from the differential truncation caused by the 5-year window in the ISI data. If larger R&D programs are directed to more basic questions, they could be producing a smaller number of more and longer cited papers.

Table 5. “Output” regressions: Coefficient of lagged 3-year average R&D or S&Es*

                                  Papers† (5 year)                                Citations† (5 year)
Field                             R&D‡     S&Es‡     S&Es, R&D per S&E‡           R&D‡     S&Es‡     S&Es, R&D per S&E‡
Biology                           0.64     0.85      0.91, 0.27                   0.80     0.96      1.04, 0.45
Chemistry                         0.48     0.67      0.68, 0.38                   0.71     0.85      0.88, 0.63
Mathematics                       0.41     0.88      0.75, 0.26                   0.57     1.07      0.86, 0.44
Medicine                          0.68     0.67      0.73, 0.55                   0.82     0.78      0.86, 0.67
Physics                           0.51     0.93      0.85, 0.22                   0.65     1.11      0.96, 0.38
Biology and medicine combined     0.75     0.68      0.81, 0.59                   0.99     0.82      1.03, 0.89

All reported coefficients are statistically “significantly” different from zero at conventional significance levels.
*Two-year pooled cross-sections, 1982 and 1986.
†Output variables.
‡Input variables.

Thus, cutting off the citation count at 5 years would differentially underestimate their contribution. That something like this may be happening can be seen in Table 6, where we report parallel results for a cross-section based on 13 years of citations (to papers published in 1981) and compare it to another single-year cross-section of papers (in 1989) with only 5 years’ worth of citations. The longer window does yield higher coefficients, but only in the life sciences (biology and medicine) is the difference substantively significant. Moreover, there is no indication that, if the window were lengthened even further, the estimated coefficients would approach unity. In parallel regressions with intermediate window lengths (not shown here), the estimated coefficients peak around the 9–11-year window and do not rise significantly thereafter.

Looking at the estimated year constants in the lower half of Table 4, we see that they are all substantial in size and statistically “significant” by 1993, except for mathematics. Allowing for the growth in multi-university papers (based on the numbers in Table 2) would reduce these numbers somewhat, but not substantially (to 0.37 for biology, 0.19 chemistry, 0.36 medicine, and 0.33 physics in the 1993 citations column). Dividing these numbers by 8 (the difference in years between 1993 and 1985) would give another estimate of the contribution of “external” science, per year, to research productivity.

An alternative interpretation for an estimated β < 1 would focus on the possibility of errors in allocating both R&D expenditures and papers within universities to particular fields. If papers in universities are created both by research expenditures in the designated fields and by research expenditures in other relevant or misclassified fields within the university, then an aggregate regression, aggregating over all fields of interest, may yield higher R&D coefficients. That is what may be implied by the last row in Table 6, where the coefficients based on aggregated data are significantly higher and now approach unity. Note that this measures either errors or within-university, across-field spillovers. It does not reflect possible contributions of new knowledge emanating from other universities and countries. (This finding requires additional analysis to check its robustness against other unmeasured university effects and different field aggregations.)

It is especially difficult to separate biology from medicine. In the final line of Table 5 we collapse biology and medicine into a biomedical composite. The results suggest that there is indeed some difficulty in distinguishing R&D in biology from R&D in medicine, since the composite R&D elasticity is higher than the R&D elasticities computed separately for biology and medicine. The results of the aggregation for the S&E measure are more mixed and appear to be an average of the separate estimates.

At this time there are many loose ends in the analysis. As indicated above, we have only started exploring the data available to us and the range of possible topics it could throw some light on. In the intermediate run we could do more with the panel structure of the data and with other indications of university quality. We could also explore directly within-university spillovers from neighboring fields of science and the role of Ph.D. training in the research productivity nexus. In the longer run and with more resources, better data could be assembled, allowing us to analyze citations to single-year papers and to more finely defined fields. All of this, however, will still leave us looking “within” science, at its internal output, without being able to say much about its overall, external societal impact.

Table 6. Impact of window length: Citations as a function of lagged R&D

                         Coefficients of lagged R&D
Field                    13-year total, 1981 papers    5-year total, 1989 papers    Difference
Biology                  0.841                         0.546                        0.295
Chemistry                0.727                         0.687                        0.040
Mathematics              0.620                         0.562                        0.058
Medicine                 0.881                         0.574                        0.307
Physics                  0.658                         0.661                        –0.003
Five fields combined     0.970                         0.889                        0.081

AN INCONCLUSIVE CONCLUSION

From the numbers we have one could conclude that United States academic science has been facing diminishing returns in terms of papers produced per R&D dollar, both because of the rising cost of achieving new results within specific scientific fields and because of rising competition due to the expanding overall size of the scientific enterprise, both within the United States and worldwide, impinging on a relatively slowly growing universe of publication outlets. In terms of total citations achieved per R&D dollar, the picture is somewhat brighter, indicating a rising quality of United States science in the face of such difficulties, though this interpretation is clouded by the question of whether the actual science is better or whether it is just being evaluated on a larger and changing stage (the growing number of journals and papers in the world as a whole and changing citation practices).

Even though the within-science costs of new knowledge may be rising, its social value may also be rising as our economy grows and also as it continues to contribute to a growing worldwide economy. But to measure this will require different data and different modes of analysis. Just trying to connect it to the growth of the gross national product will not do, since most of the output of science goes into sectors where its contribution is currently not counted.†† Measuring the true societal gains from medical advances or from the vast improvements in information technology is in principle possible but far from implementable with the current state of economic and other data. That is a most important task that we should all turn to. But right now, unfortunately (or is it fortunately?), we have to leave it for another day.

DATA APPENDIX

The data at the field and university level used in this paper derive from NSF Surveys and two bibliometric sources. We took total R&D expenditures from the CASPAR data base of universities created for the NSF (Quantum Research Corporation, 1994; ref. 31). The underlying source for university R&D is the NSF’s annual Survey of Scientific Expenditures at Universities and Colleges, which collects R&D by science and engineering discipline, source of funds, and functional category of expenditures. R&D is available for the entire period 1973–1992 with the exception of a few disciplines. Our R&D deflators are the BEA’s newly available sectoral R&D price indexes, which convert current dollar R&D expenditures into constant 1987 dollars separately for private and public universities.

The data on papers and citations come from two distinct sources. The earlier data were produced for NSF by CHI and cover the period 1973–1984, based on the original ISI tapes. These earlier data report numbers of papers by university and field published in an expanding set of the most influential journals in science, rising in number from about 2100 journals in 1973 to about 3300 in 1984. The later data were constructed by the ISI itself for the 1981–1993 period. Thus, we have an overlapping period in the two data sets for comparative purposes. The journal selection criteria are slightly different in the ISI data than for CHI, and the number of journals is somewhat larger.

††See Griliches (4, 29) for additional discussion of these issues.

At this time the ISI journal set includes roughly 4000 journals in the sciences and 1500 journals in the social sciences. A second difference from the CHI data is that ISI counts multiple-authored papers in different universities as whole papers up to 15 times, whereas CHI assigns equal shares of the papers to different universities based on the number of authors in different universities.

A final difference is that the CHI data follow the CASPAR fields to the letter, whereas the ISI data on papers and citations by university and field appear originally in a more disaggregated form than the biological and medical fields of our regressions. We combined “biology and biochemistry” and “molecular biology and genetics” to form biology. We combined “clinical medicine,” “immunology,” “neuroscience,” and “pharmacology” to form medicine.

The later ISI data contain more measures of scientific output in the universities and fields than the CHI data. There are two measures of numbers of papers: the number published in a particular year, and the number published over a 5-year moving window. Added to this are two measures of citations to the papers: cumulative total citations through 1993 to papers published in a particular year, and total citations to papers published over a 5-year moving window over the course of that window. Each of these output measures has some limitations that stem from the concept and interval of time involved in the measurement. Numbers of papers do not take into account the importance of papers, whereas total citations do. Especially in the larger research programs it is the total impact that matters, not the number of papers, however small. Turning to citations, cumulative cites through 1993 suffer from truncation bias in comparing papers from different years. A paper published in 1991 has only a small part of the citations it will ever get by 1993, whereas a paper published in 1981 has most of them intact. The time series profile of cites will show a general decline in citations, especially in short panels, merely because successive vintages of papers have decreasing periods in which to draw cites. The second measure available to us, the 5-year moving window of cites to papers published in the same window, is free of this trended truncation bias. However, there is still a truncation bias in the cross-section owing to the fact that better papers are cited over a longer period. Thus, total cites are to some extent understated in the better programs over the 5 years in comparison to weaker programs. This problem could be gotten around by using a 10–12-year window on the cites, but then we are stuck with one year’s worth of data and we would be unable to study trends.

Another point about the data used in the regressions, as opposed to the descriptive statistics, is that they cover an elite sample of top United States universities that perform a lot of R&D. The number of universities is 54 in biology, 55 in chemistry, 53 in mathematics, 47 in medicine, and 52 in physics. These universities are generally the more successful programs in their fields among all universities. Their expenditures constitute roughly one-half of all academic R&D in each of these areas of research. It turns out that, for the much larger set of universities that we do not include, the data are often missing or else the fields are not represented in these smaller schools in any substantive way. The majority of high-impact academic research in the United States is in fact represented by schools that are in our samples. Remarkably, and as if to underscore the skewness of the distribution of academic R&D, it is still true that the research programs in our sample display an enormous size range.

We are indebted to JianMao Wang for excellent research assistance and to the Mellon Foundation for financial support. We are also indebted to Lawrence W. Kenny for encouraging us to investigate the role of S&Es, as well as real R&D.

1. National Science Board (1993) Science and Engineering Indicators: 1993 (GPO, Washington, DC).
2. Adams, J.D. (1990) J. Political Econ. 98, 673–702.
3. Stephan, P.E. (1996) J. Econ. Lit., in press.
4. Griliches, Z. (1994) Am. Econ. Rev. 84 (1), 1–23.
5. Jorgenson, D.W. & Fraumeni, B.M. (1992) in Output Measurement in the Service Sectors, NBER Studies in Income and Wealth, ed. Griliches, Z. (Univ. Chicago Press, Chicago), Vol. 55, pp. 303–338.
6. Rosenberg, N. & Nelson, R.R. (1993) American Universities and Technical Advance in Industry, CEPR Publication No. 342 (Center for Economic Policy Research, Stanford, CA).
7. Henderson, R., Jaffe, A.B. & Trajtenberg, M. (1995) Universities as a Source of Commercial Technology: A Detailed Analysis of University Patenting 1965–1988, NBER Working Paper 5068 (Natl. Bureau of Econ. Res., Cambridge, MA).
8. Katz, S., Hicks, D., Sharp, M. & Martin, B. (1995) The Changing Shape of British Science (Science Policy Research Unit, Univ. of Sussex, Brighton, England).
9. Levin, R., Klevorick, A., Nelson, R. & Winter, S. (1987) in Brookings Papers on Economic Activity, Special Issue on Microeconomics, eds. Baily, M. & Winston, C. (Brookings Inst., Washington, DC), pp. 783–820.
10. Mansfield, E. (1991) Res. Policy 20, 1–12.
11. Mansfield, E. (1995) Rev. Econ. Stat. 77 (1), 55–65.
12. Griliches, Z. (1958) J. Political Econ. 64 (5), 419–431.
13. Nelson, R.R. (1962) in The Rate and Direction of Inventive Activity, ed. Nelson, R.R. (Princeton Univ. Press, Princeton), pp. 549–583.
14. Weisbrod, B.A. (1971) J. Political Econ. 79 (3), 527–544.
15. Mushkin, S.J. (1979) Biomedical Research: Costs and Benefits (Ballinger, Cambridge, MA).
16. Griliches, Z. (1964) Am. Econ. Rev. 54 (6), 961–974.
17. Evenson, R.E. & Kislev, Y. (1975) Agricultural Research and Productivity (Yale Univ. Press, New Haven, CT).
18. Huffman, W.E. & Evenson, R.E. (1994) Science for Agriculture (Iowa State Univ. Press, Ames, IA).
19. Griliches, Z. (1979) Bell J. Econ. 10 (1), 92–116.
20. Van Raan, A.F.J. (1988) Handbook of Quantitative Studies of Science and Technology (North-Holland, Amsterdam).
21. Elkana, Y., Lederberg, J., Merton, R.K., Thackray, A. & Zuckerman, H. (1978) Toward a Metric of Science: The Advent of Science Indicators (Wiley, New York).
22. Stigler, G.J. (1979) Hist. Political Econ. 11, 1–20.
23. Cole, J.R. & Cole, S. (1973) Social Stratification in Science (Univ. Chicago Press, Chicago).
24. Price, D.J. de S. (1963) Little Science, Big Science (Columbia Univ. Press, New York).
25. Adams, J.D. (1993) Am. Econ. Rev. Papers Proc. 83 (2), 458–462.
26. Pardey, P.G. (1989) Rev. Econ. Stat. 71 (3), 453–461.
27. Bureau of Economic Analysis, U.S. Department of Commerce (1994) Surv. Curr. Bus. 74 (11), 37–71.
28. Jankowski, J. (1993) Res. Policy 22 (3), 195–205.
29. Griliches, Z. (1987) Science 237, 31–35.
30. ISI (1995) Science Citation Index (ISI, Philadelphia).
31. Quantum Research Corp. (1994) CASPAR, CD-ROM version 4.4 (Quantum Res., Bethesda).

This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

Flows of knowledge from universities and federal laboratories: Modeling the flow of patent citations over time and across institutional and geographic boundaries

ADAM B. JAFFEa,b AND MANUEL TRAJTENBERGc

aBrandeis University and National Bureau of Economic Research, Department of Economics, Waltham, MA 02254–9110; and cTel Aviv University and National Bureau of Economic Research, Department of Economics, Tel Aviv 69978, Israel

ABSTRACT The extent to which new technological knowledge flows across institutional and national boundaries is a question of great importance for public policy and the modeling of economic growth. In this paper we develop a model of the process generating subsequent citations to patents as a lens for viewing knowledge diffusion. We find that the probability of patent citation over time after a patent is granted fits well to a double-exponential function that can be interpreted as the mixture of diffusion and obsolescence functions. The results indicate that diffusion is geographically localized. Controlling for other factors, within-country citations are more numerous and come more quickly than those that cross country boundaries.

The rate at which knowledge diffuses outward from the institutional setting and geographic location in which it is created has important implications for the modeling of technological change and economic growth and for science and technology policy. Models of endogenous economic growth, such as Romer (1) or Grossman and Helpman (2), typically treat knowledge as completely diffused within an economy, but implicitly or explicitly assume that knowledge does not diffuse across economies. In the policy arena, ultimate economic benefits are increasingly seen as the primary policy motivation for public support of scientific research. Obviously, the economic benefits to the United States economy of domestic research depend on the fruits of that research being more easily or more quickly harvested by domestic firms than by foreign firms. Thus, for both modeling and policy-making purposes it is crucial to understand the institutional, geographic, and temporal dimensions of the spread of newly created knowledge.

In a previous paper [Henderson et al. (3)] we explored the extent to which citations by patents to previous patents are geographically localized, relative to a baseline likelihood of localization based on the predetermined pattern of technological activity. This paper extends that work in several important dimensions. (i) We use a much larger number of patents over a much longer period of time. This allows us to explicitly introduce time, and hence diffusion, into the citation process. (ii) We enrich the institutional comparisons we can make by looking at three distinct sources of potentially cited patents: United States corporations, United States universities, and the United States government. (iii) The larger number of patents allows us to enrich the geographic portrait by examining separately the diffusion of knowledge from United States institutions to inventors in Canada, Europe, Japan, and the rest of the world. (iv) Our earlier work took the act of citation as exogenous, and simply measured how often that citation came from nearby. In this paper we develop a modeling framework that allows citations from multiple distinct locations to be generated by a random process whose parameters we estimate.

THE DATA

We are in the process of collecting from commercial sources a complete data base on all United States patentsd granted since 1963 (≈2.5 million patents), including data for each indicating the nature of the organization, if any, to which the patent property right was assigned; the names of the inventors and the organization, if any, to which the patent right was assigned; the residence of each inventore; the dates of the patent application and the patent grant; and a detailed technological classification for the patent. The data on individual patents are complemented by a file indicating all of the citations made by United States patents since 1977 to previous United States patents (≈9 million citations). Using the citation information in conjunction with the detailed information about each patent itself, we have an extremely rich mine of information about individual inventive acts and the links among them as indicated by citations made by a given patent to a previous one.

We and others have discussed elsewhere at great length the advantages and disadvantages of using patents and patent citations to indicate inventions and knowledge links among inventions (3–5). Patent citations perform the legal function of delimiting the patent right by identifying previous patents whose technological scope is explicitly placed outside the bounds of the citing patent. Hence, the appearance of a citation indicates that the cited patent is, in some sense, a technological antecedent of the citing patent. Patent applicants bear a legal obligation to disclose any knowledge that they might have of relevant prior inventions, and the patent examiner may also add citations not identified by the applicant.

Our basic goal in this paper is to explore the process by which citations to a given patent arrive over time and how this process is affected by characteristics of the cited patent.

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked“advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.

bTo whom reprint requests should be addressed. e-mail: jaffe@binah.cc.brandeis.edu.
dBy “United States patents,” we mean in this context patents granted by the United States Patent Office. All of our research relies on United States patents in this sense. Currently, about one-half of United States patents are granted to foreigners. Hence, later in the paper, we will use the phrase United States patents to mean patents granted to residents of the United States, as opposed to those granted to foreigners.

eThe city and state are reported for United States inventors, the country for inventors outside the United States.

We also explore how different potentially citing locations differ in the speed and extent to which they “pick up” existing knowledge, as evidenced by their acknowledgment of such existing knowledge through citation. Because of the policy context mentioned above, we are particularly interested in citations to university and government patents. We recognize that much of the research that goes on at both universities and government laboratories never results in patents, and presumably has impacts that cannot be traced via our patent citations-based research. We believe, however, that at least with respect to relatively near-term economic impacts, patents and their citations are at least a useful window into the otherwise “black box” of the spread of scientific and technical knowledge.

Table 1. Simple statistics for patent subsamples

                                        United States corporations   United States universities   United States government
Range of cited patents                  1963–1990                    1965–1990                    1963–1990
Range of citing patents                 1977–1993                    1977–1993                    1977–1993
Total potentially cited patents         88,257 (1 in 10)             10,761 (universe)            38,254 (universe)
Total citations                         321,326                      48,806                       109,729
Mean citations                          3.6                          4.5                          2.9
Mean cited year                         1973                         1979                         1973
Mean citing year                        1986                         1987                         1986
Cited patents by field, %
  Drugs and medical                     4.89                         29.12                        3.36
  Chemicals excluding drugs             30.37                        28.71                        20.73
  Electronics, optics, and nuclear      26.16                        27.39                        45.40
  Mechanical                            28.18                        9.51                         17.09
  Other                                 10.39                        5.28                         13.42
Citations by region, %
  United States                         70.6                         71.8                         70.8
  Canada                                1.6                          1.7                          1.7
  European Economic Community           14.5                         13.2                         16.8
  Japan                                 11.3                         11.0                         8.6
  Rest of world                         1.9                          2.4                          2.1

The analysis in this paper is based on the citations made to three distinct sets of “potentially cited” patents. The first set is a 1-in-10 random sample of all patents granted between 1963 and 1990 and assigned to United States corporations (88,257 patents). The second set is the universe of all patents granted between 1965 and 1990 to United States universities, based on a set of assignees identified by the Patent Office as being universities or related entities such as teaching hospitals (10,761 patents).f The third set is the universe of patents granted between 1963 and 1990 to the United States government (38,254 patents). Based on comparisons with numbers published by the National Science Foundation, these patents overwhelmingly come from federal laboratories, and the bulk come from the large federal laboratories. The United States government set also includes, however, small numbers of patents from diverse parts of the federal government. We have identified all patents granted between 1977 and 1993 that cite any of the patents in these three sets (479,861 citing patents). Thus we are using temporal, institutional, geographic, and technological information on over 600,000 patents over about 30 years.

Some simple statistics from these data are presented in Table 1. On average, university patents are more highly cited, despite the fact that more of them are recent.g Federal patents are less highly cited than corporate patents. But it is difficult to know how to interpret these averages, because many different effects all contribute to these means. First, the differences in timing are important because we know from other work that the overall rate of citation has been rising over time (7), so more recent patents will tend to be more highly cited than older ones. Second, there are significant differences in the composition of the different groups by technical field. Most dramatically, university patents are much more highly concentrated in Drugs and Medical Technology and less concentrated in Mechanical Technology than the other groups. Conversely, the federal patents are much more concentrated in Electronics, Optics, and Nuclear Technology than either of the other groups, with less focus on Chemicals. To the extent that citation practices vary across fields, differences in citation intensities by type of institution could be due to field effects. Finally, different potentially citing locations have different field focuses of their own, with Japan more likely to cite Electronics patents and less likely to cite Drug and Medical patents. The main contribution of this paper is the exploration of an empirical framework in which all of these different effects can be sorted out, at least in principle.

THE MODEL

We seek a flexible descriptive model of the random processes underlying the generation of citations, which will allow us to estimate parameters of the diffusion process while controlling for variations over time and technological fields in the “propensity to cite.” For this purpose we adapt the formulation of Caballero and Jaffe (7), in which the likelihood that any particular patent K granted in year T will cite some particular patent k granted in year t is assumed to be determined by the combination of an exponential process by which knowledge diffuses and a second exponential process by which knowledge becomes obsolete. That is:

p(k, K) = α(k, K) exp[–β1(k, K)(T – t)] × [1 – exp(–β2(T – t))], [1]

where β1 determines the rate of obsolescence and β2 determines the rate of diffusion. We refer to the likelihood determined by Eq. 1 as the “citation frequency,” and the citation frequency as a function of the citation lag (T – t) as a citation function.
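A minimal numerical sketch of Eq. 1 (ours, in Python; the parameter values below are purely illustrative and are not estimates from this paper) makes the shape of such a citation function concrete:

import numpy as np

def citation_frequency(lag, alpha, beta1, beta2):
    # Eq. 1: exponential obsolescence (beta1) times exponential diffusion (beta2)
    return alpha * np.exp(-beta1 * lag) * (1.0 - np.exp(-beta2 * lag))

lags = np.arange(0, 31)                                   # citation lag T - t, in years
freq = citation_frequency(lags, alpha=1.0, beta1=0.2, beta2=0.1)
print("modal lag:", lags[freq.argmax()])                  # frequency rises, peaks, then fades

With these illustrative values the frequency peaks at a lag of about 4 years, broadly in line with the shape of the average citation functions described for Fig. 1 below.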

fThere are, presumably, university patents before 1965, but we do not have the ability to identify them as such.
gIn previous work (6), we showed that university patents applied for up until about 1982 were more highly cited than corporate patents, but that the difference has since disappeared.

The dependence of the parameters α and β1 on k and K is meant to indicate that these could be functions of certain attributes of both the cited and citing patents. In this paper, we consider the following as attributes of the cited patent k that might affect its citation frequency: t, the grant year of the potentially cited patent; i = 1…3, the institutional nature of the assignee of the potentially cited patent (corporate, university, or government); and g = 1…5, the technological field of the potentially cited patent. As attributes of the potentially citing patent K that might affect the citation likelihood we consider: T, the grant year of the potentially citing patent, and L = 1…5, the location of the potentially citing patent.

FIG. 1. Plot of the average citation functions for each of five geographic regions (citation frequency as a function of time elapsed from each potentially cited patent).

To illustrate the plausibility of this formulation, we plot the average citation functions (citation frequency as a function of time elapsed from the potentially cited patent), for each of the five geographic regions in Fig. 1. This figure shows that citations display a pattern of gradual diffusion and ultimate obsolescence, with maximal citation frequency occurring after about 5 years. The contrasts across countries in these raw averages are striking: United States patents are much more likely to cite our three groups of United States patents than are any other locations, with an apparent ranking among other regions of Canada, Rest of World (R.O.W.), European Economic Community (E.E.C.), and then Japan. Although many of these contrasts will survive more careful scrutiny, it is important at this point to note that these comparisons do not control for time or technical field effects.

Additional insight into this parameterization of the diffusion process can be gained by determining the lag at which the citation function is maximized (“the modal lag”), and the maximum value of the citation frequency achieved. A little calculus shows that the modal lag is approximately equal to 1/β1; increases in β1 shift the citation function to the left. The maximum value of the citation frequency is approximately determined by β2/β1; increases in β2, holding β1 constant, increase the overall citation intensity.h Indeed, increases in β2, holding β1 constant, are very close to equivalent to increasing the citation frequency proportionately at every value of (T – t). That is, variations in β2 holding β1 constant are not separately identified from variations in α. Hence, because the model is somewhat easier to estimate and interpret with variations in α, we do not allow variations in β2.
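For the record, the calculus behind these approximations can be sketched as follows (our reconstruction from Eq. 1, using the approximation noted in footnote h; it is not reproduced from the paper itself):

```latex
\[
f(L) \;=\; e^{-\beta_1 L}\bigl(1 - e^{-\beta_2 L}\bigr), \qquad L = T - t .
\]
\[
f'(L^{*}) = 0
\;\Longrightarrow\; e^{\beta_2 L^{*}} = 1 + \frac{\beta_2}{\beta_1}
\;\Longrightarrow\; L^{*} = \frac{1}{\beta_2}\,\ln\!\Bigl(1 + \frac{\beta_2}{\beta_1}\Bigr)
\;\approx\; \frac{1}{\beta_1},
\]
\[
f(L^{*}) \;=\; e^{-\beta_1 L^{*}}\,\frac{\beta_2}{\beta_1 + \beta_2}
\;\approx\; e^{-1}\,\frac{\beta_2}{\beta_1},
\]
where both approximations use $\ln(1+x) \approx x$ for the small ratio $x = \beta_2/\beta_1$.
```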

Consider now a potentially cited patent with particular i, t, g attributes, e.g., a university patent in the Drug and Medical area granted in 1985. The expected number of citations that this patent will receive from a particular T, L combination (e.g., Japanese patents granted in 1993) is just the above likelihood, as a function of i, t, g, T, and L, times the number of patents in the particular T, L group that are thereby potential citing patents. Even aggregating in this way over T and L, this is still a very small expected value, and so it is not efficient to carry out estimation at the level of the individual potentially cited patent. Instead we aggregate across all patents in a particular i, t, g cell, counting all of the citations received by, e.g., university drug patents granted in 1985, from, e.g., Japanese patents in 1993. The expected value of this total is just the expected value for any one potentially cited patent, times the number of potentially cited patents in the i, t, g cell. In symbols,

E(ctgiTL) = (ntgi)(nTL) pitgTL,  [2]

implying that the equation

ctgiTL = (ntgi)(nTL) pitgTL + εigtTL,  [3]

or

ctgiTL/[(ntgi)(nTL)] = pitgTL + εigtTL,  [4]

can be estimated by non-linear least squares if the error εigtTL is well behaved. The data set consists of one observation for each feasible combination of values of i, t, g, T, and L. The corporate and federal data each contribute 9,275 observations (5 values of g, times 5 values of L, times 28 values of t, times either 17 (for years before 1977) or 1993 – t (for years beginning in 1977) values of T).i Because the university patents start only in 1965, there are only 8,425 university cells, for a total number of observations of 26,975. Of these, about 25% have zero citations;j the mean number of citations is about 18 and the maximum is 737. The mean value of pitgTL is 3.3 × 10–6.
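The observation counts quoted here follow from simple counting; the snippet below is only a consistency check of that arithmetic (it is not part of the original analysis) and mirrors the exclusion of t = T cells noted in footnote i.

```python
def n_cells(first_cited_year):
    """Count (g, L, t, T) cells for one institutional type: 5 fields x 5 regions,
    cited years t up to 1990, citing years T from 1977 to 1993 with T > t."""
    total = 0
    for t in range(first_cited_year, 1991):
        n_T = 17 if t < 1977 else 1993 - t  # citing years available for this cited year
        total += 5 * 5 * n_T
    return total

print(n_cells(1963))                      # corporate cells (and likewise federal): 9,275
print(n_cells(1965))                      # university cells: 8,425
print(2 * n_cells(1963) + n_cells(1965))  # total observations: 26,975
```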

h The approximation involved is that log(1 + β2/β1) ≈ β2/β1. Our estimations all lead to β2/β1 on the order of 10–6, and indeed the approximation holds to five significant figures for lags up to 30 years.

i We exclude cells for which t = T, where the model predicts that the number of citations is identically zero. In fact, the number of citations in such cells is almost always zero.

j About two-thirds of the zero citation observations are for cells associated with either Canada or Rest of World.


MODEL SPECIFICATION AND INTERPRETATION

The first specification issue to consider is the difficulty of estimating effects associated with cited year, citing year, and lag. This is analogous to estimating “vintage,” time, and age effects in a wage model or a hedonic price model. If lag (our “age” effect) entered the model linearly, then it would be impossible to estimate all three effects. Given that lag enters our model non-linearly, all three effects are identified in principle. In practice, we found that we could not get the model to converge with the double-exponential lag function and separate α parameters for each cited year and each citing year. We were, however, able to estimate a model in which cited years are grouped into 5-year intervals. Hence, we assume that α(t) is constant over t for these intervals, but allow the intervals to differ from each other.

All of the estimation is carried out including a “base” value for β1 and β2, with all other effects estimated relative to a base value of unity.k The various effects are included by entering multiplicative parameters, so that the estimating equation looks like:

pitgTL = αi αtp αg αT αL exp[–(β1)(β1i)(β1g)(β1L)(T – t)] × [1 – exp(–β2(T – t))] + εigtTL,  [5]

where i = c, u, f (cited institution type); t = 1963–1990 (cited year); tp = 1…6 (5-year intervals for cited year, except that the first interval is 1963–1965); g = 1…5 (technological field of cited patent); T = 1977…1993 (citing year); and L = 1…5 (citing region). In this model, unlike the linear case, the null hypothesis of no effect corresponds to parameter values of unity rather than zero. For each effect, one group is omitted from estimation, i.e., its multiplicative parameter is constrained to unity. Thus, the parameter values are interpreted as relative to that base group.l
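The multiplicative structure of Eq. 5 can be sketched as a small function; the dictionaries and numerical values below are placeholders chosen for illustration (only the base groups, fixed at unity as in footnote l, are taken from the text), and this is our sketch of the specification rather than the authors' estimation code.

```python
import math

# Placeholder multiplicative effects; each base group is constrained to 1.0 (footnote l).
alpha_i  = {"corporate": 1.0, "university": 1.2, "federal": 0.75}   # cited institution
alpha_tp = {"1963-1965": 1.0, "1981-1985": 0.65}                    # cited 5-year period
alpha_g  = {"All Other": 1.0, "Drugs and Medical": 1.4}             # cited field
alpha_T  = {1977: 1.0, 1993: 2.0}                                   # citing year
alpha_L  = {"United States": 1.0, "Japan": 0.45}                    # citing region

beta1, beta2 = 0.2, 4e-6                                            # base rates (placeholders)
beta1_i = {"corporate": 1.0, "university": 0.98, "federal": 0.93}
beta1_g = {"All Other": 1.0, "Drugs and Medical": 0.93}
beta1_L = {"United States": 1.0, "Japan": 1.0}

def p_hat(i, tp, g, T, L, t):
    """Predicted citation frequency p_itgTL from Eq. 5 (error term omitted)."""
    lag = T - t
    alpha = alpha_i[i] * alpha_tp[tp] * alpha_g[g] * alpha_T[T] * alpha_L[L]
    decay = beta1 * beta1_i[i] * beta1_g[g] * beta1_L[L]
    return alpha * math.exp(-decay * lag) * (1.0 - math.exp(-beta2 * lag))

# Example: a university Drugs and Medical patent granted in 1983, cited from Japan in 1993.
print(p_hat("university", "1981-1985", "Drugs and Medical", 1993, "Japan", t=1983))
```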

The estimate of any particular α(k), say α(g=Drugs and Medical), is a proportionality factor measuring the extent to which the patents in the field “Drugs and Medical” are more or less likely to be cited over time vis à vis patents in the base category “All Other.” Thus, an estimate of α(g=Drugs)=1.4 means that the likelihood that a patent in the field of Drugs and Medical will receive a citation is 40% higher than the likelihood of a patent in the base category, controlling of course for a wide range of factors. Notice that this is true across all lags; we can think of an α greater than unity as meaning that the citation function is shifted upward proportionately, relative to the base group. Hence the integral over time (i.e., the total number of citations per patent) will also be 40% larger.
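The step from “40% higher at every lag” to “40% more citations in total” is immediate, since α enters multiplicatively at every lag; in symbols (our restatement):

```latex
\[
\int_{0}^{\infty} \alpha_g \, e^{-\beta_1 L}\bigl(1 - e^{-\beta_2 L}\bigr)\, dL
\;=\; \alpha_g \int_{0}^{\infty} e^{-\beta_1 L}\bigl(1 - e^{-\beta_2 L}\bigr)\, dL ,
\]
so an estimate of $\alpha_g = 1.4$ raises the expected total number of citations per patent
by the same 40\% as it raises the citation frequency at each lag.
```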

We can think of the overall rate of citation intensity measured by variations in α as being composed of two parts. Citation intensity is the product of the “fertility” (7) or “importance” (4) of the underlying ideas in spawning future technological developments, and the average “size” of a patent, i.e., how much of the unobservable advance of knowledge is packaged in a typical patent. Within the formulation of this paper, it is not possible to decompose the α-effects into these two components.m

In the case of α(K), that is, when the multiplicative factor varies with attributes of the citing patents, variations in it should be interpreted as differences in the “propensity to cite” (or in the probability of making a citation) of patents in a particular category vis à vis the base category of the citing patents. If, for example, α(K=Europe) is 0.5, this means that the average patent granted to European inventors is one-half as likely as a patent granted to inventors residing in the United States to cite any given United States patent.

Variations in β1 (again, by attributes of either the cited or the citing patents) imply differences in the rate of decay or “obsolescence” across categories of patents. Higher values of β1 mean higher rates of decay, which pull the citation function downward and leftward. In other words, the likelihood of citations would be lower everywhere for higher β1 and would peak earlier on. Thus, a higher α means more citations at all lags; a lower β1 means more citations at later lags.

When both α(k, K) and β1(k, K) vary, the citation function can shift upward at some lags while shifting downward at others. For example, if α(g=Electronics)=2.00, but β1(g=Electronics)=1.29, then patents in electronics have a very high likelihood of citations relative to the base category, but they also become obsolete faster. Because obsolescence is compounded over time, differences in β1 eventually result in large differences in the citation frequency. If we compute the ratio of the likelihood of citations for patents in electronics relative to those in “all other” using these parameters, we find that 1 year after being granted, patents in electronics are 89% more likely to be cited, but 12 years later the frequencies for the two groups are about the same, and at a lag of 20 years Electronics patents are actually 36% less likely to be cited than patents in the base category.
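Because β2 is not allowed to vary, the diffusion term cancels when two fields are compared at the same lag, and the ratio of citation frequencies reduces to α_g · exp[–β1(β1g – 1)(T – t)]. The snippet below evaluates that expression with the illustrative Electronics values quoted in the text; small differences from the ratios reported in Table 3 presumably reflect rounding of the published parameter values.

```python
import math

beta1_base = 0.213  # overall decay rate reported in Table 2

def field_to_base_ratio(alpha_g, beta1_g, lag):
    """Ratio of a field's citation frequency to the base field at a given lag;
    the [1 - exp(-beta2*lag)] term is common to both and cancels."""
    return alpha_g * math.exp(-beta1_base * (beta1_g - 1.0) * lag)

for lag in (1, 5, 10, 12, 20, 30):
    print(lag, round(field_to_base_ratio(2.00, 1.29, lag), 2))
```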

RESULTS

Table 2 shows the results from the estimation of Eq. 5, using a weighted non-linear least-squares procedure. We weight each observation by nn = (ntgi*nTL)**0.5, where ntgi is the number of potentially cited patents and nTL the number of potentially citing patents corresponding to a given cell. This weighting scheme should take care of possible heteroskedasticity, since the observations correspond essentially to “grouped data,” that is, each observation is an average (in the corresponding cell), computed by dividing the number of citations by (ntgi*nTL).
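A minimal sketch of the weighted non-linear least-squares step, assuming the cell-level data have already been assembled into arrays (one entry per i, t, g, T, L cell). The toy numbers, the use of scipy, and the convention of multiplying residuals by the weight nn are all our own illustrative choices; the model function is collapsed to the base double exponential for brevity.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy cell-level data: citation counts, cell sizes, and citation lags (placeholders).
c     = np.array([12.0, 30.0, 4.0, 0.0])          # citations received in the cell
n_tgi = np.array([800.0, 1200.0, 400.0, 650.0])   # potentially cited patents
n_TL  = np.array([5.0e4, 8.0e4, 3.0e4, 4.5e4])    # potentially citing patents
lag   = np.array([3.0, 6.0, 12.0, 20.0])          # T - t for the cell

p_obs = c / (n_tgi * n_TL)            # observed frequency, as in Eq. 4
nn = (n_tgi * n_TL) ** 0.5            # the weights nn described in the text

def residuals(theta):
    alpha, beta1, beta2 = theta
    p_hat = alpha * np.exp(-beta1 * lag) * (1.0 - np.exp(-beta2 * lag))
    return nn * (p_obs - p_hat)       # weighted residuals for the grouped data

fit = least_squares(residuals, x0=[0.01, 0.2, 1e-5], bounds=(0.0, np.inf))
print(fit.x)                          # [alpha, beta1, beta2] for the toy data
```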

Time Effects. The first set of coefficients, those for the citing years (αT), and for the cited period (αtp), serve primarily as controls. The αT show a steep upward trend, reaching a plateau in 1989. This reflects a well-known institutional phenomenon, namely, the increasing propensity to make citations at the patent office, due largely to the computerization of the patent file and of the operations of patent examiners. By contrast, the coefficients for the cited period decline steadily relative to the base (1963–1965=1), to 0.65 in 1981–1985, recovering somewhat in 1986–1990 to 0.73. This downward trend may be taken to reflect a decline in the “fertility” of corporate patents from the 1960s until the mid-1980s, with a mild recovery thereafter. The timing of such decline coincides, with a short lag, with the slowdown in productivity growth experienced throughout the industrialized world in the 1970s and early 1980s. This suggests a possible causal nexus between these two phenomena, but further work would be required to substantiate this conjecture.

Technological Fields. We allow both for variations in the multiplicative factor αg and in the β1 of each technological field of the cited patents. Thus, fields with α larger than one are likely to get more citations than the base field at any point in time. On the other hand, the rate of citations to patents in fields with larger β1 decays faster than for others. For example, we see in Table 2 that α(Electronics, etc.) = 2.00, meaning that patents in this field get on average twice as many citations as those in the base field. However, β1(Electronics, etc.) = 1.29,

k As noted above, α is not separately identified from β1 and β2. Hence, we do not estimate a “base” value for the parameter α; it is implicitly unity.

l The base group for each effect is: Cited time period (tp), 1963–1965; Cited field (g), “All Other”; Type of Cited Institution (i), Corporate; Citing year (T), 1977; Citing region (L), United States.

m Caballero and Jaffe (7) attempt to identify the size of patents by allowing exponential obsolescence to be a function of accumulated patents rather than elapsed calendar time. We intend to explore this possibility in future work.


and hence the large initial “citation advantage” of this field fades rather quickly over time. This is clearly seen in Fig. 2, where we plot the predicted citation function for patents in Electronics, Optics, and Nuclear, versus patents in the base field (“All Other”). Patents in electronics are much more highly cited during the first few years after grant; however, due to their faster obsolescence, in later years they are actually less cited than those in the base group.

Table 2. Non-linear least-squares regression results

                                               Parameter   Asymptotic standard error   t-statistic for H0 (parameter = 1)

Citing year effects (Base = 1977)
1978                                           1.115       0.03449                     3.32
1979                                           1.223       0.03795                     5.88
1980                                           1.308       0.03943                     7.80
1981                                           1.400       0.04217                     9.48
1982                                           1.511       0.04637                     11.01
1983                                           1.523       0.04842                     10.80
1984                                           1.606       0.05209                     11.64
1985                                           1.682       0.05627                     12.12
1986                                           1.753       0.06073                     12.40
1987                                           1.891       0.06729                     13.24
1988                                           1.904       0.07085                     12.76
1989                                           2.045       0.07868                     13.29
1990                                           1.933       0.07795                     11.97
1991                                           1.905       0.07971                     11.36
1992                                           1.994       0.08627                     11.52
1993                                           1.956       0.08918                     10.73

Cited year effects (Base = 1963–1965)
1966–1970                                      0.747       0.02871                     –8.82
1971–1975                                      0.691       0.02820                     –10.97
1976–1980                                      0.709       0.03375                     –8.62
1981–1985                                      0.647       0.03647                     –9.69
1986–1990                                      0.728       0.04752                     –5.72

Technological field effects (Base = all other)
Drugs and medical                              1.409       0.01798                     22.73
Chemicals excluding drugs                      1.049       0.01331                     3.65
Electronics, optics, and nuclear               1.360       0.01601                     22.51
Mechanical                                     1.037       0.01370                     2.69

Citing country effects (Base = United States)
Canada                                         0.647       0.00938                     –37.59
European Economic Community                    0.506       0.00534                     –92.49
Japan                                          0.442       0.00542                     –102.99
Rest of world                                  0.506       0.00824                     –59.93

University/corporate differential by cited time period
1965                                           1.191       0.12838                     1.49
1966–1970                                      0.930       0.04148                     –1.70
1971–1975                                      1.169       0.02419                     7.00
1976–1980                                      1.216       0.01765                     12.26
1981–1985                                      1.250       0.01718                     14.55
1986–1990                                      1.062       0.01746                     3.57

Federal government/corporate differential by cited time period
1963–1965                                      0.720       0.04592                     –6.11
1966–1970                                      0.739       0.02498                     –10.45
1971–1975                                      0.744       0.01531                     –16.71
1976–1980                                      0.759       0.01235                     –19.51
1981–1985                                      0.754       0.01284                     –19.15
1986–1990                                      0.709       0.01551                     –18.78

β1*                                            0.213       0.00247                     86.28
β2*                                            3.86E-06    1.97E-07                    19.61

Total observations, 26,975; R-square = 0.5161. *t-statistic is for H0: parameter = 0.
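As a reading aid (our own arithmetic, not from the paper): the t-statistics in Table 2 can be recovered as (estimate – 1)/standard error for the multiplicative effects and estimate/standard error for the two starred rows; small discrepancies reflect rounding of the displayed digits.

```python
rows = [
    # (label, estimate, standard error, null value)
    ("Citing year 1978", 1.115, 0.03449, 1.0),
    ("Drugs and medical", 1.409, 0.01798, 1.0),
    ("Japan", 0.442, 0.00542, 1.0),
    ("beta1*", 0.213, 0.00247, 0.0),
]
for label, est, se, h0 in rows:
    print(f"{label}: t = {(est - h0) / se:.2f}")
```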

To grasp the meaning of these estimates, we present in Table 3 the ratio of the citation probability of each of the technological fields to the citation probability of the base field, at different lags (1, 5, 10, 20, and 30 years after the grant date of the cited patent). Looking again at Electronics, we see that the ratio starts very high at 1.89, but after 12 years it is the same as the base field, after 20 years it declines to 0.64, and declines further to 0.36 after 30 years. This implies that this field is


extremely dynamic, with a great deal of “action” in the form of follow-up developments taking place during the first few years after an innovation is patented, but also with a very high obsolescence rate. Thus, a decade later the wave of further advances subsides, and 30 years later citations have virtually ceased. Commonly held perceptions about the technological dynamism of this field are thus amply confirmed by these results, and given a precise quantitative expression.

FIG. 2. Plot of the predicted citation function for patents in Electronics, Optics, and Nuclear versus patents in the base field (All Other).

For other fields the results are perhaps less striking but still interesting. Drugs and Medical begins at 133% of the base citation frequency, but due to the low obsolescence rate it actually grows over time (at a slow pace), so that 20 years later it stands at 170% relative to the base field. Again, this is shown graphically in Fig. 2 and numerically in Table 3. The conjecture here is that, due to the long lead times in pharmaceutical research, including the process of getting approval from the Food and Drug Administration, follow-up developments are slow in coming. Thus, whereas in Electronics a given innovation has very little impact 10–20 years later because the field is evolving so fast, in pharmaceuticals a new drug may still prompt follow-up innovations much later, after its medical and commercial viability have been well established.

As to the Chemical field, we see that it starts off at 127% of the base field, but due to a high obsolescence rate the advantage fades over time (though not as fast as in Electronics), falling behind the base field in less than a decade. The Mechanical field is similar to the base field, slowly losing ground over time. Note that after 20 years the ranking of fields changes dramatically compared with the ranking at the beginning, suggesting that allowing for variations in both α and β1 is essential to understanding the behavior of fields over time.

Institutional Type. To capture the various dimensions of institutional variations we interact the α of each institutional type with the cited period (except for corporate, which serves as the base), and allow also for differences across institutions in the rate of decay β1. The results show that the estimates of β1 for universities and for Government are less than 1, but only slightly so, and hence we limit the discussion to variations in α (see Table 4 for the effects of the variations in β1).

Table 3. Citation probability ratio by technological field

                                       Lag, yr
Technological field       β1           1       5       10      20      30
Drugs and medical         0.932        1.33    1.40    1.50    1.71    1.96
Chemical                  1.158        1.27    1.12    0.96    0.70    0.51
Electronics, etc.         1.288        1.89    1.50    1.13    0.64    0.36
Mechanical                1.054        1.11    1.06    1.01    0.91    0.81
Other                     1.000        1.00    1.00    1.00    1.00    1.00

Ignoring 1965, we see that university patents became increasingly more “fertile” than corporate ones in the 1970s and early 1980s, but their relative citation intensity declined in the late 1980s. This confirms and extends similar results that we obtained in previous work (6). Government patents, on the other hand, are significantly less fertile than corporate patents, with a moderate upward trend over time (from 0.59 in 1963–1966 to 0.68 in 1981–1985), except for a decline in the last period. Their overall lower fertility level may be due to the fact that these laboratories had been traditionally quite isolated from mainstream commercial innovations and, thus, those innovations that they did choose to patent were in some sense marginal. By the same token, one might conjecture that the upward trend in the fertility ratio may be due to the increasing “openness” of federal laboratories, and their efforts to reach out and make their innovations more commercially oriented.

Location. The regional multiplicative coefficients show very significant “localization” effects. That is, patents granted to United States inventors are much more likely to cite previous United States patents than are patents granted to inventors of other countries: α for the different foreign regions/countries is in the 0.43–0.57 range, as opposed to the (normalized) value of 1 for the United States. At the same time, though, all foreign countries except Japan have lower β1 than the United States. Thus, the propensity to cite (i.e., to “absorb spillovers”) for Canada and Europe increases over time relative to patents in the base category. This means that the localization effect fades over time. This can be seen clearly in Table 5 and in Fig. 3: the probability that a foreign inventor would cite a patent of a United States inventor is 42–56% lower than that of a United States resident inventor 1 year after grant, but 20 years later the difference has shrunk to 20–36%. The puzzling exception is Japan; the estimates imply that the “receptiveness” of Japanese inventors to United States inventions remains low, since β1(Japan) does not differ significantly from unity.

Table 4. Citation probability ratio by institution

                                       Lag, yr
Research institution        β1         1       5       10      20      30
Universities 1981–1985      0.978      1.23    1.25    1.28    1.34    1.40
Universities 1986–1990      0.978      1.08    1.10    1.12    1.18    1.23
Federal Labs 1981–1985      0.932      0.69    0.73    0.78    0.90    1.03
Federal Labs 1986–1990      0.932      0.67    0.70    0.75    0.86    0.99
Corporate                   1.000      1.00    1.00    1.00    1.00    1.00


FIG. 3. Frequency of citations to U.S. patents, from patents originating in the United States, the European Economic Community, Canada, and Japan. The localization effect fades over time.

The “fading” effect in the geographic dimension corresponds to the intuitive notion that knowledge eventually diffuses evenly across geographic and other boundaries, and that any initial “local” advantage in that sense will eventually dissipate. Once again, these results offer a quantitative idea of the extent of the initial localization and the speed of fading. Notice also that, starting a few years after grant, the differences across regions seem to depend upon a metric of geographic, and perhaps also cultural, proximity: at lag 10, for example, Canada is highest with a coefficient of 0.67, followed by Europe with 0.53, and Japan with 0.44.

Further Results. Finally, the overall estimate of β1=0.2 means that the citation function reaches its maximum at about 5 years (the modal lag is approximately 1/β1), which is consistent with the empirical citation distribution shown in Fig. 1. The R2 of 0.52 is fairly high for models of this kind, suggesting that the postulated double exponential, combined with the effects that we have identified, fits the data reasonably well.

CONCLUSION

The computerization of patent citations data provides an exciting opportunity to examine the links among inventions and inventors, over time, space, technology, and institutions. The ability to look at very large numbers of patents and citations allows us to begin to interpret overall citation flows in ways that better reflect reality. This paper represents an initial exploration of these data. Many variations that we have not explored are possible, but this initial foray provides some intriguing results. First, we confirm our earlier results on the geographic localization of citations, but now provide a much more compelling picture of the process of diffusion of citations around the world over time. Second, we find that federal government patents are cited significantly less than corporate patents, although they do have somewhat greater “staying power” over time. Third, we confirm our earlier findings regarding the importance or fertility of university patents. Interestingly, we do not find that university patents are, to any significant extent, more likely to be cited after long periods of time. Finally, we show that citation patterns across technological fields conform to prior beliefs about the pace of innovation and the significance of “gestation” lags in different areas, with Electronics, Optics, and Nuclear Technology showing very high early citation but rapid obsolescence, whereas Drugs and Medical Technology generate significant citations for a very long time.

Table 5. Citation probability ratio by citing geographic area

                              Lag, yr
Location           β1         1       5       10      20      30
Canada             0.914      0.58    0.62    0.67    0.80    0.95
Europe             0.899      0.44    0.48    0.53    0.65    0.79
Japan              1.002      0.44    0.44    0.44    0.44    0.44
Rest of World      0.900      0.44    0.48    0.53    0.64    0.78
United States      1.000      1.00    1.00    1.00    1.00    1.00

The list of additional questions that could be examined with these data and this kind of model is even longer. (i) It would be interesting to examine whether the geographic localization differs across the corporate, university, and federal cited samples. (ii) The interpretation that we give to the geographic results could be strengthened by examining patents granted in the United States to foreign corporations. Our interpretation suggests that the lower citation rate for foreign inventors should not hold for this group of cited patents. (iii) We could apply a similar model to geographic regions within the United States, although some experimentation will be necessary to determine how small such regions can be and still yield reasonably large numbers of citations in each cell while controlling for other effects. (iv) It would be useful to confirm the robustness of these results to finer technological distinctions, although our previous work with citations data leads us to believe that this will not make a big difference. (v) We would like to investigate the feasibility of modeling obsolescence as a function of accumulated patents. Caballero and Jaffe (7) implemented this approach, but in that analysis patents were not distinguished by location or technological field.

We acknowledge research support from National Science Foundation Grants SBR-9320973 and SBR-9413099.

1. Romer, P.M. (1990) J. Pol. Econ. 98, S71–S102.
2. Grossman, G.M. & Helpman, E. (1991) Q. J. Econ. 106, 557–586.
3. Jaffe, A.B., Henderson, R. & Trajtenberg, M. (1993) Q. J. Econ. 108, 577–598.
4. Trajtenberg, M., Henderson, R. & Jaffe, A.B. (1996) University Versus Corporate Patents: A Window on the Basicness of Invention, Economics of Innovation and New Technology, in press.
5. Griliches, Z. (1990) J. Econ. Lit. 92, 630–653.
6. Henderson, R., Jaffe, A.B. & Trajtenberg, M. (1996) in A Productive Tension: University–Industry Research Collaboration in the Era of Knowledge-Based Economic Growth, eds. David, P. & Steinmueller, E. (Stanford Univ. Press, Stanford, CA).
7. Caballero, R.J. & Jaffe, A.B. (1993) in NBER Macroeconomics Annual 1993, eds. Blanchard, O.J. & Fischer, S.M. (MIT Press, Cambridge, MA), pp. 15–74.


This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

The future of the national laboratories

LINDA R. COHEN* AND ROGER G. NOLL†

*Department of Economics, University of California, Irvine, CA 92717; and †Department of Economics, Stanford University, Stanford, CA 94305

ABSTRACT  The end of the Cold War has called into question the activities of the national laboratories and, more generally, the level of support now given to federal intramural research in the United States. This paper seeks to analyze the potential role of the laboratories, with particular attention to the possibility, on the one hand, of integrating private technology development into the laboratory’s menu of activities and, on the other hand, of outsourcing traditional mission activities. We review the economic efficiency arguments for intramural research and the political conditions that are likely to constrain the activities of the laboratories, and analyze the early history of programs intended to promote new technology via cooperative agreements between the laboratories and private industry. Our analysis suggests that the laboratories are likely to shrink considerably in size, and that the federal government faces a significant problem in deciding how to organize a downsizing of the federal research establishment.

The federal government directly supports nearly half of the research and development (R&D) performed in the United States. Of this, about a third is for intramural research (research performed by agencies or in federal laboratories), while the remainder is performed extramurally by industry, universities, and nonprofit organizations under grants or contracts with the federal government. In fiscal year 1994, federal obligations for all laboratories amounted to nearly 23 billion dollars. In constant dollars, the federal R&D budget has been shrinking since fiscal year 1989, and the laboratory budget has followed suit (see Fig. 1).‡

Intramural research includes a range of activities. Much of it is in support of agency activities and contributes to technology that is purchased by the government. Examples include weapons technology in Department of Defense laboratories and research that supports the regulatory activities of the Environmental Protection Agency and the Nuclear Regulatory Commission. (The distribution of intramural and extramural research by agency is shown in Table 1.) A relatively small but important activity is the collection and analysis of statistics by the Department of Commerce, the Bureau of Labor Statistics, and the National Science Foundation. A significant share of the intramural R&D budget goes for basic and applied science in areas where the government has determined that there is a public interest: National Institutes of Health (NIH) in the biomedical field; National Institute of Standards and Technology in metrology; Department of Energy (DOE) in basic physics; and agricultural research at the Agricultural Research Stations. Finally, the laboratories support commercial activities of firms. The final category has been growing in recent years and is usually distinguished from all the previous categories (although the distinction is blurred in some agencies) in that the former are called “mission” research and the latter “technology transfer” or “cooperative research with industry.”

FIG. 1. Federal obligations for R&D.

An important distinction between the categories lies in the treatment of intellectual property rights. Whereas the government has pursued strategies to diffuse the results of mission activities, the cooperative programs contain arrangements that allocate property rights to private participants. This distinction is not sharp: results of defense-related work, of course, have been tightly controlled. However, the government retains for itself property rights for intramural defense R&D and, where feasible, licenses the patents to more than one company. Alternatively, the new programs have used assignment of property rights as a tool to raise profits to firms and thereby encourage private technology adoption, through exclusive licensing arrangements (particularly for those technologies developed primarily by the laboratories) or assignment of patents (for cooperative projects). In their intellectual property rights policy, the latter set of programs mirrors the policies employed for extramural research. Thus, to some extent, private firms effectively retain residual rights in inventions. For these programs, the laboratories can be characterized in part as subcontractors to industry.

Recently, the role of the federal laboratories in the national research effort has come under serious reexamination. At the core of the question about the future of the national laboratories is the importance of national security missions in justifying their budgets. The end of the Cold War has called into question the missions of the Department of Defense laboratories and the weapons laboratories run by the DOE and its contractors. In addition, the end of the Cold War has weakened the political coalition that supports public R&D activities in the United States more generally.

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.

Abbreviations: R&D, research and development; NIH, National Institutes of Health; DOE, Department of Energy; CRADAs, cooperative research and development agreements.

‡Statistical information about R&D spending in the United States reported here comes from refs. 1 and 2, and the National Science Foundation web site: www.nsf.gov.


Furthermore, increased expenditures on entitlement programs for the elderly and resistance to further tax increases have placed further pressure on budget levels at the laboratories. The budgets of most federal laboratories have been constant or declining in recent years, and expectations are that reductions will continue.

Table 1. Federal obligations for total R&D, selected agencies and performers, fiscal year 1994

Agency                  Total     Labs-total   Intramural   FFRDC    Share of total R&D, %   Share of lab R&D, %   Share of intramural, %   Share of FFRDC, %
All agencies            71,244    22,966       17,542       5,424
Defense, development    33,107    8,613        7,651        962      46.5                    37.5                  43.6                     17.7
Defense, research       4,416     1,634        1,515        119      6.2                     7.1                   8.6                      2.2
DHHS                    10,722    2,285        2,213        72       15.0                    9.9                   12.6                     1.3
NIH                     10,075    1,980        1,908        72       14.1                    8.6                   10.9                     1.3
NASA                    8,637     3,356        2,653        703      12.1                    14.6                  15.1                     13.0
Energy                  6,582     3,822*       500          3,322    9.2                     16.6                  2.9                      61.2
NSF                     2,217     148          17           131      3.1                     0.6                   0.1                      2.4
Agriculture             1,368     901          901          0        1.9                     3.9                   5.1                      0.0
ARS                     640       609          609          0        0.9                     2.7                   3.5                      0.0
Forest Service          215       180          180          0        0.3                     0.8                   1.0                      0.0
Commerce                897       655          654          1        1.3                     2.9                   3.7                      0.0
NIST                    382       244          244          0        0.5                     1.1                   1.4                      0.0
NOAA                    504       400          399          1        0.7                     1.7                   2.3                      0.0
Transportation          688       280          253          27       1.0                     1.2                   1.4                      0.5
EPA                     656       135          135          0        0.9                     0.6                   0.8                      0.0
Interior                588       517          517          0        0.8                     2.3                   2.9                      0.0
USGS                    362       332          332          0        0.5                     1.4                   1.9                      0.0
All other               1,366     620          533          87       1.9                     2.7                   3.0                      1.6

Data are from ref. 3, table 9, pp. 26–28. DHHS, Department of Health and Human Services; NIH, National Institutes of Health; NASA, National Aeronautics and Space Administration; NSF, National Science Foundation; ARS, Agricultural Experiment Station; NIST, National Institute of Standards and Technology; NOAA, National Oceanic and Atmospheric Administration; USGS, U.S. Geological Survey; FFRDC, federally funded research and development corporation.
*Not including Bettis, Hanford, and Knolls, former FFRDCs, which were decertified in 1992. Obligations for these facilities are now reported as obligations to industrial firms.

In contrast to these trends, in the early 1980s the federal laboratories were called on to expand their activities. Responding to the perceived productivity slow-down in the 1970s and, later, the increased competition of foreign firms in high-tech industries, the laboratories undertook efforts to improve the technology employed by U.S. firms. The Stevenson-Wydler Act of 1980 established “technology transfer” as a role of all federal laboratories. Whereas the original Stevenson-Wydler Act had few teeth, it ushered in a decade of legislative activity designed to expand laboratory activities in promoting private technology development. The primary innovation in laboratory activities has been the development of cooperative research and development agreements, or CRADAs, which provide a mechanism for industry to enter into cooperative, cost-shared research with government laboratories. In 1993, the Clinton Administration proposed that these activities would not only be pursued, but would substitute for the decline in traditional activities at the national laboratories (4, 5). President Clinton proposed devoting 10–20% of the federal laboratory budgets to these programs. That number has not been reached, although CRADA activity has been impressive. The President’s 1996 Budget claims that 6093 CRADA partnerships had been entered into by fiscal year 1995, with a value (including cash and noncash contributions of public and private entities) of over $5 billion (6). Some estimates of the size and distribution of CRADAs are provided in Table 2.

The past 2 years have witnessed a retreat from the policy of promoting commercial technology development at the laboratories.

Table 2. Number and industry of CRADAs by agency, 1993

                        Distribution of 1993 CRADAs by industrial technology
                        Biological       Manufacturing                            Information   Computer
Agency          Total   Medical  Other   Aerospace  Automobile  Chemical  Other   technology    software   Energy   Other
Agriculture     103     1        47      1          0           12        31      0             1          2        8
Commerce        144     1        2       17         1           21        33      44            7          8        10
Defense
  Air           73      1        2       7          1           2         2       33            16         3        6
  Army          87      19       6       2          4           3         27      9             3          0        14
  Navy          46      9        0       4          2           1         10      13            5          0        2
  Total         206     29       8       13         7           6         39      55            24         3        22
Energy          368     14       10      21         20          35        86      86            18         61       17
EPA             5       1        2       0          0           0         0       0             0          1        1
HHS             25      25       0       0          0           0         0       0             0          0        0
Interior        15      0        1       0          0           3         8       0             0          0        3
Transportation  14      0        0       12         0           1         0       0             0          1        0
Total           880     71       70      64         28          78        189     185           50         76       61

Data from ref. 7.


During 1995, the Clinton Administration undertook a major review of the national laboratory structure in the United States (8). Both its reports (9–13) and additional analyses from the science policy community (14–16) have recommended that the laboratories deemphasize industry technology efforts, outsource some R&D activities, and concentrate on missions, narrowly defined. Although the cooperative programs continued to expand, their future is now problematic.

This paper seeks to analyze the potential role of the laboratories, with particular attention to the possibility, on the one hand, of integrating private technology development into the laboratory’s menu of activities and, on the other hand, of outsourcing traditional mission activities. The next section reviews the economic efficiency arguments for intramural research and the political conditions that are likely to constrain the activities of the laboratories. The third section considers cooperative agreements between the laboratories and industry in somewhat more detail, and reviews some of the early history with these programs. Our discussion suggests that the laboratories are likely to shrink considerably in size, and that the federal government faces a significant problem in deciding how to organize a downsizing of the federal research establishment. In the last section, we examine this issue, and conclude that without some advance planning about how to downsize, the process is likely to be costly and inefficient. In particular, downsizing cannot be addressed sensibly without two prior actions: a reprioritization of the relative effort devoted to different fields of R&D, and a commitment to minimize the extent to which short-term political considerations affect the allocation of cuts across programs and laboratories. Thus, to rationalize this process, we propose the creation of a National Laboratories Assessment and Restructuring Commission, fashioned after the Military Base Closing Commission.

ECONOMICS, POLITICS, AND INTRAMURAL RESEARCH

The economic rationale for government support of R&D has two distinct components. The first relates to the fact that the product of R&D activity is information, which is a form of public good. The second relates to problems arising in industries in which the federal government has market power in its procurement.

The public good aspect of R&D underpins the empirical finding that, left to its own devices, the private sector will underinvest in at least some kinds of R&D. To the extent that the new information produced by an R&D project leaks out to and is put to use by organizations other than the performer of the project, R&D creates a positive externality: some of the benefits accrue to those who do not pay for it. To the extent that the R&D performer can protect the new information against such uses unless the user pays for it, the realized social benefits of R&D are less than is feasible. (See ref. 17 for an excellent discussion of these issues.)

Keeping R&D proprietary has two potential inefficiencies. First, once the information has been produced, charging for its use by others is inefficient because the charge precludes some beneficial uses. Second, an organization that stumbles upon new information that is useful in another organization with a completely different purpose may not recognize the full array of its possible applications. Hence, even if it could charge for its use, neither the prospective buyer nor the potential seller may possess sufficient knowledge to know that a mutually beneficial transaction is possible.

The potential spillovers of R&D usually are not free; typically, one firm must do additional work to apply knowledge discovered elsewhere for its own activities. Hence spillovers generate complementarities across categories of R&D. More R&D in one area, when it becomes available to those working in another area, increases the productivity of the latter’s research. This complementarity can be either horizontal (from one industry, technology, or discipline to another) or vertical (between basic and applied areas) (19).

The public goods argument leads to a richer conclusion than simply that government should support R&D. In particular, it says that government should support R&D when a project is likely to have especially large spillover benefits, and that when government does support R&D, the results should be disseminated as widely as possible. One area where this is likely to be true is in basic research: projects that are designed to produce new information about physical reality that, once discovered, is likely to be difficult to keep secret and/or that is likely to have many applications in a variety of industries. Here the term “basic” diverges from the way that term is used among researchers in that it refers primarily to the output of a project, rather than its motivation. A project that is very focused and applied may come upon and solve new questions about the fundamental scientific and engineering principles that underpin an entire industry and so have many potential uses and refinements.

The public goods argument also applies to industries in which R&D is not profitable simply because it is difficult to keep new discoveries secret. If products are easily reverse engineered, intellectual property rights are not very secure, and innovators are unable to secure a “first-in” advantage, private industry is likely to underinvest in R&D, so that the government potentially can improve economic welfare by supporting applied research and development.

Finally, the complementarities among categories of R&D indicate still another feature of an economically optimal program: increases in support in one area may make support for another area more attractive. Thus, if for exogenous reasons a particular area of technical knowledge is perceived to become more valuable, putting more funds into it may cause other areas to become more attractive, and so increase overall R&D effort by more than the increase in the area of heightened interest.

If the purpose of government R&D is to add to total R&D effort in areas where private incentives for R&D are weak and where extensive dissemination is valuable, a government laboratory is a potentially attractive means for undertaking the work. A private contractor will not have an incentive to disseminate information widely and will have an incentive to try to redirect R&D effort in favor of projects that are likely to give the firm an advantage over competitors. For basic research, another attractive set of institutions in the United States is the research universities, which garner the lion’s share of the extramural basic research budget.

The second rationale for publicly supported R&D arises when the government is the principal consumer of a product. The problem that arises here is that once a new product has been created, the government, acting as a monopsonist, can force the producer to set the price for the product too low for the producer to recover its R&D investment. If a private producer fears that the government will behave in this way, the producer will underinvest in R&D.

Whereas this problem can arise in any circumstance in which a market is monopsonized, the problem is especially severe when the monopsonist is the government. The root of the problem is the greater susceptibility of government procurement to inefficient and even corrupt practices, and, consequently, the more elaborate safeguards that government puts in place to protect against corruption. The objectives of government procurement are more complex and less well defined than is the case in the private sector, where profit maximization is the overriding objective. In government, end products do not face a market test. Hence, in evaluating whether a particular product (including its technical characteristics) is efficiently produced and worth the cost, one does not have the benefit of established market prices. In addition,


the relevant test for procurement is political success, which involves more than producing a good product at reasonable cost. Such factors as the identity of the contractor and geographic location of production also enter into the assessment.

Table 3. Basic research share of federal R&D expenditures by performing sector and function

                                                   1982     1984     1989     1990     1992     1993     1995
Basic share of total federal R&D                   0.150    0.145    0.170    0.177    0.188    0.203    0.200
Basic share of DoD R&D                             0.050    0.029    0.025    0.025    0.029    0.032    0.034
Basic share of DoD intramural R&D                  0.043    0.035    0.027    0.028    0.030    0.036    0.033
Basic share of DoD extramural R&D                  0.030    0.027    0.025    0.024    0.028    0.031    0.035
Basic share of federal civilian R&D                0.303    0.365    0.410    0.392    0.379    0.388    0.390
Basic share of federal civilian intramural R&D     0.252    0.298    0.320    0.314    0.310    0.318    0.380
Basic share of federal civilian extramural R&D     0.347    0.427    0.480    0.449    0.428    0.435    0.440

DoD, Department of Defense.

Because of the complexity and vagueness of objectives, procurement is susceptible to inattentiveness or even self-serving manipulation by whoever in the government (an agency official or a congressional overseer) has authority for negotiating a contract. To protect against inefficiency and corruption, the government has adopted extremely complex procurement rules, basing product procurement on audited production costs when competitive bidding is not feasible. In such a system, recovering the costs associated with financial risk and exploratory R&D in the procurement price is uncertain at best. Thus, the firm that produces for the government faces another form of a public goods problem in undertaking R&D: even if the knowledge can be kept within the firm, the firm still may not benefit from it because of the government’s procurement rules. Hence, the government usually deals with the problem of inducing adequate R&D in markets where it is a monopsonist by undertaking the R&D in a separate, subsidized project.

Unfortunately, the procurement problem is even more severe for research projects. Because of the problems associated with contracting for research, firms in the private sector perform almost all of their research in house. Only about 2% of industrial R&D is procured from another organization. Monitoring whether a contractor is actually undertaking best efforts, or even doing the most appropriate research, is more difficult than monitoring whether a final product satisfies procurement specifications. Likewise, a firm is likely to find it easier to prevent diffusion of new information to its competitors if it does its own work, rather than contracts for it from someone else. For the government, the analogous problem is to prevent other countries from gaining access to military secrets or even commercially valuable knowledge that the government wants U.S. firms to use to gain a competitive advantage internationally. Thus, it is not surprising that the public sector has national laboratories: research organizations that are dedicated to the mission of the supporting agency, even if organizationally separated, over which the agency can exercise strong managerial control. Indeed, a primary rationale in the initial organization of the national laboratories that were established during the Second World War and shortly thereafter was to avoid the complexities of contractual relationships that would be necessary were the activities to be performed by the private sector.§

Table 3 shows the distribution by character of R&D supported by industry, by government through intramural programs, and by government through extramural programs. The distribution bears a rough relationship to the principles discussed here. Government support for basic research greatly exceeds that of industry, with the differential magnified when the activities of the Department of Defense (which invests heavily in weapons development activities) are excluded. Outside of the Department of Defense, the basic research component of extramural research is significantly higher than for intramural research, although the differential narrows in recent years. Thus, the budget levels are consistent with extramural support for activities undersupported by the private sector (i.e., basic research) and intramural support that includes mission-oriented development work as well as basic research.

The preceding economic rationales for government R&D and national laboratories do not necessarily correspond to an effective political rationale for a program. Public policies emerge because there is a political demand for them among constituents. Organizations that undertake research have an interest in obtaining federal subsidies regardless of the strength of the economic rationale behind them. And national laboratories, once created, can become a political force for their continuation, especially large laboratories that become politically significant within a congressional district.

In most cases, areas of R&D are not of widespread political concern. Instead, the advocates consist of some people who seek to attain the objectives of the R&D project and some others who will undertake the work. In principle, an area of R&D could enjoy widespread political support, but as a practical matter almost all R&D projects have relatively narrow constituencies. Even in defense, which until the demise of the former Soviet Union enjoyed broad-based political support, controversies emerged out of disagreements about the priorities to be assigned to different types of weapons systems: nuclear versus conventional weapons, aircraft versus missiles versus naval ships, etc.

The standard conceptual model for understanding the evolution of public policy involves the formation of support coalitions, each member of which agrees to support all of the projects favored by the coalition, not just the ones personally favored. Applied to R&D, the coalition model implies that public support for a broad menu of R&D programs arose as something of a logroll among groups of constituents and their representatives, with each group supporting some programs that it regarded as having lower value in return for the security of having stable support for its own pet projects. The members of this support coalition included various groups interested in defense-related activities, but were not confined to them.

The coalitional basis of political support suggests another form of complementarity among programs. If, for exogenous reasons, the proponents of research in one area perceive an increase in the value of their pet programs, they will be willing to support an increase in other R&D programs to obtain more funds for their own. Hence, coalitional politics can be expected to cause the budgets for different kinds of research to go up and down together, even across areas that do not have technical complementarities.

In other work, we have tested the hypothesis that real federal R&D expenditures by broad categories are complements, and are complementary with defense procurement. In this work, we use two-stage least squares to estimate simultaneously

§This point was made in the report prepared for the White House Science Council by the Federal Laboratory Review Panel (the “Packard Report”) in 1983. For a discussion of this report (considered the “grand-daddy” of federal laboratory reviews), see ref. 19.


annual expenditures on defense R&D, civilian R&D, and defense procurement for the period 1962–1994. One major finding is that defense and civilian R&D are strong complements, that defense procurement and defense R&D are complements, and that defense procurement and civilian R&D are substitutes. Quantitatively, however, the last effect is sufficiently small so that an exogenous shock that increases procurement has a net positive effect on civilian R&D as well as defense R&D. Logically, the system works as follows: if defense procurement becomes more attractive, it causes a small reduction in civilian R&D and a large increase in defense R&D; however, due to the combination of political and economic complementarities between defense R&D and civilian R&D, the increase in defense R&D leads to an increase in civilian R&D that more than offsets the initial reduction.
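For readers unfamiliar with the mechanics of the estimator, a minimal two-stage least squares sketch is given below. It uses synthetic data and hypothetical variable names (defense_rd, civilian_rd, an instrument matrix Z, and an exogenous control X_exog), not the series or specification used in the work described here; it simply illustrates how the endogenous regressor is first projected on the instruments and exogenous variables, with the fitted values then used in the structural equation.

    # Minimal 2SLS sketch with synthetic data; names and numbers are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 33                                     # e.g., annual observations, 1962-1994
    Z = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # constant + two instruments
    X_exog = rng.normal(size=(n, 1))                             # exogenous control
    defense_rd = Z @ np.array([1.0, 0.8, -0.5]) + rng.normal(scale=0.3, size=n)
    civilian_rd = 2.0 + 0.6 * defense_rd + 0.4 * X_exog[:, 0] + rng.normal(scale=0.3, size=n)

    # First stage: project the endogenous regressor on instruments and exogenous variables.
    W = np.column_stack([Z, X_exog])
    beta_fs, *_ = np.linalg.lstsq(W, defense_rd, rcond=None)
    defense_rd_hat = W @ beta_fs

    # Second stage: structural equation with the fitted endogenous regressor.
    X2 = np.column_stack([np.ones(n), defense_rd_hat, X_exog])
    beta_2sls, *_ = np.linalg.lstsq(X2, civilian_rd, rcond=None)
    print("2SLS estimates (const, defense R&D coefficient, control):", np.round(beta_2sls, 3))

In the full simultaneous system, one such equation would typically be estimated for each expenditure category, with identification resting on instruments excluded from that category's structural equation.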

The other major finding is that basic and applied research are also strongly complementary, with analogous relationships to procurement. Whereas defense procurement and basic research are substitutes, quantitatively this relationship is smaller than the complementarities between procurement and applied research and between applied and basic research. Hence, an exogenous shock that increases procurement has a net positive effect on both basic and applied R&D.

These results have important implications for the national laboratories. Many have observed the obvious fact that the reductions in defense expenditures associated with the end of the Cold War have led to reductions in defense-related R&D, including support for defense-related national laboratories. About the time that the end of the Cold War was in sight, federal officials and the national laboratories placed new emphasis on commercially relevant R&D. At the national laboratories, this emphasis took the form of participation by the laboratories in large industrial research consortia (such as SEMATECH, a consortium concerned with semiconductor manufacturing technology) and in CRADAs with individual firms to apply in-house expertise to commercial R&D problems. Simultaneously, the Department of Defense developed its “dual use” concept: supporting the development of new technology that could be used simultaneously for military and civilian purposes. The theme running through these programs was that a new emphasis on commercially relevant activity could substitute for the drop in demand for national security brought on by the end of the Cold War.

In principle, this strategy could have worked—but only if a genuine exogenous shock took place that increased politically effective demand for nondefense R&D. Had a counterpart to the Soviet Union's role in defense after World War II arisen in commercial activities around the middle of the 1980s, the complementarities among categories of research could have worked not only to maintain the overall R&D effort but also, through complementarities between defense and civilian R&D, to soften the blow to defense R&D. For a while, through the economic stagnation of the late 1970s and early 1980s, the declining relative economic position of the United States in comparison to Japan and the European Economic Community (EEC) was a possible candidate; however, as the decade of the 1980s progressed and the economic performance of other advanced industrialized nations deteriorated relative to the United States, it became clear that no such exogenous change was taking place. Regardless of the conceptual merits of civilian R&D, whether basic or applied, no fundamental change had taken place in the political attractiveness of such work.

If this line of reasoning is correct, there is no “peace dividend” for civilian R&D, whether basic or applied. To the extent that there are technical complementarities between defense and civilian R&D, the reduction in the former reduces the attractiveness of the latter, all else equal. And, because one member of the R&D coalition—the defense establishment—has experienced an exogenous shock that reduces demand for national security, the willingness of this group to support other areas of R&D has concomitantly shrunk.

The preceding argument abstracts from partisanship and ideology in politics. The November 1994 elections increased the relative power of defense-oriented interests compared with those who support civilian R&D. To the extent that the relative influence of these groups has shifted, a given level of economic attractiveness of defense and civilian R&D will produce more of the former and less of the latter. But the forces we identify here are separate from these short-term political shifts. Here a reference to the mid-1970s and early 1980s is instructive.

In the mid-1970s, in the wake of Viet Nam and Watergate, the Congress became substantially more liberal. Not only did defense expenditures fall, but so did almost all components of R&D, civilian and defense, basic and applied. In the late 1970s, under President Carter and with a liberal Democratic Congress, defense procurement and all categories of R&D began to recover. The election of 1980 brought Republican control of the Senate and the Presidency, and a more defense-oriented government; however, after much criticism of federally subsidized commercial R&D, again all categories of R&D expanded until the end of the Cold War. Now, once again, all categories are declining. Expenditures in the national laboratories followed the same pattern.

COOPERATIVE RESEARCH ACTIVITIES AT THE FEDERAL LABORATORIES

The purpose of this section is to examine in more detail the set of cooperative research activities that the federal laboratories have been engaged in during the recent past. CRADAs seek to advance technology that will be used by private industry, and in particular by industries that compete with foreign firms. Expanding such activities is the primary proposal for maintaining historic levels of support at the federal laboratories.

The economic justification for the programs is not frivolous. In part, the case rests on the considerable expertise of the federal laboratory establishment. The contributions of the laboratories to commercial technology have, in the past, been substantial, and they provide a basis for the belief that considerable technology exists at the laboratories whose “transfer” to industry would be beneficial. Detailed studies of the R&D process suggest that transferring technology is far from a straightforward process, and that it can be substantially facilitated by close interaction, ideally through joint activities of personnel from the transferring and receiving entities. Thus, cooperative projects are seen as a mechanism to increase the extent and efficiency of technology transfer.

Second, the laboratories and private firms can bring different areas of expertise to a research project, so that complementarities may exist between the two types of entities. As a result, cooperative R&D may yield interesting new technologies that go beyond transfers from the laboratories to industry. Both arguments apply to cooperation among private firms as well as between firms and the laboratories, and they provide economic justification for the government’s preference for working with private consortia, and with consortia that include university members as well as commercial firms.

Instituting the policy has required legislation that departs significantly from some past practices. One set of laws has dealt with the conflict between promoting joint research and antitrust policies. Relaxed antitrust enforcement was established for research joint ventures in 1984, and extended in 1993 to joint production undertaken by firms to commercialize the products of joint research.¶

¶The National Cooperative Research and Development Act of 1984 and The National Cooperative Research and Production Act of 1993.


Table 4. Number of CRADAs in the Department of Health and Human Services

Fiscal year    No. new CRADAs
1987           98
1988           145
1989           225
1990           239
1991           261
1992           63
1993           25
1994           19*

Data are from G. Stockdale (personal communication). *Estimated.

The thornier legislative problem involves intellectual property rights. Historically, results of publicly supported research (both intramural and research supported by grants and contracts) were not patented. The policy was consistent with the philosophy that the results were public goods, and hence social benefits would be maximized by wide dissemination, constrained only by the requirements of national security. However, this philosophy was manifestly at odds with the new programs. Implementing new technology typically requires large investments that constitute sunk costs of development. As with other R&D expenditures, firms may not, absent some form of patent protection, be able to recover these expenditures if the products are sold in competitive markets. Moreover, if the purpose of the programs is to advantage U.S. manufacturers over foreign competitors, widely disseminating the laboratories’ research results is (in the short run) counterproductive: the government needs to erect barriers that prevent the diffusion of technology to foreign firms. Thus, the policies have required the government to rethink its policies on intellectual property rights.

Congress has reconsidered intellectual property rights policies in nearly every legislative session for the past 15 years. Currently firms and universities are, with numerous caveats, allowed to patent inventions arising from federal contract work and to obtain exclusive licenses, for specific fields of use, to inventions arising from cooperative work with the federal laboratories. Government-owned, government-operated laboratories (GOGOs, or the intramural category of activities) obtained this authority in 1986; government-owned, contractor-operated laboratories (GOCOs, including the federally funded research and development corporations) were given the authority in 1989.|| Chief caveats include (i) small business preferences in the assignment of exclusive licenses; (ii) requirements (with exceptions) for domestic manufacturing; and (iii) limited government march-in rights.** Disposition of intellectual property rights has become increasingly complicated with the laboratories’ increased emphasis on cooperative research, as opposed to technology transfer, and with their preference for working with consortia, wherein arrangements are needed to allocate, specify, and protect the rights of each participant.

The initial legislation for these policies enjoyed broad nonpartisan support; indeed, Congress passed the major bills by voice vote rather than conducting roll calls. More recent efforts to modify and clarify patent policies have not been successful. Similarly, CRADAs have enjoyed wide support from both industry and politicians. Until last year, agency heads were regularly exhorted in hearings before Congress to speed up and expand their cooperative activities. The number of CRADAs executed by agencies has grown enormously overall (see Table 2 for recent statistics), and agencies have received far more requests from private firms for cooperative research than they are able to accommodate. However, enthusiasm for the policies appears to be waning. Reports from the Office of Technology Assessment and DOE advisory committees have recommended that DOE focus more narrowly on agency missions, and the current Congress is likely to slash budgets for the extramural programs in fiscal 1996. In part, the turnaround reflects the partisan shift in Congress. But more importantly, both it and the difficulty Congress has had in resolving intellectual property rights issues reflect more fundamental political and economic problems with the policies.

The potential problems in these programs are illustrated by the history of CRADAs at NIH. Table 2 reveals a rather puzzling statistic. NIH is the primary provider of biomedical research in the United States. Moreover, the biomedical industry is extraordinarily research intensive and opportunities for new products and processes are rife. Yet NIH is now involved in a very modest number of CRADAs. This was not always the case (see Table 4). However, CRADAs at NIH have suffered from previous technological successes. In the past, some projects created especially valuable property rights, which were conferred on private partners. As a result, some firms enjoyed apparently exorbitant profits, and direct competitors were excluded from what could be presented as a government-sponsored windfall—two conditions that created political firestorms.

The first firestorm arose in 1989 over 3′-azido-3′-deoxythymidine (AZT), a drug for treating patients infected by HIV, which was developed in a CRADA with Burroughs Wellcome Company.†† Members of Congress were outraged at the price set by Burroughs Wellcome for the drug; in response, NIH adopted a “fair pricing” clause for future CRADAs. The clause did not resolve the controversy, for to institute it, NIH would have to undertake a broad examination of the economics of the pharmaceutical industry—in effect, an effort tantamount to that required for traditional economic regulation. Then-Director of NIH Bernadine Healy appointed a panel to study the issue, but ultimately concluded that NIH was unable to undertake the type of economic regulation of pharmaceutical prices that would be necessary to enforce the clause. Furthermore, NIH lacks any statutory basis for obtaining the necessary information. An additional problem was identified in 1994 by a New York patent attorney who served on the NIH panel and claimed that the U.S. Department of Justice had decided not to enforce drug patents issued to firms participating in CRADAs. Industry officials claim that the political problems and legal uncertainties about the ultimate disposition of property rights have made them reluctant to engage in CRADAs with NIH. The statistics bear out their claim.

The high profits of drug companies for particular products developed under CRADAs may have engendered a particularly fast response from Congress since it is also the public sector that pays a large share of the costs of medical care. But the apparent inequity—public support for companies who are then in a position to extract large profits from consumers—could easily arise in other cooperative research activities. As yet, complaints of either upstream suppliers or downstream customers have not focused on the products of CRADA consortia, but if the projects are successful, the modifications in antitrust policies as well as patent policies are likely to cause controversy.

||Stevenson-Wydler Technology Innovation Act of 1980; Bayh-Dole University and Small Business Patent Act of 1980; Federal Technology Transfer Act of 1986; National Competitiveness Technology Transfer Act of 1989.

**This summary of the patenting situation gives only a general overview of an extremely complicated situation. Additional rules and regulations apply to establishing and protecting proprietary information in cooperative research.

††The AZT congressional response is not unique; a similar firestorm arose over the profitable marketing of a second CRADA-created product, Taxol (see ref. 20).


A second issue that has arisen in successful CRADAs concerns the arrangements for exclusive licensing. Agencies in theory can sign numerous CRADAs, or sign CRADAs with consortia with open membership policies, so CRADA proponents claim that the policy is free from the possibility that government will create identified “losers” and “winners” among firms. In practice, exclusive licensing excludes firms in competitive industries—sometimes at the choice of the excluded firm, which may not have wished to participate in a consortium, and sometimes because firms will agree to CRADAs only if competitors do not participate. Successful projects, or projects that are believed to be likely to succeed, can engender complaints with political, if not legal, clout. NIH ran into this problem with Taxol, which it developed with Bristol-Myers (a big multinational) and not with Unimed (a small, inexperienced company). Relying on the small business preferences written into the Bayh-Dole Act, Unimed succeeded in opening up more embarrassing oversight hearings for NIH. The Environmental Protection Agency was sued for executing CRADAs with the competitors of a firm, Chem Services, that had not also been awarded a CRADA. The Environmental Protection Agency prevailed in court, but the politics of “unfair advantage” claims suggests that the agency take care in future agreements. A third example is a $70 million CRADA entered into in 1993 between Cray Research and two national laboratories for supercomputer development. After objections from other supercomputer manufacturers, and pressure from Congress, the CRADA was dropped.

The issue revealed by these examples is that CRADAs generate political problems when they create industry winners and losers—or potential losers—and when they succeed and make visibly large profits for private firms. The programs have not been in place long enough to observe how Congress will respond if agencies fund—at substantial cost—projects that do not succeed. Given the nature of R&D, potential candidates are likely to arise. The historical record of responses in government procurement suggests that the likely response will be for the government to institute much more elaborate cost accounting and oversight, the traditional baggage of procurement policies that CRADA legislation sought to avoid. Expanded oversight will create conflicts with the confidentiality provisions of CRADAs and with the flexibility of laboratories in contracting with firms (a hard-won right), and it bodes poorly for private interest in cooperative research.

The fundamental problem with CRADA policy is that the laboratories are expected to fill an institutional role as providers of external R&D to firms, which, as detailed in the previous section, presents exceptionally difficult organizational and incentive problems, exacerbated by the essentially political problems presented by the potential creation of private winners and losers. As a result, we do not expect that it can provide a long-term rationale for maintaining the level of support at the federal laboratories.

IMPLICATIONS FOR THE FUTURE

Our examination of the state of the national laboratories yields two main conclusions. First, commercial R&D is unlikely to work as a substitute for national security as a means for keeping the national laboratories at something like their current level of operation. Second, in any event the scope for economically and politically successful collaborations with industry is limited because of the conflicts of interest between the government and the private sector in selecting and managing projects. The good news is that uneconomic commercial collaborations are not likely to command a large share of the budget, but the bad news is that, because of the political complementarities among categories of research, the failure of the commercialization initiative is likely to cause parallel reductions elsewhere in programs that are worthwhile.

The standard approach to budgetary retrenchment is to spread the pain among most categories of effort. In particular, this means roughly equal reductions in the size of each laboratory, rather than consolidation. The early returns on the 1995 budget indicate that a “share the pain” approach is generally being followed by Congress. In the House appropriations bills passed in the summer of 1995, nondefense R&D was cut 5% ($1.7 billion). Most of this was transferred to defense R&D, which grew by 4.2% ($1.6 billion). This represents a real cut in total R&D effort equal to roughly the rate of inflation (about 3%) and a general shift of priorities in favor of defense (about 1% real growth) and against civilian (about 7% real decline).
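The conversion from nominal to real changes implicit in these figures follows the standard deflation identity; as a worked illustration using the defense numbers quoted above,

$$g_{\text{real}} = \frac{1+g_{\text{nominal}}}{1+\pi} - 1 \approx g_{\text{nominal}} - \pi, \qquad \frac{1.042}{1.03} - 1 \approx 0.012 \approx 1\% \text{ real growth},$$

where $\pi$ denotes the inflation rate (notation introduced here for clarity).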

In the nondefense category, every major category of R&D took a cut except NIH. Real federal expenditures on basic research, even including the NIH increase, will fall by about 1.5%.

If, as we conclude, the next few years are likely to witness a steady decline in real federal R&D expenditures in all categories, including the national laboratories, two major issues arise. The first is prioritization of the cuts among areas of R&D, and the second is how to spread cuts in an area of research among institutions.

With respect to priorities, the logic of our argument is that technical and political complementarities work against substantial differences in cuts from the historical shares of each major area of research. Only changes in political representation, such as took place in the elections of 1994 (and 1974 and 1932 before), are likely to cause a substantial shift in priorities, and these will be based less on the economic and technical characteristics of programs than on their distributive effects and ideological content.

With respect to allocations among institutions, the political process is much more likely to embrace a relatively technical solution. Three issues arise in deciding how to spread cuts among national laboratories within a given category of research, one political and two technical. The political issue is classically distributive: no member of Congress, regardless of party or ideology, is likely to volunteer the local national laboratory as a candidate for closure. And, given the number of national laboratories, a majority of Congress is likely to face strong constituency pressure to save a laboratory, just as they did when facing base closures. Congress has considerable experience in facing a circumstance in which each member has a strong incentive to try to protect a significant local constituency, but collectively the members have an incentive to do some harm. The mechanism is to commit in advance to the policy change, before the targets are identified and without the opportunity for amendment. This action relieves a member of Congress from direct responsibility for the harmful action.

Two recent examples of the use of this mechanism are the “fast track” process for approving trade agreements, and the base closure commission. Under fast track, the Congress commits to vote a trade agreement up or down without amendment on the floor of Congress. This process prevents any single member from trying to assist a local industry by proposing an amendment to increase its protections. Historically, when Congress wrote trade legislation, logrolls among representatives led to the adoption of many such amendments. Under the base closure process, the commission, after listening to recommendations from the Department of Defense, submits a list of targets to the President. The President can propose changes, and then the amended list is sent to Congress—again, without the opportunity to amend the list on the floor. Like the trade procedure, this process prevents a member from trying to remove a local base from the list.

A similar process for the national laboratories would deal with the two relevant technical issues. The first is the value of competition among laboratories in a given area of research, and the second is the importance of scale economies.


R&D competition has two potential benefits. The first is that it provides the supporter of research with performance benchmarks that improve its ability to manage the research organizations, and it spurs each competitor to be more efficient, reducing the need for intensive monitoring of performance. The second is that it facilitates parallel R&D projects that take radically different approaches to solving the same problem.

The primary disadvantage of competition is that it can sacrifice economies of scale and scope. If a large physical facility is needed for experiment and testing, duplication can be excessively costly. In addition, if projects have strong complementarities, separating them into competing organizations can increase the difficulty of facilitating spillovers among projects, and cause duplication of effort as each entity separately discovers the same new information. Finally, competition has a political liability: parallel R&D means that some projects must be failures in that they lose the competition. Scandal-seeking political leaders can use these failures as an opportunity to look for scapegoats, falsely equating a bad outcome with a bad decision.

The decision about how to downsize the national laboratory system requires an assessment, for each area of work, of whether competition is, on balance, beneficial or harmful. This issue is fundamentally factual, not theoretical, and constitutes the most difficult question to be answered before a reasonable proposal for downsizing the laboratories can be developed.

1. National Science Board (1993) Science and Engineering Indicators: 1993 (U.S. Government Printing Office, Washington, DC), Rep. NSB-93-1.
2. National Science Foundation (1995) Federal Funds for Research and Development, Fiscal Years 1993, 1994 and 1995 (U.S. Government Printing Office, Washington, DC), Rep. NSF-95-334.
3. National Science Foundation (1994) Federal Funds for Research and Development, Fiscal Years 1992, 1993 and 1994 (U.S. Government Printing Office, Washington, DC), Rep. NSF-94-311.
4. Clinton, W.J. & Gore, A., Jr. (1993) Technology for America's Economic Growth: A New Direction to Build Economic Strength (Executive Office of the President, Washington, DC).
5. Office of Science and Technology Policy (1994) Science in the National Interest (Executive Office of the President, Washington, DC).
6. U.S. Office of Management and Budget (1995) The Budget of the United States, Fiscal Year 1996 (U.S. Government Printing Office, Washington, DC), Chapter 7.
7. Stockdale, G. (1994) The Federal R&D 100 and 1994 CRADA Handbook (Technology Publishing, Washington, DC).
8. National Science and Technology Council (1995) Interagency Federal Laboratory Review, Final Report (Executive Office of the President, Washington, DC).
9. Department of Defense (1995) Department of Defense Response to NSTC/PRD 1, Presidential Review Directive on an Interagency Review of Federal Laboratories (U.S. Department of Defense, Washington, DC), The Dorman Report.
10. NASA Federal Laboratory Review Task Force, NASA Advisory Council (1995) NASA Federal Laboratory Review (National Aeronautics and Space Administration, Washington, DC), The Foster Report.
11. Task Force on Alternative Futures for the DOE National Laboratories (1995) Alternative Futures for the Department of Energy National Laboratories (U.S. Department of Energy, Washington, DC), The Galvin Report.
12. Ad Hoc Working Group of the National Cancer Advisory Board (1995) A Review of the Intramural Program of the National Cancer Institute (National Institutes of Health, Bethesda, MD), The Bishop/Calabresi Report.
13. External Advisory Committee of the Director's Advisory Committee (1995) The Intramural Research Program (National Institutes of Health, Bethesda, MD), The Cassell/Marks Report.
14. Bozeman, B. & Crow, M. (1995) Federal Laboratories in the National Innovation System: Policy Implications of the National Comparative Research and Development Project (Department of Commerce, Washington, DC).
15. Markusen, A., Raffel, J., Oden, M. & Llanes, M. (1995) Coming in from the Cold: The Future of Los Alamos and Sandia National Laboratories (Center for Urban Policy Research, Piscataway, NJ).
16. Committee on Criteria for Federal Support of Research and Development (1995) Allocating Federal Funds for Science and Technology (National Academy Press, Washington, DC).
17. The Council of Economic Advisors (1995) Supporting Research and Development to Promote Economic Growth: The Federal Government's Role (Executive Office of the President, Washington, DC).
18. Rosenberg, N. (1982) Inside the Black Box: Technology and Economics (Cambridge Univ. Press, Cambridge, U.K.).
19. Cook-Deegan, R.M. (1995) Survey of Reports on Federal Laboratories (National Academy of Sciences, Washington, DC).
20. Cohen, L.R. & Noll, R.G. (1995) The Feasibility of Effective Public-Private R&D Collaboration: The Case of CRADAs (Institute of Governmental Studies, Berkeley, CA), Working Paper 95-10.


This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

Long-term change in the organization of inventive activity

NAOMI R. LAMOREAUX*† AND KENNETH L. SOKOLOFF*

Departments of *Economics and †History, University of California, 405 Hilgard Avenue, Los Angeles, CA 90095

ABSTRACT    Relying on a quantitative analysis of the patenting and assignment behavior of inventors, we highlight the evolution of institutions that encouraged trade in technology and a growing division of labor between those who invented new technologies and those who exploited them commercially over the nineteenth and early-twentieth centuries. At the heart of this change in the organization of inventive activity was a set of familiar developments which had significant consequences for the supply and demand of inventions. On the supply side, the growing complexity and capital intensity of technology raised the amount of human and physical capital required for effective invention, making it increasingly desirable for individuals involved in this activity to specialize. On the demand side, the growing competitiveness of product markets induced firms to purchase or otherwise obtain the rights to technologies developed by others. These increasing incentives to differentiate the task of invention from that of commercializing new technologies depended for their realization upon the development of markets and other types of organizational supports for trade in technology. The evidence suggests that the necessary institutions evolved first in those regions of the country where early patenting activity had already been concentrated. A self-reinforcing process appears to have been operating, whereby high rates of inventive activity encouraged the evolution of a market for technology, which in turn encouraged greater specialization and productivity at invention as individuals found it increasingly feasible to sell and license their discoveries. This market trade in technological information was an important contributor to the achievement of a high level of specialization at invention well before the rise of large-scale research laboratories in the twentieth century.

The generation of new technological knowledge is one of the fundamental processes of economic growth. Despite its importance, however, scholars have only an incomplete understanding of how the sources of invention have changed over time with the development of technology and of the economy more generally. Although there has been recent progress in establishing basic historical patterns in the composition of patentees and in the levels of patenting over place and time, issues such as how resources were mobilized and directed to inventive activity, as well as how they were organized, have not yet been systematically investigated (1–5).

Two stylized models dominate thinking about the process of invention. The first, which mainly grows out of research on technology during the early nineteenth century, views the inventor as a creative individual who comes up with an idea and then extracts a return by directly applying or exploiting the invention himself (6). The second derives from study of the twentieth-century economy and portrays invention as carried out by large, in-firm research laboratories where teams of salaried employees pursue a range of activities—from basic research to the development of commercial products (7). If these models accurately reflect the eras that inspired them, their contrast raises questions as to how and why such a major transformation in the organization of inventive activity occurred during the nineteenth and early-twentieth centuries and what effect it had on the pace and direction of technological change.

This paper reports preliminary findings from our long-term program of research on these issues. Relying on a quantitative analysis of the patenting and assignment behavior of inventors, we demonstrate that a substantial trade in technological information had emerged by the end of the nineteenth century, and we suggest that the evolution of institutional supports for this exchange of property rights to intellectual capital helped foster a growing division of labor between those who invented new technologies and those who exploited them commercially. At the heart of this change was a set of familiar developments which had significant consequences for the supply of and demand for inventions. On the supply side, the increasing complexity and capital intensity of technology raised the amounts of human and physical capital required for effective invention, encouraging individuals involved in this activity to specialize. Moreover, although expanding markets meant higher returns for successful discoveries, they also increased the cost of marketing products and led inventors to regard more favorably spinning off the task of commercialization to other specialized parties. On the demand side, the growing competitiveness of product markets made it imperative for firms to stay on the technological cutting edge—in the first place, by making inventive activity a regular part of their operations, but also by obtaining the rights to technologies developed by others.

These increasing incentives to differentiate the task of invention from that of commercializing new technologies depended for their realization upon the development of markets and other types of organizational supports for trade in technology. As we show below, such institutions evolved first in areas where inventive activity was high and spread only gradually to other regions of the country. They appear to have been the product of a self-reinforcing process whereby high rates of patenting stimulated investments supporting a market in technological information, which in turn encouraged greater specialization and productivity at invention as inventors found it feasible to sell and license their discoveries. The prominence of firms in this market for technology rose substantially over the late nineteenth century, as they acquired a growing share of patents at issue, and patentees who chose to assign their patents to firms were more specialized and productive at invention than their counterparts who did not. This evidence seems to indicate that the evolution of market exchange in technology had gone far toward achieving high degrees of specialization at invention among individuals, long before firms invested in large-scale research laboratories or even developed stable employment relationships with inventors.

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.


THE PATENT SYSTEM AS THE BASIS FOR TRADE IN TECHNOLOGY

The patent system provided the institutional framework within which a market for technology evolved. Consciously designed with the aim of encouraging more investment in inventive activity, the U.S. system granted inventors an exclusive property right to the use of their discoveries for a fixed term of years. Responsibility for enforcing these rights was left to the courts, especially before an 1836 revision in the law empowered the Patent Office to examine applications for originality, and the courts responded by quickly developing an effective set of principles that protected the property rights of patentees and also those who purchased or licensed patented technologies (8).

Although one purpose of the patent system was to stimulate invention, another was to promote the diffusion of technological knowledge. The law required all patentees to provide the Patent Office with detailed specifications for their inventions (including, where appropriate, working models). The end result was a central storehouse of information about technology that was open to all who wished to exploit it. In addition, the very act of establishing secure property rights in invention promoted the diffusion of technological knowledge. With the protection offered by the patent system, inventors had an incentive to promote their discoveries as widely as possible so as to maximize the returns from their ideas, whether they commercialized them themselves or traded the rights to others. Because infringers were subject to severe penalties, moreover, firms could not risk investing in a new technology without finding out whether others already controlled the relevant property rights. They therefore had to keep well informed about technological developments in other sectors of the economy as well as in other geographic areas, and it is likely that technologies diffused more rapidly as a consequence and also that the resulting cross-fertilization was a potent stimulus to technological change overall.

Finally, two distinctive features of the U.S. law encouraged more widespread participation in the patent system and, at the same time, trade in technological information. First, the much lower costs of obtaining a patent in the United States than in other countries meant that a larger fraction of inventions would have expected yields high enough to warrant being patented. Second, the United States was exceptional for much of the nineteenth century in reserving for the first and true inventor the right to patent an invention (9). Inventors in the United States, therefore, did not have to be as protective of their discoveries as their counterparts elsewhere; they could even risk revealing critical technological information before the award of the patent to negotiate the early sale of their invention.

Although the patent system provided a legal framework conducive to trade in technology, there were nonetheless a variety of information and transactions costs that limited the market for inventions. Over the nineteenth century, however, a number of institutional and organizational changes reduced these costs and, in so doing, encouraged an expansion of trade. One of the most important was an explosion of published sources of information about patented technologies. The Patent Office itself published an annual list of patents issued, but private publications emerged early in the nineteenth century to improve upon this service. For example, Scientific American featured articles about technological developments, printed complete lists of patents issued on a weekly basis, and provided readers with copies of patent specifications for a small fee. Over time, moreover, in industry after industry specialized trade journals appeared that kept producers informed about patents of interest.

Patent agents and solicitors also became important channels through which individuals and firms far from Washington could access information about and take advantage of recent discoveries. Their numbers began to mushroom in the 1840s, first in the vicinity of Washington and then in other urban centers, especially in the Northeast. Solicitors in different cities linked themselves through chains of correspondent relations not unlike those that characterized the banking system of that era. Although the original function of these solicitors was to shepherd applications for patents through the official review process and to defend previously issued patents in interference and infringement proceedings, they soon began to act as intermediaries for trade in technologies. Solicitors advertised their services in journals like Scientific American, offering to find buyers for patents on a commission basis, and we know from manuscript records of assignment contracts that it was not uncommon for inventors actually to transfer control to such agents (10). Although we are not yet able to construct a precise index of the volume of trade in patented technologies for the period before 1870, it is clear that such exchange began to take off during the middle of the nineteenth century, at the same time as these new information channels and intermediaries were developing. Not only was there a substantial increase in the number of assignments filed at the Patent Office, but a new focus on the rights of assignees and licensees is evident in the court cases of the period (8).

THE GROWTH OF TRADE IN PATENTS AND SPECIALIZATION AT INVENTION

Inventive activity, as reflected in rates of patenting per capita, first began to increase rapidly with the beginnings of industrialization early in the nineteenth century. This initial phase of secular increase in invention was characterized by distinctive geographic patterns. In particular, the rise in patenting was concentrated in districts that were near urban centers or along navigable waterways that provided low-cost transportation to markets. These patterns, together with the pro-cyclicality of patenting rates and other evidence that patenting was sensitive to demand, have led scholars to suggest that expanding markets helped induce the acceleration of invention and technological change associated with the onset of economic growth, and that differential access to these markets was an important contributor to the opening up of pronounced geographic variation in inventive activity (1, 3, 5).

The responsiveness of patenting to market demand may have been related to the small scale of enterprise and the broad familiarity of the population with the relatively simple and labor-intensive technologies characteristic of the era; in such a context, the “supply” of inventions could be elastic. Indeed, studies of the careers of early inventors suggest that they were drawn from rather ordinary occupations, were far from specialized at inventive activity, and were usually involved in the commercial exploitation of their discoveries. Changes in these patterns began to be apparent about the middle of the nineteenth century, however, as the share of inventors from more technical occupations rose—paralleling the spread of mechanization and the rise in capital intensity across the manufacturing sector (4, 5, 11).

Despite significant changes in technology and in the backgrounds of inventors, as well as the massive extension of product markets associated with the building of the railroads, marked geographic differentials in patenting persisted over time. As shown in Table 1 for the period from 1840 to 1910, not only did patenting rates remain lower in regions like the South and the West North Central than in the Northeast, but there were substantial differences between New England and the Middle Atlantic as well. Although the regional gaps narrowed considerably over time, most of the convergence occurred late—after 1890.

Among the factors that might contribute to the persistence of such regional differences in inventive activity are institutions that have location-specific effects on the costs of contracting


over technological information. The evolution of such institutions would stimulate increases in invention by making it easier for inventors to raise capital to support their inventive activity, increasing the net returns they could expect from a given discovery, and accordingly encouraging individuals with a comparative advantage to make appropriate task-specific investments to augment their productivity at invention. Moreover, the investments necessary for the emergence of market institutions, such as patent agents, would presumably be concentrated in areas where rates of invention were already high and, therefore, the prospects for returns on trade in technology would be greatest. Since these sorts of institutions likely had a limited geographic scope during the early stages of their evolution, persistence in geographic patterns of patenting could have resulted from a self-reinforcing process whereby inventive activity stimulated the development of the institutions, which in turn promoted specialization and productivity at invention as well as attracted individuals with inventive potential to the area.

Table 1. Annual patents received per million residents, by region, 1840–1911

Region               1840–1849  1850–1859  1860–1869  1870–1871  1890–1891  1910–1911
New England          55.5       175.6      483.3      775.8      772.0      534.3
Middle Atlantic      51.7       129.4      332.3      563.4      607.0      488.6
East North Central   16.6       57.3       210.3      312.3      429.9      442.3
West North Central   9.5        22.9       95.4       146.5      248.7      272.0
South                5.5        15.5       26.0       85.8       103.1      114.4
West                 —          24.8       164.5      366.7      381.6      458.4
U.S. average         27.5       91.5       195.7      325.4      360.4      334.2

The patenting rates have been computed from the cross-sectional samples and from information in (3, 12). The regional classifications are the census classifications, except that Maryland, Delaware, and the District of Columbia are included in the Middle Atlantic for the 1840s, 1850s, and 1860s, but in the South for the later periods.

We use patent records, which contain information on full and partial assignments of patent rights, to examine the outlines of the emerging market for technology. Two of the three samples of patent records we analyze in this paper are drawn from the Annual Report of the Commissioner of Patents. The first consists of three cross-sections for the years 1870–1871, 1890–1891, and 1910–1911, totalling slightly over 6600 patents. The second is a longitudinal sample that follows over their entire patenting careers all of the 562 patentees in the cross-sections whose surnames began with the letter “B” (the most common among patentees during this period). The latter data set is not yet complete, but we report here on information retrieved for just over 4200 patents from 53 of the years from 1834 to 1936. For each patent, we collected the names and addresses of both patentees and assignees. Additional relevant information, such as the characteristics of the locality in which the patentee was located and other patents awarded to the patentee, was also linked to each patent. Finally, a new sample of assignment contracts recently put in machine-readable form is the third data set we employ. The so-called Liber data set contains nearly 4600 contracts, assembled by collecting every contract filed with the Patent Office during January 1871, January 1891, or January 1911. This sample has the advantage of providing detailed information about sales or transfers of patents that were contracted after, as well as before, the date of issue.

Regional estimates of the proportions of patents assigned at issue were computed from the three cross-sections and are reported in Table 2. They suggest an association between rates of patenting per capita and rates of assignment, with the paces at which New England and the Middle Atlantic states generated and traded patents far exceeding those in the rest of the country. In 1870, 26.5% and 20.6%, respectively, of the patents from those two regions were being assigned by the date they were issued, compared with 14.7% in the East North Central states and below 10% elsewhere. Though there was some convergence in proportional terms, the geographic correspondence between assignment rates and patent rates remained.

Table 2. Assignment of patents at issue by region, 1870–1911

                                      1870–1871     1890–1891     1910–1911
New England
  % assigned                          26.5 (340)    40.8 (321)    50.0 (264)
  % of assignments to companies       33.3          56.5          75.0
Middle Atlantic
  % assigned                          20.6 (645)    29.1 (669)    36.1 (710)
  % of assignments to companies       22.6          50.8          72.7
East North Central
  % assigned                          14.7 (340)    27.9 (505)    32.3 (660)
  % of assignments to companies       12.0          47.5          68.1
West North Central
  % assigned                          9.0 (67)      21.8 (202)    17.5 (285)
  % of assignments to companies       0.0           36.4          46.0
South
  % assigned                          6.4 (140)     25.0 (216)    22.7 (322)
  % of assignments to companies       11.1          33.3          34.2
West
  % assigned                          0.0 (31)      25.4 (118)    21.4 (271)
  % of assignments to companies       —             20.0          41.4
All patents, including foreign
  % assigned                          18.5 (1,618)  29.1 (2,201)  30.5 (2,816)
  % of assignments to companies       23.7          47.2          64.8

The estimates were computed from the cross-sectional samples. Those assignments that were not to companies went to individuals. The numbers of observations in the respective cells are reported within parentheses.
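To make the construction of these estimates concrete, the short sketch below (Python; the field names and the handful of toy records are hypothetical placeholders rather than the actual cross-sectional samples) tabulates, for each region, the share of patents assigned by the date of issue and the share of those assignments made to companies.

    # Hypothetical illustration of tabulations of the kind reported in Table 2;
    # field names and records are placeholders, not the authors' data.
    from collections import defaultdict

    patents = [
        {"region": "New England", "assigned_at_issue": True,  "assignee_is_company": False},
        {"region": "New England", "assigned_at_issue": False, "assignee_is_company": None},
        {"region": "South",       "assigned_at_issue": True,  "assignee_is_company": True},
        {"region": "South",       "assigned_at_issue": False, "assignee_is_company": None},
    ]

    totals = defaultdict(int)       # patents per region
    assigned = defaultdict(int)     # patents assigned by the date of issue
    to_company = defaultdict(int)   # assignments at issue whose assignee is a company

    for p in patents:
        totals[p["region"]] += 1
        if p["assigned_at_issue"]:
            assigned[p["region"]] += 1
            if p["assignee_is_company"]:
                to_company[p["region"]] += 1

    for region in totals:
        pct_assigned = 100.0 * assigned[region] / totals[region]
        pct_company = 100.0 * to_company[region] / assigned[region] if assigned[region] else float("nan")
        print(f"{region}: % assigned = {pct_assigned:.1f} (n = {totals[region]}), "
              f"% of assignments to companies = {pct_company:.1f}")

Applied to the full cross-sections, a tabulation of this form yields percentages and cell counts of the kind reported in Table 2.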


Table 2 also shows that trade in patent rights increased in all regions through 1910, nearly doubling overall by this measure.

Table 3. Descriptive statistics on assignments made before and after issue of patents

Region                                  1870–1871   1890–1891   1910–1911
New England
  Assignment to patenting index           115.1       109.5       132.4
  % assigned after issue                   70.4        31.2        30.1
  % geographic assignments                 17.1         0.8         0.0
Middle Atlantic
  Assignment to patenting index           100.7        94.8       116.3
  % assigned after issue                   70.9        44.4        37.9
  % geographic assignments                 19.1         1.9         0.7
East North Central
  Assignment to patenting index            96.3       118.1       104.9
  % assigned after issue                   77.7        48.5        32.8
  % geographic assignments                 34.3         5.7         1.8
West North Central
  Assignment to patenting index            90.7       110.1        73.5
  % assigned after issue                   77.4        48.6        42.6
  % geographic assignments                 41.9        13.0         2.6
South
  Assignment to patenting index            60.0        68.9        68.0
  % assigned after issue                   74.4        42.3        48.2
  % geographic assignments                 20.9         6.2         2.5
West
  Assignment to patenting index           150.0        67.2        81.5
  % assigned after issue                   59.1        57.4        36.0
  % geographic assignments                 18.2         7.4         1.2
Total domestic
  Assignment to patenting index           100.0       100.0       100.0
  % assigned after issue                   72.3        44.1        36.5
  % geographic assignments                 22.8         4.6         1.2
Assignments to patents ratio                0.83        0.71        0.71
Number                                      794       1,373       1,869

The assignment to patenting index was constructed by setting the ratio of the total number of assignments by U.S. patentees to the number of patents awarded in the respective year equal to 100. The regional ratios were computed analogously, and the indexes report their values relative to the national average in the respective year. The % geographic assignments was calculated as the proportion of all assignments by patentees residing in the particular region that transferred rights to the patent for a geographic area smaller than the U.S.
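A minimal numerical sketch of this index construction, using made-up counts rather than any values from the tables:

```python
# Hypothetical counts for one year: the index is the regional assignments-to-
# patents ratio expressed relative to the national ratio, which is set to 100.
national_assignments, national_patents = 800, 1000   # made-up national totals
regional_assignments, regional_patents = 100, 110    # made-up totals for one region

national_ratio = national_assignments / national_patents   # 0.80
regional_ratio = regional_assignments / regional_patents   # ~0.91
index = 100 * regional_ratio / national_ratio
print(round(index, 1))   # ~113.6: assignments outpace patenting in this region
```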

The Liber sample we have collected, which includes all assignment contracts—those made after issue as well as before—indicates that though the volume of trade in patented technology (as reflected in the number of assignment contracts) increased steadily over the nineteenth century, the ratio of the total number of contracts to the total number of patents peaked earlier than the proportion of patents assigned at issue.‡ As shown in Table 3, the estimated ratio of assignments to patents actually fell from 0.83 in 1870–1871 to 0.71 in 1890–1891 and 1910–1911. Hence, the appearance in Table 2 of low levels of assignment in 1870, and of a substantial increase in assignments over time, really only reflects the low percentage made by the time of issue in 1870, and the increase in that percentage over time. The Liber data unambiguously demonstrate that there was already extensive trade in patented technologies by 1870, but that most of the activity at this early date occurred after patents were issued. The early trade also appears distinctive in other respects. More than a quarter of the contracts filed in 1871 were for secondary trades, or had someone other than the patentee or a family member as the assignor of the patent rights; even more interesting perhaps is that nearly a quarter were assignments of rights restricted to geographic areas smaller than the territory of the United States. Although this practice became much less prevalent over time, patentees made extensive use of geographic assignments to extract the returns to invention before national output markets emerged.

Although assignments made before the date of issue evidently constituted only a small proportion of all patents until late in the century, their patterns of regional variation appear to have been representative of the entire population of assignment contracts. Whether one looks at assignments at issue or all assignments, the regions with the highest patenting activity—New England, the Middle Atlantic, and East North Central—were those with the highest propensities to trade patent rights. Whether one looks at assignments at issue or all assignments, therefore, the results are consistent with the hypothesis that the evolution of institutions and other conditions conducive to trade in technology developed more rapidly in areas with higher patenting activity.

The growing proportion of inventors who were choosing to sell off the rights to their patents suggests that patentees were increasingly focusing their attention and resources on the pursuit of inventive activity. Indeed, the data we have on patenting over careers, presented in Table 4, are quite consistent with the view that there was a dramatic increase in specialization at invention over the course of the nineteenth century. The early 1800s were a relatively democratic era of invention, when the typical inventor filed only one or two patents over his lifetime, and when efforts at technological creativity were only one aspect of an individual’s work, if not a sideline altogether. Although such part-time inventors continued to be significant contributors to patenting, their share fell sharply between the 1830s and 1870s, from over 70% to less than 40%. Conversely, the share of patents accounted for by patentees with 10 or more career patents rose from less than 5% to more than 20%. There may have been other contributors to the sharp change in the distribution of patents across patentees, but this evidence that a major increase in the degree of specialization occurred as early as the middle third of the nineteenth century—before the emergence of research laboratories housed in large-scale enterprises—is important (7, 13).

The idea that the increase in specialization at invention was facilitated, if not promoted, by an enhanced ability to trade technological information is certainly consistent with the observation that transfers in patent rights became extensive during the period in which the substantial increase in specialization occurred. To establish more directly whether the two developments are related, Table 5 provides comparisons of the extent of long-term commitment to invention between patentees who assigned their patent rights away to companies and patentees who did not. The logic would suggest that patentees who traded the rights to their inventions should have demonstrably greater long-term commitments to inventive activity over their career than those who did not do so. We test this implication by comparing the two groups in our “B” sample by their average number of “career” patents and the average length of the “career” in invention over all patents and over all patentees (weighted and unweighted averages). What stands out is that patentees who assigned their patent rights away to companies registered many more patents over their careers, and also had longer careers, than those who retained the rights to their patents through the date of issue. Industrial sector and the degree of urbanization of the patentee’s county of residence are controlled for in Table 5, but the results are robust to a general multivariate analysis accounting for region and time period as well.


‡In order for an assignment to be legally binding, a copy of the contract had to be deposited at the Patent Office within 3 months of the agreement. One cannot infer from the peak of the ratio of total assignments to total patents in 1871 that the proportion of patents ever assigned decreased afterwards. The declines over time in secondary assignments and in the proportion of geographic assignments would tend to reduce the ratio if the overall proportion of patents ever assigned remained constant.


Table 4. Distribution of patents by patentee commitment to patenting, 1790–1911

                         No. of “career” patents by patentee
Period        1 patent, %   2 patents, %   3 patents, %   4–5 patents, %   6–9 patents, %   10+ patents, %
1790–1811        51.0          19.0           12.0             7.6              7.0              3.5
1812–1829        57.5          17.4            7.1             7.6              5.5              4.9
1830–1842        57.4          16.5            8.1             8.0              5.6              4.4
1870–1871        21.7          17.1           10.5            16.4             10.5             23.7
1890–1891        23.2          16.0            6.7            10.3             12.4             31.4
1910–1911        39.6          16.3            7.8             9.4              7.3             19.6

The distributions of patents awarded during the respective periods are reported by the number of patents ever received by the patentee over his career. The figures for 1790 to 1842 are from ref. 4, and those for the later three periods were computed from the “B” sample discussed in the text. The incomplete state of this sample leads to underestimates of the shares of the most active patentees, especially for 1910–1911.

The significance of the ability to trade in technological information is also indicated by the relationship between the location of the patentee and his career characteristics. Patentees who resided in geographic areas with high assignment rates (which typically had higher patenting rates, and perhaps institutions more conducive to trade in technology) were more specialized at invention, even after controlling for whether their individual patents were assigned. For example, even among patentees who did not assign their patents at issue, those in metropolitan centers had the greatest number of career patents and the longest careers at invention—followed by those in counties with small cities and rural counties, respectively. Also relevant is the finding that patentees who were engaged in sectors with more complex technologies, like electrical/energy and civil engineering, where more substantial investments in technical knowledge would be required for effective invention, were more likely to sell their patent rights off at time of issue. This result is consistent with the view that patentees who had exogenous reasons for specializing more at invention would be more inclined to avail themselves of the opportunity to extract the return to their invention by selling off the rights for commercialization to another party.

The finding that the most productive patentees were those who assigned to companies raises the fundamental issue of what precise behavior or relationship was responsible. One possibility is that more and more patentees were employees of companies, and that their higher productivity did not reflect greater inventiveness but instead access to the superior resources (for example, funds and legal assistance) provided by the company. Although it is undoubtedly correct that patentees increasingly became employees of their assignees over time, there are several reasons to doubt that this trend explains our results. First, only one of the 34 patentees in our “B” sample with 20 or more career patents assigned all of his patents at issue. Indeed, only about half of these highly productive patentees assigned more than 50% of their patents at issue. Second, as shown in Table 6, patentees who assigned to companies manifested considerable “contractual mobility,” defined as the number of different assignees (other than the patentee himself) that the patentee dealt with over his career. The data suggest that the most highly productive patentees, those with 20 or more career patents, were not tied to single assignees. When one computes the figures over inventors, only 20.6% of the patentees used only one assignee over their career. When one computes the analogous figures over patents, the proportion falls to 17.8%. These numbers seem small enough to undercut the argument that productive patentees were tied to their assignees, but these patentees appear even more independent when one recognizes that the percentages in the table pertain only to those patents that were actually assigned. Given their remarkable contractual mobility (roughly 40% had four or more assignees over their careers), it is difficult to believe that the high productivity at patenting we observe was due to a stable employment relationship. On the contrary, the evidence is more consistent with the view that highly productive patentees behaved entrepreneurially and were generally in a position to switch assignees frequently.

CONCLUSION

We can now provide an overview of the growth of a market for technology in the late-nineteenth century United States. Trade in technology began to expand rapidly as early as the second third of the century as new channels of information emerged and patent agents increasingly took on the role of intermediaries.

Table 5. Mean values on career patenting, by urbanization and industrial sector

                                      Rural    Urban    Metro     Agriculture/  Electric/  Engineering/   Manufacturing  Transportation  Miscellaneous/
                                                        center    foods         energy     construction                                  unknown
“Career” patents                      15.1     24.3     37.6      13.1          51.9       37.6           20.4           28.4            25.6
% assigned                            63.1     54.1     74.4      40.4          81.3       90.5           69.9           27.2            79.9
Length of career                      22.2     24.5     26.7      21.2          28.5       26.2           25.6           23.3            22.6
(n)                                   (847)    (709)    (1918)    (257)         (676)      (310)          (1266)         (557)           (407)
“Career” patents for
  Patentees who assign to companies   33.3     35.9     55.2      21.3          88.3       47.4           31.6           34.2            40.8
  Patentees who did not assign         8.7     18.9     20.9      11.2          21.1        9.4           12.0           28.8            11.0
Length of “career” for
  Patentees who assign to companies   33.9     27.2     30.0      29.3          32.1       35.7           30.5           23.4            27.4
  Patentees who did not assign        18.2     23.4     23.8      19.2          25.6       20.7           21.3           24.1            17.7

These estimates were computed from the “B” sample described in the text. The urbanization classification refers to the county in which the patentee resided. Urban counties are those in which the largest city had a population greater than 25,000, but less than 100,000. Metro centers are counties where the largest city had a population of more than 100,000. “Career” patents refer to the total number of patents awarded to the patentee over the years we have reviewed to date, and length of “career” is the number of years between the award of the last patent identified and the first. Patentees with only one career patent identified were treated as having a career of 1-year duration. The unit of observation for these mean values is the individual patent, but the qualitative results are the same if the means are computed with individual patentees as the unit of observation.
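The distinction drawn in the note between the two units of observation can be illustrated with a toy calculation (hypothetical records, not the “B” sample): means taken over patents weight prolific patentees more heavily than means taken over patentees.

```python
import pandas as pd

# Toy data: three patentees with 3, 1, and 2 career patents; one row per patent.
records = pd.DataFrame({
    "patentee":       ["A", "A", "A", "B", "C"],
    "career_patents": [3, 3, 3, 1, 2],   # constant within each patentee
})

mean_over_patents = records["career_patents"].mean()                                 # 2.4
mean_over_patentees = records.drop_duplicates("patentee")["career_patents"].mean()   # 2.0
print(mean_over_patents, mean_over_patentees)
```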


By 1870, these developments had already had a major effect on the behavior of inventors, who responded to the opportunities for gain represented by the growth of the market for inventions and began to specialize in, and become more productive at, invention. The greater complexity of technology and the rising fixed costs of inventive activity made such specialization increasingly desirable, but inventors required some assurance that they would be able to extract a return to their efforts by ceding the products of their creativity before they could comfortably concentrate their resources and energies on invention. The increased volume of trade in patents provided that assurance.

Table 6. Contractual mobility among patentees, by their number of “career” patents

                                  No. of different assignees: No. (%)
No. of “career”      %
patents by patentee  assigned    0            1            2–3           4–5          6+           Total

Distribution of patentees
  1                  19.7        159 (80.7)   31 (15.7)    7 (3.6)       —            —            197 (35.1)
  2–5                21.1        129 (59.2)   54 (24.8)    30 (13.8)     4 (1.8)      1 (0.5)      218 (38.9)
  6–10               31.4        31 (44.2)    15 (21.4)    21 (30.0)     3 (4.3)      —            70 (12.5)
  11–19              47.6        4 (9.5)      13 (31.0)    14 (33.3)     6 (14.3)     5 (11.9)     42 (7.5)
  20+                44.1        3 (8.8)      7 (20.6)     10 (29.4)     7 (20.6)     7 (20.6)     34 (6.1)
  Total                          326 (58.1)   108 (19.3)   82 (14.6)     20 (3.6)     13 (2.3)     561

Distribution of patents
  1                  20.1        160 (80.8)   31 (15.7)    7 (3.5)       —            —            198 (5.6)
  2–5                24.0        357 (55.6)   166 (25.9)   104 (16.2)    11 (1.7)     4 (0.6)      642 (18.5)
  6–10               30.4        225 (41.2)   126 (23.1)   173 (31.5)    23 (4.2)     —            546 (15.7)
  11–19              47.4        49 (8.2)     189 (31.8)   183 (30.7)    95 (16.0)    79 (13.3)    595 (17.1)
  20+                66.8        107 (7.2)    266 (17.8)   541 (36.3)    272 (18.2)   306 (20.5)   1,492 (43.0)
  Total                          898 (25.9)   778 (22.4)   1,007 (29.0)  401 (11.5)   389 (11.2)   3,473

These estimates were computed from the “B” sample described in the text. The first panel presents the distribution of the 561 inventors in our samples, by the total number of patents received and the total number of different assignees (exclusive of the patentee) appearing at issue for those patents. The second panel presents the distribution of patents, by the total number of patents received by each patent’s patentee, and by the number of different assignees appearing at issue.
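A sketch of how such a tabulation might be built from patent-level records is given below; the records, names, and bin labels are hypothetical and are not drawn from the “B” sample.

```python
import pandas as pd

# Hypothetical records, one row per patent; "assignee" is None when the patent
# was not assigned at issue. Names and values are illustrative only.
records = pd.DataFrame({
    "patentee": ["A", "A", "A", "B", "C", "C"],
    "assignee": ["Acme Co.", "Acme Co.", "Doe", None, "Roe", None],
})

grouped = records.groupby("patentee")["assignee"]
career_patents = grouped.size()       # total patents per patentee (NaN rows counted)
n_assignees = grouped.nunique()       # distinct assignees per patentee (None ignored)

career_class = pd.cut(career_patents, bins=[0, 1, 5, 10, 19, float("inf")],
                      labels=["1", "2–5", "6–10", "11–19", "20+"])
print(pd.crosstab(career_class, n_assignees))   # counts of patentees per cell
```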

The market for technology did not, however, develop uniformly across the nation. As the regional breakdowns indicate, patents were more likely to be assigned where patenting rates had long been high—the East North Central, the Middle Atlantic, and especially New England. In these areas, high patenting rates seem to have made investment in institutions that facilitated trade in technology a more attractive proposition, and the resulting greater ability to market inventions served to stimulate more invention. Hence regions that started out as centers of patenting activity tended to maintain their advantage over time.

It was primarily in these regions, moreover, that the market for technology continued to evolve and mature. Although the volume of trade in patents was already high by 1870, over the next 40 years the nature of that trade changed in important ways. As time went on, for example, inventors on average were able to dispose of their patents earlier than before, selling an increasing proportion in advance of issue. Another change was in the identity of the assignee. While at first patentees often chose to assign partial patent rights to local businessmen to raise capital for the support of inventive activity or commercial development, they increasingly opted for relinquishing all stake in their inventions, assigning complete rights over to a company or another party. This might seem to suggest that the change in behavior was produced by inventors becoming employees of firms, but we do not think that this was mainly what was going on. The number of employment relationships between assignees and patentees was undoubtedly increasing during the late-nineteenth and early-twentieth centuries, but the contractual mobility revealed by our examination of individual patentees over their careers suggests that productive inventors were still free agents for the most part. Rather, it appears that the growth of intensely competitive national product markets, coupled with the existence of the patent system, created a powerful incentive for firms to become more active participants in the market for technology. This greater concern on the part of firms to obtain the rights to the most advanced technologies further enhanced the evolution of institutions conducive to trade in intellectual capital, and the growing market for technology elicited a supply response from independent inventors. Inventors who assigned to companies were the most specialized and productive of all.

Of course, the development of this market did not solve all of the information problems associated with trade in technology nor make transactions involving patents frictionless. Anecdotal evidence suggests that many difficulties remained—that inventors were not always able to find buyers for their patents at remunerative prices or mobilize capital to support their inventive activity. Many would later decide it advantageous to exchange their independence for financial security and a supportive intellectual environment. At the same time, more and more companies would find it desirable to augment their internal technological capabilities, increasing their employment of inventors and sometimes creating formal research divisions. It is important, however, not to let our familiarity with large firms and their extensive research facilities obscure our understanding of the history of technological change. During the nineteenth century, it was primarily the development of institutions that facilitated the exchange of technology in the market that enabled creative individuals to specialize in and become more productive at invention.

We acknowledge the excellent research assistance of Lisa Boehmer, Nancy Cole, Homan Dayani, Yael Elad, Gina Franco-Cruz, Svetlana Gacinovic, Jennifer Hendricks, Charles Kaljian, Anna Maris Lagiss, David Madero Suarez, Huagang Li, John Majewski, Yolanda McDonough, and Edward Saldana. We are also grateful for valuable advice and comments from B. Zorina Khan and David Mowery. The work has been supported by grants from the National Science Foundation (SBR 9309–684) and the University of California, Los Angeles, Institute of Industrial Relations.

1. Schmookler, J. (1966) Invention and Economic Growth (Harvard Univ. Press, Cambridge, MA).
2. Griliches, Z. (1990) J. Econ. Lit. 28, 1661–1707.
3. Sokoloff, K.L. (1988) J. Econ. Hist. 48, 813–850.
4. Sokoloff, K.L. & Khan, B.Z. (1990) J. Econ. Hist. 50, 363–378.
5. Khan, B.Z. & Sokoloff, K.L. (1993) J. Econ. Hist. 53, 289–307.
6. Hounshell, D.A. (1984) From the American System to Mass Production 1800–1932 (Johns Hopkins Univ. Press, Baltimore).


7. Mowery, D.C. (1995) in Coordination and Information, eds. Lamoreaux, N.R. & Raff, D.M.G. (Univ. of Chicago Press, Chicago), pp. 147–176.
8. Khan, B.Z. (1995) J. Econ. Hist. 55, 58–97.
9. Machlup, F. (1958) An Economic Review of the Patent System (U.S. Government Printing Office, Washington, DC).
10. Simonds, W.E. (1871) Practical Suggestions on the Sale of Patents (privately printed, Hartford, CT).
11. Sokoloff, K.L. (1986) in Long-Term Factors in American Economic Growth, eds. Engerman, S.L. & Gallman, R.E. (Univ. of Chicago Press, Chicago), pp. 679–736.
12. U.S. Patent Office (1891) Annual Report of the Commissioner of Patents for the Year 1891 (U.S. Government Printing Office, Washington, DC).
13. Chandler, A. (1977) The Visible Hand (Harvard Univ. Press, Cambridge, MA).


This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

National policies for technical change: Where are the increasing returns to economic research?

KEITH PAVITT
Science Policy Research Unit, University of Sussex, Falmer, Brighton, BN1 9RF, United Kingdom

ABSTRACT Improvements over the past 30 years in statistical data, analysis, and related theory have strengthened the basis for science and technology policy by confirming the importance of technical change in national economic performance. But two important features of scientific and technological activities in the Organization for Economic Cooperation and Development countries are still not addressed adequately in mainstream economics: (i) the justification of public funding for basic research and (ii) persistent international differences in investment in research and development and related activities. In addition, one major gap is now emerging in our systems of empirical measurement—the development of software technology, especially in the service sector. There are therefore dangers of diminishing returns to the usefulness of economic research, which continues to rely completely on established theory and established statistical sources. Alternative propositions that deserve serious consideration are: (i) the economic usefulness of basic research is in the provision of (mainly tacit) skills rather than codified and applicable information; (ii) in developing and exploiting technological opportunities, institutional competencies are just as important as the incentive structures that they face; and (iii) software technology developed in traditional service sectors may now be a more important locus of technical change than software technology developed in “high-tech” manufacturing.

From the classical writers of the 18th and 19th centuries to the growth accounting exercises of the 1950s and 1960s, the central importance of technical change to economic growth and welfare has been widely recognized. Since then, our understanding—and consequent usefulness to policy makers—have been strengthened by systematic improvements in comprehensive statistics on the research and development (R&D) and other activities that generate knowledge for technical change and by related econometric and theoretical analysis.

Of particular interest to national policy makers have been the growing number of studies showing that international differences in the export and growth performance of countries can be explained (among other things) by differences in investment in “intangible capital,” whether measured in terms of education and skills (mainly for developing countries) or R&D activities (mainly for advanced countries). These studies have recently been reviewed by Fagerberg (1) and Krugman (2). Behind the broad agreement on the economic importance of technical change, both reveal fundamental disagreements in theory and method. In particular, they contrast the formalism and analytical tractability of mainstream neoclassical analysis with the realism and analytical complexity of the more dynamic evolutionary approach. Thus, Krugman concludes:

Today it is normal for trade theorists to think of world trade as largely driven by technological differences between countries; to think of technology as largely driven by cumulative processes of innovation and the diffusion of knowledge; to see a possible source of concern in the self-reinforcing character of technological advantage; and to argue that dynamic effects of technology on growth represent both the main gains from trade and the main costs of protection…the theory has become more exciting, more dynamic and much closer to the world view long held by insightful observers who were skeptical of the old conventional wisdom.

Yet…the current mood in the field is one of at least mild discouragement. The reason is that the new approaches, even though they depend on very special models, are too flexible. Too many things can happen… a clever graduate student can produce a model to justify any policy. [ref. 2, p. 360.]

Fagerberg finds similar tensions among the new growth theorists:

…technological progress is conceived either as a “free good” (“manna from heaven”), as a by-product (externality), or as a result of intentional R&D activities in private firms. All three perspectives have some merits. Basic research in universities and other public R&D institutions provides substantial inputs into the innovation process. Learning by doing, using, interacting, etc., are important for technological progress. However… models that do not include the third source of technological progress (innovation…by intentional activities in private firms) overlook one of the most important sources of technological progress…

…important differences remain…while formal theory still adopts the traditional neo-classical perspective of firms as profit maximizers, endowed with perfect information and foresight, appreciative theorizing increasingly portrays firms as organizations characterized by different capabilities (including technology) and strategies, and operating under considerable uncertainty with respect to future technological trends…Although some formal theories now acknowledge the importance of firms for technological progress, these theories essentially treat technology as “blueprints” and “designs” that can be traded on markets. In contrast, appreciative theorizing often describes technology as organizationally embedded, tacit, cumulative in character, influenced by interaction between these firms and their environments, and geographically localized. [ref. 1, p. 1170.]

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.

Abbreviations: R&D, research and development; OECD, Organization for Economic Cooperation and Development.


As a student of science and technology policy—and therefore unencumbered by any externally imposed need to relate my analyses to the assumptions and methods of mainstream neoclassical theory—I find what Krugman calls “more exciting, more dynamic” theorizing and what Fagerberg calls “appreciative” theorizing far more useful in doing my job. More to the point of this paper, while the above differences have been largely irrelevant to past analyses of technology’s economic importance, they are turning out to be critical in two important areas of policy for the future: the justification of public support for basic research and the determinants of the level of private support of R&D. They will therefore need to be addressed more explicitly in future. So, too, will the largely uncharted and unmeasured world of software technology.

THE USEFULNESS OF BASIC RESEARCH

The Production of Useful Information? In the past, the case for public policy for basic research has been strongly supported by economic analysis. Governments provide by far the largest proportion of the funding for such research in the Organization for Economic Cooperation and Development (OECD) countries. The well-known justification for such subsidy was provided by Nelson (3) and Arrow (4): the economically useful output of basic research is codified information, which has the property of a “public good” in being costly to produce and virtually costless to transfer, use, and reuse. It is therefore economically efficient to make the results of basic research freely available to all potential users. But this reduces the incentive of private agents to fund it, since they cannot appropriate the economic benefits of its results; hence the need for public subsidy for basic research, the results of which are made public.

This formulation was very influential in the 1960s and 1970s, but began to fray at the edges in the 1980s. The analyses of Nelson and Arrow implicitly assumed a closed economy. In an increasingly open and interdependent world, the very public good characteristics that justify public subsidy to basic research also make its results available for use in any country, thereby creating a “free rider” problem. In this context, Japanese firms in particular have been accused of dipping into the world’s stock of freely available scientific knowledge, without adding much to it themselves.

But the main problem has been in the difficulty of measuring the national economic benefits (or “spillovers”) of national investments in basic research. Countries with the best record in basic research (United States and United Kingdom) have performed less well technologically and economically than Germany and Japan. This should be perplexing—even discouraging—to the new growth theorists who give central importance to policies to stimulate technological spillovers, where public support to basic research should therefore be one of the main policy instruments to promote technical change. Yet the experiences of Germany and Japan, especially when compared with the opposite experience of the United Kingdom, suggest that the causal linkages run the other way—not from basic research to technical change, but from technical change to basic research. In all three countries, trends in relative performance in basic research since World War II have lagged relative performance in technical change. This is not an original observation. More than one hundred years ago, de Tocqueville (5) and then Marx (6) saw that the technological dynamism of early capitalism would stimulate demand for basic research knowledge, as well as resources, techniques, and data for its execution.

At a more detailed level, it has also proved difficult to find convincing and comprehensive evidence of the direct technological benefit of the information provided by basic research. This is reflected in Table 1, which shows the frequency with which U.S. patents granted in 1994 cite (i.e., are related to) other patents, and the frequency with which they cite science-refereed journals and other sources. In total, information from refereed journals provides only 7.2% [= 0.9/(10.9+0.9+0.7), from the last row of Table 1] of the information inputs into patented inventions, whereas academic research accounts for ≈17% of all R&D in the United States and in the OECD as a whole. Since universities in the USA provide ≈70% of refereed journal papers, academic research probably supplies less than a third of the information inputs into patented inventions that its share of total R&D would lead us to expect.

Furthermore, the direct economic benefits of the information provided by basic research are very unevenly spread amongst sectors, including among relatively R&D-intensive sectors. Table 1 shows that the intensity of use of published knowledge is particularly high in drugs, followed by other chemicals, while being virtually nonexistent in aircraft, motor vehicles, and nonelectrical machinery. Nearly half the citations to journals are from chemicals, ≈37.5% from electronic-related products, and only just over 5% from nonelectrical machinery and transportation.

Table 1. Citing patterns in U.S. patents, 1994

                                                 No. of citations per patent to            Share of all
Manufacturing sector                No. of       Other      Science                        citations to
                                    patents      patents    journals    Other              journals
Chemicals (less drugs)              10,592        9.8        2.5         1.2                29.1
Drugs                                2,568        7.8        7.3         1.8                20.6
Instruments                         14,950       11.8        1.0         0.7                16.3
Electronic equipment                16,108        8.8        0.7         0.6                12.2
Electrical equipment                 6,631       10.0        0.6         0.6                 4.4
Office and computing                 5,501       10.0        0.7         1.0                 4.3
Nonelectrical machinery             15,001       12.2        0.2         0.5                 3.3
Rubber and miscellaneous plastic     4,344       12.4        0.4         0.6                 1.9
Other                                8,477       12.2        0.2         0.4                 1.9
Metal products                       6,645       11.6        0.2         0.4                 1.5
Primary metals                         918       10.5        0.8         0.7                 1.0
Building materials                   1,856       12.6        0.5         0.7                 1.0
Food                                   596       15.1        1.3         1.6                 0.9
Oil and gas                            998       15.0        0.6         0.9                 0.7
Motor vehicles and transportation    3,223       11.3        0.1         0.3                 0.4
Textiles                               567       12.4        0.3         0.8                 0.2
Aircraft                               905       11.6        0.1         0.3                 0.1
Total                               99,898       10.9        0.9         0.7               100.0

Data taken from D. Olivastro (CHI Research, Haddon Heights, NJ; personal communication).
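For readers who want the arithmetic behind the shares quoted in the text, the following sketch reproduces it from the last row of Table 1 and from the approximate 17% and 70% figures given above.

```python
# Last row of Table 1: citations per patent to other patents, science journals,
# and other sources.
to_patents, to_journals, to_other = 10.9, 0.9, 0.7
journal_share = to_journals / (to_patents + to_journals + to_other)
print(round(100 * journal_share, 1))                  # ~7.2% of cited information inputs

# Universities supply ~70% of journal papers, while academic research is ~17%
# of all R&D, so universities' contribution relative to their R&D share is:
print(round(0.70 * 100 * journal_share / 17.0, 2))    # ~0.30, i.e. less than a third
```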


And in spite of this apparent lack of direct usefulness, many successful British firms recently advised the Government to continue to allow universities to concentrate on long-term academic research and training and to caution against diverting them to more immediately and obviously useful goals (7).

We also find that, in spite of the small direct impact on invention of published knowledge and contrary to the expectations of the mainstream theory, large firms in some sectors both undertake extensive amounts of basic research and then publish the results. About 9% of U.S. journal publications come from firms. And Hicks et al. (8) have shown that large European and Japanese firms in the chemicals and electrical/electronic industries each publish >200 and sometimes up to 500 papers a year, which is as much as a medium-sized European or Japanese university.

The Capacity to Solve Complex Problems. Thus business practitioners persist in supporting both privately and publicly funded basic research, despite its apparently small direct contribution to inventive and innovative activities. The reason is that the benefits that they identify from public and corporate support for basic research are much broader than the “information,” “discoveries,” and “ideas” that tend to be stressed by economists, sociologists, and academic scientists. Practitioners attach smaller importance to these contributions than to the provision of trained researchers, improved research techniques and instrumentation, background (i.e., tacit) knowledge, and membership of professional networks (see, in particular, refs. 9–14).

In general terms, basic research and related training improve corporate (and other) capacities to solve complex problems. According to one eminent engineer:

…we construct and operate…systems based on prior experiences, and we innovate in them by the human design feedback mode…first, we look at the system and ask ourselves “How can we do it better?”; second, we make some change, and observe the system to see if our expectation of “better” is fulfilled; third, we repeat this cycle of improvements over and over. This cyclic, human design feedback mode has also been called “learning-by-doing,” “learning by using,” “trial and error,” and even “muddling through” or “barefoot empiricism” … Human design processes can be quite rational or largely intuitive, but by whatever name, and however rational or intuitive…it is an important process not only in design but also in research, development, and technical and social innovations because it is often the only method available. [ref. 15, p. 63.]

Most of the contributions are person-embodied and institution-embodied tacit knowledge, rather than information-based codified knowledge. This explains why the benefits of basic research turn out to be localized rather than available indifferently to the whole world (8, 16, 17). For corporations, scientific publications are signals to academic researchers about fields of corporate interest in their (the academic researchers’) tacit knowledge (18). And Japan has certainly not been a free rider on the world’s basic research, since nearly all the R&D practitioners in their corporations were trained with Japanese resources in Japanese universities (19).

Why Public Subsidy? These conclusions suggest that the justification for public subsidy for basic research, in terms of complete codification and nonappropriable nature of immediately applicable knowledge, is a weak one. The results of basic research are rarely immediately applicable, and making them so also increases their appropriable nature, since, in seeking potential applications, firms learn how to combine the results of basic research with other firm-specific assets, and this can rarely be imitated overnight, if only because of large components of tacit knowledge (20–22). In three other dimensions, the case for public subsidy is stronger.

The first was originally stressed strongly by Nelson (3); namely, the considerable uncertainties before the event in knowing if, when, and where the results of basic research might be applied. The probabilities of application will be greater with an open and flexible interface between basic research and application, which implies public subsidy for the former.

A second, and potentially new, justification grows out of the internationalization of the technological activities of large firms. Facilities for basic research and training can be considered as an increasingly important part of the infrastructure for downstream technological and production activities. Countries may therefore decide to subsidize them, to attract foreign firms or even to retain national ones.

The final and most important justification for public subsidy is training in research skills, since private firms cannot fully benefit from providing it when researchers, once trained, can and do move elsewhere. There is, in addition, the important insight of Dasgupta and David (23) that, since the results of basic research are public and those of applied research and development often are not, training through basic research enables more informed choices and recruitment into the technological research community.

UNEVEN TECHNOLOGICAL DEVELOPMENT AMONGST COUNTRIES

Evidence. Empirical studies have shown that technological activities financed by business firms largely determine the capacity of firms and countries both to exploit the benefits of local basic research and to imitate technological applications originally developed elsewhere (11, 24). Thus, although the outputs of R&D activities have some characteristics of a public good, they are certainly not free goods, since their application often requires further investments in technological application (to transform the results of basic research into innovations) or reverse engineering (to imitate a product already developed elsewhere). This helps explain why international differences in economic performance are partially explained by differences in proxy measures of investments in technological application, such as R&D expenditures, patenting, and skill levels.

Another important gap in our understanding is the persistence of international differences in intangible investments in technological application. Even amongst the OECD countries, they are quite marked. Using census data, Table 2 shows that within Western Europe there are considerable differences in the level of training of the non-university-trained workforce.

Table 2. Qualifications of the workforce in five European countries

                                          Percentage of workforce
Level of qualification            Britain*   Netherlands†   Germany‡   France*   Switzerland§
University degrees                   10            8           11         7           11
Higher technician diplomas            7           19            7         7            9
Craft/lower technical diplomas       20           38           56        33           57
No vocational qualifications         63           35           26        53           23
Total                               100          100          100       100          100

Data taken from ref. 25. Data shown are from the following years: *, 1988; †, 1989; ‡, 1987; and §, 1991.


These broad statistical differences are confirmed by more detailed comparisons of educational attainment in specific subjects, and their economic importance is confirmed by marked international differences in productivity and product quality (25). There is also partial evidence that the United States resembles the United Kingdom, with a largely unqualified workforce, while Japan and the East Asian tigers resemble Germany and Switzerland (26).

In addition, OECD data show no signs of convergence among the member countries in the proportion of gross domestic product spent on business-funded R&D activities. Japan, Germany, and some of its neighbors had already caught up with the U.S. level in the early to mid-1970s (19). At least until 1989, they were forging ahead, which could have disquieting implications for future international patterns of economic growth, especially since there are also signs of the end of productivity convergence amongst the OECD countries (see, for example, ref. 27).

In spite of their major implications for both science and economic policies, relatively little attention has been paid to explaining these international differences, particularly when they are supported. The conventional explanations are in terms of either macroeconomic conditions (e.g., Japan has an advantage over the United States in investment and R&D because of differences in the cost of capital) or in terms of market failure (e.g., given lack of labor mobility, Japanese firms have greater incentives to invest in workforce training; see ref. 28).

Institutional Failure. But while these factors may have some importance, they may not be the whole story. Some of the international differences have been long and persistent, and none more so (and none more studied) than the differences between the United Kingdom and Germany, which date back to at least the beginning of this century, and which have persisted through the various economic conditions associated with imperialism, Labour Party corporatism, and Thatcherite liberalism in the United Kingdom, and imperialism, republicanism (including the great inflation of 1924), nazism, and federalism in Germany (29). The differences in performance can be traced to persistent differences in institutions (30, 31), their incentive structures, and their associated competencies (i.e., tacit skills and routines) that change only slowly (if at all) in response to international differences in economic incentives.

One of the most persistent differences has been in the proportion of corporate resources spent on R&D and related activities. New light is now being thrown on this subject by improved international data on corporate R&D performance. Table 3 shows that, in spite of relatively high profit rates and low “cost of funds,” the major U.K. and U.S. firms spend relatively low proportions of their sales on R&D. Similarly, despite higher cost of funds, Japanese firms spend higher shares of profits and sales on R&D than U.S. firms. Preliminary results of regression analysis suggest that each firm’s R&D/sales ratio is influenced significantly by its profits/sales ratio and by country-specific (i.e., institutional) effects. However, each firm’s cost of funds/profits ratio turns out not to be a significant influence, except for the subpopulation of U.S. firms.
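The kind of firm-level regression described above can be sketched as follows; the data are synthetic and the variable names, sample, and coefficients are purely illustrative assumptions, not the authors' preliminary estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200                                               # hypothetical firms
country = rng.choice(["US", "UK", "JP", "DE", "SE"], size=n)
profit_sales = rng.uniform(1, 13, size=n)             # profits/sales, %
cof_profits = rng.uniform(30, 230, size=n)            # cost of funds/profits, %
country_effect = pd.Series({"US": 4.0, "UK": 2.5, "JP": 5.5, "DE": 5.0, "SE": 6.0})
rd_sales = (country_effect[country].to_numpy()        # "institutional" fixed effect
            + 0.10 * profit_sales                     # profitability effect
            + rng.normal(0, 0.5, size=n))             # noise

firms = pd.DataFrame({"rd_sales": rd_sales, "profit_sales": profit_sales,
                      "cof_profits": cof_profits, "country": country})
fit = smf.ols("rd_sales ~ profit_sales + cof_profits + C(country)", data=firms).fit()
print(fit.params)
```

The country dummies (C(country)) stand in for the country-specific, institutional effects mentioned in the text; in this synthetic setup the cost-of-funds term has no built-in effect, mirroring the reported insignificance of that ratio.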

These differences cannot be explained away very easily. In a matched sample of firms of similar size in the United Kingdom and Germany, Mayer (33) and his colleagues found that, in the period from 1982 to 1988, the proportion of earnings paid out as dividends was 2 to 3 times as high in the U.K. firms. Tax differences could not explain the difference; indeed, retentions are particularly heavily discouraged in Germany. Nor could differences in inflation or in investment requirements explain it. Mayer attributes the differences to the structures of ownership and control. Ownership in the United Kingdom is dispersed, and control is exerted through corporate takeovers. In Germany, ownership is concentrated in large corporate groupings, including the banks, and systems of control involve suppliers, purchasers, banks, and employees, as well as shareholders. On this basis, he concludes that the U.K. system has two drawbacks:

[F]irst…the separation of ownership and control… makes equity finance expensive, which causes the level of dividends in the UK to be high and inflexible in relation to that in countries where investors are more closely involved. Second, the interests of other stakeholders are not included. This discourages their participation in corporate investment.

UK-style corporate ownership is therefore likely to be least well suited to co-operative activities that involve several different stakeholders, e.g. product development, the development of new markets, and specialised products that require skilled labour forces. [ref. 33, p. 191.]

I would only add that the U.K. financial system is likely to be more effective in the arm’s-length evaluation of corporate R&D investments that are focused on visible, discrete projects that can be evaluated individually—for example, aircraft, oil fields, and pharmaceuticals. It will be less effective when corporate R&D consists of a continuous stream of projects and products, with strong learning linkages amongst them—for example, civilian electronics.

Table 3. Own R&D expenditures by world’s 200 largest R&D spenders in 1994

                          R&D as percentage of                 Profits/     Cost of funds/
Country (n)               Sales   Profits*   Cost of funds†    sales, %     profits, %
Sweden (7)                 9.2      73.4         194.3           12.5          37.8
Switzerland (7)            6.9      69.0         140.4           10.0          49.1
Netherlands (3)            5.6     103.8         201.0            5.4          51.6
Japan (60)                 5.5     204.0         185.6            2.7         109.9
Germany (16)               4.9     149.0         202.9            3.2          73.4
France (18)                4.6     256.5         111.9            1.8         229.2
United States (67)         4.2      43.8          96.6            9.6          45.3
United Kingdom (12)        2.6      23.7          52.3           11.0          45.3
Italy (4)                  2.3      N/A           34.0            N/A           N/A
Total (200)                4.7      72.1         119.1            6.5          63.1

Data taken from ref. 32. n, No. of firms; N/A, not applicable.
*Profits represent profits before tax, as disclosed in the accounts.
†Cost of funds represents (equity and preference dividends appropriated against current year profits) + (interest servicing costs on debt) + (other financing contracts, such as finance leases).


Similar (and independently derived) analyses have emerged in the USA, especially from a number of analysts of corporate behavior at Harvard Business School (34, 35). In addition to deficiencies in the financial system, they stress the importance of command and control systems installed by corporate managers. In particular, they point to the growing power of business school graduates, who are well trained to apply financial and organizational techniques, but have no knowledge of technology. They maximize their own advantage by installing decentralized systems of development, production, and marketing, with resource allocations and monitoring determined centrally by short-term financial criteria. These systems are intrinsically incapable of exploiting all the benefits of investments in technological activities, given their short-term performance horizons, their neglect of the intangible benefits in opening new technological options, and their inability to exploit opportunities that cut across established divisional boundaries. Managers with this type of competence therefore tend to underinvest in technological activities.

Institutions and Changing Technologies. But given the above deficiencies, how did the United States maintain its productivity advance over the other OECD countries from 1870 to 1950? According to a recent paper by Abramovitz and David [ref. 36; similar arguments have been made by Freeman et al. (37), Nelson and Wright (38), and von Tunzelmann (39)], the nature of technical progress in this period was resource-intensive, capital-using, and scale-dependent—symbolized by the large-scale production of steel, oil, and the automobile. Unlike all other countries, the United States had a unique combination of abundant natural resources, a large market, scarce labor, and financial resources best able to exploit this technological trajectory. These advantages began to be eroded after World War II, with new resource discoveries, the integration of national markets, and the improvements in transportation technologies. Furthermore, the nature and source of technology have been changing, with greater emphasis on intangible assets like training and R&D and lesser emphasis on economies of scale. Given these tendencies, Abramovitz and David foresee convergence amongst the OECD countries in future. The data in Tables 2 and 3 cast some doubt on this.

Is Uneven Technological Development Self-Correcting? But can we expect uneven international patterns of technological development to be self-correcting in future? In an increasingly integrated world market, there are powerful pressures for the international diffusion of the best technological and related business practices through the international expansion of best practice firms, and also for imitation through learning and investment by laggard firms. But diffusion and imitation are not easy or automatic, for at least three sets of reasons.

First, technological (and related managerial) competencies, including imitative ones, take a long time to learn, and are specific to particular fields and to particular inducement mechanisms. For example, U.S. strength in chemical engineering was strongly influenced initially by the opportunities for (and problems of) exploiting local petroleum resources (40). More generally, sectoral patterns of technological strength (and weakness) persist over periods of at least 20–30 years (19, 41).

Second, the location and rate of international diffusion and imitation of best practice depend on the cost and quality of the local labor force (among other things). With the growing internationalization of production, firms depend less on any specific labor market and are therefore less likely to commit resources to investment in local human capital. In other words, firms can adjust to local skill (or unskilled) endowments, rather than attempt to change them. National policies to develop human capital (including policies to encourage local firms to do so) therefore become of central importance.

Third, education and training systems change only slowly, and are subject to demands in addition to those of economic utility. In addition, there may be self-reinforcing tendencies intrinsic in national systems of education, management, and finance. For example:

Table 4. The growth of U.S. science and engineering employment in life science, computing, and services

Field                       Ratio, no. of employees in 1992/no. of employees in 1980
All fields                  1.44
Life sciences               3.12
Computer specialists        2.03
Manufacturing sectors       1.30
Nonmanufacturing sectors    1.69
Financial services          2.37
Computer services           4.10

Data taken from ref. 45.

• The British and U.S. structure of human capital, with well-qualified graduates and a poorly educated workforce, allows comparative advantage in sectors requiring this mix of competencies, like software, pharmaceuticals, and financial services. The dynamic success of these sectors in international markets reinforces demand for the same mix of competencies. In Germany, Japan, and their neighboring countries, the dynamics will, on the contrary, reinforce demands in sectors using a skilled workforce.

• Decentralized corporate management systems based on financial controls breed managers in the same mold, whose competencies and systems of command and control are not adequate for the funding of continuous and complex technical change. Firms managed by these systems therefore tend to move out (or are forced out) of sectors requiring such technical change. See, for example, Geenen's ITT in the United States and Weinstock's General Electric Company in the United Kingdom (35, 42).

• The British financial system develops and rewards short-term trading competencies in buying and selling corporate shares on the basis of expectations about yields, while the German system develops longer-term investment competencies in dealing with shares on the basis of expected growth. These competencies emerge from different systems of training and experience and are largely tacit. It is therefore difficult, costly, and time-consuming to change from one to the other. And there may be no incentive to do so, when satisfactory rates of return can be found in both activities.

Needless to say, these trends will be reinforced by explicit or implicit policy models that advocate "sticking to existing comparative advantage" or "reinforcing existing competencies."

THE MEASUREMENT OF SOFTWARE TECHNOLOGY

The institutional and national characteristics required to exploit emerging technological opportunities depend on the nature and locus of these opportunities. Our apparatus for measuring and analyzing technological activities is becoming obsolete, since the conventional R&D statistics do not deal adequately with software technology, to which we now turn.

Table 5. Industries' percentages of business employment of scientists and engineers, 1992

Field                    Employment of scientists and engineers, % (computer specialists, %)
Manufacturing            48.1 (10.9)
Nonmanufacturing         51.9 (23.7)
Engineering services     9.1 (3.2)
Computer services        8.3 (51.8)
Financial services       6.1 (58.5)
Trade                    5.2 (25.5)

Data taken from ref. 45.


Table 6. Differing policies for basic research

Policy implications under two assumptions on the nature of useful knowledge: codified information versus tacit know-how.

Subject: International free riders
  Codified information: Strengthen intellectual property rights; restrict international diffusion
  Tacit know-how: Strengthen local and international networks

Subject: Japan's and Germany's better technological performance than the United States and United Kingdom with less basic research
  Codified information: More spillovers by linking basic research to application
  Tacit know-how: Increase business investment in technological activities

Subject: Small impact of basic research on patenting
  Codified information: Reduce public funding of basic research
  Tacit know-how: Stress unmeasured benefits of basic research

Subject: Large business investment in published basic research
  Codified information: Public relations and conspicuous intellectual consumption
  Tacit know-how: A necessary investment in signals to the academic research community

There is no single satisfactory proxy measure for the activities generating technical change. The official R&D statistics are certainly a useful beginning, but systematic data on other measures show that they considerably underestimate both the innovations generated in firms with <1000 employees (where most firms do not have separately accountable R&D departments) and those in mechanical technologies (the generation of which is dispersed across a wide variety of product groups; refs. 43 and 44).

A further source of inaccuracy is now emerging with the growth in importance of software technology, for the following reasons:

• One revolutionary feature of software technology is that it increases the potential applications of technology, not only in the sphere of production but also in the spheres of design, distribution, coordination, and control. As a consequence, the locus of technological change is no longer almost completely in the manufacturing sector, but also in services. In all OECD countries, a high share of installed computing capacity is in services, which in the United States have recently overtaken manufacturing as the main employers of scientists and engineers (see Tables 4 and 5).

• Established R&D surveys tend to neglect firms in the service sector. According to the official U.S. survey, computer and engineering services accounted in 1991 for only 4.2% of total company-funded R&D, compared with >8% of science and engineering employment. The Canadian statistical survey has done better: in 1995, ≈30% of all measured business R&D was in services, of which ≈12% was in trade and finance (46).

• This small presence of software in present surveys may also reflect the structural characteristics of software development. Like mechanical machinery, software can be considered as a capital good, in that the former processes materials into products and the latter processes information into services. Both are developed by user firms as part of complex products or production systems, as well as by small and specialized suppliers of machinery and applications software (for machinery, see ref. 47). As such, a high proportion of software development will be hidden in the R&D activities of firms making other products and in firms too small for the establishment of a conventional R&D department.

CONCLUSIONS

The unifying theme of this paper is that differences among economists about the nature, sources, and measurement of technical change will be of much greater relevance to policy formation in the future than they were in the past. These differences are at their most fundamental over the nature of useful technological knowledge, the functions of the business firm, and the location of the activities generating technological change. They are summarized, and their analytical and policy conclusions are contrasted, in Tables 6, 7, and 8. On the whole, the empirical evidence supports the assumptions underlying the right-hand columns rather than those on the left.

Basic Research. The main economic value of basic research is not in the provision of codified information, but in the capacity to solve complex technological problems, involving tacit research skills, techniques, and instrumentation, and membership in national and international research networks. Again, there is nothing original in this:

[t]he responsibility for the creation of new scientific knowledge—and for most of its application—rests on that small body of men and women who understand the fundamental laws of nature and are skilled in the techniques of scientific research [ref. 48, p. 7].

Exclusive emphasis on the economic importance of codified information:

• exaggerates the importance of the international free rider problem and encourages (ultimately self-defeating) techno-nationalism;
• reinforces a constricted view of the practical relevance of basic research by concentrating on direct (and more easily measurable) contributions, to the neglect of indirect ones;
• concentrates excessively on policies to promote externalities, to the neglect of policies to promote the demand for skills to solve complex technological problems (49, 50).

Uneven Technological Development. In this context, too little attention has been paid to the persistent international differences, even among the advanced OECD countries, in investments in R&D, skills, and other intangible capital to solve complex problems. Explanations in terms of macroeconomic policies and market failure are incomplete, since they concentrate entirely on incentives and ignore the competencies to respond to them. Observed "inertia" in responding to incentives is not just a consequence of stupidity or self-interest, but also of cognitive limits on how quickly individuals and institutions can learn new competencies. Those adults who have tried to learn a foreign language from scratch will well understand the problem. Otherwise, the standard demonstration is to offer economists $2 million to qualify as a surgeon within 1 year. (Some observers have been reluctant to make the reverse offer.)

Table 7. Differing policies for corporate technological activities

Policy implications under two assumptions on the functions of business firms: optimizing resource allocations based on market signals versus learning to do better and new things.

Subject: Inadequate business investment in technology compared to foreign competition
  Optimizing resource allocations based on market signals: R&D subsidies and tax incentives; reduce cost of capital; increase profits
  Learning to do better and new things: Improve worker and manager skills; improve (through corporate governance) the evaluation of intangible competencies


Table 8. Differences in the measurement of technological activities

Implications under two assumptions on the nature of technological activities: formal R&D versus formal and informal R&D including software technology.

Subject: The distribution of technological activities
  Formal R&D: Mainly in large firms, manufacturing, and electronics/chemicals/transportation
  Formal and informal R&D including software technology: Also in smaller firms in nonelectrical machinery and in large and small firms in services

These competencies are located not only in firms, but also in financial, educational, and management institutions. Institutional practices that lead to under- or misinvestment in technological and related competencies are not improved automatically through the workings of the market. Indeed, they may well be self-reinforcing (Table 7).

Software Technology. Although R&D statistics have been an invaluable source of information for policy debate, implementation, and analysis, they have always had a bias toward the technological activities of large firms compared with small ones, and toward electrical and chemical technologies compared with mechanical engineering. The bias is now becoming even greater with the increasing development of software technology in the service sector, while R&D surveys concentrate on manufacturing (Table 8).

As a consequence, statistical and econometric analysis will increasingly be based on incomplete and potentially misleading data. Perhaps more worrying, some important locations of rapid technological change will be missed or ignored. While we are bedazzled by the "high-tech" activities of Seattle and Silicon Valley, the major technological revolution may well be happening among the distribution systems of the oldest and most venal of the capitalists: the money lenders (banks and other financial services), the grocers (supermarket chains), and the traders (textiles, clothing, and other consumer goods).

To conclude, if economic analysis is to continue to inform science and technology policy making, it must pay greater attention to the empirical evidence on the nature and locus of technology and the activities that generate it, and spend more time collecting new and necessary statistics in addition to exploiting those that are already available. That the prevailing norms and incentive structures in the economics profession do not lend themselves easily to these requirements is a pity, just as much for the economists as for the policy makers, who will seek their advice and insights elsewhere.

This paper has benefited from comments on an earlier draft by Prof. Robert Evenson. It draws on the results of research undertaken in the ESRC (Economic and Social Research Council)-funded Centre for Science, Technology, Energy and the Environment Policy (STEEP) at the Science Policy Research Unit (SPRU), University of Sussex.

1. Fagerberg, J. (1994) J. Econ. Lit. 32, 1147–1175.
2. Krugman, P. (1995) in Handbook of the Economics of Innovation and Technological Change, ed. Stoneman, P. (Blackwell, Oxford), pp. 342–365.
3. Nelson, R. (1959) J. Polit. Econ. 67, 297–306.
4. Arrow, K. (1962) in The Rate and Direction of Inventive Activity, ed. Nelson, R. (Princeton Univ. Press, Princeton), pp. 609–625.
5. de Tocqueville, A. (1840) Democracy in America (Vintage Classic, New York), reprinted 1980.
6. Rosenberg, N. (1976) in Perspectives on Technology (Cambridge Univ. Press, Cambridge), pp. 126–138.
7. Lyall, K. (1993) M.Sc. dissertation (University of Sussex, Sussex, U.K.).
8. Hicks, D., Izard, P. & Martin, B. (1996) Res. Policy 23, 359–378.
9. Brooks, H. (1994) Res. Policy 23, 477–486.
10. Faulkner, W. & Senker, J. (1995) Knowledge Frontiers (Clarendon, Oxford).
11. Gibbons, M. & Johnston, R. (1974) Res. Policy 3, 220–242.
12. Klevorick, A., Levin, R., Nelson, R. & Winter, S. (1995) Res. Policy 24, 185–205.
13. Mansfield, E. (1995) Rev. Econ. Stat. 77, 55–62.
14. Rosenberg, N. & Nelson, R. (1994) Res. Policy 23, 323–348.
15. Kline, S. (1995) Conceptual Foundations for Multi-Disciplinary Thinking (Stanford Univ. Press, Stanford, CA).
16. Jaffe, A. (1989) Am. Econ. Rev. 79, 957–970.
17. Narin, F. (1992) CHI Res. 1, 1–2.
18. Hicks, D. (1995) Ind. Corp. Change 4, 401–424.
19. Patel, P. & Pavitt, K. (1994) Ind. Corp. Change 3, 759–787.
20. Galimberti, I. (1993) D.Phil. thesis (University of Sussex, Sussex, U.K.).
21. Miyazaki, K. (1995) Building Competencies in the Firm: Lessons from Japanese and European Opto-Electronics (Macmillan, Basingstoke, U.K.).
22. Sharp, M. (1991) in Technology and Investment, eds. Deiaco, E., Hornell, E. & Vickery, G. (Pinter, London), pp. 93–114.
23. Dasgupta, P. & David, P. (1994) Res. Policy 23, 487–521.
24. Cohen, W. & Levinthal, D. (1989) Econ. J. 99, 569–596.
25. Prais, S. (1993) Economic Performance and Education: The Nature of Britain's Deficiencies (National Institute for Economic and Social Research, London), Discussion Paper 52.
26. Newton, K., de Broucker, P., McDougal, G., McMullen, K., Schweitzer, T. & Siedule, T. (1992) Education and Training in Canada (Canada Communication Group, Ottawa).
27. Soete, L. & Verspagen, B. (1993) in Explaining Economic Growth, eds. Szirmai, A., van Ark, B. & Pilat, D. (Elsevier, Amsterdam).
28. Teece, D., ed. (1987) The Competitive Challenge: Strategies for Industrial Innovation and Renewal (Ballinger, Cambridge, MA).
29. Patel, P. & Pavitt, K. (1989) Natl. Westminster Bank Q. Rev. May, 27–42.
30. Keck, O. (1993) in National Innovation Systems: A Comparative Analysis, ed. Nelson, R. (Oxford Univ. Press, New York), pp. 115–157.
31. Walker, W. (1993) in National Innovation Systems: A Comparative Analysis, ed. Nelson, R. (Oxford Univ. Press, New York), pp. 158–191.
32. Company Reporting Ltd. (1995) The 1995 UK R&D Scoreboard (Company Reporting Ltd., Edinburgh).
33. Mayer, C. (1994) in Capital Markets and Corporate Performance, eds. Dimsdale, N. & Prevezer, M. (Clarendon, Oxford).
34. Abernathy, W. & Hayes, R. (1980) Harvard Bus. Rev. July/August, 67–77.
35. Chandler, A. (1992) Ind. Corp. Change 1, 263–284.
36. Abramovitz, M. & David, P. (1994) Convergence and Deferred Catch-Up: Productivity Leadership and the Waning of American Exceptionalism (Center for Economic Policy Research, Stanford, CA), CEPR Publication 401.
37. Freeman, C., Clark, J. & Soete, L. (1982) Unemployment and Technical Innovation (Pinter, London).
38. Nelson, R. & Wright, G. (1992) J. Econ. Lit. 30, 1931–1964.
39. von Tunzelmann, N. (1995) Technology and Industrial Progress: The Foundation of Economic Growth (Elgar, Aldershot, U.K.).
40. Landau, R. & Rosenberg, N. (1992) in Technology and the Wealth of Nations, eds. Rosenberg, N., Landau, R. & Mowery, D. (Stanford Univ. Press, Stanford, CA), pp. 73–119.
41. Archibugi, D. & Pianta, M. (1992) The Technological Specialisation of Advanced Countries (Kluwer Academic, Dordrecht, the Netherlands).
42. Anonymous (1995) Economist June 17, 86–92.
43. Pavitt, K., Robson, M. & Townsend, J. (1987) J. Ind. Econ. 35, 297–316.
44. Patel, P. & Pavitt, K. (1995) in Handbook of the Economics of Innovation and Technological Change, ed. Stoneman, P. (Blackwell, Oxford), pp. 14–51.
45. National Science Board–National Science Foundation (1993) Science and Engineering Indicators 1993 (U.S. Government Printing Office, Washington, DC).


46. Statistics Canada (1996) Service Bull. 20, 1–8.
47. Patel, P. & Pavitt, K. (1996) Res. Policy 23, 533–546.
48. Bush, V. (1945) Science, the Endless Frontier (National Science Foundation, Washington, DC), reprinted 1960.
49. Mowery, D. (1983) Policy Sci. 16, 27–43.
50. Metcalfe (1995) in Handbook of the Economics of Innovation and Technological Change, ed. Stoneman, P. (Blackwell, Oxford), pp. 409–512.


This paper was presented at a colloquium entitled "Science, Technology, and the Economy," organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

Are the returns to technological change in health care declining?

MARK MCCLELLAN

Department of Economics, Stanford University, Stanford, CA 94305-6072, and National Bureau of Economic Research, 204 Junipero Serra, Stanford, CA 94305

ABSTRACT    Whether the U.S. health care system supports too much technological change—so that new technologies of low value are adopted, or worthwhile technologies become overused—is a controversial question. This paper analyzes the marginal value of technological change for elderly heart attack patients in 1984–1990. It estimates the additional benefits and costs of treatment by hospitals that are likely to adopt new technologies first or use them most intensively. If the overall value of the additional treatments is declining, then the benefits of treatment by such intensive hospitals relative to other hospitals should decline, and the additional costs of treatment by such hospitals should rise. To account for unmeasured changes in patient mix across hospitals that might bias the results, instrumental-variables methods are used to estimate the incremental mortality benefits and costs. The results do not support the view that the returns to technological change are declining. However, the incremental value of treatment by intensive hospitals is low throughout the study period, supporting the view that new technologies are overused.

What is the value of technological change in health care? More use of more intensive medical technologies is the principal cause of medical expenditure growth (1, 2). While technological change is presumed to be socially beneficial in most industries, judgments about technological change in health care are mixed. On one hand, declining competition or a worsening of other market failures hardly seems able to explain more than a fraction of medical expenditure growth. In this view, the remainder appears to reflect optimizing judgments by purchasers about new and improved technologies, suggesting they are better off (3). On the other hand, many unusual features of the health care industry—including health insurance, tax subsidization, and uncertainty—may support an environment of health care production that encourages wasteful technological change (4). In this view, the value of health care at the margin should be low and falling over time, as minimally effective technologies continue to be adopted, leading to growth in inefficiency in the industry. Given the potential magnitude of the welfare questions at stake, which of these views is correct is a crucial policy question.

This paper presents new evidence on the marginal value of changes in medical technology. The analysis estimates the incremental differences in mortality and hospital costs resulting from treatment by different types of hospitals for all elderly patients with acute myocardial infarction (AMI) in 1984, 1987, and 1990. The marginal effects are estimated using instrumental-variables (IV) methods developed and extensively validated previously (5, 6). The methods applied here use similar IVs, based on differential physical access to different types of hospitals. But the methods differ somewhat from the previous studies, in that they are designed to estimate the consequences of all technological changes during the time period. In particular, the methods compare trends in the net effects on mortality and costs of treatment by more intensive hospitals for "marginal" patients, patients whose hospital choice differs across the IV groups. Thus, the IV methods estimate the effects of the additional technologies available at more intensive hospitals on incremental AMI patients, those whose admission choice and hence treatment is affected by differential access to intensive hospitals.

The dimensions in which intensity of medical care can vary are numerous, ranging across many drugs, devices, and procedures even for a particular medical condition such as AMI. The principal goal of this paper is not to assess returns to the adoption or diffusion of a particular technology, but to assess how technological changes in all of these dimensions are contributing collectively to changes in the expenditure and outcome consequences of being treated by more intensive hospitals. Because new technologies tend to be adopted first and applied more widely at such hospitals, comparing fixed groups of hospitals that differ in technological capabilities over time provides a method for summarizing the total returns to technological change. If the more intensive hospitals are applying more technologies over time that increase expenditures but have minimal benefits for patients, then the differential returns to being treated by a more intensive hospital over time should decline. On the other hand, if the technological developments are comparable in value to or better than existing technologies, then the differential returns to treatment by a more intensive hospital should not fall. In addition, the levels of the marginal expenditure/benefit ratios in each year provide quantitative guidance about whether the level of technological intensity at a point in time is too high or too low.

DATA

Patient cohorts with information summarizing characteristics, treatments, costs, and mortality outcomes for all elderly Americans hospitalized with new AMIs (primary diagnosis of ICD9 code 410) in 1984, 1987, and 1990 were created from comprehensive longitudinal medical claims provided by the Health Care Financing Administration. Claims included information on principal and secondary diagnoses, major treatments, and costs for all hospital discharges through 1992. Measures of observable treatment intensity included the use of intensive cardiac procedures (catheterization, angioplasty, and bypass surgery), number of hospital admissions, total number of hospital days, and total days in a special care unit (intensive care unit or coronary care unit) during various time periods after AMI. Survival dated from the time of AMI was measured using death date reports for all patients, validated by the Social Security Administration.

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. §1734 solely to indicate this fact.

Abbreviations: AMI, acute myocardial infarction; IV, instrumental-variables.


Hospital costs for various time periods after AMI were calculated by multiplying reported departmental charges for each admission by the relevant departmental cost-to-charge ratio, and adding in per diem costs based on each hospital's annual Medicare cost reports (7). Reported costs reflect accounting conventions and potentially idiosyncratic cost allocation practices, and so may differ from true economic costs. However, as the results that follow illustrate, reported costs are highly correlated with real resource use, and the methods that follow focus on differences in cost trends rather than absolute cost levels.
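As a concrete illustration of this cost construction, the minimal sketch below combines ancillary department charges (deflated by cost-to-charge ratios) with routine per diem costs. It is only a sketch under assumed inputs; the function and field names (admission_cost, dept_charges, and so on) are hypothetical and are not taken from the paper.

```python
# Illustrative sketch of the cost construction described above (not the paper's code).
# Assumed inputs for one admission: reported charges by ancillary department, the
# hospital's departmental cost-to-charge ratios, and a routine per diem cost with the
# number of routine days, drawn conceptually from claims and Medicare cost reports.

def admission_cost(dept_charges, cost_to_charge_ratios, per_diem_rate, routine_days):
    ancillary = sum(charge * cost_to_charge_ratios[dept]
                    for dept, charge in dept_charges.items())
    routine = per_diem_rate * routine_days
    return ancillary + routine

# Example: two ancillary departments plus five routine days.
cost = admission_cost(
    dept_charges={"laboratory": 1200.0, "radiology": 800.0},
    cost_to_charge_ratios={"laboratory": 0.45, "radiology": 0.50},
    per_diem_rate=600.0,
    routine_days=5,
)
print(cost)  # 940.0 ancillary + 3000.0 routine = 3940.0
```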

Application of exclusion criteria developed in previous work led to an analytic sample of ≈646,000 patients. These AMI cohort creation methods have been described and validated in detail previously (8, 9); for example, validation studies using linked medical record data indicate that >99.5% of cases identified using these criteria represent true AMIs.

Two principal dimensions of hospital technological capabilities were measured: hospital volume and capacity to perform intensive cardiac procedures. A hospital's capability to perform catheterization and revascularization over time was determined from hospital claims for performing these procedures, using techniques applied previously (7). For example, a hospital was categorized as a "catheterization hospital" in a given year if at least three catheterizations were performed on elderly AMI patients. Hospitals performing catheterization after 1984 but not in 1984 were categorized as acquiring catheterization capability. Procedure capability was emphasized because previous research has documented that technology adoption has a substantial impact on technology use and costs. Hospitals were classified as high-volume or not by summing their total number of initial elderly AMI admissions and dividing them into two groups based on whether or not their volume was above the median volume over the entire time period (≈75 AMIs per year).
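The sketch below illustrates these classification rules, grouping hospitals by catheterization adoption and by volume relative to the median. It is a minimal sketch assuming a hypothetical record layout (hospital_id, year, n_caths_elderly_ami, n_ami_admissions); it is not the paper's code.

```python
# Minimal sketch of the hospital classification rules described above (not the paper's
# code). Assumed layout: one row per hospital-year with the number of catheterizations
# performed on elderly AMI patients and the number of initial elderly AMI admissions.

from collections import defaultdict
from statistics import median

def classify_hospitals(records):
    cath_years = defaultdict(set)   # years with >= 3 caths on elderly AMI patients
    total_ami = defaultdict(int)    # total initial elderly AMI admissions over all years

    for r in records:
        if r["n_caths_elderly_ami"] >= 3:
            cath_years[r["hospital_id"]].add(r["year"])
        total_ami[r["hospital_id"]] += r["n_ami_admissions"]

    median_volume = median(total_ami.values())
    classes = {}
    for hid, volume in total_ami.items():
        if 1984 in cath_years[hid]:
            cath_class = "catheterization by 1984"
        elif cath_years[hid]:
            cath_class = "acquired catheterization 1985-1990"
        else:
            cath_class = "never performed catheterization"
        classes[hid] = {"cath_class": cath_class, "high_volume": volume > median_volume}
    return classes

# Example: a hospital first meeting the threshold in 1987 is classified as acquiring
# catheterization capability; volume is compared with the median across hospitals.
records = [
    {"hospital_id": "H1", "year": 1984, "n_caths_elderly_ami": 0, "n_ami_admissions": 60},
    {"hospital_id": "H1", "year": 1987, "n_caths_elderly_ami": 5, "n_ami_admissions": 70},
    {"hospital_id": "H2", "year": 1984, "n_caths_elderly_ami": 12, "n_ami_admissions": 150},
]
print(classify_hospitals(records))
```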

Patient zip code of residence at the time of AMI was used to calculate each patient's distance to the nearest hospital with each level of procedure capability (no procedure capacity, procedure capacity, acquired procedure capacity) and to the nearest high-volume hospital. The patient's differential distance to a specialized type of hospital was the estimated distance to the nearest hospital of that type minus the estimated distance to the nearest hospital of any type. These distance measures are highly correlated with travel times to hospitals (10), and in any case random errors in distance measurement do not lead to inconsistent estimation of treatment effects using the grouped-data methods developed here.
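A minimal sketch of this differential-distance construction follows, assuming straight-line distances between hypothetical coordinates; the actual measures are based on zip-code locations, and the function and variable names here are illustrative only.

```python
# Illustrative sketch of the differential-distance instrument (not the paper's code).
# Assumed inputs: planar coordinates for the patient's zip code and for each hospital,
# plus a predicate marking the specialized hospital type of interest.

import math

def differential_distance(patient_xy, hospitals, is_special):
    """Distance to the nearest hospital of the specialized type
    minus distance to the nearest hospital of any type.

    patient_xy: (x, y) location of the patient's zip code
    hospitals:  list of (hospital_id, (x, y)) pairs
    is_special: callable hospital_id -> bool (e.g., catheterization or high volume)
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d_any = min(dist(patient_xy, xy) for _, xy in hospitals)
    d_special = min(dist(patient_xy, xy) for hid, xy in hospitals if is_special(hid))
    return d_special - d_any

# Patients can then be split into "near" and "far" groups by comparing this value
# to a cutoff (2.8 miles for catheterization hospitals in the two-group comparison below).
hospitals = [("A", (0.0, 0.0)), ("B", (5.0, 0.0))]
print(differential_distance((1.0, 0.0), hospitals, lambda h: h == "B"))  # 4.0 - 1.0 = 3.0
```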

TRENDS IN AMI TREATMENTS, COSTS, AND OUTCOMES

Table 1 describes the elderly AMI population in 1984, 1987, and 1990. The number of new AMIs declined slightly over time and average age increased, consistent with national trends in AMI incidence. Though the demographic composition of the cohorts was otherwise similar over time, comorbidities recorded at the time of initial admission suggest that the acuity of AMI patients may have increased slightly. In particular, the incidence of virtually all serious comorbidities increased steadily between 1984 and 1990. These trends may also reflect increasing attention to coding practices over time, though evidence from chart abstractions suggests that "upcoding" has declined (11). A growing share of patients were admitted initially to hospitals that performed catheterization and revascularization. This trend reflected both substantial adoption of these technologies by hospitals—around 19% of patients were admitted initially to hospitals that adopted technology between 1984 and 1990—and a more modest trend toward more initial selection of these intensive hospitals for AMI treatment. As a result, the share of patients admitted to hospitals that did not perform catheterization declined from 44% to 39%, and the share of patients admitted to high-volume hospitals increased from 45% to 48%.

The AMI cohorts differed substantially in treatment and costs. Catheterization rates in the 90-day episode of care after AMI increased from 9% in 1984 to 34% in 1990.

Table 1. U.S. elderly AMI patients, 1984–1990: Trends in characteristics, treatments, outcomes, and expenditures

                                                               Year of AMI
Variable                                                       1984 (n=220,345)   1987 (n=215,301)   1990 (n=211,259)
Age (SD)                                                       75.6 (7.0)         75.9 (7.2)         76.2 (7.3)
Female                                                         48.7               49.9               49.8
Black                                                          5.3                5.6                5.7
Rural                                                          29.5               30.4               30.1
Cancer                                                         1.1                1.5                1.6
Pulmonary disease                                              8.3                11.3               12.8
Dementia                                                       0.7                1.0                1.2
Diabetes                                                       13.9               17.9               18.8
Renal disease                                                  3.3                5.1                6.1
Cerebrovascular disease                                        2.1                2.6                2.8
Initial admit to hospital with catheterization by 1984         37.5               38.4               40.7
Initial admit to hospital adopting catheterization 1985–1990   18.1               19.0               20.0
Initial admit to high-volume hospital                          44.9               46.0               48.7
90-day catheterization rate                                    9.3                24.0               33.9
90-day PTCA rate                                               1.1                5.6                10.5
90-day CABG rate                                               4.8                8.3                11.7
1-year admissions                                              1.96               1.99               2.10
1-year total hospital days                                     20.5               19.4               20.4
1-year total special care unit days                            6.0                6.8                7.3
1-day mortality rate                                           8.9                8.3                7.2
1-year mortality rate                                          40.0               39.0               35.6
2-year mortality rate                                          47.3               46.0               42.5
1-year total hospital costs (1991 dollars)                     $12,864            $14,228            $16,788
2-year total hospital costs (1991 dollars)                     $14,142            $15,571            $18,301

PTCA, percutaneous transluminal coronary angioplasty; CABG, coronary artery bypass graft surgery.


Use of coronary artery bypass surgery (bypass) also increased steadily, from 4.8% to 11.7% of patients, and use of percutaneous transluminal coronary angioplasty (angioplasty) grew dramatically, from 1% of patients in 1984 to 11% in 1990. These major changes in AMI treatment intensity were associated with substantial cost growth: total hospital costs for elderly AMI patients increased by >4% per year in real terms, and most of this expenditure growth was associated with more frequent use of intensive cardiac procedures (2). Of course, the use of other technologies also changed during this period. Substantial changes in cardiac drug use occurred, including the widespread adoption of thrombolytic drugs after 1987 (12). These substantial changes in the intensity of treating AMI in the elderly have had little impact on time spent in the hospital; average hospital days in the year after AMI declined slightly between 1984 and 1987, and have increased slightly since. However, average days spent in an intensive care unit or critical care unit have increased by around 20%, from 5.2 to 6.2 days during the 90-day episode after AMI and from 6.0 to 7.3 days during the year after AMI.

The growth in intensity of treatment has been associated with improvements in survival: 1-year mortality fell by 4.4 percentage points (from 40.0 to 35.6%) and 2-year mortality fell by 4.8 percentage points (from 47.1 to 42.3%). More than one-third of this mortality decline arose within the first day after AMI. Though procedure use grew throughout the sample period, and especially before 1987, the mortality changes were concentrated after 1987. For example, 1-year mortality declined by an average of 0.3 percentage points per year between 1984 and 1987, and by 1.1 percentage points per year between 1987 and 1990.

DIFFERENCES IN TREATMENT INTENSITY ACROSS HOSPITAL TYPES

Estimating the marginal effects of AMI treatment on outcomes and costs requires comparisons of alternative levels of treatment intensity. Differences in hospital characteristics provide a basis for such comparisons. As Table 2 suggests, hospitals grouped on the basis of catheterization capabilities differ substantially in a range of technological capabilities for AMI treatment. Hospitals that had catheterization and revascularization capabilities by 1984 tended to be high-volume hospitals in urban areas. These hospitals are generally larger and more capable of providing many aspects of intensive treatment, including coronary care unit or intensive care unit care as well as care from specialized cardiology staff, and they are more likely to use medical practices that reflect current clinical knowledge (13). Noncatheterization hospitals tended to be smaller and were more likely to be located in rural areas with fewer emergency response capabilities. Hospitals that acquired the capacity to perform cardiac procedures during the study period appear to have intermediate technological capabilities in these other dimensions.

The hospitals differed to some extent in patient mix: hospitals with catheterization capabilities were more likely to treat younger, male patients, and these differences increased over time. These observable differences in patients selecting each hospital for initial admission are presumably associated with unobserved differences as well (14).

Table 3 shows that patients admitted to hospitals with the most intensive technologies were much more likely to receive these treatments. Catheterization rates for 1984 AMI patients were approximately 7.9 percentage points higher for patients initially admitted to hospitals with catheterization capabilities than for patients initially admitted to hospitals without catheterization. Acquiring catheterization had a fundamental effect on treatment intensity: catheterization rates for patients admitted to hospitals that adopted catheterization during the study period moved closer to the rates at hospitals that had previously adopted catheterization (rates 0.1 percentage points lower than at noncatheterization hospitals in 1984 but 10 points higher in 1990). Moreover, catheterization rates grew more rapidly at catheterization than at noncatheterization hospitals: 90-day catheterization rates grew by 17 percentage points for patients initially admitted to noncatheterization hospitals and by 28 percentage points for patients initially admitted to hospitals that had or acquired catheterization. Especially because of differential trends in use of angioplasty, revascularization rates also differed proportionally over time (e.g., 4.1% at noncatheterization hospitals versus 7.5% at catheterization hospitals in 1984; 16.1% versus 28.4% in 1990).

The differences in catheterization and revascularization use were correlated with other dimensions of treatment intensity. Hospitals with catheterization used slightly more hospital days and more special-care unit days. However, they used fewer hospital admissions, mainly because of fewer transfers or readmissions associated with performing cardiac procedures, and their readmission rates declined over time relative to noncatheterization hospitals (14). Differences in intensity were also associated with substantial differences in hospital costs. For example, in 1984, total hospital costs in the year after AMI differed on average by $2300 (in 1991 dollars) between hospitals that always performed catheterization and those that never did. By 1990, this difference had increased to $3400.

This comparison suggests that the alternative hospital types provide a gradation of levels of AMI procedure intensity with associated gradations in costs, and that high-volume hospitals are more likely to provide more costly, intensive technologies other than cardiac procedures. Table 3 shows that patients treated by different types of hospitals also differed in mortality outcomes. In 1984, 1-year mortality was 1.6 percentage points lower at hospitals with catheterization capabilities than at nonprocedure hospitals; by 1990, this difference increased to 2.8 percentage points. Mortality rates at hospitals adopting catheterization and revascularization were intermediate between these two groups, but also improved over time relative to the rates for hospitals that did not acquire catheterization.

Table 2. U.S. elderly AMI patients, 1984–1990: Hospital and patient characteristics by hospital type at initial admission

Hospital type                         n         Patient share   Age (SD)     Black, %   Rural, %   High volume, %
1984
Never adopted catheterization         97,803    44.4            75.9 (7.1)   4.8        50.6       19.3
Adopted catheterization, 1985–1990    39,895    18.1            75.4 (7.0)   4.5        18.6       52.3
Adopted catheterization by 1984       82,647    37.5            75.4 (7.0)   6.4        10.0       71.6
High volume                           98,936    44.9            75.5 (7.0)   4.5        11.8       100.0
1990
Never adopted catheterization         82,896    39.2            76.7 (7.4)   4.8        51.2       22.5
Adopted catheterization, 1985–1990    42,340    20.0            76.1 (7.3)   5.1        19.9       51.5
Adopted catheterization by 1984       86,023    40.7            75.7 (7.2)   6.9        14.9       72.6
High volume                           102,908   48.7            75.9 (7.2)   5.1        15.4       100.0


These simple descriptive results suggest that technological change has been more dramatic at hospitals with catheterization or acquiring catheterization, and that this differential trend has been associated with somewhat greater mortality reductions and cost growth. Unfortunately, unobserved case-mix differences across these hospital groups complicate inferences about marginal effectiveness based on these expenditure and outcome results. For example, differences in age between patients treated at hospitals capable of performing catheterization and other hospitals increased during this time period; thus, the observable characteristics of their patient mix suggest that these hospitals attracted AMI patients who tended to be better candidates for invasive procedures. If the patients differed in unobserved respects as well, then these conditional-mean comparisons of both expenditures and outcomes would be biased (15). For example, patients with longer survival times, who would tend to have higher costs and longer survival regardless of where they were treated, may have become more likely to be treated at the intensive hospitals over time.

IV ESTIMATES OF THE RETURNS TO MORE INTENSIVE AMI CARE

The idea of the IV methods, which are described in more detail in previous work (5–7), is to compare groups of patients with similar health characteristics that differ substantially in treatment received for reasons that are unrelated to health status. Table 4, which divides patients into groups with small and large differential distances to alternative hospital types, illustrates the idea. Table 4 describes two IV groups: patients relatively near to or far from catheterization hospitals, and patients relatively near to or far from high-volume hospitals.

Table 4. U.S. elderly AMI patients, 1984–1990: Trends for differential distance groups

                                          Year of AMI: 1984                              Year of AMI: 1990
                                 Adopted catheterization    High volume         Adopted catheterization    High volume
                                 before 1984                                    before 1984
Variable                         Near        Far            Near       Far      Near        Far            Near       Far
Patient share                    43.4        56.6           50.2       49.8     43.2        56.8           50.0       50.0
Age (SD)                         75.7 (7.1)  75.6 (7.0)     75.6 (7.0) 75.6 (7.0)  76.2 (7.4)  76.1 (7.3)  76.2 (7.3) 76.1 (7.3)
Female                           50.0        47.8           49.7       47.8     50.8        49.1           50.9       48.8
Black                            7.2         3.9            5.9        4.8      7.8         4.2            6.4        5.1
Rural                            4.9         48.5           7.7        51.7     5.2         48.3           8.6        51.7
Cancer                           1.2         1.1            1.1        1.1      1.6         1.5            1.6        1.5
Pulmonary disease                8.1         8.4            7.7        8.8      12.2        13.2           12.4       13.2
Dementia                         0.7         0.7            0.6        0.7      1.1         1.2            1.1        1.2
Diabetes                         14.1        13.8           13.5       14.3     18.6        19.0           18.9       18.8
Renal disease                    3.6         3.1            3.2        3.4      6.6         5.8            6.4        5.8
Cerebrovascular disease          2.1         2.1            2.0        2.2      2.9         2.6            2.8        2.7
Initial admit to hospital with
  catheterization by 1984        70.2        12.4           51.9       22.9     73.3        16.9           52.7       28.7
Initial admit to hospital adopting
  catheterization, 1984–1990     11.6        23.0           21.2       15.0     12.7        25.3           23.2       16.7
Initial admit to high-volume
  hospital                       61.3        32.3           75.0       14.5     63.9        37.6           78.1       19.3
90-day catheterization rate      11.2        7.9            9.1        9.6      37.5        31.3           34.5       33.3
90-day PTCA rate                 1.3         0.8            1.0        1.1      11.8        9.6            10.5       10.5
90-day CABG rate                 5.5         4.2            4.9        4.6      12.3        11.2           11.8       11.6
1-day mortality rate             8.3         9.3            8.1        9.7      6.7         7.6            6.4        8.0
1-year mortality rate            39.8        40.2           39.4       40.6     35.5        35.7           35.1       36.0
2-year mortality rate            47.1        47.4           46.9       47.7     42.3        42.7           42.2       42.9
1-year total hospital costs
  (1991 dollars)                 $14,392     $11,338        $13,897    $11,830  $18,076     $15,566        $17,735    $15,858


The subgroups are approximately equal-sized, based on whether the patient's distance to the nearest specialized hospital minus the distance to the nearest nonspecialized hospital was more or less than 2.8 miles for a catheterization hospital and 1.8 miles for a high-volume hospital. Observable health characteristics, including age and the incidence of comorbid diseases, are distributed very similarly between the near and far groups, suggesting that unobserved health characteristics are distributed similarly as well (the studies cited previously have evaluated this assumption extensively).

Despite having virtually identical measured health characteristics, the groups have large differences in likelihood of admission to different kinds of hospitals, and as a result differ in intensity of treatment. Patients relatively near to hospitals performing catheterization are much more likely to be admitted to catheterization hospitals for AMI treatment, and they are significantly more likely to undergo catheterization in all years. Similarly, patients near to high-volume hospitals are much more likely to be admitted to high-volume hospitals, and consequently are significantly more likely to be treated by specialized medical staff, in a special-care unit, and with other dimensions of higher-intensity care. But use of catheterization and revascularization procedures in these patients differs much less than for patients "near" and "far" with respect to catheterization hospitals. Thus, variation in access to a high-volume hospital provides some variation in dimensions of treatment intensity other than cardiac procedure use.

In contrast to a clinical trial, treatment rates are not 100% and 0% in the near and far groups; rather, the higher treatment rates in the near group suggest that an incremental subset of patients is treated differently as a result of their differential distance. This is the sense in which the IV comparisons are "marginal." For example, patients near to a catheterization hospital were 56 percentage points more likely to be admitted to a catheterization hospital in 1984 and 54 percentage points more likely to be admitted to one in 1990. The initial admission rates to the various types of hospitals and the incremental differences in these rates are very similar across years. This stability in the relationship of differential distance to hospital choice suggests that the IV methods are contrasting similar incremental patients across years.

Table 4 also shows the implications for outcomes and costs of these incremental differences in treatment intensity. In both IV comparisons, mortality in the "near" (more intensive) group is slightly lower, but the mortality differentials are smaller than the raw mortality differences of Table 3. These IV results suggest that the additional technologies used in treatment at more intensive hospitals lead to small but possibly significant improvements in survival. Moreover, the differences in survival arise early after AMI; 1-day mortality differentials are larger than the longer-term differentials. The mortality differentials are as large or larger in 1990 as in 1984; these simple comparisons do not suggest that the incremental mortality effects of more intensive treatments are falling over time.

Table 4 also demonstrates that more use of intensive treatments in both years is associated with substantially higher costs of AMI care, but that the cost differentials are diminishing. For example, average 1-year hospital costs for patients near to catheterization hospitals were $3000 higher in 1984 than for patients farther away. This difference fell to around $2500 by 1990, even though the differences in admission rates to the alternative hospitals did not change much over time. Costs for patients near to high-volume hospitals were also considerably higher than expenditures for patients farther away in both years, and this difference did not change much over time. In contrast to the comparisons of Table 2 that did not account for changes in patient selection, then, the IV comparisons provide no evidence that the additional technologies used by more intensive hospitals are becoming relatively more costly over time. However, while the health-related characteristics of the IV groups appear more similar than the characteristics of patients treated by different types of hospitals, these simple comparisons do not eliminate all sources of outcome differences other than hospital technologies. For example, the "near" patients were much more likely to reside in urban areas, where prices were higher and more advanced emergency response technologies might be available.

Other potentially important patterns are evident in the simple comparisons of Table 4. First, most of the mortality gains and expenditure growth appear to be "inframarginal," in the sense that the differences across years in costs and mortality are substantially larger than the differences across distance groups within a year. Thus, the 1984–1990 period appears to have been associated with substantial general trends in costs and outcomes that affected the whole of the AMI population.

Second, though intensive cardiac procedures became much more widely used during this period, the results provide little evidence that higher rates of cardiac procedure use are responsible for the mortality gains. The aggregate time trend results showed the largest share of mortality improvements arising after 1987, but the most rapid growth in procedure use occurred before 1987. In addition, mortality differences are somewhat larger for groups near and far to high-volume hospitals, but differences in catheterization rates for these groups are much smaller. Substantial mortality differentials arise within 1 day of AMI. Almost no revascularization procedures were performed within 1 day in 1984, and a relatively small share of the procedures were performed within 1 day even in 1990, suggesting that the use of other technologies is responsible for at least part of the inframarginal and incremental mortality differences.

If the near and far groups are balanced, so that no characteristics that are directly associated with outcomes differ between the groups, then a nonparametric IV estimate of the average incremental effect of admission to a catheterization hospital is given by

(\bar{y}_{near} - \bar{y}_{far}) / (\bar{h}_{near} - \bar{h}_{far}),   [1]

where \bar{y} and \bar{h} denote, respectively, the conditional mean outcome and the initial admission rate in each distance group. For example, from Table 4, the IV estimate of the 1-year incremental mortality effect of treatment by a high-volume hospital in 1984 is [39.4 − 40.6]/[75.1 − 14.7] = −1.99 percentage points, with a standard error of 0.86 percentage points.
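The arithmetic behind this grouped (Wald) estimate is easy to reproduce. The short sketch below is only an illustration of Eq. 1 using the worked example quoted above (1-year mortality, high-volume hospitals, 1984); the function and argument names are illustrative, not the paper's.

```python
# Grouped-data (Wald) IV estimate of Eq. 1: difference in mean outcomes across the
# near/far groups divided by the difference in initial admission rates.

def wald_iv(y_near, y_far, h_near, h_far):
    """Outcome means and admission rates are given in percent. Dividing the
    admission-rate difference by 100 expresses it as a probability, so the result
    is in percentage points of the outcome per incremental admission."""
    return (y_near - y_far) / ((h_near - h_far) / 100.0)

# Worked example from the text: 1-year mortality, high-volume hospitals, 1984.
effect = wald_iv(y_near=39.4, y_far=40.6, h_near=75.1, h_far=14.7)
print(round(effect, 2))  # approximately -1.99 percentage points
```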

While instructive, these two-group comparisons do not account for some important observable differences between the groups. In particular, patients in the near group are more likely to be urban and more likely to be black, reflecting the fact that differential distances tend to be smaller in urban areas. Urban patients generally have more access to emergency response systems, leading to lower acute mortality, and urban prices tend to be higher, so that expenditure differences reflect price differences. Though observable demographic and health characteristics otherwise appear to be balanced between the two groups, a more careful quantification of their association with mortality and expenditure outcomes conditional on demographic characteristics is worthwhile. In addition, much variation in differential distances, and consequently in likelihood of treatment by alternative hospital types, occurs within the near and far groups. For example, in 1990 patients with a differential distance to catheterization of zero or less have a probability of admission to a catheterization hospital 80 percentage points higher and catheterization rates 10 percentage points higher than patients with a differential distance of over 20 miles. Simple two-group conditional-mean comparisons do not exploit this potentially useful variation.


Finally, the two-group methods do not generally permit estimation of the incremental effects associated with multiple hospital types; because access to different kinds of intensive hospitals is correlated, comparisons that account jointly for access to each type of specialized hospital would help distinguish their incremental effects.

ESTIMATES OF THE MARGINAL EFFECTS OF TECHNOLOGICAL CHANGE

More comprehensive IV estimation methods can be used to account for these problems while preserving the minimally parametric, conditional-mean structure of the simple comparisons. The methods are fully described elsewhere (5); they involve estimation of linear IV models of the form

y_i = x_iµ + h_iγ + u_i.   [2]

In these models, x is a fully saturated vector of indicator variables that captures average demographic effects and their interactions for cells based on the following characteristics: gender (male/female), age group (65–69, 70–74, 75–79, 80–84, 85–89, 90 and over), race (black or nonblack), and urban or rural status. Demographic cell sizes were quite large; for nonblacks they were typically on the order of 6000 or more patients in each year, and the smallest cell, rural black males aged 90 and over in 1984, included 82 persons. Because fully interacted cells are included in the model, µ provides a nonparametric estimate of the conditional mean outcome for each demographic cell. The models also included a full set of effects for metropolitan statistical areas and for rural areas of particular states. The incremental-average treatment effects of interest are the coefficients γ on h_i, a vector of indicator variables denoting the patient's hospital type at initial admission in terms of catheterization adoption (by 1984, between 1985 and 1990, never) and hospital volume, based on average volume across all years (or across the years for which the hospital is included in the sample, for hospitals that close). Thus, three incremental treatment effects were included in all models, with low-volume, never-adopting hospitals as the baseline group.

Because hospital choices reflect unobserved patient heterogeneity, differential distances are used as IVs for hospital choice. Differential distances were also incorporated in this model in a minimally parametric way, generalizing the simple two-group IV comparisons of Table 4. The following right-closed intervals were used to construct groups for differential distance to the intensive hospital types (high volume, adopted catheterization by 1984, adopted catheterization between 1985–1990): 0, 0–1.5, 1.5–3, 3–6, 6–10, 10–15, 15–20, 20–25, 25–40, and over 40. To capture potential differences in distance effects for rural patients, rural differential-distance interactions were included based on differential distances of 0–10, 10–40, and over 40. While the zero-distance cells were the largest, all other cell combinations included at least several hundred observations. Results were not sensitive to alternative specifications of the urban and rural differential-distance variables. With all first- and second-stage variables entered as indicators, and with relatively large sample sizes in each cell, the estimation methods were designed to recover weighted-average estimates of the incremental effects without making any substantive parametric or distributional assumptions. The modeling strategy is equivalent to a grouped-data estimation strategy with weighted demographic cell-IV interactions as the unit of observation. The resulting IV estimates are weighted-average estimates of incremental treatment effects, with weights determined by the number of patients whose admission status shifts across the IV groups (16).
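As a concrete illustration of this estimation strategy, the sketch below implements a generic two-stage least squares estimator with indicator regressors (demographic-cell dummies as exogenous controls, hospital-type dummies as the endogenous treatments, and differential-distance group dummies as instruments). It is a minimal sketch using simulated data and hypothetical variable names, not the study's code or data.

```python
import numpy as np

def two_stage_least_squares(y, X, H, Z):
    """Generic 2SLS with indicator regressors.

    y : (n,)   outcome (e.g., 1-year mortality or hospital costs)
    X : (n, k) exogenous indicators (demographic cells, area effects)
    H : (n, m) endogenous indicators (hospital type at initial admission)
    Z : (n, q) instruments (differential-distance group indicators)

    Returns coefficients for [X, H]; the last m entries estimate the
    incremental treatment effects (gamma in Eq. 2).
    """
    W = np.column_stack([X, H])       # structural regressors
    Zfull = np.column_stack([X, Z])   # exogenous regressors instrument themselves

    # First stage: fit each structural regressor on the full instrument set.
    first_stage, *_ = np.linalg.lstsq(Zfull, W, rcond=None)
    W_hat = Zfull @ first_stage

    # Second stage: regress the outcome on the fitted regressors.
    beta, *_ = np.linalg.lstsq(W_hat, y, rcond=None)
    return beta

# Toy example with simulated data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
n = 200_000
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])   # intercept + one demographic dummy
Z = rng.integers(0, 2, (n, 1)).astype(float)               # near/far differential-distance group
confounder = rng.normal(0.0, 1.0, n)                       # unobserved severity
latent = 0.6 * Z[:, 0] - 0.5 * confounder + rng.normal(0.0, 1.0, n)
H = (latent > 0.0).astype(float)[:, None]                  # admission to an intensive hospital
y = (0.30 - 0.02 * H[:, 0] + 0.05 * X[:, 1]
     + 0.04 * confounder + rng.normal(0.0, 0.2, n))
print(two_stage_least_squares(y, X, H, Z))  # last coefficient is the IV estimate of the -0.02 effect
```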

Table 5 presents IV estimates of the mortality and cost differences across the alternative hospital types. The incremental mortality effects are all estimated rather precisely (standard errors of 0.7 percentage points or less, even for long-term mortality), and generally confirm the findings in Tables 3 and 4 that greater intensity leads to lower mortality in all time periods. However, incremental effects of each hospital type show distinctive trends over time. Admission to a high-volume hospital led to substantially lower short-term and long-term mortality in 1984, compared with every other hospital type. The incremental mortality benefit peaked at –1.4 percentage points at 1 year, but it was substantial (–1.2 percentage points) even at 2 years after AMI. Much of this mortality effect arose within 1 day of AMI. In 1987, the incremental benefits of treatment by a high-volume hospital showed a similar pattern: the 1-year effect was –1.6 percentage points, and the 2-year effect was –1.2 percentage points. In 1990, the acute mortality benefits were slightly larger, but the 2-year mortality benefit was only –0.8 percentage points (with a standard error of 0.6 percentage points). Given the substantial aggregate decline in mortality during the 1984–1990 time period, these results indicate that mortality improvements at other hospital types outpaced improvements at the high-volume hospitals.

In contrast to the estimated high-volume hospital effects, the incremental benefits associated with initial admission to hospitals with catheterization capabilities by 1984 fell over time for short-term mortality and increased over time for long-term mortality. In 1984, mortality effects were negative only during the acute period after AMI. In 1987, mortality effects were negative but not significant for very short-term outcomes, and essentially disappeared at longer time intervals. In 1990, the short-term mortality benefits were small (only –0.2 percentage points at 30 days) but increased over time, to over 1 percentage point by 2 years.

Table 5. IV estimates of marginal effects of treatment by intensive hospitals

                                                Mortality                                      Hospital costs
Hospital type                            1 day          1 year         2 year          1 year        2 year
1984
  Adopted catheterization before 1984    –0.84 (0.34)   –0.11 (0.58)    0.13 (0.59)    2336 (173)    2397 (191)
  High volume                            –0.83 (0.31)   –1.36 (0.53)   –1.19 (0.54)    1101 (159)    1200 (175)
  Adopted catheterization, 1984–1987      0.37 (0.47)    0.61 (0.79)   –0.22 (0.80)     667 (239)     619 (261)
  Adopted catheterization, 1988–1990     –0.33 (0.39)    0.61 (0.66)    0.38 (0.67)    –522 (200)    –637 (219)
1987
  Adopted catheterization before 1984    –0.48 (0.34)    0.41 (0.58)   –0.13 (0.59)    3110 (201)    3164 (217)
  High volume                            –1.32 (0.30)   –1.61 (0.52)   –1.31 (0.53)     393 (181)     506 (196)
  Adopted catheterization, 1984–1987      0.65 (0.45)   –0.85 (0.77)   –0.80 (0.79)    2230 (268)    2351 (290)
  Adopted catheterization, 1988–1990     –0.13 (0.38)    1.35 (0.66)    1.66 (0.67)     739 (229)     821 (248)
1990
  Adopted catheterization before 1984    –0.00 (0.33)   –0.46 (0.60)   –1.07 (0.61)    2391 (244)    2582 (263)
  High volume                            –1.81 (0.31)   –1.29 (0.52)   –0.82 (0.57)     846 (230)     905 (247)
  Adopted catheterization, 1984–1987     –0.34 (0.44)   –1.08 (0.79)   –0.65 (0.81)    1798 (322)    1875 (347)
  Adopted catheterization, 1988–1990     –0.07 (0.47)   –1.29 (0.67)   –0.82 (0.69)    1324 (420)    1370 (452)

Table reports estimated marginal effects, with standard errors in parentheses.


Additional estimation procedures (not reported here) examined the extent to which the differential trend was concentrated in the most intensive hospitals, those with both procedure capabilities and a high volume of AMI patients. Such interaction effects were never significant, though by 1990 the interaction point estimates were on the order of –0.5 percentage points, suggesting that the relative outcome benefits in 1990 were somewhat greater in the largest catheterization hospitals.

Table 5 also reports incremental effects associated with hospitals that developed the capacity to perform cardiac catheterization between 1984 and 1990. For these adopting hospitals, point estimates of mortality effects in 1984 tended to be slightly less positive than estimates for early-adopting hospitals. In 1987, mortality outcomes for hospitals that adopted catheterization in 1985–1987 were somewhat better than at the early-adopting hospitals, but were statistically insignificant (under 1 percentage point). Hospitals that had not yet adopted catheterization had significantly worse long-term outcomes in this time period. In 1990, compared with early-adopting hospitals, point estimates showed somewhat greater short-term benefits and slightly smaller effect sizes by 2 years. These results are generally consistent with previous studies (16), which found that hospitals adopting catheterization in the late 1980s tended to do so following periods of relatively bad outcomes, and that mortality improvements after adoption tended to arise acutely after AMI (e.g., within 1–3 days).

Trends in incremental costs also differed substantially across the hospital groups. These effects were estimated precisely (standard errors generally under $300). Hospitals adopting catheterization by 1984 were substantially more costly than nonintensive hospitals, by around $2300 to $2500 at 1–2 years, but the difference remained unchanged over time even as average costs grew substantially. Hospitals adopting catheterization between 1984 and 1990 developed substantially higher costs after adoption, suggesting that the adoption of catheterization led to relatively more costly care. For example, in 1984, 1-year hospital costs were only $670 higher at hospitals that would adopt catheterization between 1985 and 1987 compared with nonintensive hospitals; in 1987, after adoption, this difference had increased to $2230. Treatment at high-volume hospitals was associated with somewhat higher costs, around $600 at 1 and 2 years in 1984 and around $900 in 1990, but the incremental differences were considerably smaller than for catheterization hospitals.

Further research using similar methods has examined the contribution of observable dimensions of treatment intensity and expenditures to these incremental mortality and cost differences (see ref. 14 for details). The principal source of the persistent cost differences between catheterization and noncatheterization hospitals appears to be procedure use. For example, hospitals that adopted catheterization early used the procedure much more often than all other hospital types: catheterization rates for patients initially treated at these hospitals were 5.7 percentage points higher than at noncatheterization hospitals in 1984, 11.3 percentage points higher in 1987, and 13.7 percentage points higher in 1990. Further, hospitals adopting catheterization showed the emergence of treatment patterns that rely more heavily on cardiac procedures. In 1984, catheterization rates at these hospitals were the same as catheterization rates at hospitals that did not adopt, but by 1990 patients treated by these hospitals were over 7 percentage points more likely to undergo catheterization than patients admitted to hospitals that did not adopt. For both hospital types, differences in revascularization procedure use were proportional. Differences in cardiac procedure use associated with hospital capabilities have been reported previously (17, 18), but few studies have attempted to account for unobserved differences in patient mix, which are likely correlated with procedure use. Here, the effect estimates are approximately one-third smaller than in simple descriptive comparisons (and also smaller than in comparisons adjusted for observable patient mix characteristics), indicating that part of the large differences in practice patterns is attributable to selection bias or “case mix.”

Even though absolute differences in procedure use increased between catheterization and noncatheterization hospitals, cost differences did not increase proportionally. As Tables 3 and 4 suggested, this relative reduction in cost differences appears to result from a trend toward fewer transfers or readmissions for AMI patients treated at catheterization hospitals. Patients initially treated at noncatheterization hospitals must be readmitted to undergo cardiac procedures; as the use of intensive procedures has risen substantially for all patient groups, these acute readmissions for procedures have increased. Long-term rehospitalization rates with cardiac complications, including recurrent ischemic heart disease symptoms and (to a lesser extent) recurrent AMIs, have fallen by several percentage points at catheterization compared with noncatheterization hospitals. Additionally, use of intensive-care days has increased at high-volume hospitals.

As a result of features of Medicare's hospital payment system, hospital expenditure trends have differed substantially from the cost trends. In particular, expenditure differentials that roughly paralleled the cost differential between catheterization and noncatheterization hospitals in 1984 were almost completely eliminated by 1990. Medicare's diagnosis-related group payments are hospitalization-based, and the trends toward fewer transfers and readmissions for patients initially treated by catheterization hospitals reduced expenditure growth. In addition, the Health Care Financing Administration reduced payments for angioplasty by almost 50% before 1990. In contrast, reimbursement policy changes leading to additional payments for smaller hospitals and for major teaching hospitals augmented expenditures for patients treated at these hospitals.

DISCUSSION

These estimates of the incremental effects of treatment by more intensive hospitals over time provide new evidence on the marginal value of technological progress in health care. Technological change in AMI treatment was dramatic in the 1980s. Was this technological change worthwhile? These results provide little support for the view that the marginal value of technological change is declining. Rather, hospitals that adopted catheterization either before or during the study period experienced mortality improvements relative to other hospitals and have had improving expenditure/benefit ratios. The incremental effect of treatment at a high-volume hospital declined slightly between 1984 and 1990, but remained substantial, at least to 1 year after AMI. These incremental mortality benefits have persisted in the presence of substantial across-the-board improvement in AMI outcomes, particularly after 1987.

The incremental mortality effects of more intensive treatment result in higher costs of care. The cost differences associated with more aggressive procedure use have remained stable over time, and there is some evidence that higher initial costs associated with more procedure use lead to later cost savings in terms of avoided readmissions and complications. Based on the estimated expenditures and benefits, the “best-guess” estimate of a marginal cost to mortality effect ratio for hospitals with catheterization capabilities in 1990 was around $250,000 per additional AMI survivor to 2 years; this ratio has improved substantially since 1984. The cost differences associated with high-volume hospitals also improved somewhat over time. In 1990, an analogous cost/mortality effect ratio for high-volume hospitals was around $110,000 per additional 2-year survivor.


These estimates are similar to estimates obtained using other IV methods, and would probably be substantially higher if other medical costs (e.g., physician and ambulatory medical costs) were also included.
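As a back-of-the-envelope reading of how such ratios follow from the Table 5 point estimates (an illustrative calculation, not the paper's exact one, which is based on estimated expenditures and benefits), the 1990 two-year cost and mortality estimates for early-adopting catheterization hospitals and for high-volume hospitals give, respectively,

```latex
\frac{\$2{,}582}{0.0107} \approx \$241{,}000
\qquad\text{and}\qquad
\frac{\$905}{0.0082} \approx \$110{,}000
```

per additional 2-year survivor, in line with the roughly $250,000 and $110,000 figures quoted above.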

Thus, there is little evidence that the marginal cost-effectiveness of technological change is declining. On the other hand, the cost-effectiveness ratios are rather large, at least based on judgments by many investigators about “appropriate” ratios for guiding medical interventions (19). While the marginal effectiveness of the additional technologies available at the most intensive hospitals appears to be increasing, it may still be low.

The improvements in cost-effectiveness ratios suggest that Medicare policy for hospital reimbursement is having some desirable effects. In particular, the “high-powered” incentives provided by fixed payments per hospitalization may be discouraging the adoption of low-benefit, high-cost technologies. Moreover, the substantial improvements in AMI mortality since 1984 do not support the view that the payment reforms have adversely affected outcomes for elderly AMI patients. However, Medicare hospital reimbursement incentives are not high-powered in at least two important respects (1). First, the provision of intensive procedures—including cardiac procedures—leads to a different payment classification, and consequently substantially higher reimbursement. Thus, the higher costs of providing cardiac procedures during an admission may be largely offset. Second, treatment of a chronic disease using methods that require multiple hospital admissions results in higher payments, compared with treatments provided during a single admission. The changes in the effects of incremental technologies described here suggest that, in fact, these incentives may be affecting the nature of new technological change. In particular, technologies developed by cardiac-procedure hospitals appear to be associated with the provision of more intensive procedures, whereas technologies adopted by high-volume hospitals appear to be increasingly associated with multiple admissions for subsequent care. These differential patterns may be coincidental, but they are suggestive of a potentially important underlying relationship with reimbursement incentives.

I thank Jeffrey Geppert for outstanding research assistance and the National Institute on Aging for financial support, and participants in the National Academy of Sciences Colloquium for helpful comments.

1. McClellan, M. (1996) in Advances in the Economics of Aging, ed. Wise, D. (Univ. of Chicago Press, Chicago).
2. Cutler, D. & McClellan, M. (1995) in Topics in the Economics of Aging, ed. Wise, D. (Univ. of Chicago Press, Chicago), Vol. 6, in press.
3. Newhouse, J.P. (1992) J. Econ. Perspect. 6, 3–22.
4. Weisbrod, B. (1992) J. Econ. Lit. 29, 523–552.
5. McClellan, M. & Newhouse, J.P. (1994) The Marginal Benefits of Medical Technology (National Bureau of Economic Research, Cambridge, MA), NBER Working Paper.
6. McClellan, M., McNeil, B.J. & Newhouse, J.P. (1994) J. Am. Med. Assoc. 272, 859–866.
7. McClellan, M. & Newhouse, J.P. (1996) J. Econometrics, in press.
8. Udvarhelyi, I.S., Gatsonis, C., Epstein, A.M., Pashos, C.L., Newhouse, J.P. & McNeil, B.J. (1992) J. Am. Med. Assoc. 268, 2530–2536.
9. Pashos, C., Newhouse, J.P. & McNeil, B.J. (1993) J. Am. Med. Assoc. 270, 1832–1836.
10. Phibbs, C. & Luft, H. (1995) Med. Care Res. Rev. 52, 532–542.
11. Newhouse, J.P., Carter, G. & Relles, D. (1992) The Causes of Case-Mix Increase: An Update (RAND Corp., Santa Monica, CA).
12. Pashos, C., Normand, S.T., Garfinkle, J.B., Newhouse, J.P., Epstein, A.M. & McNeil, B.J. (1994) J. Am. Coll. Cardiol. 23, 1023–1030.
13. Guadagnoli, E., Hauptman, P.J., Ayanian, J.Z., Pashos, C.L. & McNeil, B.J. (1995) N. Engl. J. Med. 333, 573–578.
14. McClellan, M. (1996) The Returns to Technological Change in Health Care (Stanford Univ., Stanford, CA).
15. McClellan, M. (1995) Am. Econ. Rev. 85, 38–44.
16. Imbens, G. & Angrist, J. (1994) Econometrica 62, 467–476.
17. Blustein, J. (1993) J. Am. Med. Assoc. 270, 344–349.
18. Every, N.R., Larson, E.B., Litwin, P.E., et al. (1993) N. Engl. J. Med. 329, 546–551.
19. Weinstein, M.C. (1995) in Valuing Health Care, ed. Sloan, F.A. (Cambridge Univ. Press, Cambridge, U.K.), p. 95.


This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

Star scientists and institutional transformation: Patterns of invention and innovation in the formation of the biotechnology industry

(geographic agglomeration/human capital/scientific breakthroughs/scientific collaborations/technology transfer)

LYNNE G. ZUCKERa AND MICHAEL R. DARBYb

aDepartment of Sociology and Organizational Research Program, Institute for Social Science Research, University of California, Box 951484, Los Angeles, CA 90095–1484; and bJohn M. Olin Center for Policy, John E. Anderson Graduate School of Management, University of California, Box 951481, Los Angeles, CA 90095–1481

ABSTRACT The most productive (“star”) bioscientists had intellectual human capital of extraordinary scientific and pecuniary value for some 10–15 years after Cohen and Boyer's 1973 founding discovery for biotechnology [Cohen, S., Chang, A., Boyer, H. & Helling, R. (1973) Proc. Natl. Acad. Sci. USA 70, 3240–3244]. This extraordinary value was due to the union of still scarce knowledge of the new research techniques and genius and vision to apply them in novel, valuable ways. As in other sciences, star bioscientists were very protective of their techniques, ideas, and discoveries in the early years of the revolution, tending to collaborate more within their own institution, which slowed diffusion to other scientists. Close, bench-level working ties between stars and firm scientists were needed to accomplish commercialization of the breakthroughs. Where and when star scientists were actively producing publications is a key predictor of where and when commercial firms began to use biotechnology. The extent of collaboration by a firm's scientists with stars is a powerful predictor of its success: for an average firm, 5 articles coauthored by an academic star and the firm's scientists result in about 5 more products in development, 3.5 more products on the market, and 860 more employees. Articles by stars collaborating with or employed by firms have significantly higher rates of citation than other articles by the same or other stars. The U.S. scientific and economic infrastructure has been particularly effective in fostering and commercializing the bioscientific revolution. These results let us see the process by which scientific breakthroughs become economic growth and consider implications for policy.

“Technology transfer is the movement of ideas in people.”
—Donald Kennedy, Stanford University, March 18, 1994

Scientific breakthroughs are created by, embodied in, and applied commercially by particular individuals responding to incentives and working in specific organizations and locations; it is misleading to think of scientific breakthroughs as disembodied information which, once discovered, is transmitted by a contagion-like process in which the identities of the people involved are largely irrelevant. In the case of biotechnology, as new firms were formed and existing firms transformed to utilize the new technology derived from the underlying scientific breakthroughs, the very best scientists were centrally important in affecting both the pace of diffusion of the science and the timing, location, and success of its commercial applications.

We, in work done separately and in collaboration with coauthors (1–6), are investigating the role of these “star” bioscientists (those with more than 40 genetic sequence discoveries or 20 or more articles reporting genetic sequence discoveries by 1990) and their “collaborators” (all coauthors on any of these articles who are not stars themselves) in biotechnology.c The star scientists are extraordinarily productive, accounting for only 0.8% of all the scientists listed in GenBank through 1990 but 17.3% of the published articles—i.e., their productivity was almost 22 times the average GenBank scientist.
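The productivity multiple quoted here is simply the ratio of the two reported shares (a quick check):

```latex
\frac{17.3\%}{0.8\%} \approx 21.6 \ \text{times the average GenBank scientist.}
```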

Our prior research has concentrated on particular aspects of the process of scientific discovery and diffusion and of technology transfer. We draw here two broad conclusions from this body of work: (i) to understand the diffusion and commercialization of the bioscience breakthroughs, it is essential to focus on the scientific elite, the stars, and the forces shaping their behavior, and (ii) the breakthroughs as embodied in the star scientists initially located primarily at universities created a demand for boundary spanning between universities and firms via star scientists moving to firms or collaborating at the bench-science level with scientists at firms. We demonstrate empirically that these ties across university-firm boundaries facilitated both the development of the science and its commercialization, with the result that new industries were formed and existing industries transformed during 1976–1995.

We report below the following major findings from our research. Citations to star scientists increase for those who are more involved in commercialization by patenting and/or collaborating or affiliating with new or preexisting firms (collectively, new biotechnology enterprises or NBEs). As the expected value of research increases, star scientists are more likely to collaborate with scientists from their own organization, and this within-organization collaboration decreases the diffusion of discoveries to other scientists.

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.

Abbreviations: BEA, functional economic area as defined by the U.S. Bureau of Economic Analysis; NBE, new biotechnology enterprise; NBF, new biotechnology firm; NBS, new biotechnology subunit/subsidiary.

cThe September 1990 release of GenBank (release 65.0; machine-readable data base from IntelliGenetics, Palo Alto, CA) constitutes the universe of all genetic-sequence-reporting articles through April 1990, from which we identified 327 stars worldwide, their 4061 genetic-sequence-reporting articles, and their 6082 distinct collaborators on those articles, avoiding the more recent period during which sequencing has become more mechanical and thus not as useful an indicator of scientific activity. We coded the affiliations of each star and collaborator from the front (and back where necessary) pages of all 4061 articles authored by one or more stars to link in our relational data base to information on the employing universities, firms, research institutes, and hospitals.


Incumbent firms are slow to develop ties with the discovering university stars, leading some stars to found new biotechnology firms to commercialize their discoveries. Star bioscientists centrally determined when and where NBEs began to use biotechnology commercially and which NBEs were most successful. Stars that span the university-NBE boundary both contribute significantly to the performance of the NBE and also gain significantly in citations to their own scientific work done in collaboration with NBE scientists. Nations differentially gain or lose stars during the basic science- and industry-building period, indicating the competitive success of different national infrastructures supporting development of both the basic science and its commercial applications.

IDEAS IN PEOPLE

There are great differences in the probability that any particular individual scientist will produce an innovation that offers significant benefits, sufficient possibly to outweigh the costs of implementing it. We know that a wide range of action differs between great scientists—including our stars—and ordinary scientists, from mentoring fewer and brighter students to much higher levels of personal productivity as measured by number of articles published, number of citations to those articles, and number of patents (5, 7, 8).

As shown in Table 1, among the 207 stars who have ever published in the United States, we observe higher average annual citation rates to genetic-sequence-reporting articles, a scientific productivity measure, for stars with greater commercial involvement: most involved are those ever listing a NBE as their affiliation (“affiliated stars”), next are those ever coauthoring with one or more scientists then listing a local NBE as their affiliation (“local linked stars”), and then those listing only such coauthorship with NBE scientists outside their local area (“other linked stars,” who are less likely to be working directly in the lab with the NBE scientists).d We distinguish local from other on the basis of the 183 functional economic areas making up the United States (called BEA areas). In addition, being listed as discoverer on a genetic-sequence patent implies greater commercial involvement. For the U.S. as a whole, stars affiliated with firms and with patented discoveries are cited over 9 times as frequently as their pure academic peers with no patents or commercial ties. The differences in total citations reflect both differences in the quantity of articles and their quality as measured by citation rate, where quality accounts for most of the variation in total citations across these groups of scientists.

Why Intellectual Human Capital? In most economic treatments, the information in a discovery is a public good freely available to those who incur the costs of seeking it out, and thus scientific discoveries have only fleeting value unless formal intellectual-property-rights mechanisms effectively prevent use of the information by unlicensed parties—i.e., absent patents, trade secrets, or actual secrecy—the value of a discovery erodes quickly as the information diffuses.

We have a different view. Scientific discoveries vary in the degree to which others can be excluded from making use of them. Inherent in the discovery itself is the degree of “natural excludability”: if the techniques for replication involve much tacit knowledge and complexity and are not widely known prior to the discovery—as with the 1973 Cohen-Boyer discovery (9)—then any scientist wishing to build on the new knowledge must first acquire hands-on experience. High-value discoveries with such a high degree of natural excludability, so that the knowledge must be viewed as embodied in particular scientists' “intellectual human capital,” will yield supranormal labor income for scientists who embody the knowledge until the discovery has sufficiently diffused to eliminate the quasi-rents in excess of the normal returns on the cost of acquiring the knowledge as a routine part of a scientist's human capital.e

Table 1. U.S. stars' average annual citations by commercial ties and patenting

                          Stars by gene-sequence patents
Type of star            None       Some patents     All stars
NBE affiliated*         153.2          549.2          323.0
Local linked†           130.3          289.7          159.3
Other linked‡           100.1          176.8          109.4
Never tied to NBE§       59.9          230.0           72.2
All stars                77.3          310.9          104.4

The values are the total number of citations in the Science Citation Index for the 3 years 1982, 1987, and 1992 for all genetic-sequence discovery articles (up to April 1990) in GenBank (release 65.0, Sept. 1990) authored or coauthored by each of the stars in the cell divided by 3 (years) times the number of stars in the cell.
*All stars ever affiliated with a U.S. NBE.
†Any other star ever coauthoring with scientists from a NBE in the same BEA area (functional economic area as defined by the U.S. Bureau of Economic Analysis).
‡Any other star ever coauthoring with scientists from a NBE outside the BEA area.
§All remaining stars who ever published in the United States.
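A minimal sketch of the cell statistic defined in this note, assuming a flat list of per-article citation records with hypothetical field names (not the authors' relational data base):

```python
from collections import defaultdict

def average_annual_citations(records, census_years=(1982, 1987, 1992)):
    """records: iterable of dicts with keys 'cell' (e.g. 'NBE affiliated, some patents'),
    'star_id', 'census_year', and 'citations', one per star/article/census-year.

    Returns, for each cell, total citations over the census years divided by
    (number of census years) x (number of distinct stars in the cell)."""
    citation_totals = defaultdict(float)
    stars_in_cell = defaultdict(set)
    for rec in records:
        stars_in_cell[rec["cell"]].add(rec["star_id"])
        if rec["census_year"] in census_years:
            citation_totals[rec["cell"]] += rec["citations"]
    return {cell: citation_totals[cell] / (len(census_years) * len(stars_in_cell[cell]))
            for cell in stars_in_cell}
```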

Thus, we argue that the geographic distribution of a new science-based industry can importantly derive from the geographic distribution of the intellectual human capital embodying the breakthrough discovery upon which it is based. This occurs when the discovery—especially an “invention of a method of discovery” (10)—is sufficiently costly to transfer due to its complexity or tacitness (11–15) so that the information can effectively be used only by employing those scientists in whom it is embodied.

Scientific Collaborations. Except for initial discoverers, the techniques of recombinant DNA were generally learned by working in laboratories where they were used, and thus diffusion proceeded slowly, with only about a quarter of the 207 U.S. stars and less than an eighth of the 4004 U.S. collaborators in our sample ever publishing any genetic-sequence discoveries by the end of 1979. In a variety of other disciplines, scientists use institutional structure and organizational boundaries to generate sufficient trust among participants in a collaboration to permit sharing of ideas, models, data, and material of substantial scientific and/or commercial value with the expectation that any use by others will be fairly acknowledged and compensated to the contributing scientists (16).

Zucker et al. (1) relate the collaboration network structure in biotechnology to the value of the information in the underlying research project: the more valuable the information, the more likely the collaboration is confined to a single organization. As expected, diffusion slows as the share of within-organization collaborations increases, so organizational boundaries do operate to protect valuable information effectively. In work underway, we get similar results in Japan: the value of the information being produced increases the probability that collaborators come from the same organization.

dRelated results, reported under “Star Scientist Success and Ties to NBEs” below, demonstrate that these differences reflect primarily increased quality of work (measured by citations per article) while the star is affiliated or linked to a NBE.

eIn the limit, where the discovery can be easily incorporated into the human capital of any competent scientist, the discoverer(s) cannot earn any personal returns—as opposed to returns to intellectual property such as patents or trade secrets. In the case of biotechnology, it may be empirically difficult to separate intellectual capital from the conceptually distinct value of cell cultures created and controlled by a scientist who used his or her nonpublic information to create the cell culture.


Table 2. Articles by affiliated or linked stars

                                        Article counts of stars
Type by period        NBEs, No.   Affiliated*   Local linked†   Other linked‡   Foreign linked§
1976–1980
  NBFs                     1           9              0               0                0
  Major Pharm. NBSs        0           0              0               0                0
  Other NBSs               0           0              0               0                0
  Total, all NBEs          1           9              0               0                0
1981–1985
  NBFs                    13          97             20              12               10
  Major Pharm. NBSs        4           0              2               7                1
  Other NBSs               0           0              0               0                0
  Total, all NBEs         17          97             22              19               11
1986–1990
  NBFs                    19          68             16              30                6
  Major Pharm. NBSs        8           8              3               9                4
  Other NBSs               3           0              2               2                0
  Total, all NBEs         30          76             21              41               10
1976–1990
  NBFs                    22         174             36              42               16
  Major Pharm. NBSs        9           8              5              16                5
  Other NBSs               3           0              2               2                0
  Total, all NBEs         34         182             43              60               21

Pharm., pharmaceutical.
*Count of articles published by each star affiliated with a U.S. NBE of the indicated type during the period.
†Count of articles published by each U.S. star linked to a NBE in the same BEA, by type and period.
‡Count of articles published by each U.S. star linked to a NBE in a different BEA, by type and period.
§Count of articles published by each foreign star linked to a U.S. NBE, by type and period.

Boundary Spanning Between Universities and NBEs. This work on collaboration structure indicates the importance of organizational boundaries in serving as “information envelopes” that can effectively limit diffusion of new discoveries, thereby protecting them. It follows that when information transfer between organizations is desired, boundary-spanning mechanisms are vital, creating a demand for social structure that produces ties between scientists across these boundaries. In biotechnology, early major discoveries were made by star scientists in universities but commercialized in NBEs, so the university-firm boundary was the crucial one. It is “people transfer,” not technology transfer, that is measured as star scientists become affiliated with or linked to NBEs. Working together on scientific problems seems to provide the best “information highway” between discovering scientists and other researchers.

New institutions and organizations, or major changes in existing ones, that facilitate the information flow of basic science to industry are positive assets, but also require considerable redirection of human time and energy, and therefore incur real costs (1, 17); some also require redirection of substantial amounts of financial capital. Therefore, for social construction to occur, the degree to which these structures facilitate bioscience and its commercialization must outweigh the costs.

If the endowed supply of institutions and organizations has not already formed strong ties between universities or research institutes and potential NBEs, or at least made these ties very easy to create, then demand for change in existing structures and/or formation of new institutions and organizations to facilitate these ties is expected.f How much structure is changed, and how much is created, will depend on the relative costs and benefits of transformation/formation.

In the United States the costs relative to the benefits of transforming existing firms appear to be higher than those incurred in forming new firms: over 1976–1990, 74% of the enterprises beginning to apply biotechnology were ad hoc creations, so-called new biotechnology firms (NBFs), compared with 26% representing some transformation of the technical identity of existing firms (new biotechnology subunits or NBSs). As Table 2 shows, ties of star scientists to NBSs have emerged slowly in response to the demands for strong ties between universities or research institutes and firms, accounting for under 7% of the articles produced by affiliated or linked stars through 1985 and increasing to only about 13% in the 1986–1990 time period.g The resistance of preexisting firms to transformation is understated even by these disproportionately low rates, since the NBSs generally have many more employees than NBFs and since the majority of incumbent firms in the pharmaceutical and other affected industries had not yet begun to use biotechnology by 1990 and so are not included in our NBS count.

At the same time, many of the NBFs were literally “born” with strong ties to academic star scientists, who were often among their founders. Through 1990, the generally much smaller and less well-capitalized NBFs produced more research articles with affiliated or linked stars than the NBSs.

COMMERCIALIZATION OF BIOSCIENCE

NBE Entry. The implications of our line of argument are far reaching. An indicator of the demand for forming or transforming NBEs to facilitate commercialization is the number of star scientists in a local area.

fNot every social system, however, is flexible enough to rise to that demand. In work underway, we examine these processes comparatively across countries to explore both the demand and the aspects of the existing social structure that make realizing that demand difficult. In some countries, the social structure is just too costly to change, and great entrepreneurial opportunities are lost given the excellence of the bioscience.

gThese low shares of total ties to NBEs are, if anything, overestimates, since we have expanded our definition of linked in Table 2 to include “foreign linked stars” whose only ties to NBEs are to firms outside their own country. NBSs have a higher share of links to these stars, whose degree of connection to the firm is likely to be lower on average than local or other linked stars located in the same country as the NBE.


Absent such demand measures, the local and national economic infrastructure provides a good basis for prediction, but when stars (and other demand-related indicators) are taken into account, most effects of the economic infrastructure disappear (4).

FIG. 1. Ever-active stars and new biotechnology enterprises as of 1990.

Our empirical analysis of NBE entry is based on panel data covering the years 1976–1989 for each of the 183 BEA areas. Key measures of local demand for the birth of NBEs are the numbers of stars and collaborators active in a given BEA in a given year. We define a scientist as active where and when our star-article data base shows him or her to have listed affiliation in the BEA on three or more articles published in that or the 2 prior years. This is a substantial screen: only 135 of the 207 U.S.-publishing stars were ever active in the United States, and only 12.5% (500 of 4004) of U.S.-publishing collaborators were ever active there.
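The activity screen can be written down directly; the sketch below assumes a simple mapping from (scientist, BEA, year) to article counts, with hypothetical names rather than the authors' relational data base:

```python
def active_flags(article_counts, years, window=3, threshold=3):
    """article_counts: dict mapping (scientist, bea, year) -> number of
    genetic-sequence-reporting articles listing that BEA affiliation that year.
    years: calendar years to evaluate, e.g. range(1976, 1990).

    A scientist is flagged as active in (bea, year) when the articles listing
    that BEA affiliation in that year and the 2 prior years total 3 or more."""
    pairs = {(sci, bea) for (sci, bea, _year) in article_counts}
    active = set()
    for sci, bea in pairs:
        for year in years:
            recent = sum(article_counts.get((sci, bea, y), 0)
                         for y in range(year - window + 1, year + 1))
            if recent >= threshold:
                active.add((sci, bea, year))
    return active
```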

To graphically summarize our main results, we plot both ever-active star scientists and NBEs on a map of the United States cumulatively through 1990 (Fig. 1). We can see that the location of stars remained relatively concentrated geographically even when considering all those born in the whole period, and that NBEs tended to cluster in the areas with stars. The geographic concentration and correlation of both stars and NBEs is even greater for those entering by 1980.

With this very simple analysis, we can see the strong relationship between the location of ever-active stars and NBEs. These relationships received a more rigorous test in multivariate panel Poisson regressions for the 183 BEAs over the years 1976–1989, as reported in ref. 4: even after adding other measures of intellectual capital, such as the presence of top-quality universities and the number of bioscientists supported by federal grants, and economic variables such as average wages, stars continued to have a strong, separate, significant effect in determining when and where NBEs were born. The number of collaborators in a BEA did not have a significant effect until after 1985, when the formative years of the industry were mostly over and labor availability became more important than the availability of stars.

In these same regressions we also found evidence of significant positive effects from the other intellectual human capital variables, which serve as proxy measures for the number of other significant scientists working in areas used by NBEs that do not result in many, if any, reported genetic-sequence discoveries. Adding variables describing the local and national economic conditions improved the explanatory power of the intellectual capital variables relatively little (as judged by the logarithm of the likelihood function).
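A minimal sketch of this kind of BEA-by-year Poisson count regression, using statsmodels' generalized linear model with a Poisson family on simulated data; the column names, coefficients, and data are hypothetical stand-ins, not the study's variables, specification, or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 183 * 14  # 183 BEA areas x the years 1976-1989

# Hypothetical BEA-year panel of intellectual-capital and economic covariates.
panel = pd.DataFrame({
    "active_stars": rng.poisson(0.5, n),
    "active_collaborators": rng.poisson(2.0, n),
    "top_quality_universities": rng.integers(0, 3, n),
    "federally_funded_bioscientists": rng.poisson(20, n),
    "average_wage": rng.normal(20.0, 3.0, n),
})

# Simulated NBE-birth counts, loosely increasing in the local star count.
panel["nbe_births"] = rng.poisson(
    np.exp(-2.0 + 0.6 * panel["active_stars"].to_numpy()))

exog = sm.add_constant(panel[["active_stars", "active_collaborators",
                              "top_quality_universities",
                              "federally_funded_bioscientists", "average_wage"]])
poisson_fit = sm.GLM(panel["nbe_births"], exog, family=sm.families.Poisson()).fit()
print(poisson_fit.summary())
```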

In summary, prior work has found that intellectual human capital, and particularly where and when star scientists are publishing, is a key determinant of the pattern over time and space of the commercial adoption of biotechnology.

NBE Success and Ties to Star Scientists. The practical importance for successful commercialization of an intellectual human capital bridge between universities and firms is confirmed in a cross-section of 76 California NBEs (5). Local linked (and sometimes affiliated) stars have significant positive effects on three important measures of NBE success:h products in development, products on the market, and employment growth. That is, the NBEs most likely to form the nucleus of a new industry are those that have the strongest collaborative links with star scientists. We will see below that these NBE-star ties also dramatically improve the scientists' productivity. This remarkable synergy, along with the intrinsic and financial incentives it implies, aligns incentives across basic science and its commercialization in a manner not previously identified.

hFunding availability for coding products data and survey collection of additional employment data limited us to California for this analysis.



FIG. 2. California stars and the number of products in development at new biotechnology enterprises in 1990.

Consider first the number of products in development, coded from Bioscan 1989. The main effects uncovered in a rigorous regression analysis are summarized graphically by the map in Fig. 2, which shows both the location of star scientists and the location of enterprises that are using biotechnology methods. Note that we limited this initial work to California because of the intensive data collection required. California saw early entry into both the science and industry of biotechnology, possesses a number of distinct locales where bioscience or both the science and industry have developed, and is generally broadly representative of the U.S. biotechnology industry.i Large dots in circles indicate NBE-affiliated or NBE-linked stars, while large dots alone indicate stars located in that area but not affiliated or linked with a local firm. We indicate the location of firms by either scaled triangles, representing NBEs with no linked or affiliated stars, or by scaled diamonds, representing NBEs with linked and/or affiliated stars. The size of the triangle or diamond indicates the number of products in development; small dots represent NBEs with no products in development. While there is a small diamond and there are a few large triangles, it is clear that generally NBEs with linked and/or affiliated stars are much more likely to have many products in development.

Over all three measures of NBE success analyzed (5), there is a strong positive coefficient estimated on the number of articles written by firm scientists collaborating with local linked stars. For an average NBE, two articles coauthored by an academic star and a NBE's scientists result in about 1 more product in development, 1 more product on the market, and 344 more employees; for five articles these numbers are 5, 3.5, and 860, respectively.j We note two qualifications to these strong findings: (i) what matters is not the articles themselves but the underlying collaborations whose extent the number of articles indicates; and (ii) correlation cannot prove causation, but we do have some evidence that the main direction of causation runs from star scientists to the success of firms and not the reverse.k
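A quick arithmetic check on these magnitudes (reading the reported numbers at face value): the employment effect is essentially linear in the number of linked articles, while the product counts grow faster than proportionally, consistent with the exponential Poisson mean function noted in the footnotes below.

```latex
\frac{344}{2} = \frac{860}{5} = 172 \ \text{employees per linked article};
\qquad
\text{products in development: } +1 \text{ at 2 articles vs. } +5 \text{ at 5 articles.}
```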

iIn our full 110-NBE California sample, there are 87 NBFs and 22 NBSs (with one joint venture unclassified), a ratio that is only slightly higher than the national average. Missing data for 34 firms reduced the number of observations available for the regressions to 76.

jIn Poisson regressions, the expected number of products in development and products on the market are both exponentially increasing in the number of such linked articles; in linear regressions there are about 172 more employees per linked article. We expected the linking relationship to be especially important because of its potential for increasing information flow about important scientific discoveries made in the university into the NBE. Being part of an external “network for evaluation,” these academic stars are likely to be able to provide more objective advice concerning scientific direction, including which products should “die” before testing and marketing and which merit further investment by the firm, even given their often significant financial interest in the firm (18). Even so, we found the magnitude of the effects surprising.

kWe believe, primarily on the basis of fieldwork, that very often tied stars were deeply involved in the formation of the NBEs to which they were tied. Moreover, we are beginning to examine some quantitative evidence which confirms our belief about the direction of causation. For star scientists whose publications began by the year of the tied firm's birth, there is only an average lag of 3.02 years between the birth of the firm and the scientist's first tied publication, which is far shorter than the time required for any successful recombinant DNA product to be approved for marketing (on the order of a decade). We would interpret most of the average lag in terms of time to set up a new lab, apply for patents on any discoveries, and then get into print, with some allowance needed for trailing agreements with prior or simultaneous employers. For star scientists who start publishing after the firm was born, the average lag between their first publication and their first tied publication is only 2.14 years. This is too short a career for the scientists to be hired for any possible halo effect. Indeed, we think many of these scientists became stars only because of the very substantial productivity effects of working with NBEs. In summary, the evidence on timing is that these relationships typically start before either the firm has any substantial track record or the stars do.


Table 3. National stars: Commercial ties and migration

                                                          Migration rate
Countries           Share of stars*   Fraction tied†    Gross‡      Net§
United States            50.2              33.3           22.2        2.9
Japan                    12.6              21.1           40.4        9.6
United Kingdom            7.5               9.7           58.1      –32.3
France                    6.1               0.0           20.0        4.0
Germany                   5.8               0.0           50.0        8.3
Switzerland               3.6              20.0           93.3      –40.0
Australia                 3.4               7.1           35.7        7.1
Canada                    2.4               0.0           50.0      –30.0
Belgium                   1.7              14.2           42.9       14.3
Netherlands               1.2              20.0           80.0        0.0
Total for top 10         94.7              14.9           35.4       –0.8

*Percent of total stars ever publishing in any country; some double-counting of multiple-country stars; rest of world: Denmark, Finland, Israel, Italy, Sweden, and the U.S.S.R.
†Percent of stars ever publishing who were affiliated with or linked to a NBE in the country.
‡Rate = 100 × [(immigration + emigration of stars)/stars ever publishing in country].
§Rate = 100 × [(immigration – emigration of stars)/stars ever publishing in country].
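Since the gross rate is (immigration + emigration)/stars and the net rate is (immigration − emigration)/stars, each times 100, the two can be inverted to recover implied in- and out-migration rates per 100 stars ever publishing in the country. For the United Kingdom, for example (an illustrative reading of Table 3, not a figure reported by the authors):

```latex
\text{in} = \frac{58.1 + (-32.3)}{2} = 12.9,
\qquad
\text{out} = \frac{58.1 - (-32.3)}{2} = 45.2
\quad\text{per 100 U.K.-publishing stars.}
```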

Star Scientist Success and Ties to NBEs. We have seen how ties to stars predict more products in development and on the market, as well as more employment growth. Just as ties predict NBE success, they also predict a higher level of scientific success as measured by citations. Recall the strong covariation in Table 1 between total citations and the degree to which stars are involved in commercialization and patenting. It can be explained in three, possibly complementary ways. (i) The stars who are more commercially involved really are better scientists than those who are not involved, either because they are more likely to see and pursue commercial applications of their scientific discoveries or because they are the ones most sought out by NBEs or venture capitalists for collaboration on commercial applications (quality-based selection). (ii) For this elite group there is really no significant variation across stars in the expected citations to an article, but NBEs and venture capitalists make enormous offers to the ones lucky enough to have already made one or more highly cited discoveries (luck-based selection). (iii) NBEs provide more financial and other resources to scientists who are actively working for or in collaboration with the firm, making it possible for them to make more progress (resource/productivity).

Because we have the star scientists' full publishing histories for articles reporting genetic-sequence discoveries (up to April 1990), we can competitively test these three explanations of the higher citation rates observed for stars who are more involved in commercialization by looking at the total citations received by each of these articles for 1982, 1987, and 1992 (mean = 14.52 for the world and 16.64 for the United States). Generally, we find consistent support for the third hypothesis listed above: NBEs actually increase the quality of the stars' scientific work, so that their publications written at or in collaboration with a NBE would be more highly cited than those written either before or afterwards. The presence of one or more affiliated stars about doubles the expected citations received by an article. The same hypothesis is supported for (local-, other-, and foreign-NBE) linked stars in the full sample, but the relevant coefficient, though positive, is not significant in the U.S.-only sample. In addition, highly cited academic scientists are selected by NBEs for collaborations in the full sample, but this does not hold up in the U.S. sample. Otherwise, tests of higher citation rates before or after working with NBEs consistently rejected the selection hypotheses. Overall, the resource/productivity hypothesis is maintained: star scientists obtain more resources from NBEs and do work that is more highly cited while working for or with a NBE.

International Competitiveness and Movement of Stars. Our syllogism argues that star scientists embodying the breakthrough technology are the “gold deposits” around which new firms are created or existing firms transformed for an economically significant period of time, that firms which work with stars are likely to be more successful than other firms, and that—although access to stars is less essential when the new techniques have diffused widely—once the technology has been commercialized in specific locales, internal dynamics of agglomeration (19–22) tend to keep it there. The conclusion is that star scientists play a key role in regional and national economic growth for advanced economies, at least for those science-based technologies where knowledge is tacit and requires hands-on experience.

Given the widespread concern about growth and “international competitiveness,” we present in Table 3 comparative data for the top 10 countries in biotechnology on the distribution, commercial involvement, and migration of star scientists. Based on country-by-country counts of stars who have ever published there, the United States has just over half of the world’s stars. Our nearest competitor, Japan, has only one fourth as many. Collectively, the North American Free Trade Area has 55.7%, the European Community and Switzerland 27.4%, and Japan and Australia 16.9% of the stars operating in the top 10 countries.

Looking at the fraction of stars who are ever affiliated with or linked to a NBE in their country, we see that the United States, particularly, as well as Japan, Switzerland, Netherlands, and Belgium, all appear to have substantial star involvement in commercialization, with more limited involvement in the United Kingdom and Australia. Surprisingly, at least up to 1990 when our data base currently ends, we find no evidence of these kinds of “working” commercial involvement by stars in France, Germany, or Canada.l Both the large number of the best biotech scientists working in the United States and their substantial involvement in its commercialization appear to interact in explaining the U.S. lead in commercial biotechnology. These preliminary findings lend some support to the hypothesis that boundary-spanning scientific movement and/or collaboration is an essential factor both in the demand for forming or transforming NBEs and in determining their differential success. In work underway, we are modeling empirically the underlying mechanisms which explain each of these proximate determinants.

lWe are extending our data base to 1994 to trace changes in this pattern of involvement in response to certain recent institutional and policy changes, particularly with respect to Japanese universities and research funding and removal of German regulatory restrictions on biotechnology.

Migration is a particularly persuasive indicator of the overall environment—scientific and commercial—faced by these elite bioscientists.

Moving across national boundaries involves substantial costs so that differences in infrastructure must be correspondingly large. The United States, with a strong comparative advantage in the higher education industry as well as many of the key discoveries, is the primary producer of star scientists in the world. Despite the significant outflow of outstanding young scientists who first publish in the United States before returning home, America has managed to attract enough established stars to achieve a small net in-migration.m The major losers of key talent have been Switzerland, the United Kingdom, and Canada. Field work has indicated that Swiss cantons have enacted local restrictions inhospitable to biotechnology and that the United Kingdom has systematically reduced university support (23) and deterred other entrepreneurial activity by subsidy to favored NBEs. The Canadian losses presumably reflect the ease of mobility to the particularly attractive U.S. market.

CONCLUSIONS

Generalizability. We have seen for biotechnology that a large number of new firms have been created and preexisting businesses transformed to commercialize revolutionary breakthroughs in basic science.n Economic and wage growth in the major research economies are dependent upon continuing advances in technology, with the economies’ comparative advantages particularly associated with the ability of highly skilled labor forces to implement new breakthrough technologies in a pattern of continuous renewal (19, 24–27). Based on extended discussions with those familiar with other technologies and some fragmentary evidence in the literature, it seems likely that many of our central findings do generalize to other cases of major scientific breakthroughs which lead to important commercial applications.

First note that technological opportunity and appropriability—the principal factors that drive technical progress for industries (28, 29)—are also the two necessary elements that created extraordinary value for our stars’ intellectual human capital during the first decade of biotechnology’s commercialization. While relatively few mature industries are driven by technological opportunity in the form of basic scientific breakthroughs, the emergence phase of important industries frequently is so driven.

For example, there are broadly similar patterns of interfirm relationships for large and small enterprises within and across national boundaries for semiconductors and biotechnology, although there is some corroborating evidence that embodiment of technology in individual scientists is even more important for semiconductors than for biotechnology. Levin (30) notes that [as with recombinant DNA products] integrated circuits were initially nearly impossible to patent. More generally, Balkin and Gomez-Mejia (31) report on the distinctive emphasis on incentive pay and equity participation for technical employees in (largely nonbiotech) high-tech firms, especially for the “few key individuals in research and development…viewed as essential to the company….” Success in high technology, especially in the formative years, comes down, we believe, to the motivated services of a small number of extraordinary scientists with vision and mastery of the breakthrough technology.

Growing Stars and Enterprises. We have seen for biotechnology—and possibly other science-driven breakthrough technologies—that the very best scientists play a key role in the formation of new and transformation of existing industries, profiting scientifically as well as financially. We see across countries that there is very substantial variation in the fraction of star scientists involved in commercialization, bringing discoveries initially from the universities to the firms via moving or working with NBE scientists. Clearly, there are very substantial implications for economic growth and development involved in whether a nation’s scientific infrastructure leads to the emergence of numerous stars and is conducive to their involvement in the commercialization of their discoveries.o

Commercialization is more a traffic rotary than a two-way street: More commercialization yields greater short-run growth, but this may be offset in the future if the development of basic science is adversely affected. Commercial involvement of the very best scientists provides them greatly increased resources and is associated with increased scientific productivity as measured by citations. However, it may lead them to pursue more commercially valuable questions, passing up questions of greater importance to the development of science. On the other hand, the applied questions of technology have often driven science to examine long-neglected puzzles which lead to important advances and indeed important new subdisciplines such as thermodynamics and solid-state physics.

We are confident that the commercial imperative will continue to play an important role in both private and public decision making. We believe that it is essential, therefore, that we develop a better understanding of what policies, laws, and institutions account for the wide variety of international experience with the science and commercial application of biotechnology, and their implications, for better or worse, for future scientific advancement.

Both field and quantitative work have taught us that technology transfer is about people, but not just “ideas in people.” The “people transfer” that appears to drive commercialization is importantly altered by the incentives available and by the entrepreneurial spirit that seeks “work arounds” in the face of impediments. A star scientist who can sponsor a rugby team at Kyoto University seems capable of achieving anything, but we also see that different rules, laws, resources, and customs have led to wide national differences in success in biotechnology. We need deeper empirical understanding of these institutional determinants of personal and national achievement in a variety of sciences and technologies to retain what is valuable and replace what is not. The most important lessons are to be drawn not for analysis of past breakthroughs which have formed or transformed industries, but for those yet to come in sciences we can only guess.

This article builds on an ongoing project in which Marilynn B. Brewer (at the University of California, Los Angeles, and currently at Ohio State University) also long played a leading role. Jeff Armstrong was responsible for the analysis of firm success and Maximo Torero for the analysis of mobility of top scientists. We acknowledge very useful comments from our discussant Josh Lerner and other participants in the 1995 National Academy of Sciences Colloquium on Science, Technology, and the Economy.

mThe low gross (in plus out) migration rate reflects the large size of the U.S. market, so that there is much interregional but intranational migration with regional effects implicit in the analysis of birth of U.S. NBEs above.

nSee, in particular, ref. 6 for a detailed case study of the transformation of the technical identity of one of the largest U.S. pharmaceutical firms to the point that firm scientists and executives believe that it is indistinguishable in drug discovery from the best large dedicated new biotech firms. A similar pattern of transformation appears to have been followed by nearly half of the large pharmaceutical firms. The remainder appear to be either gradually dropping out of drug discovery or merging with large dedicated new biotech firms to acquire the technical capacity required to compete.

oThe economic infrastructure, including the flexibility of incumbent industries and the availability of start-up capital, is also likely to be significant in comparisons of international differences in commercialization of scientific breakthroughs.

We are indebted to a remarkably talented team of postdoctoral fellows Zhong Deng, Julia Liebeskind, and Yusheng Peng and research assistants Paul J. Alapat, Jeff Armstrong, Cherie Barba, Lynda J. Kim, Kerry Knight, Edmundo Murrugara, Amalya Oliver, Alan Paul, Jane Ren, Erika Rick, Benedikt Stefansson, Akio Tagawa, Maximo Torero, Alan Wang, and Mavis Wu. This paper is a part of the National Bureau of Economic Research’s research program in Productivity. This research has been supported by grants from the Alfred P. Sloan Foundation through the National Bureau of Economic Research Research Program on Industrial Technology and Productivity, the National Science Foundation (SES 9012925), the University of California Systemwide Biotechnology Research and Education Program, and the University of California’s Pacific Rim Research Program.

1. Zucker, L.G., Darby, M.R., Brewer, M.B. & Peng, Y. (1996) in Trust in Organizations, eds. Kramer, R.M. & Tyler, T. (Sage, Newbury Park, CA), pp. 90–113.
2. Liebeskind, J.P., Oliver, A.L., Zucker, L.G. & Brewer, M.B. (1996) Organ. Sci. 7, 428–443.
3. Tolbert, P.S. & Zucker, L.G. (1996) in Handbook of Organization Studies, eds. Clegg, S.R., Hardy, C. & Nord, W.R. (Sage, London), pp. 175–190.
4. Zucker, L.G., Darby, M.R. & Brewer, M.B. (1994) Working Paper (National Bureau of Economic Research, Cambridge, MA), No. 4653.
5. Zucker, L.G., Darby, M.R. & Armstrong, J. (1994) Working Paper (National Bureau of Economic Research, Cambridge, MA), No. 4946.
6. Zucker, L.G. & Darby, M.R. (1995) Working Paper (National Bureau of Economic Research, Cambridge, MA), No. 5243.
7. Zuckerman, H. (1967) Am. Sociol. Rev. 32, 391–403.
8. Zuckerman, H. (1977) Scientific Elite: Nobel Laureates in the United States (Free Press, New York).
9. Cohen, S., Chang, A., Boyer, H. & Helling, R. (1973) Proc. Natl. Acad. Sci. USA 70, 3240–3244.
10. Griliches, Z. (1957) Econometrica 25, 501–522.
11. Nelson, R.R. (1959) J. Polit. Econ. 67, 297–306.
12. Arrow, K.J. (1962) in The Rate and Direction of Inventive Activity: Economic and Social Factors, N.B.E.R. Special Conference Series, ed. Nelson, R.R. (Princeton Univ. Press, Princeton), Vol. 13, pp. 609–625.
13. Arrow, K.J. (1974) The Limits of Organization (Norton, New York).
14. Nelson, R.R. & Winter, S.G. (1982) An Evolutionary Theory of Economic Change (Harvard Univ. Press, Cambridge, MA).
15. Rosenberg, N. (1982) Inside the Black Box: Technology and Economics (Cambridge Univ. Press, Cambridge, U.K.).
16. Zucker, L.G. & Darby, M.R. (1995) in AIP Study of Multi-Institutional Collaborations Phase II: Space Science and Geophysics, Report No. 2: Documenting Collaborations in Space Science and Geophysics, eds. Warnow-Blewett, J., Capitos, A.J., Genuth, J. & Weart, S.R. (American Institute of Physics, College Park, MD), pp. 149–178.
17. Zucker, L.G. & Kreft, I.G.G. (1994) in Evolutionary Dynamics of Organizations, eds. Baum, J.A.C. & Singh, J.V. (Oxford Univ. Press, Oxford), pp. 194–313.
18. Zucker, L.G. (1991) Res. Sociol. Organ. 8, 157–189.
19. Grossman, G.M. & Helpman, E. (1991) Innovation and Growth in the Global Economy (MIT Press, Cambridge, MA).
20. Marshall, A. (1920) Principles of Economics (Macmillan, London), 8th Ed.
21. Audretsch, D.B. & Feldman, M.P. (1993) The Location of Economic Activity: New Theories and Evidence, Centre for Economic Policy Research Conference Proceedings (Consorcio de la Zona Franca di Vigo, Vigo, Spain), pp. 235–279.
22. Head, K., Ries, J. & Swenson, D. (1994) Working Paper (National Bureau of Economic Research, Cambridge, MA), No. 4767.
23. Henkel, M. & Kagan, M. (1993) in The Research Foundations of Graduate Education: Germany, Britain, France, United States, and Japan, ed. Clark, B.R. (Univ. of California Press, Berkeley), pp. 71–114.
24. Romer, P.M. (1986) J. Polit. Econ. 94, 1002–1037.
25. Romer, P.M. (1990) J. Polit. Econ. 98, Suppl., S71–S102.
26. Grossman, G.M. & Helpman, E. (1994) J. Econ. Perspect. 8, 23–44.
27. Jones, C.I. (1995) J. Polit. Econ. 103, 759–784.
28. Nelson, R.R. & Wolff, E.N. (1992) Reports (New York Univ., New York), No. 92–27.
29. Klevorick, A.K., Levin, R.C., Nelson, R.R. & Winter, S.G. (1995) Res. Policy 24, 185–205.
30. Levin, R.C. (1982) in Government and Technological Progress: A Cross-Industry Analysis, ed. Nelson, R.R. (Pergamon, New York), pp. 9–100.
31. Balkin, D.B. & Gomez-Mejia, L.R. (1985) Pers. Admin., 111–123.

This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

Evaluating the federal role in financing health-related research

ALAN M. GARBER†‡§ AND PAUL M. ROMER§¶||

†Veterans Affairs Palo Alto Health Care System, Palo Alto, CA 94304; ‡Stanford University School of Medicine, §National Bureau of Economic Research, and ¶Graduate School of Business, Stanford University, Stanford, CA 94305; and ||Canadian Institute for Advanced Research, Toronto, ON, Canada MST 1X4

ABSTRACT This paper considers the appropriate role for government in the support of scientific and technological progress in health care; the information the federal government needs to make well-informed decisions about its role; and the ways that federal policy toward research and development should respond to scientific advances, technology trends, and changes in the political and social environment. The principal justification for government support of research rests upon economic characteristics that lead private markets to provide inappropriate levels of research support or to supply inappropriate quantities of the products that result from research. The federal government has two basic tools for dealing with these problems: direct subsidies for research and strengthened property rights that can increase the revenues that companies receive for the products that result from research. In the coming years, the delivery system for health care will continue to undergo dramatic changes, new research opportunities will emerge at a rapid pace, and the pressure to limit discretionary federal spending will intensify. These forces make it increasingly important to improve the measurement of the costs and benefits of research and to recognize the tradeoffs among alternative policies for promoting innovation in health care.

In this paper, we address three general questions. What role should the federal government play in supporting scientific and technological progress in health care? What information should the federal government collect to make well-informed decisions about its role? How should federal policy toward research and development respond to scientific advances, technology trends, and changes in the political and social environments?

To address these questions, we adopt a societal perspective, considering the costs and benefits of research funding to American society as a whole. Both in government and in the private sector, narrower perspectives usually predominate. For example, a federal agency may consider only the direct costs that it bears. A device manufacturer may weigh only the direct costs and benefits for the firm. Both organizations will thereby ignore costs and benefits that accrue to members of the public. The societal perspective takes account of all costs and benefits. Although alternative perspectives are appropriate in some circumstances, the comprehensiveness of the societal perspective makes it the usual point of departure for discussions of government policy. Much of our discussion focuses on decisions that are made by the National Institutes of Health (NIH), the largest federal agency devoted to biomedical research, but our comments also apply to other federal agencies sponsoring scientific research.

The approach we adopt is that of neoclassical, “Paretian” welfare economics (1). This approach dictates that potential changes in policy should be evaluated by comparing the total costs and benefits to society. It suggests that only those policies whose benefits exceed their costs should be adopted. When they are accompanied by an appropriate system of transfers, these policies can improve the welfare of everyone. As is typical in cost-benefit analysis (CBA), we focus on total costs and benefits and do not address the more detailed questions about how gains should be distributed among members of the public. By adopting this approach to measuring policy-making, we simplify the analysis and can draw upon a well-developed intellectual tradition (2).

We start by outlining a theoretical framework for organizing the discussion of these issues. The usual analysis of government policy toward science and technology marries the notion of market failure—the failure of markets to satisfy the conditions necessary for economic efficiency—and the notion of a rate of return to research. These concepts have helped to structure thinking about these issues, but they are too limiting for our purposes. We propose a broader framework that compares the benefits from more rapid technological change with the costs associated with two possible mechanisms for financing it: expanded property rights (which creates monopoly power) and tax-financed subsidies. Expanded property rights could take the form of longer patent life or more broadly defined patent and copyright protection for intellectual property. Tax-financed subsidies could take the form of government-funded (extramural) research, government-performed research (e.g., intramural research at NIH), government subsidies to private research, and government-subsidized training. Optimal policy, we claim, uses a mix of expanded property rights and subsidies. Thus, policymakers must address two distinct questions. Is the total level of support for research and development adequate? Is the balance between subsidies and monopoly power appropriate?

These questions arise in any setting in which innovation is a concern. After we define the fundamental concepts used in discussions of technology policy, we show that the choice between monopoly power and subsidies arises within a private firm just as it does at the level of the nation. After describing this analytical framework, we then ask how it can be used to guide government policy decisions. Specifically, what kinds of data would policymakers need to collect to make informed decisions about both questions? Such data would enable a government agency engaged in research funding to set and justify overall spending levels and to set spending priorities across different areas of its budget. It would also be able to advise other branches of government about issues such as patent policy that can have far-reaching implications for the health care sector.

THEORETICAL FRAMEWORK

Market Failure and Public Goods. The central theme of microeconomic analysis is the economic efficiency of the idealized competitive market.

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.

Abbreviations: NIH, National Institutes of Health; CBA, cost-benefit analysis.

There are many forms of market failure—departures from this ideal. Two of the most important are monopolistic control of specific goods and incomplete property rights. Many discussions treat research as a public good and presume that the underlying market failure is one of incomplete property rights. This suggests that if we could make the protection for intellectual property rights strong enough, we could return to the competitive ideal. In fact, a true public good is one that presents policymakers with an unavoidable choice between monopoly distortions and incomplete property rights.

There are two elements in the definition of a public good. It must be nonrival, meaning that one individual’s consumption of the good does not diminish the quantity available for others to use. It must also be nonexcludable. Once it is produced, anyone can enjoy the benefits it offers, without getting the consent of the producer of the good (3).

Incomplete excludability causes the kind of market failure we expect to observe when property rights are not well specified. When a rival good such as a common pasture is not excludable, it is overused and underprovided. Society suffers from a “tragedy of the commons.” The direct way to restore the conditions needed for an efficient outcome is to establish property rights and let a price system operate. For example, it is possible to divide up the commons, giving different people ownership of specific plots of land. The owners can then charge grazing fees for the use of the land. When there are so many landholders that no one person has a monopoly on land, these grazing fees give the owners of livestock the right incentives to conserve on the use of the commons. They also give landowners the right incentives to clear land and create new pasture. When it is prohibitively expensive to establish property rights and a price system, as in the case of fish in the sea, the government can use licensing and quotas to limit overuse. It can also address the problem of underprovision by directly providing the good, for example by operating hatcheries.

For our purposes, the key observation is that these unmitigated benefits from property rights are available for rival goods. Nonrival goods pose a distinct and more complicated set of economic problems that are not widely appreciated. Part of the difficulty arises from the obscurity of the concept of rivalry itself. The term rival means that two persons must vie for the use of a particular good such as a fish or plot of land. A defining characteristic of research is that it produces nonrival goods—bits of information that can be copied at zero cost. It was costly to discover the basic information about the structure of DNA, but once that knowledge had been uncovered, unlimited numbers of copies of it could be made and distributed to biomedical researchers all over the world. By definition, it is impossible to overuse a nonrival good. There is no waste when every laboratory in the world can make use of knowledge about the structure of DNA. There is no tragedy in the intellectual commons. For a detailed discussion of nonrivalry and its implications for technology development, see Romer (4).

Some of the most important science and technology policy questions turn on the interaction of excludability and rivalry. As noted above, for a rival good like a pasture, increased excludability, induced by stronger property rights, leads to greater economic efficiency. Stronger property rights induce higher prices, and higher prices solve both the problem of overuse and the problem of underprovision. However, for a nonrival good, stronger property rights may not move the economy in the right direction. When there are no property rights, the price for a good is zero. This leads to the appropriate utilization of an existing nonrival good but offers no incentives for the discovery or production of new nonrival goods. Higher prices ameliorate underprovision of the good (raising the quantity supplied) but exacerbate its underutilization (diminishing the quantity demanded). If scientists had to pay a royalty fee to Watson and Crick for each use that they made of the knowledge about the structure of DNA, less biomedical research would be done.

The policy challenge posed by nonrival goods is therefore much more difficult than the one posed by rival goods. Because property rights support an efficient market in rival goods, the “theory of the first best” can guide policy with regard to such goods. The first best policy is to strive to establish or mimic as closely as possible an efficient market. For nonrival goods, in contrast, policy must be guided by the less specific “theory of the second best.” For these goods, it is impossible, even in principle, to approach an efficient market outcome. A second best policy, as the name suggests, is an inevitable but uneasy compromise between conflicting imperatives.

The conceptual distinction between rivalry and excludability is fundamental to any discussion of policy. Rivalry is an intrinsic feature of a good, but excludability is determined to an important extent by policy decisions. Under our legal system, a mathematical formula is a type of nonrival good that is intentionally made into a public good by making it nonexcludable. Someone who discovers such a formula cannot receive patent or copyright protection for the discovery. A software application is another nonrival good, but because copyright protection renders it excludable, it is not a public good. It is correct but not very helpful to observe that the government should provide public goods. It does not resolve the difficult question of which nonrival goods it should make into public goods by denying property rights over these goods.

Beyond “the Market Versus the Government.” In many discussions, the decision about whether a good should be made into a public good is posed as a choice between the market and the government. A more useful way to frame the discussion is to start by asking when a pure price system (which may create monopoly power) is a better institutional arrangement than a pure tax and subsidy system, and vice versa.

By a pure price system we mean a system in which property rights are permanent and owners freely set prices on their goods. Under such a system, a firm that developed a novel chemical with medicinal uses could secure the exclusive rights to sell the chemical forever.

A pure tax and subsidy system represents a polar opposite. Under this system, the good produced is not excludable, so a producer cannot set prices or control how its output is used. Production would be financed by the subsidy. Produced goods are available to everyone for free. To clarify the policy issues that arise in the choice between these two systems, our initial discussion will be cast entirely in terms of a firm making internal decisions about investment in research, avoiding any reference to the public sector.

Financing Innovation Within the Firm. Picture a large conglomerate with many divisions. Each division makes a different type of product and operates as an independent profit center. It pays for its inputs, charges for its outputs, and earns its own profits. Senior managers, who are compensated partly on the basis of the profits their division earns, have an incentive to work hard and make their division perform well.

To make the discussion specific, imagine that many of the products made by different divisions are computer controlled. Suppose also that some divisions within the firm make software and others manufacture paper products, such as envelopes. Both the software goods and the paper products may be sold to other divisions. The interesting question for our purposes is how senior managers price these internal sales.

Producing Paper Products. For rival goods like envelopes, the “invisible hand” theorem applies to an internal market within the firm just as it would to an external market: a pure price system with strong property rights leads to efficient outcomes. An efficient firm will tell the managers of the envelope division that they are free to charge other divisions whatever price they want for these envelopes. Provided that the other divisions are free to choose between buying internally or buying from an outside seller, this arrangement tends to maximize the profits of the firm.

It is efficient for the internal division to make envelopes if it can produce them at a lower cost than an outside vendor. If not, the price system will force them to stop. If senior management did not give the division property rights over the envelopes and allowed all other divisions to requisition unlimited envelopes without paying, envelopes would be wasted on a massive scale. The firm would suffer from an internal version of the tragedy of the commons.

Producing Software with a Price System. Now contrast the analysis of envelopes with an analysis of software. Almost all of the cost of producing software is up-front cost. When a version of the computer code already exists, the cost of an additional copy of the software is nearly zero. It is nearly a pure nonrival good.

Suppose that one division has developed a new piece of software that diagnoses hardware malfunctions better than any previous product. This software would be useful for all of the divisions that make computer-controlled products. Senior managers could give property rights over the software to the division that produced it, letting it set the price it charges other divisions for the use of the software. Then the producer might set a high price. Other divisions, however, will avoid using this software if the price is so high that it depresses their own profits. They might purchase a less expensive and less powerful set of software diagnostic tools from an outside vendor. Both of these outcomes lead to reductions in the conglomerate’s overall profits. They are examples of what economists term monopoly price distortions—underuse induced by prices that are higher than the cost of producing an additional unit.

It would cost the shareholders of the conglomerate nothing if this software were made freely available to all of the divisions, and profits decrease if some divisions forgo the use of the program and therefore fail to diagnose hardware malfunctions properly or if they pay outside suppliers for competing versions of diagnostic software.

Producing Software Under a Tax and Subsidy System. Because software is a nonrival good, the best arrangement for allocating an existing piece of software is to deny the division that produced it internal property rights over it. This avoids monopoly price distortions. Senior management could simply announce that any other division in the conglomerate may use the software without charge. But this kind of arrangement for distributing software gives each division little incentive to produce software that is useful to other divisions within the firm. It solves the underutilization problem but exacerbates the underprovision problem.

Senior management, foreseeing this difficulty, might therefore establish a system of taxes and subsidies. They could tax the profits of each division, using the proceeds to subsidize an operating division that develops new software for internal use. They could even set up a separate research and development division funded entirely from subsidies provided by headquarters. This division’s discoveries would be given to the operating divisions for free. Despite the statist connotation associated with the concepts of taxes and subsidies, the managers and owners of a private firm may adopt them because they increase efficiency and lead to higher profits.

These arguments show that, in principle, taxes, subsidies, and weak property rights can be an efficient arrangement for organizing the production and distribution of goods like software. However, subsidies have implicit costs. Managers must ensure that the software produced under the terms of the subsidy actually meets an important need within the other divisions of the firm. To supervise a subsidized operation, they must estimate the value of its output in the absence of any price signals or arms-length transactions that reveal information about willingness to pay. Operating divisions will accept any piece of software that is offered for free, so the fact that a subsidized software group seems to have a market for its goods within the firm reveals almost nothing. This division might write software that is worth far less to the conglomerate than its cost of production. Thus, a subsidy system poses its own risk to the profitability of the firm. Avoiding these risks imposes serious measurement and supervisory costs on senior management, costs they need not incur when a division produces a rival good and runs as a profit center. To supervise the envelope division, senior managers only need to know whether it earns a profit.

The taxes that headquarters imposes on the operating divisions also create distortions. If the workers in a division keep only a fraction of the benefits that result from their efforts, they will not work as hard as they should to save costs and raise productive efficiency. Taxes weaken incentives. If it is too difficult for senior management to supervise the activities of software workers who receive subsidies, the distortion in incentives resulting from a system of taxes and subsidies may be more harmful than distortions resulting from operating the software division as a monopolistic profit center.

The problem for this firm is a problem for any economic entity. For rival goods like envelopes, a price system offers a simple, efficient mechanism for making the right decisions about production and distribution. For nonrival goods like software, there is no simple, efficient system. Both price systems and tax and subsidy systems can induce large inefficiencies. In any specific context, making the right second-best choice between these pure systems or some intermediate mixture requires detailed information about the relative magnitudes of the associated efficiency costs.

Financing Innovation for the Nation as a Whole. At the level of the nation, just as at the level of the firm, relative costs drive choices between price systems and tax and subsidy systems. The major cost of the price system is monopoly price distortion, which occurs when a good is sold at a price that exceeds marginal cost.

Fig. 1 illustrates monopoly price distortion. The downward-sloping demand curve shows how the total quantity purchased varies with the price charged. The demand curve can also be interpreted as a schedule of the willingness to pay for an additional unit of the good as a function of the total number of units that have already been purchased. As the number already sold increases, the willingness to pay for one more unit falls. The figure also charts the marginal cost of producing additional units of output, assumed here to be constant, as well as the price p* and quantity q* purchased when a monopolist is free to set prices to maximize profits.

The triangle marked “Deadweight loss” represents the dollar value of the welfare loss that results from setting price above marginal cost: some people are willing to pay more than the cost of producing one more unit but less than the monopoly price. The resulting underconsumption can be overcome by reducing the monopolist’s price to the level of marginal cost. Expiration of a patent, by eliminating monopoly after a fixed time, eventually solves this problem.

FIG. 1. Monopoly price distortion.

Monopoly pricing can cause another problem. The total value to society of the good depicted here is the total area under the demand curve less the cost to society of the units that are produced. In the figure, the rectangle below the marginal cost line marked “Variable cost” represents the production costs for q* units of the good. The total value to society is the sum of the willingnesses to pay of all people who purchase, or the area under the demand curve up to the quantity q*. The net value to society is the difference between these two areas, which is equal to the rectangle marked “Producer profits” plus the triangle marked “Consumer surplus.” This rectangle of profits is the difference between the revenue from sales and the variable cost of the goods produced. The surplus is a pure gain captured by those consumers who pay less than the goods are worth to them. Firms compare the profit rectangle to the fixed research and development cost of introducing the good when they evaluate a new product. They neglect the consumer surplus that the new good will generate for purchasers. Thus even under conditions of strong property rights and high monopoly prices, there will be a tendency for the market to underprovide valuable new goods.
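The geometry of Fig. 1 can be made concrete with a small numerical sketch. The linear demand curve, marginal cost, and fixed research cost below are hypothetical illustrative values, not figures from the paper.

```python
# Hypothetical linear-demand illustration of Fig. 1 (illustrative numbers only).
# Inverse demand: p(q) = a - b*q; constant marginal (variable) cost c.
a, b, c = 100.0, 1.0, 20.0      # willingness-to-pay intercept, slope, marginal cost
F = 2000.0                      # fixed R&D cost of introducing the good (hypothetical)

# Monopoly outcome: choose q to maximize (a - b*q - c) * q, so q* = (a - c) / (2b).
q_star = (a - c) / (2 * b)
p_star = a - b * q_star

producer_profit = (p_star - c) * q_star                  # "Producer profits" rectangle
consumer_surplus = 0.5 * (a - p_star) * q_star           # "Consumer surplus" triangle
q_eff = (a - c) / b                                       # efficient quantity (price = marginal cost)
deadweight_loss = 0.5 * (p_star - c) * (q_eff - q_star)  # "Deadweight loss" triangle

net_value_to_society = producer_profit + consumer_surplus
print(f"monopoly price {p_star:.0f}, quantity {q_star:.0f}")
print(f"producer profit {producer_profit:.0f}, consumer surplus {consumer_surplus:.0f}")
print(f"deadweight loss {deadweight_loss:.0f}")
# The firm introduces the good only if its profit exceeds the fixed cost F,
# even though society also gains the consumer surplus the firm neglects.
print("firm develops the good:", producer_profit > F,
      "| society would benefit:", net_value_to_society > F)
```

With these numbers the firm declines to develop a good whose total value to society exceeds its fixed cost, which is the underprovision tendency described above.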

When policymakers weigh the use of property rights and monopoly power to finance the introduction of new goods, they must consider two other aspects of monopoly pricing that change the size of the total distortions it creates. On the one hand, price discrimination—the strategy of charging different customers different prices—can mitigate or eliminate the efficiency losses due to monopoly. On the other hand, the efficiency losses from monopoly power become worse when one monopolist sells to another.

A surprising implication of economic theory is that a perfectly price-discriminating monopolist (i.e., one that charges each consumer his exact willingness to pay) produces the efficient (i.e., perfectly competitive) quantity of output. By charging each consumer the exact amount that he would be willing to pay, the monopolist continues to produce up to the point where the value to the last consumer is equal to the marginal cost of an additional unit of output. Thus, price discrimination mitigates or completely solves the problem of underuse. In addition, it helps solve the problem of underprovision because it increases the total profit that a supplier of a new good can capture. Price discrimination is widely used in air travel (airlines usually charge more for the changeable tickets likely to be used by business travelers) and telephone services (which throughout the world charge businesses more than individuals). Price discrimination also occurs in physician and hospital services and in pharmaceutical and laboratory supply sales. Recent legal challenges to the use of price discrimination by pharmaceutical companies in their sales to managed care organizations may unfortunately have limited the use of this promising strategy for minimizing the losses from monopoly pricing.
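For completeness, the same hypothetical numbers show why perfect price discrimination restores the efficient quantity: the seller keeps producing as long as some remaining buyer values a unit above marginal cost.

```python
# Perfect price discrimination under the same hypothetical linear demand p(q) = a - b*q.
# Charging each buyer exactly their willingness to pay, the seller supplies every unit
# valued above marginal cost, so output equals the competitive quantity.
a, b, c = 100.0, 1.0, 20.0
q_discriminating = (a - c) / b            # produce until willingness to pay falls to c
q_single_price_monopoly = (a - c) / (2 * b)

# The discriminating seller's profit is the whole area between demand and marginal cost,
# i.e., what was producer profit plus consumer surplus plus deadweight loss under one price.
profit_discriminating = 0.5 * (a - c) * q_discriminating
print(f"single-price monopoly quantity: {q_single_price_monopoly:.0f}")
print(f"price-discriminating quantity:  {q_discriminating:.0f} (efficient)")
print(f"profit captured under perfect discrimination: {profit_discriminating:.0f}")
```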

Monopoly distortions can become larger when production of a good involves a chain of monopolists. For example, suppose that one monopolist invents a new laboratory technique, and a second develops a new drug whose production uses this technique. When two or more monopolists trade in this kind of vertical chain, the welfare losses do not just add up, they multiply. The problems of underuse and failure to develop the good both become worse than they would be if a single monopolist invented the technique, developed products from it, and priced the goods to the final consumers. This is the justification that economists typically offer for vertical integration of an upstream and a downstream firm into a single firm. However, in an area that is research intensive and subject to uncertainty, and where there are many possible users of any innovation, vertical integration is often unfeasible. Chiron, which held a monopoly in the use of a critical enzyme for PCR, would have been unable to identify, much less integrate into a single firm, all of the possible firms that could use PCR before it made its decisions about developing this technique. On these grounds, theory suggests that a single monopolist in a final-product market will induce smaller social losses than a monopolist that will play a crucial supplier role to other firms, which are themselves monopolists in downstream markets.
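A standard double-marginalization sketch, again with hypothetical linear demand and cost numbers rather than figures from the paper, illustrates how the losses compound when an upstream monopolist sells an input to a downstream monopolist instead of the two operating as one integrated firm.

```python
# Hypothetical double-marginalization example: losses in a chain of monopolists
# compound relative to a single integrated monopolist. Linear inverse demand
# p(q) = a - b*q for the final good; c is the upstream marginal cost.
a, b, c = 100.0, 1.0, 20.0

def outcome(marginal_cost):
    """Monopoly quantity, price, and consumer surplus for a seller facing p = a - b*q."""
    q = (a - marginal_cost) / (2 * b)
    p = a - b * q
    cs = 0.5 * (a - p) * q
    return q, p, cs

# Integrated monopolist prices the final good directly off the true marginal cost c.
q_int, p_int, cs_int = outcome(c)
profit_int = (p_int - c) * q_int

# Vertical chain: the upstream monopolist picks a wholesale price w to maximize
# (w - c) * q(w), where the downstream monopolist then sets q = (a - w) / (2b).
# With linear demand the optimal wholesale price is w = (a + c) / 2.
w = (a + c) / 2
q_chain, p_chain, cs_chain = outcome(w)
profit_chain = (w - c) * q_chain + (p_chain - w) * q_chain   # upstream plus downstream profit

q_eff = (a - c) / b   # efficient (price = marginal cost) benchmark
dwl_int = 0.5 * (p_int - c) * (q_eff - q_int)
dwl_chain = 0.5 * (p_chain - c) * (q_eff - q_chain)

print(f"integrated: price {p_int:.0f}, quantity {q_int:.0f}, deadweight loss {dwl_int:.0f}")
print(f"chain:      price {p_chain:.0f}, quantity {q_chain:.0f}, deadweight loss {dwl_chain:.0f}")
# Output falls by half again, the final price rises, and total profit and consumer
# surplus both shrink: the chain hurts the firms and the consumers at the same time.
```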

Taxes and Subsidies Cause a Different Set of Distortions. As we have noted, the polar alternative to a pure price system is an allocation mechanism that relies on subsidies to finance innovation. The funds required for a system of government subsidies can be raised only by taxation, which harms incentives. For example, raising the income tax diminishes the incentives to work. In addition, subsidies replace a market test with a nonmarket system that rewards a different set of activities. If these activities are not useful or productive, the subsidies themselves induce distortions and waste. To describe the costs associated with a subsidy system, recall the case of the subsidized software-producing division of the conglomerate. It is costly to design and operate an administrative system that tries to identify useful activities. Failures in such a system also impose costs. Suppose that many of the projects that are subsidized produce no value; suppose further that a system which relied on a market test of value produced fewer such failures. Then funds allocated to the additional wasteful projects must be counted as part of the cost of operating the subsidy system. Under conditions of uncertainty, any allocation system will produce some failures; in this example, we assume that a subsidy system would produce more of them.

Peer review of university-based research grants is widely regarded as an unusually efficient and effective mechanism of subsidy allocation. Its effectiveness derives partly from the details of its structure, such as the anonymous reviews by panels of disinterested experts. However, it also benefits from the limited problem that it is trying to solve. Research review groups make decisions at a high level of abstraction; they do not need to forecast the precise consequences of pursuing a line of research, and do not need much information about the “market demand” for the good they are ultimately subsidizing. Consider, for example, the information necessary to make a good decision about subsidizing different research proposals for research on computer-human interfaces. Then contrast this with the substantially greater amount of information that would be necessary for selecting among several proposals to develop new software applications that will be sold to the public. The information needed to make decisions about final products is extensive, including detailed information about characteristics and their value in the myriad ways that consumers might put them to use. Surely a market test by people who spend their own money is the most efficient mechanism for selecting products in this setting. (To push this point to an extreme, imagine what recreational reading would be like if the government did not offer copyright protection of books, so that the only people who could make a living as authors were people who received grants awarded on the basis of relevancy by university professors!)

Debate about technology policy programs often turns on disagreements about the cost of setting up and operating a system for allocating subsidies. Views in this area are often polarized, but there is little disagreement that it is much harder to establish effective subsidies for narrowly defined final products from an industry than it is to subsidize flexible inputs for that industry and let a market test determine how they are allocated to produce the final product mix. Arguably, the most important contribution the federal government made to the development of the biotechnology industry was to promote training for people who went to work in molecular biology and related fields.

Similarly, government subsidies for training in computer science, which provided the software industry with a pool of talented developers and entrepreneurs, have probably been more effective mechanisms for promoting the development of the software industry than government attempts to promote specific computer languages. The one possible exception to this rule arises when the government is an important user of the good in question, as for example in the case of military equipment. In this case, users within the government have much information about the relevant market demand and can be more successful at selecting specific products to subsidize.

Measuring the Gains from Research and the Costs of Financing It. To make informed decisions about research support, and to strike an appropriate balance between expanded property rights and subsidies, policymakers need quantitative information that will enable them to answer three questions. (i) What are the benefits of an additional investment in research? (ii) What are the costs of financing research through a system of property rights that depends on monopoly profits as the principal incentive? (iii) What are the costs of financing research through a system of taxes and subsidies? We discuss each of these questions, then address some of the pitfalls that may arise in making decisions based on incomplete or misleading information.

Measuring Benefits. The problem of measuring the benefits from research expenditures can be readily posed in terms of the demand curve of Fig. 1. The full benefit to society from research leading to a new discovery is represented by the area under the demand curve up to the quantity of goods sold. If we subtract the variable costs of producing the units sold, we have a measure of benefits that can be compared with the research costs needed to generate this benefit. There are then two ways to proceed. Policymakers can use an estimate of profits to firms as a crude underestimate of the total gains to society. Alternatively, they can try to measure these gains directly by looking at the benefits enjoyed by users of the goods.

Profits as a Proxy for Social Benefits. To keep the discussion simple, assume that a firm made a fixed investment in research sometime in the past. Each year, it earns revenue on sales of the product produced from this research and pays the variable costs of goods produced. The difference, the annual accounting profits of the firm, appears as the profit rectangle in Fig. 1. These profits change over time. The value of the innovation will change as substitute goods are developed, prices for other goods rise or fall, and knowledge about the innovation grows. Accounting profits of firms can thus be used as a lower-bound estimate of the welfare gains from innovation.

In practice, there are several obvious problems with this approach. First, by ignoring consumer surplus, this measure underestimates the benefits of a good. Second, it may be impossible for a government agency (unlike the manufacturer) to estimate the revenues attributable to a single product. Third, until a product has run the course of its useful life, its entire revenue stream will be highly uncertain. At an early stage in the life of a new product, such as a patented drug, the stock market valuation of the company may be taken as an indication of the best estimate of the present value of all the revenue streams held by the firm, and changes in stock market valuation when a new product is approved may give some indication of the present value of the anticipated revenue stream from the good. But if the possibility of approval is anticipated by the stock market, the change in stock market value at the time of approval will be an underestimate of the full value of this revenue stream.

Finally, market transactions will not give an accurate indication of willingness to pay if demand for a good is subsidized. Traditional fee-for-service medical insurance acts as such a subsidy (5). Then patients bear only a fraction of the cost directly, and consume drugs and health services whose value falls short of the true social cost. In this situation, the monopolist’s profits from the sale of the innovation overstate the magnitude of the benefits to society of a newly invented medical treatment.

Cost-Benefit Approach. A more complete picture of the benefits to society can be painted using cost-benefit measures of the total value to consumers of a new good. Consider the value of the discovery that aspirin prevents myocardial infarction (6). What is the information worth? To answer this question, one begins by considering the size of the population that would benefit from the therapy, followed by the change in the expected pattern of morbidity and mortality attributable to adoption, and finally the dollar valuation of both the survival and quality-of-life effects. This would represent the potential return and could be calculated on an annual basis, but the potential return would likely overestimate the actual surplus. Some people in the group at risk, for example, might have been taking aspirin before the information from the studies became available. Furthermore, not everyone who could potentially benefit would comply with treatment. Thus, it is necessary to estimate the increment in the number of people using the therapy rather than the potential number of individuals taking it. In addition, there would likely be reductions in expenditures for the treatment of heart attacks, which, after all, would be averted by use of the therapy. Essentially, the estimate of the surplus would be based on a CBA, perhaps conducted for the representative candidates for treatment, multiplied by the number of people who undergo treatment as a direct consequence of the information provided by the clinical trial.
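The accounting described above can be laid out as a short sketch. Every number below (population at risk, prior use, compliance, per-user valuations, averted treatment costs) is a hypothetical placeholder chosen only to show the structure of the calculation, not an estimate from the paper or from the aspirin literature.

```python
# Hypothetical sketch of the surplus calculation for new clinical information
# (all values are illustrative placeholders, not estimates).
population_at_risk   = 1_000_000    # people who could benefit from the therapy
already_treated_frac = 0.10         # were taking aspirin before the trial results
compliance_frac      = 0.60         # of the newly informed, fraction who actually adopt

# Incremental users: the relevant group is the increase in use caused by the information.
incremental_users = population_at_risk * (1 - already_treated_frac) * compliance_frac

# Per-user annual benefit: value of survival and quality-of-life gains
# plus averted spending on treating heart attacks, minus the cost of therapy.
value_of_health_gain_per_user   = 150.0   # dollar value of reduced morbidity/mortality
averted_treatment_cost_per_user = 40.0    # expected savings from heart attacks avoided
therapy_cost_per_user           = 10.0

net_benefit_per_user = (value_of_health_gain_per_user
                        + averted_treatment_cost_per_user
                        - therapy_cost_per_user)
annual_surplus = incremental_users * net_benefit_per_user
print(f"incremental users: {incremental_users:,.0f}")
print(f"annual surplus from the information: ${annual_surplus:,.0f}")
```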

Although the techniques of CBA have been adopted in many areas of public policy, most “economic” analyses of health care and health practices have eschewed CBA for the related technique of cost-effectiveness analysis, which, unlike CBA, does not attempt to value health outcomes in dollar terms (7). Instead, outcomes are evaluated as units of health (typically life expectancy or quality-adjusted life years). The lack of a dollar measure of value of output means that cost-effectiveness analysis does not provide a direct measure of consumer surplus. However, if the cost-effectiveness analysis is conducted properly, it is often possible to convert the information from a cost-effectiveness analysis into a CBA with additional information about the value of the unit change in health outcomes. For example, suppose that the value of an additional year of life expectancy is deemed to be $100,000, and that a patient with severe three-vessel coronary artery disease treated with bypass surgery can expect to live two years longer at a cost (in excess of medical management) of $45,000. The cost-effectiveness ratio of surgery, or the increment in costs ($45,000) divided by the increment in health effects (two years), is $22,500. The net benefit of surgery is the dollar value of the increased life expectancy ($200,000) less the incremental cost ($45,000), or $155,000. Calculations like these are a central feature of the field of medical technology assessment (8).
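A one-screen restatement of this conversion, using the same illustrative figures given in the text, is sketched below.

```python
# Converting a cost-effectiveness result into a cost-benefit figure, using the
# illustrative numbers from the text (bypass surgery for severe three-vessel disease).
value_per_life_year = 100_000.0     # assumed dollar value of a year of life expectancy
incremental_cost    = 45_000.0      # cost of surgery beyond medical management
life_years_gained   = 2.0           # added life expectancy from surgery

cost_effectiveness_ratio = incremental_cost / life_years_gained   # dollars per life-year
net_benefit = value_per_life_year * life_years_gained - incremental_cost

print(f"cost-effectiveness ratio: ${cost_effectiveness_ratio:,.0f} per life-year")  # $22,500
print(f"net benefit of surgery:   ${net_benefit:,.0f}")                             # $155,000
```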

Usually the information needed to construct exact measures of the value of medical research will not be available, but crude calculations can be illuminating. Moreover, basic investments in information collection, for example, surveys of representative panels of potential consumers, might greatly improve the accuracy of these estimates. Simple calculations like these, together with more systematic data on health outcomes for the population at large, are among the prerequisites for better decision-making by the government.

Measuring the Cost of Using the Price System and Monopoly Profits. Benefit measures comprise only part of the information needed for good decision-making. Suppose, for example, that policymakers anticipate a large benefit from research directed toward the prevention of a specific disease. They must also decide whether this research should be subsidized by the government or financed by granting monopoly power to private sector firms.

The theoretical discussion in the last section has already identified some of the factors that can influence the social cost of using monopoly power to motivate private research efforts. Monopoly will be more costly if there are many firms with some monopoly power that sell to each other in a vertical chain. In principle, this second problem might be serious for an industry that is research-based, particularly if current trends toward granting patents on many kinds of basic and applied scientific knowledge continue. For example, a drug may be produced by the application of a sequence of patented fundamental processes that results in production of a reagent. The reagent may then be combined with other chemicals to produce a drug. If access to the process is sold by a monopoly, the reagent is sold by another monopoly, and the drug is sold by a third monopoly, the distortion due to monopoly will be compounded. Strengthened property rights can mean, in the limit, that an arbitrarily large number of people or firms with patent rights over various pieces of knowledge will each have veto power over any subsequent developments. If one firm had control of all these processes and carried out all these functions, the price distortion for the final product would be smaller, but as we have indicated, in a research-intensive field characterized by much uncertainty and a large number of small start-up firms, this arrangement may not be feasible.

Yet as we have also indicated, monopoly will be less costly to society as a whole if firms can take advantage of price discrimination. Because the cost of more reliance on monopoly makes this issue so important to a research-intensive field such as pharmaceuticals, and because so little is known about the net effect of these two conflicting forces, we believe that it would be valuable to collect more information about the magnitude of monopoly distortions in fields closely related to health care.

Below, we describe feasible mechanisms that could be used to collect more of this kind of information. There are real challenges to collecting the information, because much of it—such as the prices that hospitals and health care networks pay for drugs—is a trade secret.

Monopoly distortions are not the only costs incurred when the private sector finances research; the cost of establishing and maintaining property rights may be substantial. Enforcement of property rights is inexpensive for most physical objects, such as cars or houses. But for nonrival goods that can readily be copied and used surreptitiously, it is much more costly to extend property rights, and more subtle mechanisms may be needed to do so. Initially, software publishers relied on copy protection schemes to prevent revenue losses from unauthorized copying. Over time, they have developed less intrusive techniques (such as restricting technical assistance to registered customers) that achieve the same end.

Sometimes the costs of enforcing property rights are so high that a system based on private incentives will be infeasible. These cases will therefore have high priority for scarce taxpayer-financed government subsidies. Suppose that a private firm decided to sponsor a trial of aspirin to prevent colon cancer and sought the permission of the Food and Drug Administration (FDA) to have exclusive rights to market the use of aspirin for this purpose. Although the company might establish effectiveness and obtain exclusive rights from the FDA, the availability of aspirin from many producers and without a prescription, along with the large number of indications for its use, would make it nearly impossible to enforce market exclusivity for this indication. In such an extreme case, measuring the costs of enforcement is unnecessary, but there often will be instances in which such estimates will be needed because enforcement of property rights is worthwhile but costly. Moreover, as the software example suggests, there is much room for experimenting with different systems to protect property rights.

Cost of Using Taxes and Subsidies. Most of the field of public finance is concerned with quantifying the losses and gains that occur with government activity, such as taxation. Every form of taxation alters behavior by distorting economic incentives; for example, taxes on bequests reduce the desired size of a bequest, reduce national savings, and increase transfers of wealth during life. Income taxation modifies the relative attractiveness of time devoted to leisure and time devoted to paid work. Traditional calculations of the benefits of government programs in health care, however, ignore the "dead-weight" losses due to the behavioral distortions induced by taxation. These losses can be substantial, although their exact magnitude depends on the form of the tax and the economic activity to which it applies. According to recent estimates, the 1993 personal tax rate increases raised the deadweight loss by about two dollars for every additional dollar of tax revenue (9). These are part of the costs of a tax and subsidy system.
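
A back-of-the-envelope illustration of what such a deadweight loss implies (our sketch, using only the two-dollar figure cited above): a tax-financed subsidy dollar must buy roughly three dollars of social benefit before it breaks even, ignoring administrative costs.

```python
# Our illustration, not the authors' calculation: the social cost of one tax-financed
# dollar is the dollar itself plus the deadweight loss incurred in raising it.
DEADWEIGHT_LOSS_PER_REVENUE_DOLLAR = 2.0  # estimate cited for the 1993 rate increases

def social_cost_of_subsidy(subsidy_dollars: float) -> float:
    """Resource cost of the subsidy plus the distortionary cost of the taxes that fund it."""
    return subsidy_dollars * (1.0 + DEADWEIGHT_LOSS_PER_REVENUE_DOLLAR)

print(social_cost_of_subsidy(1.0))  # 3.0: benefit needed per subsidy dollar to break even
```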

A government subsidy system, like that of a large firm, can generate extensive administrative costs. It can also cause large quantities of resources to be wasted on poorly selected projects. A government agency that dispenses research dollars must devote substantial time and effort to choosing among several competing projects. The market directly produces a mechanism (albeit a Darwinian mechanism that may not be costless) to sort among competing uses of resources. Little is known about the costs of a system to administer subsidies. However, as the previous discussion indicated, qualitative evidence suggests that subsidy systems work better when they make general investments in outputs that are flexible and have many uses. They are less suitable for specific, inflexible investments that require extensive, context-specific information about benefits and willingness to pay.

Making Decisions with Incomplete Information. Because many of the pieces of information that we have outlined above are not available or are available only in the form of qualitative judgments made by experts, it is tempting to substitute surrogate measures for which the information is available. For example, one might simply give up any hope of making judgments about the magnitudes of costs and benefits of various strategies for supporting advances in health care. The NIH might simply try to produce the best biomedical science possible and assume that everything else will follow. But as Rosenberg has noted (10), a country's success in producing Nobel Prizes in scientific fields is inversely correlated with its economic performance!

More seriously, other seemingly reliable measures could significantly bias government decisions. For example, because profits are observable and salient in political debates, a government agency that subsidizes research may want to maximize the profits earned by firms that draw on its research. For NIH, this might mean adopting a strategy that maximizes the profits earned by biotechnology and pharmaceutical firms in the United States. Because a substantial portion of the demand for medical care is still subsidized by a system of fee-for-service insurance, this strategy could lead to large social losses. Even under the paradigm of managed care, which removes or decreases the implicit subsidy for medical care services, profits can be a poor guide to policy. The highest payoff to government spending on research may come from funding research in areas where it is prohibitively expensive to establish the system of property rights that makes private profit possible.

A prominent example of this phenomenon, mentioned above, is the discovery that aspirin can prevent heart attacks and death from heart attacks (11). It is difficult to conceive of realistic circumstances in which a producer of aspirin could gain exclusive rights to sell aspirin for this indication, and it is unlikely that the discovery that aspirin had such beneficial effects markedly increased the profits of its producers. Moreover, since aspirin is produced by many firms, no one of them had much to gain by financing this kind of research. But if the increased profit in this case was small, the consumer's surplus may have been extremely large.

As our previous discussion noted, it is precisely in circumstances under which a producer cannot recoup the fixed costs of investment in developing a technology that government research may have its greatest payoffs. (In this context, the research established aspirin's beneficial effects on heart disease rather than proving the safety and efficacy of the drug more generally.) In such circumstances, it is imperative to go beyond profits and measure consumer's surplus, but the usual market-based proxies may provide very little information about any such benefits.

Public good features also lent a strong presumption that it was appropriate for the government to sponsor research on the value of beta-blockers after myocardial infarction (12, 13). In the influential NIH-sponsored Beta-Blocker Heart Attack Trial, propranolol reduced mortality by about 25%. The excludability and property rights problems characteristic of aspirin would seem to have been less important for propranolol, but the combination of looming patent expiration and the availability of a growing number of close substitutes diminished the incentives for a private company to sponsor such a trial. Any increase in demand for beta-blockers resulting from the research would likely have applied to the class generally. Although strengthened property rights (such as lengthened market exclusivity) might have made it possible for a private company to capture more of the demand increase resulting from such research, problems with enforcement are similar to those of aspirin: it would be difficult to ensure that other beta-blockers would not be prescribed for the same indication, diluting the return to the manufacturer of propranolol.

The drug alglucerase, for the treatment of Gaucher disease, has characteristics almost opposite to those of aspirin and beta-blockers. Gaucher disease is caused by deficient activity of the enzyme glucocerebrosidase, and NIH-sponsored research led to discovery of the enzyme defect and the development of alglucerase, a modified form of the naturally occurring enzyme.

Subsequently, a private corporation (Genzyme) developed alglucerase further and received exclusive rights to market the compound under the provisions of the Orphan Drug Act. Thus in this instance, both a tax subsidy and a strong property rights approach facilitated the development of the drug.

The high price of alglucerase attracted substantial attention, particularly because most of the drug's development had been sponsored or conducted by the government. The standard dosage regimen devised by the NIH cost well over $300,000 per year for an adult patient, and therapy is lifelong. According to the manufacturer, the marginal cost of producing the drug accounted for more than half the price, a ratio that is unusually high for a pharmaceutical product (14). Although drug-sparing regimens that appear to be as effective have since been tested, the least expensive of these cost tens of thousands of dollars annually (15). The supplier was able to charge high prices because there is no effective substitute for the drug. This meant that nearly all insurers and managed care organizations covered the drug at any price the manufacturer demanded. Insurance coverage meant that demand would not fall significantly with increases in price, so that monopoly would not cause as much underutilization as would be typical if demand were highly price-responsive. With the insurance subsidy, there would be overconsumption, and expenditures on the drug could exceed the value of benefits it provided.

At current prices, alglucerase is unlikely to be cost-effective compared with many widely accepted health care interventions. An exploration of the federal role in the development of alglucerase revealed the hurdles to be overcome in obtaining the information needed to guide public decision-making—it was possible to obtain rough estimates of the private company's research and development investment but not the investment made by the federal government. Nevertheless, precise information about the costs of research is often, as in this case, unnecessary for making qualitative decisions about the appropriateness of the taxation and subsidy approach (14).

More detailed information about the relative costs of public and private support for various forms of research can be valuable for many reasons. It may overturn long-standing presumptions about the best kind of research for the government to support. The traditional view is that Nobel Prize-winning science is the area where government support is most important. However, as the case of PCR demonstrates, it is now clear that it is possible to offer property rights that can generate very large profits to a firm that makes a Nobel Prize-winning discovery. It may not be as costly to set up a system of property rights for basic scientific discoveries as many people have presumed. If so, we must still verify whether the costs of relying on monopoly distortions for this kind of discovery are particularly high. At present, we have little basis for making this judgment.

In an era when research budgets are stagnant or shrinking, circumstances will force this kind of judgment. Much population-based research, including epidemiological research and social science research, could provide valuable information (providing insights in such areas as etiologic factors in human disease, biological adaptations to aging, and understanding of the economic consequences of disease and its treatment). This information could inform both public policy and individual planning. All such information is nonrival, and much of it may be inherently nonexcludable because it would be so costly to establish a system of property rights. It is precisely in the areas of research that produce knowledge which is not embodied in a specific product that the benefits from federal investment are likely to be greatest, but most difficult to measure. With a fixed budget, a decision to fund work that could be financed in the private sector—such as sequencing the human genome—means that competing proposals for population-based or epidemiological research cannot be funded. The choice between these kinds of alternatives should be based on an assessment of the best available evidence on all the benefits and costs.

Using Experimentation to Inform Decisions About Research Financing. In many studies of clinical interventions, it is feasible to construct rough estimates of the social returns to government investment in research. As we noted above, it is considerably more difficult to estimate the costs of different systems for financing research. Does this mean that no measurement is possible and that debates about financing mechanisms should be driven by tradition, belief, and politics rather than by evidence? Undoubtedly, measurement can be improved by devoting more resources to it and engaging more intensively in standard activities to measure proxies for research productivity (citation analysis, tracing the relationship of products to research findings, and so on). Even if such activities result in credible estimates of the benefits of research, they tend not to address the principal policy issue: what mix of private and public financing is best? To answer this question, consideration should be given to the collection of new kinds of data and even to feasible large-scale social experiments.

A provocative experiment that could be designed along these lines would be one that "auctioned off the exploration rights" along a portion of the human genome. Another portion of the genome could be selected for comparison; here the government could refuse to allow patent protection for basic results like gene sequences, and would offer instead to subsidize research on sequencing and on genetic therapies. If two large regions were selected at random, the difference in the rate of development of new therapies between the privately owned and the public regions and the differences in the total cost of developing these therapies could give us valuable information about the relative costs and social benefits of different financing mechanisms.

The experimental approach is unlikely to settle all issues about the appropriate federal role in funding research.

In a gene-mapping experiment, with one region assigned to the private sector and the other to federally sponsored researchers, differences in outcomes could be due to characteristics of the regions that were randomly assigned (and random assignment would not eliminate chance variation if the regions were too small). But many insights might emerge from such an effort, including the identification of cost consequences, the effect of funding source on ultimate access to resulting technological innovations, the dissemination of research results, the effectiveness of private sector firms in exploiting price discrimination, and so on. The scope for conducting such experiments might be large; they should be targeted toward those areas of research in which there is genuine uncertainty about the appropriate allocation between property rights and taxes and subsidy.

A more conservative strategy would be to collect detailed information about natural experiments such as the discovery and patenting of PCR. It would be very useful to have even ballpark estimates of the total monopoly price distortions induced by the evolving pricing policy being used by the patent holder.

CONCLUSIONS

Federal agencies often use estimates of industry revenues or consumer surplus to make claims about benefits or returns to their investments in scientific and technological research. Though these components of research productivity are important, they are inadequate as a basis for setting and evaluating government policy toward research. Our discussion has emphasized the choice between property rights and a system of taxes and subsidy (i.e., government sponsorship) for research. This decision is not made at the level of NIH or any other agency that sponsors and conducts research, but it is fundamental to public policy.

It may be tempting to dismiss these issues because it is so difficult to estimate the quantities that we identify as being central to decisions about government support for research. However, it would not be difficult to make rough estimates of these quantities and to begin to use them in policy discussions. Undoubtedly, it is difficult to select among the alternative mechanisms for supporting research. Nevertheless, decisions about the use of these mechanisms are made every time the government makes spending and property rights decisions relevant for science and technology policy issues. The effort required to obtain the needed information and consider these issues systematically might pay a large social return.

In coming years, three forces will increase the importance of taking this broad perspective on the federal role in supporting research. First, voters and politicians are likely to attribute a higher cost to taxes and deficit finance. As a result, in future years all federal agencies will likely be forced to rely less on the tax and subsidy mechanism for supporting technological progress than they have in the past.

Second, a dramatic reduction in the cost of information processing systems will increasingly affect all aspects of economic activity. This change will make it easier to set up new systems of property rights, which can be used to give private firms an incentive to produce goods that traditionally could be provided only by the government. The rapid development of the Internet as a medium of communication may ultimately lead to advances in the ability to track and price a whole new range of intellectual property. The success of the software industry also suggests that other kinds of innovations in areas such as marketing may make it possible for private firms to earn profits from goods even when property rights to the goods they produce seem quite weak.

At the same time, a third force—the move toward managed care in the delivery of health care services—pushes in the other direction. This change in the market for health care services is desirable on many grounds, but to the extent that it reduces utilization of some medical technologies, it will have the undesirable side effect of diminishing private sector incentives to conduct research leading to innovations in health care. Everything else equal, this change calls for increased public support for biomedical research. In the near term, the best policy response may therefore be one that combines expanded government support for research in some areas with stronger property rights and a shift toward more reliance on the private sector in other areas. Further work is needed to give precise, quantitative guidance to striking the right balance. In the face of stagnant or declining resources, we will have to make increased efforts to gather and analyze the information needed to target research activities for subsidy and to learn which areas the private sector is likely to pursue most effectively.

A.M.G. is a Health Services Research and Development Senior Research Associate of the Department of Veterans Affairs.

1. Phelps, E.S. (1973) in Economic Justice, ed. Phelps, E.S. (Penguin Books, Baltimore), pp. 9–31.
2. Mishan, E.J. (1988) Cost-Benefit Analysis (Unwin Hyman, London).
3. Cornes, R. & Sandler, T. (1986) The Theory of Externalities, Public Goods, and Club Goods (Cambridge Univ. Press, Cambridge, U.K.).
4. Romer, P. (1993) Brookings Pap. Econ. Act. Microecon. 2, 345–390.
5. Pauly, M.V. (1968) Am. Econ. Rev. 58, 531–536.
6. Steering Committee of the Physicians' Health Study Research Group (1989) N. Engl. J. Med. 321, 129–135.
7. Gold, M.R., Siegel, J.E., Russell, L.B. & Weinstein, M.C., eds. (1996) Cost-Effectiveness in Health and Medicine (Oxford Univ. Press, New York).
8. Fuchs, V.R. & Garber, A.M. (1990) N. Engl. J. Med. 323, 673–677.
9. Feldstein, M.S. & Feenberg, D. (1996) in Tax Policy and the Economy, ed. Poterba, J. (MIT Press, Cambridge, MA), Vol. 10.
10. Rosenberg, N. (1994) in Economics of Technology, ed. Granstrand, O. (North-Holland, New York), pp. 323–337.
11. Antiplatelet Trialists' Collaboration (1994) Br. Med. J. 308, 91–106.
12. Yusuf, S., Peto, R., Lewis, J., Collins, R. & Sleight, P. (1985) Prog. Cardiovasc. Dis. 27, 336–371.
13. Beta-Blocker Heart Attack Trial Research Group (1982) J. Am. Med. Assoc. 247, 1707–1714.
14. Garber, A.M., Clarke, A.E., Goldman, D.P. & Gluck, M.E. (1992) Federal and Private Roles in the Development and Provision of Alglucerase Therapy for Gaucher Disease (U.S. Office of Technology Assessment, Washington, DC).
15. Beutler, E. & Garber, A.M. (1994) PharmacoEconomics 5, 453–459.

This paper was presented at a colloquium entitled "Science, Technology, and the Economy," organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

Public-private interaction in pharmaceutical research

IAIN COCKBURN* AND REBECCA HENDERSON†‡

*Faculty of Commerce and Business Administration and National Bureau of Economic Research, University of British Columbia, Vancouver, BC, Canada V6T 1Z2; and †Sloan School of Management and National Bureau of Economic Research, Massachusetts Institute of Technology, Cambridge, MA 02138

ABSTRACT We empirically examine interaction between the public and private sectors in pharmaceutical research using qualitative data on the drug discovery process and quantitative data on the incidence of coauthorship between public and private institutions. We find evidence of significant reciprocal interaction, and reject a simple "linear" dichotomous model in which the public sector performs basic research and the private sector exploits it. Linkages to the public sector differ across firms, reflecting variation in internal incentives and policy choices, and the nature of these linkages correlates with their research performance.

The economic case for public funding of scientific and technological research rests on the belief that the private sector has inadequate incentives to invest in basic research (1). This belief in turn rests on the idea that research and development (R&D) can be usefully arrayed along a continuum, with "basic" work, or research that is oriented towards the discovery of fundamental scientific principles, at one end, and "applied" work, or research designed to be immediately translated into products and processes, at the other. Since basic research is likely to be relevant to a very broad range of fields, to have application over many years, and to be useful only when combined with other research, economists have long believed that the returns to basic research may be difficult to appropriate privately.

This perspective is complemented by work in the sociology of science, which suggests that the norms and incentive structures that characterize publicly funded science combine to create a community in which it is much more likely that "good science" will be conducted. Researchers working in the public sector are rewarded as a function of their standing in the broad research community, or according to the "rank hierarchy" of the field (2). Because this standing is a function of priority, the public sector is characterized by the rapid publication of key ideas and a dense network of communication across key researchers that is particularly conducive to the rapid advance of scientific knowledge. Research undertaken in the private sector, in contrast, is believed to be shaped by the need to appropriate private returns from new knowledge, which leads firms to focus on applied research and to attempt to restrict communication of results. Faced with different constraints and incentives, private sector researchers are thus viewed as much less likely to publish their research or to generate basic advances in scientific knowledge (3–5).

In combination, these two perspectives have sustained a consensus that has supported substantial public commitment to basic research for the last 50 years. Nearly one-half of all the research undertaken in the United States, for example, is funded by the public sector, and spending by universities on research increased by over 100% in real terms between 1970 and 1990 (6). However, budgetary concerns are placing increasing pressure on government support for science, and questions about the appropriate level of public funding of research are now being raised on two fronts.

In the first place, it has proven very difficult to estimate the rate of return to publicly funded research with any precision (7). The conceptual problems underlying this exercise are well understood, and although those studies that have been conducted suggest that it may be quite high (8–10), it is still far from clear whether too many or too few public resources are devoted to science. In the second place, questions have been raised about the usefulness of the dichotomies drawn between basic and applied and "open" versus "closed" research as bases for public funding decisions. There is considerable evidence that private firms invest significantly in basic research (11, 12), while at the same time several observers have suggested that publicly funded researchers have become increasingly interested in the potential for private profit, placing the norms of open science under increasing threat.

In this paper we explore this second issue in the context of pharmaceutical research, as a contribution toward clarifying the nature of the relationship between the public and private sectors. The pharmaceutical industry provides a particularly interesting arena in which to study this issue: health-related research is a very substantial portion of the total public research budget, yet some researchers have charged that this investment has yielded very few significant advances in treatment. Between 1970 and 1988, for example, public funding for the National Institutes of Health (NIH) increased more than 200% in real terms, whereas private spending on biomedical research increased over 700%. Yet at the same time the rate of introduction of new drugs remained approximately constant, and there has been little improvement in such critical variables as mortality and morbidity (13).

Prior research has shown that spending on privately funded research is correlated with NIH spending (14), whereas a number of case studies of individual firms have confirmed the importance of an investment in basic research to the activities of private firms (12, 15). Here we draw upon both qualitative evidence about the research process and quantitative data on publication rates and patterns of coauthorship to build a richer understanding of the interaction between public and private institutions in pharmaceutical research.

Our results suggest that public sector research plays an important role in the discovery of new drugs, but that the reality of the interaction between the public and private sectors is much more complex than a simple basic/applied dichotomy would suggest. While in general the public sector does focus more attention on the discovery of basic physiological and biochemical mechanisms, the private sector also invests heavily in such basic research, viewing it as fundamental to the maintenance of a productive research effort.

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. §1734 solely to indicate this fact.

Abbreviations: R&D, research and development; NIH, National Institutes of Health.
‡To whom reprint requests should be addressed.

Public and private sector scientists meet as scientific equals, solve problems together, and regard each other as scientific peers, which is reflected in extensive coauthoring of research papers between the public and private sectors. We also find some evidence that this coauthoring activity is correlated with private sector productivity. Publication of results makes the output of public sector research effort freely available, but the ability of the private sector to access and use this knowledge appears to require a substantial investment in doing "basic science." To take from the industry's knowledge base, the private sector must also contribute to it.

Taken together, our results suggest that the conventional picture of public research as providing a straightforward "input" of basic knowledge to downstream, applied private research may be quite misleading, and that any estimation of the returns to publicly funded research must take account of this complexity.

DATA AND METHODS

We gathered both qualitative and quantitative data to examine public-private interaction. We used two sources of data for our qualitative analysis. The first source is narrative histories of the discovery and development of 25 drugs introduced between 1970 and 1995, which were identified as having had the most significant impact on medical treatment by two leading industry experts. Each history was constructed from both primary and secondary sources, and aimed in each case to identify both the critical events and the key players in the discovery of each drug. (We are indebted to Richard Wurtman and Robert Bettiker for their help in constructing these histories.) Our second source of data is a series of detailed field interviews conducted with a number of eminent public sector researchers and with researchers employed at 10 major pharmaceutical firms.

Our primary source of quantitative data is bibliographic information on every paper published in the public literature between 1980 and 1994 by researchers listing their address as one of 10 major research-oriented pharmaceutical firms, or one of the NIH. This data base was constructed by searching address fields in the Institute for Scientific Information's Science Citation Index. It is important to note that Science Citation Index lists up to six addresses for each paper, which may not correspond exactly to the number of authors. For these 10 sample firms alone, our working data set contains 35,813 papers, with over 160,000 instances of individual authorship, for which Science Citation Index records 69,329 different addresses. Our focus here is on coauthorship by researchers at different institutions. Clearly, much knowledge is exchanged at arm's length through reading of the open literature, and in some instances coauthorship may simply be offered as a quid pro quo for supplying reagents or resources, or as a means of settling disputes about priority. Nonetheless, we believe that coauthorship of papers primarily represents evidence of a significant, sustained, and productive interaction between researchers. There are also very substantial practical problems in analyzing citation patterns. We define a "coauthorship" as a listing of more than one address for a paper: a paper with six authors listing Pharmacorp, Pharmacorp, NIH, and Massachusetts Institute of Technology as addresses would generate three such coauthorships. We classified each address according to its type: SELF, university, NIH, public, private, nonprofit, hospital, and a residual category of miscellaneous, so that we were able to develop a complete picture of the coauthoring activity of each firm. Table 1 gives a brief definition of each type.
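
To make the counting rule concrete, the sketch below implements our reading of the definition: every address listed on a paper beyond the focal firm's own first address counts as one coauthorship, classified by institutional type. The keyword classifier and the example address strings are illustrative assumptions, not the authors' actual procedure.

```python
# A sketch of the coauthorship counting rule described in the text. The classifier keywords
# and example addresses are invented for illustration only.
from collections import Counter

def classify(address: str, focal_firm: str) -> str:
    a = address.lower()
    if focal_firm.lower() in a:
        return "SELF"
    if "nih" in a:
        return "NIH"
    if "univ" in a or "inst technol" in a or "med sch" in a:
        return "University"
    if "hosp" in a or "clin" in a:
        return "Hospital"
    if "inc" in a or "ltd" in a or "corp" in a:
        return "Private"
    return "Miscellaneous"

def count_coauthorships(addresses: list[str], focal_firm: str) -> Counter:
    """Drop one instance of the focal firm's own address; classify every remaining address."""
    remaining = list(addresses)
    for i, a in enumerate(remaining):
        if focal_firm.lower() in a.lower():
            del remaining[i]          # the firm's own listing is not a coauthorship
            break
    return Counter(classify(a, focal_firm) for a in remaining)

# The example from the text: four addresses generate three coauthorships (SELF, NIH, University).
addresses = ["Pharmacorp Res Labs", "Pharmacorp Res Labs", "NIH, Bethesda",
             "Massachusetts Inst Technol, Cambridge"]
print(count_coauthorships(addresses, "Pharmacorp"))
# Counter({'SELF': 1, 'NIH': 1, 'University': 1})
```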

These data on publications and coauthorship are supplemented by an extensive data set on R&D activity collected from the internal records of these 10 firms. This data set extends from 1965 to 1990 and includes discovery and development expenditures matched to a variety of measures of output including important patents, Investigational New Drugs, New Drug Approvals, sales, and market share. These data are described in more detail in previous work (16–18). Although for reasons of confidentiality we cannot describe the overall size or nature of the firms, we can say that they cover the range of major R&D-performing pharmaceutical manufacturers and that they include both American and European manufacturers. In aggregate, the firms in our sample account for approximately 28% of United States R&D and sales, and we believe that they are not markedly unrepresentative of the industry in terms of size or of technical and commercial performance.

Table 1. Definitions of institutional type

Type           Definition
SELF           "COMPANY X" in file obtained by searching SCI for "COMPANY X"
Hospital       Hospitals, clinics, treatment centers
NIH            Any of the National Institutes of Health
Public         Government-affiliated organizations, excluding NIH; e.g., National Labs, European Molecular Biology Lab
University     Universities and medical schools
Private        For-profit organizations, principally pharmaceutical and biomedical firms
Nonprofit      Nonprofit nongovernment organizations, e.g., Imperial Cancer Research Fund
Miscellaneous  Unclassified

SCI, Science Citation Index.

QUALITATIVE EVIDENCE: FIELD INTERVIEWS AND CASE STUDIES

Case Studies. Table 2 presents a preliminary summary of 15 of our 25 case histories of drug discovery. It should be noted immediately that this is a highly selective and not necessarily representative sample of new drugs introduced since 1970. There is also significant selection induced by the fact that many potentially important drugs arising from more recent discoveries are still in development. Bearing in mind these caveats, a number of conclusions can be drawn from Table 2. First, there is some support for the "linear" model. Publicly funded research appears to have been a critical contributor to the discovery of nearly all of these drugs, in the sense that publicly funded researchers made a majority of the upstream "enabling" breakthroughs, such as identifying the biological activity of new classes of compounds or elucidating fundamental metabolic processes that laid the foundation for the discovery of the new drug. On the other hand, publicly funded research appears to be directly responsible—in the sense that publicly funded researchers isolated, synthesized, or formulated the clinically effective compound, and obtained a patent on it—for the introduction into the marketplace of only 2 of these 15 drugs.

Second, there are very long lags between upstream "enabling discoveries" and downstream applied research. At least for these drugs, the average lag between the public sector's discovery of a specific piece of enabling knowledge and the identification and clinical development of a new drug appears to be quite long—in the neighborhood of 10–15 years. It seems clear that the returns to public sector research may only be realized after considerable delay, and that much modern publicly funded research has yet to have an impact in the form of new therapeutic agents.

Note also that though this very stark presentation of these case histories lends some support to a linear, dichotomized view of the relationship between the public and private sectors, it was also very clear from the (unreported) details of these case histories that the private sector does a considerable amount of basic science and that applied clinical research conducted by the public sector appears to have been at least as important as basic research in the discovery of some new agents.

Table 2. Lags in drug discovery and development

Drug         Date of key enabling    Public?   Date of synthesis    Public?   Date of market   Lag from enabling discovery
             scientific discovery              of major compound              introduction     to market introduction, yr
Captopril    1965                    Y         1977                 N         1981             16
Cimetidine   1948                    Y         1975                 N         1977             29
Cisplatin    1965                    Y         1967                 Y         1978             13
Cyclosporin                                    1972                 N         1983
EPO          1950                    Y         1985                 N         1989             39
Finasteride  1974                    Y         1986                 N         1992             18
Fluoxetine   1957                    Y         1970                 N         1987             30
Foscarnet    1924                    Y         1978                 Y         1991             67
Gemfibrozil                          N         1968                 N         1981
Lovastatin   1959                    Y         1980                 N         1987             28
Nifedipine                           N         1971                 N         1981
Omeprazole   1978                    N                                        1989             11
Ondansetron  1957                    Y         1983                 N         1991             34
Propranolol  1948                    Y         1964                 N         1967             19
Sumatriptan  1957                    Y         1988                 N         1992             35

Basic discoveries: 11 public, 3 private. Syntheses of major compound: 2 public, 12 private.
EPO, erythropoietin; Y, yes; N, no.

Field Interviews. The picture of the linear model in Table 2 was not supported by the findings of our field interviews. The notion that pharmaceutical research is a process in which the public sector funds basic research that is then transferred to a private sector that conducts the necessary applied research to translate it into products was rejected by most of our respondents. These industry experts painted a much more complex picture.

On the one hand, all interviewees reinforced conventional wisdom in stressing how critical publicly funded research was to the success of private research. They gave many examples of historical discoveries that could not have been made without knowledge of publicly funded research results and, although there have as yet been few major breakthroughs in medical treatment as a result of the revolution in molecular biology, contact with the public sector to stay current with the latest advances in cell biology and molecular physiology was viewed as a prerequisite of modern pharmaceutical research.

On the other hand, our respondents stressed the bidirectional, interactive nature of problem solving across the public and private sectors. They described a process in which key individuals, novel ideas, and novel compounds were continually exchanged in a reciprocal interaction characterized by very high levels of mutual trust. They suggested that the reciprocal nature of this process is partially a function of what Cohen and Levinthal (11) have called "investment in absorptive capacity." Major pharmaceutical firms conduct basic research both so that they can take advantage of work conducted in the public sector and so that they will have something to "trade" with leading-edge researchers. Investment in hard-to-appropriate basic research is probably also a function of the need to hire research scientists of the highest possible calibre. Such key, or "star," scientists are critical to modern research both because they are capable of very good research and because they greatly facilitate the process of keeping in touch with the rest of the biomedical community (19). However, it is very difficult to attract them to a private company unless they are permitted—even actively encouraged—to publish in the leading-edge journals and to stay current in their fields.

Several interviewees also raised another, deeply intriguing possibility. They suggested that contact with the public sector might also improve the nature of the problem-solving process within the firm, since contact with the public sector continually reinforced in private sector researchers the habits of intellectual curiosity and open exchange that may be fundamental to major advances in science.

Taken together, our interviews suggested that the public sector may play as important a role in improving the quality of the research process in the private sector as it does in generating specific pieces of useful basic knowledge.

QUANTITATIVE ANALYSIS

Patterns of Coauthorship. The descriptive statistics for our data on publication and coauthoring activity provide some preliminary results consistent with this more complex picture. Private sector scientists publish extensively—roughly three papers for every million dollars of R&D spending. Leading private sector researchers publish very heavily indeed, with the most productive researchers in our sample firms publishing more than 20 papers per year. These firms also exhibit the heavily skewed distribution of publications per researcher and the disproportionate share of "star" researchers characteristic of publicly funded research communities (20).

Researchers in these firms also coauthor extensively with researchers in the public sector, both in the United States and abroad. Tables 3 and 4 break down instances of coauthorship by each of the 10 firms in our sample, as well as for the NIH. After SELF (a private sector researcher coauthoring with other researchers working within the same firm), universities are by far the largest type of coauthoring institution, followed by hospitals. One curious result is the remarkably small number of coauthorships with the NIH. As the last row of Table 3 indicates, this appears not to be a sample selection problem: the breakdown of over 170,000 coauthorships by the NIH is not markedly different from the firms in our sample, with the great majority of coauthorships being with SELF and universities, and relatively few with private sector institutions. While many university researchers are supported by NIH grants and thus should perhaps be reclassified as NIH, it is still interesting that linkages between the private sector and the NIH are via this indirect channel.

Some interesting trends over time are apparent, both in the number of instances of coauthorship and in the mix across different types of institutions. While the number of papers published by the 10 firms in the sample tripled over the 15-year period, instances of coauthorship grew more than 4-fold.

Over time the fraction of coauthorships with universities rose steadily, mostly at the expense of SELF. No significant trends in the aggregate share of the other types of coauthorships are apparent.

Table 3. Patterns of coauthorship by type of coauthor and firm

Firm  SELF  Public  NIH   Hospital  University  Nonprofit  Private  Miscellaneous  Total
A     0.55  0.03    0.01  0.07      0.27        0.03       0.03     0.01           6,583
B     0.48  0.03    0.01  0.08      0.34        0.03       0.02     0.01           15,628
C     0.64  0.02    0.01  0.05      0.23        0.02       0.03     0.00           17,292
D     0.53  0.02    0.01  0.04      0.35        0.02       0.04     0.00           2,053
E     0.54  0.03    0.03  0.06      0.29        0.02       0.03     0.01           8,971
F     0.70  0.01    0.00  0.04      0.19        0.00       0.05     0.02           327
G     0.54  0.02    0.01  0.06      0.29        0.02       0.04     0.01           8,451
H     0.68  0.00    0.00  0.08      0.20        0.00       0.02     0.01           1,414
I     0.62  0.02    0.01  0.05      0.23        0.02       0.03     0.01           7,874
J     0.50  0.06    0.01  0.06      0.25        0.04       0.06     0.02           736
NIH   0.60  0.04    NA    0.04      0.25        0.03       0.02     0.01           170,014

Table entries are the fraction of instances each type of institution appears as an address of a coauthor on a paper published by each of the firms in the data set. The last column gives the number of instances of coauthorship for each firm. NA, not available.

Links to Public Sector Research and Own Research Productivity. These data on coauthoring document significant linkages between private sector research and "upstream" public sector activity. But the impact of such linkages is unclear. Does more participation in the wider scientific community through publication or coauthoring give a private sector firm a relative advantage in conducting research? As Table 3 indicates, firms show marked differences in both the number of coauthorships and the types of institutions they collaborate with. Formal tests strongly reject homogeneity across firms in the distribution of their coauthorships over TYPE, even after controlling for a time trend.

In prior work we found substantial and sustained variation across firms in research productivity, which we believe is driven to a great extent by differences in the ability of firms to access and use knowledge spillovers. We hypothesize that this ability is a function of both the effort expended on building such linkages and their "quality." Table 5 presents multinomial logit results from modelling firms' choice of TYPE of coauthor as a function of some of the characteristics that we have identified in previous work as being important determinants of research performance: the size of the firm's research effort and two variables that capture aspects of the firm's internal incentives and decision-making system. Compared with the reference category (coauthoring with a private sector firm), firms that are "pro-publication" in the sense of rewarding and promoting individuals based on their standing in the wider scientific community are more likely to coauthor with public institutions, nonprofits, and universities, whereas those that allocate R&D resources through "dictatorship" rather than peer review are slightly more likely to coauthor internally. Because our prior work suggests that those firms that are pro-publication and that do not use dictatorships to allocate research resources are more productive than their competitors, these results are consistent with the hypothesis that coauthoring behavior is significantly linked to important differences in the ways in which research is managed within the firm.
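
A minimal sketch of the kind of multinomial logit reported in Table 5, written with statsmodels. The synthetic data, the variable names (pro_publication, dictatorship, discovery_spend), and the data-generating process are our assumptions; only the structure of the model (institutional TYPE of coauthor as the outcome, "Private" as the reference category) follows the text.

```python
# Illustrative sketch, not the authors' code: multinomial logit for the choice of
# coauthor TYPE, estimated on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "coauthor_type": rng.choice(
        ["Private", "Hospital", "Nonprofit", "Public", "SELF", "University"], size=n),
    "year": rng.integers(1980, 1989, size=n),         # time trend, 1980-1988
    "pro_publication": rng.normal(size=n),            # index of publication-based rewards
    "dictatorship": rng.normal(size=n),                # index of centralized R&D decisions
    "discovery_spend": rng.uniform(10, 200, size=n),   # firm drug-discovery effort, $m
})

# Putting "Private" first makes it category 0, which MNLogit treats as the base
# (reference) category, matching the table's setup.
y = pd.Categorical(df["coauthor_type"],
                   categories=["Private", "Hospital", "Nonprofit",
                               "Public", "SELF", "University"])
X = sm.add_constant(df[["year", "pro_publication", "dictatorship", "discovery_spend"]])

fit = sm.MNLogit(y.codes, X).fit(disp=False)
print(fit.summary())
```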

Table 6 presents results from regressing a crude measure of research productivity (important patents per research dollar, where "importance" is defined by the fact that a patent was granted in two of three major world markets—Japan, the United States, and Europe) on two variables derived from the bibliographic data: the fraction of coauthorships with universities, which can be thought of as a proxy for the degree to which the firm is linked to the public sector, and the fraction of the firm's publications attributable to the top 10% of its scientists ranked by number of publications, which proxies for the presence of a "star" system within the firm. Firm dummies, a time trend, and total publications per research dollar are also included as control variables. The fraction of coauthorships with universities is positive and significant in all of these regressions, even controlling for firm fixed effects and "propensity to publish." The presence of a star system also correlates positively and significantly with research productivity.

Table 4. Patterns of coauthorship by type of coauthor and year

Year   SELF    Public  NIH   University  Hospital  Nonprofit  Private  Miscellaneous  Total
1980   0.69    0.02    0.02  0.20        0.04      0.01       0.01     0.01           2,050
1981   0.67    0.02    0.01  0.21        0.05      0.02       0.02     0.00           2,200
1982   0.62    0.02    0.02  0.26        0.05      0.02       0.02     0.01           2,702
1983   0.62    0.02    0.02  0.23        0.07      0.02       0.02     0.01           2,992
1984   0.58    0.03    0.01  0.25        0.08      0.03       0.02     0.01           3,023
1985   0.60    0.02    0.01  0.25        0.07      0.02       0.02     0.00           3,834
1986   0.57    0.02    0.01  0.26        0.08      0.02       0.03     0.01           3,928
1987   0.57    0.02    0.01  0.28        0.07      0.02       0.02     0.01           4,535
1988   0.58    0.02    0.02  0.27        0.06      0.02       0.02     0.01           4,312
1989   0.55    0.02    0.01  0.30        0.07      0.02       0.02     0.01           4,032
1990   0.56    0.02    0.02  0.28        0.07      0.02       0.03     0.01           5,147
1991   0.53    0.03    0.01  0.30        0.06      0.02       0.04     0.01           6,260
1992   0.54    0.03    0.01  0.29        0.06      0.03       0.04     0.01           7,611
1993   0.53    0.03    0.01  0.29        0.06      0.02       0.04     0.01           8,293
1994   0.53    0.03    0.01  0.30        0.06      0.03       0.03     0.01           8,410
Total  39,175  1,780   922   19,074      4,345     1,541      2,031    483            69,239

Table entries are the fraction of instances each type of institution appears that year as an address of a coauthor on a paper published by one of the firms in the data set. The last column gives the number of instances of coauthorship that year. The last row gives totals by type of coauthor over all years.

Table 5. Multinomial logit coefficients

Category: Type of        Time trend   Degree to which firm   Degree to which R&D        Size of firm's drug       Constant
coauthor institution                  is pro-publication     decisions are made by      discovery effort, $m
                                                             a single individual
Hospital                 0.046*       0.071                  0.029                      –0.008*                   –2.734
                         (0.021)      (0.045)                (0.038)                    (0.002)                   (1.783)
Nonprofit                0.367        0.207*                 0.007                      –0.008*                   –3.640
                         (0.027)      (0.059)                (0.048)                    (0.002)                   (2.252)
Public, including NIH    –0.058*      0.363*                 –0.077**                   –0.005*                   4.494*
                         (0.023)      (0.055)                (0.043)                    (0.002)                   (1.922)
SELF                     –0.041*      –0.018                 0.061**                    –0.001                    6.761
                         (0.018)      (0.039)                (0.034)                    (0.001)                   (1.557)
University               0.021        0.104*                 0.039                      –0.007*                   0.489
                         (0.019)      (0.041)                (0.034)                    (0.001)                   (1.598)

Dependent variable: Type of coauthor institution. 1980–1988 data: 26,501 observations. Reference category: Private. Standard errors are in parentheses. *, Significant at 5% level; **, significant at 10% level.

We hesitate to over-interpret these results: confounding with aggregate time trends, the small sample imposed by incomplete data, difficulties with lags, causality, and a variety of other measurement problems discussed in previous papers mean that they are not as statistically robust as we would prefer. Furthermore, they are offered as descriptive results rather than tests of an underlying behavioral model. Nonetheless, they offer support for the hypothesis that the ability to access and interact with public sector basic research activity is an important determinant of the productivity of downstream private sector research.

Table 6. Determinants of patent output at the firm level

                                             Model 1            Model 2            Model 3            Model 4
Intercept                                    5.159* (1.032)     5.292* (1.042)     4.252* (0.859)     4.037* (0.839)
Percent of coauthorships with universities   7.340* (1.611)     6.897* (1.680)     5.137* (1.789)     4.493* (1.759)
Papers per research dollar                                      0.005 (0.006)                         0.061* (0.026)
Firm dummies                                                                       Yes                Yes
Time trend                                   –0.227* (0.045)    –0.231* (0.045)    –0.203** (0.038)   –0.211* (0.037)
RMSE                                         0.987              0.987              0.777              0.754
R-squared                                    0.293              0.301              0.611              0.638

Intercept                                    4.043* (1.358)     2.380* (1.198)     2.551* (1.146)     2.515* (1.118)
Percent of publications by top 10 authors    2.052 (1.489)      3.897* (1.717)     3.358* (1.646)     3.236* (1.613)
Percent of coauthorships with universities                                         4.870* (1.749)     4.305* (1.726)
Papers per research dollar                                                                            0.056* (0.002)
Firm dummies                                                    Yes                Yes                Yes
Time trend                                   –0.142* (0.048)    –0.129* (0.036)    –0.179* (0.039)    –0.187* (0.038)
RMSE                                         1.093              0.792              0.757              0.738
R-squared                                    0.132              0.595              0.635              0.658

Ordinary least-squares regression. Dependent variable: Important patents per research dollar. 1980–1988 data, 84 observations. Standard errors are in parentheses. RMSE, root mean squared error. *, Significant at 5% level. **, Significant at 10% level.
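As a hedged illustration (not the authors' code), the Table 6 specifications amount to an OLS regression of important patents per research dollar on the bibliometric shares, a time trend, and firm dummies; the data file and column names below are hypothetical placeholders.

```python
# Sketch of a Table 6-style model: OLS with firm dummies (fixed effects), a time
# trend, and the bibliometric proxies. All variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("firm_year_panel.csv")  # hypothetical firm-by-year panel

fit = smf.ols(
    "important_patents_per_rd ~ univ_coauthor_share + top10_author_share"
    " + papers_per_rd + year_trend + C(firm_id)",
    data=panel,
).fit()
print(fit.summary())
```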

CONCLUSIONS AND IMPLICATIONS FOR FURTHER RESEARCH

The simple linear model of the relationship between public and private research may be misleading. Information exchange between the two sectors appears to be very much bidirectional, with extensive coauthoring between researchers in pharmaceutical firms and researchers in the public sector across a wide range of both institutions and nationalities. Our preliminary results suggest that participating in this exchange may be an important determinant of private sector research productivity: The relationship between public and private sectors appears to involve much more than the simple, costless transfer of basic knowledge from publicly funded institutions to profit-oriented firms.

Without further work exploring the social rate of return to research it is, of course, difficult to draw conclusions for public policy from these results. However, they do suggest that any estimate of the rate of return to public research, at least in this industry, must take account of this complex structure. They are also consistent with the hypothesis that public policy proposals that curtail the flow of knowledge between public and private firms in the name of preserving the appropriability of public research may be counterproductive.

We would like to express our appreciation to those firms and individuals who generously contributed data and time to this study, and to Gary Brackenridge and Nori Nadzri, who provided exceptional research assistance. Lynne Zucker and Michael Darby provided many helpful comments and suggestions. This research was funded by the Sloan Foundation, the University of British Columbia Entrepreneurship Research Alliance (Social Sciences and Humanities Research Council of Canada grant 412–93–0005), and four pharmaceutical companies. Their support is gratefully acknowledged.

1. Arrow, K. (1962) in The Rate and Direction of Inventive Activity, ed. Nelson, R. (Princeton Univ. Press, Princeton), pp. 609–619.
2. Zucker, L. (1991) in Research in Sociology of Organizations, ed. Barley, S. & Tolbert, P. (JAI, Greenwich, CT), Vol. 8, pp. 157–189.
3. Merton, R.K. (1973) in The Sociology of Science: Theoretical and Empirical Investigations, ed. Storer, N.W. (Univ. Chicago Press, Chicago), pp. 439–460.
4. Dasgupta, P. & David, P.A. (1987) in Arrow and the Ascent of Modern Economic Theory, ed. Feiwel, G.R. (N.Y. Univ. Press, New York), pp. 519–542.
5. Dasgupta, P. & David, P.A. (1994) Res. Policy 23, 487–521.
6. Henderson, R., Jaffe, A. & Trajtenberg, M. (1994) Universities as a Source of Commercial Technology: A Detailed Analysis of University Patenting, 1965–1988, National Bureau of Economic Research Working Paper No. 5068 (Natl. Bureau Econ. Res., Cambridge, MA).


7. Jones, C. & Williams, J. (1995) Too Much of a Good Thing? The Economics of Investment in R&D, Finance and Economics Discussion Series of the Division of Research and Statistics, Federal Reserve Board, Working Paper No. 95–39 (Federal Reserve Board, Washington, DC).
8. Mansfield, E. (1991) Res. Policy 20, 1–12.
9. Griliches, Z. (1979) Bell J. Econ. 10, 92–116.
10. Griliches, Z. (1994) Am. Econ. Rev. 84, 1–23.
11. Cohen, W.M. & Levinthal, D.A. (1989) Econ. J. 99, 569–596.
12. Gambardella, A. (1992) Res. Policy 21, 1–17.
13. Wurtman, R. & Bettiker, R. (1994) Neurobiol. Aging 15, S1–S3.
14. Ward, M. & Dranove, D. (1995) Econ. Inquiry 33, 1–18.
15. Koenig, M. & Gans, D. (1975) Res. Policy 4, 331–349.
16. Cockburn, I. & Henderson, R. (1994) J. Econ. Manage. Strategy 3, 481–519.
17. Henderson, R. & Cockburn, I. (1994) Strategic Manage. J. 15, 63–84.
18. Henderson, R. & Cockburn, I. (1995) RAND J. Econ. 27 (1), 32–59.
19. Zucker, L., Darby, M. & Armstrong, J. (1994) Intellectual Capital and the Firm: The Technology of Geographically Localized Knowledge Spillovers, National Bureau of Economic Research Working Paper No. 4946 (Natl. Bureau Econ. Res., Cambridge, MA).
20. David, P.A. (1994) in Economics of Technology, ed. Granstrand, O. (North-Holland, Amsterdam), pp. 65–89.


This paper was presented at a colloquium entitled "Science, Technology, and the Economy," organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

Environmental change and hedonic cost functions for automobiles

STEVEN BERRYa, SAMUEL KORTUMb, AND ARIEL PAKESa

aDepartment of Economics, Yale University, New Haven, CT 06520; and bDepartment of Economics, Boston University, Boston, MA 02215

ABSTRACT This paper focuses on how changes in the economic and regulatory environment have affected production costs and product characteristics in the automobile industry. We estimate "hedonic cost functions" that relate product-level costs to their characteristics. Then we examine how this cost surface has changed over time and how these changes relate to changes in gas prices and in emission standard regulations. We also briefly consider the related questions of how changes in automobile characteristics, and in the rate of patenting, are related to regulations and gas prices.

The automobile industry is one of this country's largest manufacturing industries and has long been subject both to economic regulation and to pressure from changing economic conditions. These pressures were particularly striking in the 1970s and 1980s. The United States Congress passed legislation to regulate automotive emissions, and throughout the period emissions standards were tightened. This period also witnessed two sharp increases in the price of gasoline (Fig. 1).

There is a large literature detailing the industry's response to the changes in both emissions standards and in gas prices (e.g., refs. 1–6). We add to this literature by considering how these changes have altered production costs at the level of the individual production unit, the automobile assembly plant. We also note that when we combine our results with data on the evolution of automobile characteristics and patent applications, we find evidence that the changing environment induced fuel and emission saving technological change.

This paper is organized as follows. In the next section, we review a method we have developed for estimating production costs as a function of time-varying factors and of the characteristics of the product. Then the data set, constructed by merging several existing product-level data sets with confidential production information from the United States Bureau of the Census's Longitudinal Research Data file, is described. Next we present estimates of the parameters defining the hedonic marginal cost function, and consider how this function has changed over time. The final two sections integrate data on movements in an index of the miles per gallon (mpg) of cars in given horsepower-weight classes, and in applications in relevant patent classes, into the analysis.

ESTIMATING A HEDONIC COST FUNCTION

Many, if not most, markets feature products that are differentiated in some respect. However, most cost function estimates assume homogeneous products. There are good reasons for this, chief among them the frequent lack of cost data at the product level. However, important biases may result when product differentiation is ignored. In particular, changes in costs caused by changes in product characteristics may be misclassified as changes in productivity. This issue is especially important for our study, because product characteristics are changing very rapidly during our period of analysis (e.g., Table 1 in refs. 7 or 8).

To get around this problem, this study combines plant-level cost data and information on which products were produced at each plant together with a model of the relationship between production costs and product characteristics. We used the map between plants and the products they produce to work out the implications of our model for plant level costs, and then fit those implications to the plant level cost data. The fact that each plant produces only a few products facilitates our task.

Note that although we have plant level information, we still only have a limited number of observations per product. Thus, it is not possible to estimate separate cost functions for each product. Our model follows a long tradition in treating products as bundles of characteristics (see ref. 9) and then modeling demand and cost as functions of these characteristics. As in homogeneous product models, the model also allows costs to depend on output quantities and on input prices. We call our cost function a hedonic cost function because it is the production counterpart of the hedonic price function introduced by Court (10) and revived by Griliches (11).

Hedonic cost functions of this sort have been estimated before using different assumptions and/or different types of data than those used here. For example, refs. 7, 12, and 13 all make assumptions on the nature of equilibrium and on the demand system, which enable them to use data on price, quantity, and product characteristics to back out estimates of the hedonic cost function without ever actually using cost data. This, however, is a rather indirect way of estimating the hedonic cost function, which depends on a host of auxiliary assumptions and, partly as a result, often runs into empirical problems (e.g., ref. 7).

Friedlaender et al. (14) (see also ref. 6) make use of firm level cost data and a multi-product production function framework to allow firm costs to depend on a "relatively small number of generic product types" (p. 4). Although their goal was much the same as ours, the data at their disposal were far more limited.

In on-going work we consider possible structures for hedonic cost functions. There, differences in product characteristics generate shifts in productivity and, hence, shifts in measured input demands. That work adds disturbances to this framework and aggregates the resulting factor demand equations into a "hedonic cost function."

We focus here on estimates of the materials demand equation, leaving the input demand equations for labor and capital for later work. There are several reasons for our focus on materials costs. First, as shown below, our data, which are for auto assembly plants, indicate that most costs are materials costs. Second, of the three inputs that we observe, materials might most plausibly be treated in a static cost-minimization framework. Third, we find that our preliminary results for materials are fairly easy to interpret, whereas those for labor and capital present some unresolved puzzles.

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. §1734 solely to indicate this fact.


Of course, we may discover that the reasons for the problems in the labor and capital equations require us also to modify the materials equation, so we continue to explore other approaches in our on-going research.

FIG. 1. Sources of change in the auto industry. Shows plots of emission standards and gas prices against time.

The materials demand equation that we estimate for automobile model j produced at plant p in time period t has several components. In our companion paper we discuss alternative specifications for these components, but here we only provide some intuition for the simple functional form that we use.

Because labor and capital may be subject to long-term adjustment processes in this industry, and a static cost-minimizing assumption for them might therefore be inappropriate, we consider a production function that is conditional on an arbitrary index of labor and capital. This index, which may differ with both product characteristics, to be denoted by x, and with time, t, will be denoted by G(L, K, x, t). Given this index, production is assumed to be a fixed coefficient times materials use.

The demand for materials, M, is then a constant coefficient times output. That coefficient, to be denoted by c(x_j, ε_pt, β), is a function of product characteristics (x_j), a plant-specific productivity disturbance (ε_pt), and a vector of parameters to be estimated (β). In this paper, we consider only linear input-output coefficients, i.e.,

c(x_j, ε_pt, β) = x_j β + ε_pt. [1]

Finally, we allow for a proportional time-specific productivity shock, δ_t. This term captures changes in underlying technology and, possibly, in the regulatory environment. (In more complicated specifications it can also capture changes in input prices that result in input substitution.) The production function is then

Q_jpt = min{ G(L_pt, K_pt, x_j, t), M_jpt / [δ_t c(x_j, ε_pt, β)] }. [2]

Then, the demand for materials that arises from the variable cost of producing product j at plant p at time t is

M_jpt = δ_t c(x_j, ε_pt, β) Q_jpt. [3]

While we assume that average variable costs are constant (i.e., that the variable portion of input demand is linear in output), we do allow for increasing returns via a fixed component of cost. We denote the fixed materials requirement as µ. There may also be some fixed cost to producing more than one product at a plant. Specifically, let there be a set-up cost of ∆ for each product produced at a plant; we might think of this as a model change-over cost.c Let J(p) be the set of models produced by plant p and J_p be the number of them. Then total factor usage is given by

M_pt = Σ_{j∈J(p)} M_jpt + µ + ∆ J_p, [4]

with M_jpt as defined in Eq. 3. If we divide Eq. 4 through by plant output Q_pt = Σ_{j∈J(p)} Q_jpt and rearrange, we obtain the equation we take to the data,

M_pt / Q_pt = δ_t c(x̄_pt, ε_pt, β) + µ (1/Q_pt) + ∆ (J_p/Q_pt), [5]

where x̄_pt is the weighted average

x̄_pt = Σ_{j∈J(p)} (Q_jpt / Q_pt) x_j. [6]

Except for the proportional time-dummies, δ, Eq. 5 could be estimated by ordinary least squares (under appropriate assumptions on ε).d With the proportional δ, the equation is still easy to estimate by non-linear least squares.
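As a hedged sketch of that non-linear least squares step (under our own assumptions about data layout, not the authors' code), Eq. 5 could be fit roughly as follows; the file and column names are hypothetical.

```python
# Rough sketch of estimating Eq. 5 by non-linear least squares. One row per
# plant-year, with materials cost per car (m_per_q), plant output Q, number of
# models J, and output-weighted average characteristics xbar_*.
import numpy as np
import pandas as pd
from scipy.optimize import least_squares

df = pd.read_csv("plant_year.csv")       # hypothetical plant-year data set
years = np.sort(df["year"].unique())
X = df[["xbar_const", "xbar_ac", "xbar_mpg", "xbar_hp", "xbar_wt"]].to_numpy()
k = X.shape[1]

def residuals(theta):
    beta = theta[:k]                     # characteristic coefficients
    mu, chg = theta[k], theta[k + 1]     # fixed cost and model-changeover cost
    # One ln(delta_t) per year after the base year; the base-year delta is 1.
    log_delta = dict(zip(years[1:], theta[k + 2:]))
    d = np.array([np.exp(log_delta.get(y, 0.0)) for y in df["year"]])
    fitted = d * (X @ beta) + mu / df["Q"] + chg * df["J"] / df["Q"]
    return np.asarray(df["m_per_q"] - fitted)

theta0 = np.zeros(k + 2 + len(years) - 1)
fit = least_squares(residuals, theta0)
print(fit.x)  # beta, mu, changeover cost, then ln(delta_t) for the later years
```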

c From visits to assembly plants, we have learned that a fairly wide variety of products can be produced in a single assembly without large apparent costs. Therefore, we would not be surprised to find a small model changeover cost, particularly in materials.
d In the empirical work, we also experimented with linear time dummies and did not find much difference.
e For example, firm headquarters could allocate production to plants before they learn the plant/time productivity shock ε. This assumption is particularly unconvincing if the εs are, as seems likely, serially correlated. Possible instruments for the right-hand-side variables include the unweighted average xs and interactions between product characteristics and macro-economic variables. The use of instruments becomes even more relevant once the possibility of increasing returns introduces a more direct effect of output.
f In particular, we do not examine the extent to which vertical integration differs among plants, and we learned from our plant visits that there are differences in the extent to which processes like stamping and wire system assembly are done in different assembly plants. Unfortunately, we do not have information on the prices that guide these substitution decisions.


Our results are preliminary in that they ignore a number of important economic and econometric issues. First, the plant and product outputs are used as weights in the construction of the right-hand-side variables in Eq. 5, and we have not accounted for the possible econometric endogeneity of output. There are assumptions that would justify treating output as exogenous, but they are not very convincing.e In calculating standard errors we ignore heteroskedasticity and the likely correlation of ε_pt across plants (due to, say, omitted product characteristics and the fact that the same products are produced at more than one plant) and over time (due to serially correlated plant productivities). Our functional forms allow for fixed costs, but no other form of increasing returns. Finally, we do not engage in a more detailed exploration of substitution patterns between materials and labor or capital.f

Each of these issues is important and worthy of further exploration. In our on-going research we are examining the robustness of our results, and extending our models where it seems necessary.

THE DATA

We constructed our data set by merging data on the characteristics of automobile models with United States Bureau of the Census data on inputs and costs at the plants at which those models were assembled. The sources for most of the characteristics data were annual issues of the Automotive News Market Data Book (Crain Auto Group).g To determine which models were assembled at which plants we used data from annual issues of Wards Automotive Yearbook on assembly plant sourcing.h For each model year Wards publishes the quantity assembled of each model at each assembly plant. Because we did not have good data on the characteristics of trucks, we eliminated plants that assembled vans and trucks. We also eliminated plants that produced a significant number of automobile parts for final sale, because we had no way to separate out the cost of producing those parts.i

The Bureau of the Census data are from the Longitudinal Research Data File, which, in turn, is constructed from information provided to the Annual Survey of Manufacturing (ASM) in non-census years, and information provided to the Census of Manufacturing in census years (see ref. 15 for more information on the Longitudinal Research Data File). The ASM does not include quantity data, although the quinquennial census does. All of the data (from both the ASM and the census) are on a calendar year basis.

Although the census data on costs are on a calendar year basis, the Ward's data on quantities and the Automotive News data on characteristics are on a model year basis (and since the model year typically begins in August of the previous year, the number of vehicles assembled in a model year can differ significantly from those assembled in a calendar year). Thus, we needed a way of obtaining annual calendar year data on quantities.

Bresnahan and Ramey (16) used data on posted line speed, number of shifts per day, regular hours, and overtime hours at weekly intervals from issues of Automotive News to construct weekly posted output for most United States assembly plants from 1972 to 1982. We used their data to adjust the Ward's data to a calendar year basis.j We note that it is the absence of these data for the years 1984–1990 that limits our analysis to the years 1972–1982.

Table 1. Characteristics of the sample

Year    No. of plants   Average quantity   Average no. of models per plant
1972    20              202,000            3.4
1973    21              196,000            2.4
1974    20              146,000            2.5
1975    21              130,000            2.7
1976    20              165,000            2.6
1977    19              198,000            2.6
1978    20              206,000            2.3
1979    21              184,000            2.3
1980*   20
1981    22              155,000            2.7
1982    23              134,000            2.9

*Not published; census confidentiality.
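The calendar-year adjustment described in the text (allocating Ward's model-year quantities across weeks using the Bresnahan–Ramey posted-output series, then summing by calendar year) might look roughly like the following sketch; the file layout and column names are hypothetical, and the model year is assumed to run from August through July.

```python
# Hedged sketch of converting model-year quantities to calendar-year quantities.
# wards: plant, model, model_year, qty; weekly: plant, week_ending, posted_output.
import pandas as pd

wards = pd.read_csv("wards_model_year.csv")
weekly = pd.read_csv("weekly_posted.csv")

weekly["week_ending"] = pd.to_datetime(weekly["week_ending"])
# Assume model year t runs from August of year t-1 through July of year t.
weekly["model_year"] = weekly["week_ending"].dt.year + (weekly["week_ending"].dt.month >= 8)

# Each plant-week's share of that plant's model-year posted output.
tot = weekly.groupby(["plant", "model_year"])["posted_output"].transform("sum")
weekly["share"] = weekly["posted_output"] / tot

# Spread the model-year quantities across weeks, then sum by calendar year.
merged = wards.merge(weekly, on=["plant", "model_year"])
merged["weekly_qty"] = merged["qty"] * merged["share"]
calendar = (merged.assign(cal_year=merged["week_ending"].dt.year)
                  .groupby(["plant", "cal_year"])["weekly_qty"].sum()
                  .reset_index(name="calendar_qty"))
print(calendar.head())
```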

Table 1 provides characteristics of our sample. It covers about 50% of the total United States production of automobiles, with higher coverage at the end of the sample. The low coverage stems from our decision to drop the large number of plants producing both automobiles and light trucks or vans. There are about 20 active automobile assembly plants each year in our sample, and 29 plants that were active at some point during our sample period.k These plants are quite large. Depending on the year, the average plant assembles 130,000–202,000 automobiles and employs 2,814–4,446 workers (about 85% of them production workers). Note that the average plant produces 2.4–3.4 distinct models each year.

Table 2 provides annual information on the average (across plants) materials input per vehicle assembled and the unit values of these vehicles. The materials series is constructed as the costs of parts and materials (engines, transmissions, stamped sheet metal, etc.), as well as energy costs, all deflated by a price index for materials purchased by SIC 3711 (Motor Vehicles and Car Bodies), constructed by Wayne Gray and Eric Bartelsman (see the National Bureau of Economic Research data base).l Because we use an industry and factor specific price deflator, we interpret the materials series as an index of real materials input. The unit values are the average of the per vehicle price received by the plants for the vehicles assembled by those plants, deflated by the gross domestic product deflator.

This measure of materials input represents the lion's share of the total cost of the inputs used by these assembly plants; on average, the share of materials in total costs was about 85%, with most of the balance being labor cost.m Material costs per vehicle were fairly constant during the first half of the 1970s, but moved upwards after 1975, with a sharp jump in 1982. As one might expect, these cost trends were mirrored in the unit value numbers.

Of course the characteristics of the vehicles produced also changed over this period. Annual averages for many of these characteristics are provided, for example, in ref. 7, although

g The initial characteristics data base was graciously provided by Ernie Berndt. It was then updated and extended first by Berry et al. (7) and then by us (see below). More detail on this data base can be found in ref. 7.
h An initial data set based on Wards Automotive Yearbook was graciously provided to us by Joshua Haimson, and we simply updated and extended it.
i In the census years (1972, 1977, 1982) we can look at the value of shipments by type of product. Automobiles are over 99% of the value of shipments for all but one of our plants. Other products made up about 4% of the value of shipments for that plant in 1982.
j These data were graciously provided to us by Valerie Ramey. We use them to allocate the Ward's data across weeks. We then aggregate the weekly data to the calendar year quantities needed for the cost analysis.
k We did not use the information from the first year of a plant that started up during our sample period, or the information from the last year of a plant that exited during this period. This was to avoid modeling any additional costs to opening up or shutting down a plant. Of the 29 plants that operated at some point in our 10-year period, 6 exited before 1983.
l Energy costs are a very small fraction of material costs, under 1%, throughout the period.


those numbers are for the universe of cars sold, rather than for our production sample. In our sample, the number of cars with air conditioning (AC) as standard equipment begins at near zero at the beginning of the sample and increases to almost 15% by 1982. Average mpg, discussed further below, increases from 14 to about 23, while average horsepower declines from about 148 to near 100. The weight of cars also decreases from about 3800 to 2800 pounds. Note that the fact that these large changes in x characteristics occurred implies that we should not interpret the increase in the observed production costs (or in observed price) per vehicle as an increase in the cost or price of a "constant quality" vehicle.

Table 2. Materials use and unit values

Year    Cost of materials   Unit value   Cost share of materials
1972    6,444               8,901        0.86
1973    6,636               8,847        0.85
1974    6,512               8,727        0.84
1975    6,316               8,652        0.85
1976    6,470               9,009        0.86
1977    6,757               9,320        0.87
1978    6,745               9,286        0.86
1979    6,694               9,724        0.85
1980*
1981    6,879               9,438        0.84
1982    7,493               10,672       0.85

*Not published; census confidentiality.

As noted, in addition to characteristics valued directly by the consumer (such as horsepower, size, or mpg), we are also interested in how the technological characteristics of a car (particularly those that affected emissions and fuel efficiency) changed over time and affected costs. In our sample period, the automobile companies adopted a number of new technologies in response to both lower emission standards and higher gas prices. Bresnahan and Yao (17) have collected detailed data on which cars used which technology.n In particular, using the Environmental Protection Agency's Test Car List, they tracked usage of five technologies: no special technology (a baseline), oxidation catalysts (i.e., catalytic converters), three-way catalysts, three-way closed-loop catalysts, and fuel injection. Census confidentiality requirements prohibit us from presenting the proportion of vehicles in our sample using each of these technologies, so Table 3 uses publicly available data to compute the fraction of car models built by United States producers using each technology in each model year. The baseline technology was used in virtually all models until the 1975 model year, at which time most models shifted to catalytic converters. The catalytic converters began to be displaced by the more modern technologies in the 1980 model year, and by 1981 they had been displaced in over 80% of the models.

RESULTS FROM THE PRODUCTION DATA

Table 4 presents baseline estimates of the materials demand equation. The right-hand-side variables include: the term 1/Q, whose coefficient determines fixed costs; the term J/Q, whose coefficient determines model changeover costs; the product characteristics (the x variables); and, in the right-most specification, the time-specific parameters (the δ_t), which shift the variable component of the materials cost over time (see Eq. 5).

In many studies, the parameters on the x variables would be the primary focus of analysis. However, in the present context they are largely included as a set of controls that allow us to get more accurate estimates of the shifts in material costs over time (i.e., of the δ_t). The difference between the two sets of results presented in Table 4 is that the second set includes these δ_t, whereas the first set does not. The sum of squared residuals reported at the bottom of the table indicates that these time effects are jointly significant at any reasonable level of significance.

The estimates of the materials demand equation do not provide a sharp indication of the importance of model changeover costs, or of fixed costs (at least after allowing for the time effects), or of a constant cost that is independent of the characteristics of the car. However, most of the product characteristics have parameter estimates that are economically and statistically significant. For example, the coefficients on AC indicate that having AC as standard equipment increases per car materials costs by about $2600 (in the specification with the δ_t) and by about $3600 (in the specification without). We think that the AC dummy variable proxies for a package of "luxury standard equipment," so the large figures here are not surprising. A 1 mpg increase in fuel efficiency is estimated to raise costs in the range of $80–$160, whereas a 1 pound increase in weight increases costs by around $1.30–$1.50.

Table 4 presents estimates of ln(δ), not levels, so the coefficients have the approximate interpretation of percentage changes over the base year of 1972. In the early years these coefficients are not significantly different from zero, but they become significant in 1977 and stay so. There appears to be a clear upward trend, with apparent jumps in 1977 and 1980.

We now come back to the question of how well cost changes correlate with changes in emissions standards. Emissions requirements took two jumps, one in 1975 (when they were tightened by about 40%) and one in 1980, when an even greater tightening occurred. Table 4 finds a jump in production costs in 1980, but not in 1975.

One possible explanation is that early adjustments to the fuel emissions requirement were crude, but relatively inexpensive, and came largely at the cost of "performance" (a characteristic that may not be adequately captured by our observed characteristics). Later technologies, such as fuel injection, may have been more costly in dollar terms, but less so in terms of performance.

We use the technology variables described in Table 3 to study the effect of technology in more detail. These variables are potentially interesting because, although there is no cross-sectional variation in fuel efficiency and emissions requirements, there is cross-sectional variation in technology. Thus, they might let us differentiate between the impacts on costs of other time specific variables (e.g., input prices), and the new technologies that were at least partially introduced as responses to the emissions requirements. In particular, we would like to know if the technology variables can help to explain the increasing series of time dummies found in Table 4.

Let τ_jt be a vector of indicator variables for the type of technology used in model j at time t. We introduce these technology indicators as a further proportional shift term in the estimation equation. In particular, we alter Eq. 3 so that the variable portion of the materials demand for product j at time t is

M_jpt = δ_t exp(τ_jt γ) c(x_j, ε_pt, β) Q_jpt, [7]

where γ is the vector of parameters giving the proportionate shift in marginal costs associated with the different technologies. Just as one of the δs is normalized to one, so we normalize the γ associated with the baseline technology to

m Total assembly costs are calculated as the sum of materials costs (as discussed above), labor costs, and capital costs. Labor costs, which were about 12.6% of the total, are reported salaries and wages of production and nonproduction workers plus supplementary labor costs. We proxy capital costs as 15% of the beginning-of-year building plus machinery assets (at book value).
n We thank Tim Bresnahan for generously providing this data. We have since updated it (using the EPA Test Car Lists) for model years 1982 and 1983, as well as for many of the models in 1981.


zero. Note that we can separately identify the δs and the γs because of the cross-sectional variation in technologies.

Table 3. Technology variables (proportion of sample)

Model year   Baseline   Catalytic converter   3-way converter   Closed loop   Fuel injection
1972         1          0                     0                 0             0
1973         1          0                     0                 0             0
1974         1          0                     0                 0             0
1975         0.15       0.84                  0                 0             0.01
1976         0.19       0.80                  0                 0             0.01
1977         0.09       0.89                  0                 0             0.02
1978         0.03       0.95                  0                 0             0.02
1979         0          0.98                  0                 0.01          0.02
1980         0          0.86                  0                 0.08          0.06
1981         0          0.18                  0.20              0.59          0.03
1982         0          0.16                  0.38              0.44          0.02
1983         0          0.05                  0.31              0.37          0.27

Table 5 gives some results from estimating the materials equation with the technology variables included. The first specification is exactly as in Eq. 7. From prior knowledge and from this first regression, we believe that simple catalytic converters may be relatively cheap, whereas the others may be more expensive. Therefore, as a second specification, we constrain the γ for catalytic converters (technology 1) to be equal to the baseline technology.

We see that the technology parameters, the γs, generally have the expected sign and pattern. In the first specification, the γ associated with simple catalytic converters is estimated at about zero, whereas the others are positive, though not statistically significantly so, and increasing as the technology becomes more complex. In the second specification (with γ1 ≡ 0) the coefficients on technology are individually significant and have the anticipated, increasing pattern.

Recall from Table 3 that simple catalytic converters began to be used at the time of the first tightening of emissions standards, and were used almost exclusively between 1975 and 1979 (inclusive). In 1980, when the emissions standards were tightened for the second time, the share of catalytic converters began to fall, and by 1981 the simple catalytic converter technology had been abandoned by over 80% of the models.

Thus, the small cost coefficient on catalytic converters is consistent with the small estimate of the change in production costs following the first tightening in emissions requirements found in Table 4, whereas the larger cost effects of the later technologies help explain Table 4's estimated increase in production costs following the second tightening of the emissions standards in 1980. Indeed, once we allow for the technology classes as in Table 5, the time effects (the δs) are only marginally significant, and there is no longer a distinct upward trend in their values.

As an outside check on our results, we note that the Bureau of Labor Statistics publishes an adjustment to the vehicle component of the Consumer Price Index for the costs of meeting emissions standards [the information is obtained from questionnaires to plant managers; see the Report on Quality Changes for Model Passenger Cars (EPA), various years]. After taking out their adjustments for retail margins and deflating their series, we find that it shows a sum total of $71 in emissions adjustment costs between 1971 and 1974 and then an increment of $176 in 1975. The Bureau of Labor Statistics' series then increases by only $56 between 1975 and 1979, but jumps by $632 between 1979 and 1982. Table 5 estimates very similar numbers. Note, however, that some of the costs of the new technologies that we are picking up may have been partially offset by improved performance characteristics not captured in our observed characteristics.

Table 4. Results from the materials equation

                              Without time dummies          With time dummies
Variable      Parameter       Estimate      Std. error      Estimate      Std. error
1/Q           µ               50.0 m        15.7 m          20.6 m        14.8 m
J/Q           ∆               –5.9 m        6.4 m           –6.1 m        5.9 m
x
 Constant     β0              –2108         1371            –471.8        1181.5
 AC           β1              3587          271.8           2599          260.0
 mpg          β2              169.0         35.2            79.3          32.0
 hp           β3              2.1           4.5             5.0           3.8
 wt           β4              1.49          0.30            1.30          0.26
t             ln(δ)
 1973                                                       0.01          0.04
 1974                                                       –0.01         0.04
 1975                                                       –0.02         0.04
 1976                                                       0.01          0.04
 1977                                                       0.08          0.04
 1978                                                       0.10          0.04
 1979                                                       0.11          0.04
 1980                                                       0.22          0.04
 1981                                                       0.19          0.04
 1982                                                       0.24          0.04
ssq                           168 m                         123 m

The dependent variable is material cost per car in 1983 dollars, and there are 227 observations. An m after a figure indicates millions of dollars. The total sum of squares is 559.8 m. hp, horsepower; wt, weight; ssq, sum of squared error; AC, air conditioning; mpg, miles per gallon.


Table 5. Materials demand with technology effects

                                      Unrestricted γ                Catalytic converter γ1 = 0
Variable                              Estimate      Std. error      Estimate      Std. error
1/Q                µ                  16.6 m        14.7 m          17.2 m        14.4 m
J/Q                ∆                  –1.3 m        5.9 m           –2.7 m        5.7 m
Constant           β0                 –689.2        1207            –608.2        1143
AC                 β1                 2138.1        279.3           2172          261.7
mpg                β2                 86.4          32.6            85.8          31.2
hp                 β3                 4.5           3.7             4.4           3.7
wt                 β4                 1.34          0.26            1.33          0.25
τ                  γ
 Catalytic converter   γ1             –0.01         0.12            0
 Three-way             γ2             0.14          0.15            0.15          0.08
 Closed loop           γ3             0.20          0.15            0.21          0.08
 Fuel injection        γ4             0.28          0.14            0.29          0.07
t                  ln(δ)
 1973                                 0.02          0.04            0.02          0.04
 1974                                 0.00          0.06            –0.00         0.04
 1975                                 –0.02         0.12            –0.02         0.04
 1976                                 0.02          0.13            –0.01         0.04
 1977                                 0.09          0.13            0.08          0.04
 1978                                 0.11          0.13            0.11          0.04
 1979                                 0.11          0.13            0.10          0.04
 1980                                 0.12          0.14            0.11          0.06
 1981                                 –0.01         0.15            –0.02         0.09
 1982                                 0.02          0.15            0.02          0.08
ssq                                   113.6 m                       113.6 m

The dependent variable is material cost per car in 1983 dollars, and there are 227 observations. An m after a figure indicates millions of dollars. The total sum of squares is 559.8 m. For abbreviations see Table 4.

THE FUEL EFFICIENCY OF THE NEW CAR FLEET

Recall that gas prices increased sharply in 1974 and then again between 1978 and 1980. They trended downward from 1982. Table 6 (from the ref. 24 data set) shows how the median fuel efficiency of new car sales has changed over time. There was very little response of the mediano of the mpg of new car sales to the gas price hike of 1973 until 1976. As discussed in Pakes et al. (18), this is largely because more fuel efficient models were not introduced until that time, and the increase in gas prices had little effect on the distribution of sales among existing models. The movement upward in the mpg of new car sales that began in 1976 continued, though at only a modest rate, until 1979. Between 1979 and 1983 there was a more striking rate of improvement in this distribution. After 1983, the distribution seems to trend slowly downward with the gas price.

These trends are replicated, though in somewhat different intensities and years, in the downward movements in both the weight and horsepower distributions of the cars marketed. There is, then, the possibility that the increase in the mpg of cars was mostly at the expense of the weight and horsepower of the models marketed, i.e., there was no change in the mpg for given horsepower-weight (hp/wt) classes.

To investigate this possibility we calculated a "divisia" index of mpg per hp/wt class. That is, first we divided all models into nine hp/wt classes,p then calculated the annual change in the mpg in each of these classes, and then took a weighted average of those changes in every year, the weights being the fraction of all models marketed that were in the class in the base year for which the increase was being calculated. This index is given in column 2 of Table 6. It grew rapidly in most of the period between 1976 and 1983 (the average rate of growth was 2.85% per year), though there was different behavior in different subperiods (the index fell between 1978 and 1980 and grew most rapidly in 1976 and 1977).

We would expect this index to increase either if firms moved to a different point on a given cost surface, being willing to incur higher production costs for more fuel efficient cars, or if the gas price hike induced technological change that enabled firms to produce more fuel efficient cars at no increase in cost. Comparing the movements in the mpg index in Table 6 to the time dummies estimated in Table 5, we see little correlation between the mpg index and our estimates of the δ_t.q We therefore look at the possibility that the mpg index increases were generated by induced technological change.r
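A hedged sketch of that index construction (under our own assumptions about data layout, not the authors' code) is given below; the column names are hypothetical placeholders.

```python
# Sketch of the "divisia"-style mpg index per hp/wt class: bin models into nine
# horsepower/weight classes, compute annual mpg changes within each class, and
# weight the changes by base-year model shares. Column names are hypothetical.
import pandas as pd

models = pd.read_csv("models.csv")  # model, year, mpg, hp, wt (hypothetical)

# Three equal-sized bins on weight and on horsepower give nine hp/wt classes.
models["wt_bin"] = pd.qcut(models["wt"], 3, labels=False)
models["hp_bin"] = pd.qcut(models["hp"], 3, labels=False)
models["cls"] = 3 * models["wt_bin"] + models["hp_bin"]

cell = (models.groupby(["year", "cls"])
              .agg(mpg=("mpg", "mean"), n=("model", "count"))
              .reset_index())

index_change = {}
yrs = sorted(cell["year"].unique())
for base, nxt in zip(yrs[:-1], yrs[1:]):
    b = cell[cell["year"] == base].set_index("cls")
    f = cell[cell["year"] == nxt].set_index("cls")
    common = b.index.intersection(f.index)
    w = b.loc[common, "n"] / b.loc[common, "n"].sum()     # base-year model shares
    chg = f.loc[common, "mpg"] / b.loc[common, "mpg"] - 1.0
    index_change[nxt] = 100 * float((w * chg).sum())      # percent change

print(index_change)  # annual change in mpg per hp/wt class, in percent
```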

INNOVATION

As noted, another route by which changes in the environment can affect the automobile industry is through induced innovation. Table 3 showed how some new technologies have been introduced over time. The table shows that the simple catalytic converter was introduced immediately after the new fuel emission standards in 1975, and lasted until replaced by more modern technologies beginning in 1980.

Other than looking at specific technologies, it is very difficult to measure either innovative effort or outcomes, and hence to judge either the extent or the impacts of induced innovation. Perhaps the best we can do is to look at those patent

o Indeed, we have looked at the entire distribution of the mpg of new car sales, and its movements mimic those of the median.
p We divided all models marketed into three equally sized weight classes, generating in this way cutoff points for large, medium, and small weight classes. We then did the same for the horsepower distribution. We placed each model into one of the nine hp/wt classes determined by the horsepower and weight cutoffs we had determined.
q On the other hand, there is some correlation between the mpg index and the time dummies in Table 4, suggesting that the technologies we describe in Table 3 might also have increased fuel efficiency.
r We have also examined whether we could pick up changes in the mpg coefficient over time econometrically. However, once we started examining changes in coefficients over time there was too much variance in the point estimates to do much in the way of intertemporal comparisons.


applications that were eventually granted in the three subclasses of the international patent classification that deal with combustion engines (F02B, F02D, and F02M: Internal Combustion Engines, Controlling Combustion Engines, and Supplying Combustion Engines with Combustible Materials or Constituents Thereof). A time series of the patents in these subclasses is plotted in Fig. 2.

Table 6. Evolution of fuel efficiency

Model year   Median mpg   Change in mpg index per hp and wt class, %
1972         14.4         –6.8
1973         14.2         0.15
1974         14.3         –1.4
1975         14.0         –1.1
1976         17.0         9.8
1977         16.5         6.3
1978         17.0         –1.0
1979         18.0         –0.0
1980         19.5         –6.6
1981         19.0         5.1
1982         22.0         2.9
1983         24.0         5.3
1984         24.0         –0.9
1985         21.0         –5.9
1986         23.0         4.9
1987         22.0         –0.6
1988         22.0         0.4
1989         21.0         1.6
1990         21.0         1.1

hp, horsepower; wt, weight.

That series indicates that the timing of the changes in the number of patent applications in these classes is remarkably closely related to the timing of both the gas price changes and the changes in emissions standards. In the 10-year period between 1959 and 1968 the annual sum of the number of patent applications in these classes stayed almost constant at 312 (it varied between 258 and 346). There was a small jump in 1969 to 416, and between 1969 and 1972 (which corresponds to the period when emissions standards were introduced) the number of patents averaged 498. A rather dramatic change occurred in the number of patents applied for in these classes after the first oil price shock in 1973/74 (an increase to 800 in 1974), and the average number between 1974 and 1983 was 869. This can be divided into an average of 810 between 1974 and the second oil price shock in 1979, and an average of 929 between 1979 and 1983. These later jumps in applications in the combustion engine related classes occurred at the same time as total United States patent applications fell, making the increase in patenting activity on combustion engines all the more striking.

FIG. 2. Patents in engine technologies plotted against time.

It seems then that the gas price shocks, and to a possibly lesser extent the regulatory changes, induced significant increases in patent applications. Of course there is likely to be a significant and variable lag between these applications and the subsequent embodiment of the patented ideas in the production processes of plants. Moreover, very little is known about this lag. What does seem to be the case is that patent applications and research and development expenditures have a large contemporaneous correlation (see ref. 19). However, the attempts at estimating the lag between research and development expenditures and subsequent productivity increases have been fraught with too many simultaneity and variability problems for most researchers (including ourselves in different incarnations) to come to any sort of reliable conclusion about its shape.
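The patent series just described is essentially a count by application year within three IPC subclasses; a hedged sketch of that tabulation (with a hypothetical input file) follows.

```python
# Illustrative sketch of the Fig. 2 series: annual counts of eventually granted
# applications in IPC subclasses F02B, F02D, and F02M, with averages over the
# subperiods discussed in the text. File and column names are hypothetical.
import pandas as pd

pat = pd.read_csv("patents.csv")          # columns: application_year, ipc_subclass
engine = pat[pat["ipc_subclass"].isin(["F02B", "F02D", "F02M"])]

annual = engine.groupby("application_year").size()
print(annual)                             # the series plotted in Fig. 2

for lo, hi in [(1959, 1968), (1969, 1972), (1974, 1983), (1974, 1979), (1979, 1983)]:
    print(f"{lo}-{hi}: average {annual.loc[lo:hi].mean():.0f} applications per year")
```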

CONCLUSIONS

In this paper we provide some preliminary evidence on the impact of regulatory and gas price changes on production costs and technological change. We find that, after controlling for product characteristics, costs moved upwards in our period (1972–1982) of rapidly changing gas prices and tightened emissions standards.

When we introduce dummy variables for technology classes we find that the simple catalytic converter technology that was introduced with the first tightening of emission standards did not have a noticeable impact on costs, but the more advanced technologies that were introduced with the second tightening of emissions standards did. Moreover, the introduction of the technology dummies eliminates the shift upwards in costs over time. Thus, the increase in costs appears to be related to the adoption of new technologies that resulted in cleaner, and perhaps more fuel efficient, cars.

The fuel efficiency of the new car fleet began increasing after 1976, and continued this trend until the early 1980s, after which it, with the gas price, slowly fell.


Our index of mpg per hp/wt class also began increasing in 1976 and, at least after putting in our technology variables, its increase was not highly correlated with the index of annual costs that we estimate. Also, patent applications in patent classes that deal with combustion engines increased dramatically after both increases in gas prices. These latter two facts provide some indication that gas price increases induced technological change, which enabled an increase in the fuel efficiency of new car models with only moderate, if any, increases in production costs.

In future work we hope to provide a more detailed analysis of these phenomena, as well as integrate (perhaps improved versions of) our hedonic cost functions with an analysis of the demand side of the market (as in ref. 7). This ought to enable us to obtain a deeper understanding of the automobile industry and its likely responses to various changes in its environment.

We thank the participants at the National Academy of Sciences conference on Science and the Economy, particularly Dale Jorgenson, Zvi Griliches, Jim Levinsohn, and Bill Nordhaus, for helpful comments. Maria Borga, Deepak Agrawal, and Akiko Tamura provided excellent research assistance. We gratefully acknowledge support from National Science Foundation Grants SES-9122672 (to S.B., James Levinsohn, and A.P.) and SBR-9512106 (to A.P.) and from Environmental Protection Agency Grant R81–9878–010.

1. Dewees, D.N. (1974) Economics and Public Policy: The Automobile Pollution Case (MIT Press, Cambridge, MA).
2. Toder, E.J., Cardell, N.S. & Burton, E. (1978) Trade Policy and the U.S. Automobile Industry (Praeger, New York).
3. White, L.J. (1982) The Regulation of Air Pollutant Emissions from Motor Vehicles (American Enterprise Institute, Washington, DC).
4. Abernathy, W.J., Clark, K.B. & Kantrow, A.M. (1983) Industrial Renaissance: Producing a Competitive Future for America (Basic Books, New York).
5. Crandall, R., Gruenspecht, T., Keeler, T. & Lave, L. (1986) Regulating the Automobile (Brookings Institution, Cambridge, MA).
6. Aizcorbe, A., Winston, C. & Friedlaender, A. (1987) Blind Intersection? Policy and the Automobile Industry (Brookings Institution, Washington, DC).
7. Berry, S., Levinsohn, J. & Pakes, A. (1995) Econometrica 60, 889–917.
8. Havenrich, R., Marrell, J. & Hellman, K. (1991) Light-Duty Automotive Technology and Fuel Economy Trends Through 1991: A Technical Report (EPA, Washington, DC).
9. Lancaster, K. (1971) Consumer Demand: A New Approach (Columbia Univ. Press, New York).
10. Court, A. (1939) The Dynamics of Automobile Demand (General Motors Corporation, Detroit), pp. 99–117.
11. Griliches, Z. (1961) The Price Statistics of the Federal Government (NBER, New York).
12. Bresnahan, T. (1987) J. Ind. Econ. 35, 457–482.
13. Feenstra, R. & Levinsohn, J. (1995) Rev. Econ. Studies 62, 19–52.
14. Friedlaender, A.R., Winston, C. & Wang, K. (1995) RAND 14, 1–20.
15. McGuckin, R. & Pascoe, G. (1988) Surv. Curr. Bus. 68, 30–37.
16. Bresnahan, T. & Ramey, V. (1994) Q. J. Econ. 109, 593–624.
17. Bresnahan, T.F. & Yao, D.A. (1985) RAND 16, 437–455.
18. Pakes, A., Berry, S. & Levinsohn, J. (1993) Am. Econ. Rev. Paper Proc. 83, 240–246.
19. Pakes, A. & Griliches, Z. (1980) Econ. Lett. 5, 377–381.


This paper was presented at a colloquium entitled "Science, Technology, and the Economy," organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

Sematech: Purpose and Performance

DOUGLAS A. IRWIN AND PETER J. KLENOW

Graduate School of Business, University of Chicago, 1101 East 58th Street, Chicago, IL 60637

ABSTRACT In previous research, we have found a steep learning curve in the production of semiconductors. We estimated that most production knowledge remains internal to the firm, but that a significant fraction "spills over" to other firms. The existence of such spillovers may justify government actions to stimulate research on semiconductor manufacturing technology. The fact that not all production knowledge spills over, meanwhile, creates opportunities for firms to form joint ventures and slide down their learning curves more efficiently. With these considerations in mind, in 1987 14 leading U.S. semiconductor producers, with the assistance of the U.S. government in the form of $100 million in annual subsidies, formed a research and development (R&D) consortium called Sematech. In previous research, we estimated that Sematech has induced its member firms to lower their R&D spending. This may reflect more sharing and less duplication of research, i.e., more research being done with each R&D dollar. If this is the case, then Sematech members may wish to replace any funding withdrawn by the U.S. government. This in turn would imply that the U.S. government's contributions to Sematech do not induce more semiconductor research than would otherwise occur.

In 1987, 14 U.S. semiconductor firms and the U.S. government formed the research and development (R&D) consortium Sematech (for Semiconductor Manufacturing Technology). The purpose of the consortium, which continues to operate today, is to improve U.S. semiconductor manufacturing technology. The consortium aims to achieve this goal by some combination of (i) boosting the amount of semiconductor research done and (ii) enabling member firms to pool their R&D resources, share results, and reduce duplication.

Until very recently, the U.S. government has financed almost half of Sematech’s roughly $200 million annual budget. The economic rationale for such funding is that the social return to semiconductor research may exceed the private return, and by enough to offset the social cost of raising the necessary government revenue. That is, the benefits to society—semiconductor firms and their employees, users of semiconductors, and upstream suppliers of equipment and materials—may exceed the benefits to the firms financing the research. In previous work, we have found evidence suggesting that some semiconductor production knowledge “spills over” to other firms (1). Depending on their precise nature, these spillovers may justify government funding to stimulate research.

It is not clear, however, that the government’s contributions to Sematech result in more research on semiconductor manufacturing technology. We estimated that Sematech induces member firms to lower their total R&D spending (inclusive of their contributions to the consortium; ref. 2). Moreover, we estimated that the drop exceeded the level of the government’s contributions to Sematech. Such a drop in total semiconductor R&D spending might reflect greater sharing and less duplication of research. This increase in the efficiency of R&D spending makes it conceivable that more research is being done despite fewer R&D dollars. But it could instead be that the same amount of research is being conducted with less spending. If so, then Sematech members should wish to fully fund the consortium in the absence of government financing. As a result, the government’s Sematech contributions might be less effective in stimulating research than, for example, R&D tax credits.

THE PURPOSE OF SEMATECH

The semiconductor industry is one of the largest high-technology industries in the United States and provides inputs to other high-technology industries such as electronic computing equipment and telecommunications equipment. It also ranks among the most R&D-intensive of all industries. In 1989 for example, U.S. merchant semiconductor firms devoted 12.3% of their sales to R&D (3), compared with 3.1% for U.S. industry overall (4). [“Merchant” firms are those that produce chips solely for external sale (e.g., Intel) as opposed to internal use (e.g., IBM).]

In our previous work (1), we tested a number of hypotheses regarding production knowledge in the semiconductor industry. We employed quarterly data from 1974 to 1992 on shipments by each merchant firm for seven generations (from 4-kilobyte up to 16-megabyte) of dynamic random access memory chips. We found a steep learning curve; per unit production costs fell by 20% with each doubling of experience. We also found that most production knowledge, on the order of two-thirds, remains proprietary, or internal to the firm. Many of the steps in memory chip production are identical to those in the production of other computer chips such as microprocessors. As a result, joint research and production ventures abound in the industry and often involve producers of different types of computer chips. These ventures are designed to allow partners to slide down the steep learning curve together rather than individually.
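As a minimal illustration of what a 20% learning rate means, the sketch below assumes a constant-elasticity learning curve in which unit cost falls by a fixed fraction with each doubling of cumulative output; the implied elasticity is log2(0.8) ≈ −0.32. The $10 starting cost and the function name are hypothetical, not taken from the paper’s data.

```python
import math

def unit_cost(cumulative_output, first_unit_cost, learning_rate=0.80):
    """Unit cost on a constant-elasticity learning curve.

    learning_rate is the fraction of cost that remains after each doubling
    of cumulative experience; 0.80 corresponds to a 20% drop per doubling.
    """
    elasticity = math.log(learning_rate, 2)  # log2(0.8) ~= -0.32
    return first_unit_cost * cumulative_output ** elasticity

# Hypothetical numbers: a chip whose first unit costs $10 to produce.
for experience in (1, 2, 4, 8, 16):
    print(experience, round(unit_cost(experience, 10.0), 2))
# Prints 10.0, 8.0, 6.4, 5.12, 4.1: each doubling of experience cuts unit cost by 20%.
```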

The one-third component of production knowledge that spills over across firms, meanwhile, appeared to flow just as much between firms based in the same country as between firms based in different countries. Depending on their source, these spillovers could push the social return to research on semiconductor production technology above the private return to such research. If so, then the policy prescription is a research subsidy to bring the private return up to the social return. Given that the spillovers were no stronger domestically than internationally, however, an international agreement to subsidize world research on semiconductors would be the optimal policy. Our results provide no justification for favoring the industry of one country over another.

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.

Abbreviation: R&D, research and development.


The spillovers we found may, however, reflect market or nonmarket exchanges between firms. We have in mind joint ventures, movement of technical personnel between firms, quid pro quo communication among technical personnel, and academic conferences. In these cases, the policy prescription is far from obvious. For example, suppose the spillovers occur solely through joint ventures. On the one hand, venture partners do not take into account any negative impact of their collaboration on other firms’ profits. On the other hand, if knowledge acquired within ventures spills over to nonmembers, then the government should encourage such ventures (5).

The U.S. government has taken several steps to encourage research on semiconductor technology (6). The Semiconductor Chip Protection Act of 1984 enhanced protection of intellectual property, and the National Cooperative Research Act of 1984 loosened antitrust restrictions on R&D joint ventures. Partly as a result of this legislation, Sematech was incorporated in August of 1987 with 14 founding members (AT&T Microelectronics, Advanced Micro Devices, International Business Machines, Digital Equipment, Harris Semiconductor, Hewlett-Packard, Intel, LSI Logic, Micron Technology, Motorola, NCR, National Semiconductor, Rockwell International, and Texas Instruments). With an annual budget of about $200 million, Sematech was designed to help improve U.S. semiconductor production technology. Until very recently, the Advanced Research Projects Agency contributed up to $100 million in government funds to Sematech.

How does Sematech function? Under its by-laws, Sematech is prohibited from engaging in the sale of semiconductor products (7, 8). Sematech also does not design semiconductors, nor does it restrict member firms’ R&D spending outside the consortium. Sematech members contribute financial resources and personnel to the consortium. They are required to contribute 1% of their semiconductor sales revenue, with a minimum contribution of $1 million and a maximum of $15 million. Of the 400 technical staff of Sematech, about 220 are assignees from member firms who stay at Sematech’s facility in Austin, Texas, from 6 to 30 months. Because the objective has been to bolster the domestic semiconductor industry, membership has been limited to U.S.-owned semiconductor firms. U.S. affiliates of foreign firms are not allowed to enter (a bid by the U.S. subsidiary of Hitachi was turned down in 1988). However, no restrictions are placed on joint ventures between Sematech members and foreign partners.

The Sematech consortium focuses on generic process R&D (as opposed to product R&D). According to Spencer and Grindley (7), “this agenda potentially benefits all members without threatening their core proprietary capabilities.” At its inception, Sematech purchased and experimented with semiconductor manufacturing equipment and transferred the technological knowledge to its member companies. Spencer and Grindley (7) state that “central funding and testing can lower the costs of equipment development and introduction by reducing the duplication of firms’ efforts to develop and qualify new tools.”

Since 1990, Sematech’s direction has shifted toward “sub-contracted R&D” in the form of grants to semiconductor equipment manufacturers to develop better equipment. This new approach aims to support the domestic supplier base and strengthen the links between equipment and semiconductor manufacturers. By improving the technology of semiconductor equipment manufacturers, Sematech has arguably increased the spillovers it generates for nonmembers. Indeed, Spencer and Grindley (7) argue that “[s]pillovers from Sematech efforts constitute a justification for government support. The equipment developed from Sematech programs is shared with all U.S. corporations, whether they are members or not.” These spillovers may be international in scope; Sematech members may enter joint ventures with foreign partners, and equipment manufacturers may sell to foreign firms.

According to a General Accounting Office (9) survey of executives from Sematech members, most firms have been generally satisfied with their participation in the consortium. The General Accounting Office survey indicated that the Sematech research most useful to members includes methods of improving and evaluating equipment performance, fabrication factory design and construction activities, and defect control. Several executives maintained that Sematech technology had been disseminated most easily through “people-to-people interaction,” and that the assignee program of sending personnel to Austin has been useful. These executives also noted that, as a result of Sematech, they had purchased more semiconductor equipment from U.S. manufacturers. Burrows (10) reports that Intel believes it has saved $200–300 million from improved yields and greater production efficiencies in return for annual Sematech investments of about $17 million. The General Accounting Office (11) has stated that “Sematech has demonstrated that a government-industry R&D consortium on manufacturing technology can help improve a U.S. industry’s technological position while protecting the government’s interest that the consortium be managed well and public funds spent appropriately.”

Sematech has also drawn extensive criticism from some nonmember semiconductor firms. According to Jerry Rogers, president of Cyrix Semiconductor, “Sematech has spent five years and $1 billion, but there are still no measurable benefits to the industry.” T.J. Rodgers, the president and chief executive officer of Cypress Semiconductor, has argued that the group just allows large corporations to sop up government subsidies for themselves while excluding smaller, more entrepreneurial firms (10). A controversial aspect of Sematech was its initial policy, since relaxed, of preventing nonmembers from gaining quick access to the equipment it helped develop. These restrictions raised questions about whether research undertaken with public funds was benefiting one segment of the domestic semiconductor industry at the expense of another.

Another heavily criticized feature of Sematech has been its membership fee schedule, which discriminates against small firms. Sematech members, as noted earlier, are required to contribute 1% of their semiconductor sales revenue to the consortium, with a minimum contribution of $1 million and a maximum of $15 million. This fee schedule places proportionately heavier financial burdens on firms with sales of less than $100 million and lighter burdens on firms with sales of more than $1.5 billion. Many smaller firms such as Cypress Semiconductor say they cannot afford to pay the steep membership dues or to send their best engineers to Sematech’s Austin facility for a year or more. Even if these companies joined, moreover, they might have a limited impact on Sematech’s research agenda.
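A minimal sketch of why this schedule is regressive in firm size, using only the parameters stated above (1% of sales, a $1 million floor, a $15 million cap); the firm sizes chosen below are hypothetical:

```python
def sematech_fee(sales):
    """Annual Sematech contribution: 1% of semiconductor sales,
    with a $1 million floor and a $15 million cap."""
    return min(max(0.01 * sales, 1e6), 15e6)

# Effective contribution rate (fee / sales) at several hypothetical sales levels.
for sales in (50e6, 100e6, 500e6, 1.5e9, 5e9):
    fee = sematech_fee(sales)
    print(f"sales ${sales / 1e6:8,.0f}M  fee ${fee / 1e6:4.1f}M  ({100 * fee / sales:.2f}% of sales)")
# A $50M firm pays 2.00% of sales because the floor binds; a $5B firm pays
# 0.30% because the cap binds; firms between $100M and $1.5B pay the full 1%.
```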

Sematech’s membership has also declined. Three firms have left the consortium, dropping its membership to 11, and another has reserved its option to leave. (Any firm can leave Sematech after giving 2 years’ notice.) In January 1992, LSI Logic and Micron Technology announced their withdrawal from Sematech, followed by Harris Corporation in January 1993. Press reports in February 1994 indicated that AT&T Microelectronics notified Sematech of its option to leave the consortium in 2 years, although a spokesman denied the company had definite plans to leave. All of the former members questioned the new direction of Sematech’s research effort, complaining that Sematech strayed from its original objective of developing processes for making more advanced chips toward just giving cash grants to equipment companies. Departing firms have also stated that their own internal R&D spending has been more productive than investments in Sematech.

THE PERFORMANCE OF SEMATECH

Sematech’s purpose is to improve U.S. semiconductor firms’ manufacturing technology.


As discussed, the rationale for the U.S. government’s subsidy to the consortium rests on two premises: first, that the social return to semiconductor research exceeds the private return (meaning the private sector does too little on its own); and second, that government contributions to Sematech result in more semiconductor research being done.

We call the hypothesis that Sematech induces more high-spillover research the “commitment” hypothesis. Under this hypothesis, we would expect Sematech to induce greater spending on R&D by member firms (inclusive of their Sematech contributions). Firms need not join Sematech, however, and those that do can leave after giving 2 years notice. Firms should be tempted to let others fund high-spillover R&D. Under this hypothesis, then, the 50% government subsidy is crucial for Sematech’s existence. The commitment hypothesis both justifies a government subsidy and requires one to explain Sematech’s membership. Relatedly, a government subsidy could be justified on the grounds that not all U.S. semiconductor firms have joined Sematech, and that some of the knowledge acquired within the consortium spills over to nonmembers. Based on the commitment hypothesis, Romer (12) cites Sematech as a model mechanism for promoting high-spillover research.

Not mutually exclusive with the commitment hypothesis is the hypothesis that Sematech promotes sharing of R&D within the consortium and reduces duplicative R&D. We call this the “sharing” hypothesis. Under this hypothesis, Sematech’s floor on member contributions is crucial because without it, firms would contribute next to nothing and free ride off the contributions of others. The sharing hypothesis implies greater efficiency of consortium R&D spending than of independent R&D spending. From a private firm standpoint, Sematech contributions were all the more efficient when matched by the U.S. government. Under this sharing hypothesis, we would expect Sematech firms to lower their R&D spending (inclusive of their contributions to Sematech). This is because members should get more research done with each dollar they contribute than they did independently. Since their contributions to Sematech are capped at 1% of their sales (far below their independent R&D spending), the consortium should not affect the efficiency of their marginal research dollar. As a result, it should not affect the total amount of research they carry out.

Unlike the commitment hypothesis, the sharing hypothesis does not provide a rationale for government funding. Firms should have the appropriate private incentive to form joint ventures that raise the efficiency of their R&D spending. Perhaps fears of antitrust prosecution, even in the wake of the National Cooperative Research Act of 1984, deter some semiconductor firms from forming such ventures. The stamp of government approval may provide crucial assurance for Sematech participants such as IBM and AT&T. Still, a waiver from antitrust prosecution for the research consortium should serve this function rather than government financing.

What does the evidence say about these hypotheses? Previously (2), we estimated whether Sematech caused R&D spending by members to rise or fall. To illustrate our methodology, consider for a moment broad measures of the performance of the U.S. semiconductor industry. Sematech was formed in the fall of 1987. After falling through 1988, the share of U.S. semiconductor producers in the world market has steadily risen, and the profitability of U.S. semiconductor firms has soared. Some view this rebound as confirmation of Sematech’s positive role in the industry. But this before-and-after comparison does not constitute a controlled experiment. What would have happened in the absence of Sematech? We do not know the answer to this, but we can compare the performance of Sematech member firms to that of the rest of the U.S. semiconductor industry. Any factors affecting the two groups equally, such as perhaps exchange rate movements and the U.S.-Japan Semiconductor Trade Agreement, will be a function of the year rather than Sematech membership per se. And factors specific to each firm rather than to Sematech membership can be purged by examining Sematech member firms before Sematech’s formation. This is the approach we used to try to isolate the impact of the Sematech consortium on member R&D spending (2).
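A minimal sketch of this identification strategy is below. It assumes a firm-year panel with an R&D-intensity outcome and a member_post indicator equal to 1 for Sematech members in years after 1987; the column names, the use of statsmodels, and the treatment of age effects as categorical dummies are illustrative choices, not the authors’ code or exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

def estimate_sematech_effect(panel: pd.DataFrame) -> float:
    """Regress R&D intensity on a member-after-1987 indicator with firm,
    year, and firm-age fixed effects.

    Expected columns (hypothetical names): rd_intensity, firm, year,
    firm_age, member_post.
    """
    model = smf.ols(
        "rd_intensity ~ member_post + C(firm) + C(year) + C(firm_age)",
        data=panel,
    ).fit()
    # The coefficient on member_post is the estimated change in members'
    # R&D intensity associated with Sematech, net of firm, year, and age effects.
    return model.params["member_post"]
```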

We found that R&D intensity (the ratio of R&D spending to sales) rose after 1987 for both members and nonmembers of Sematech, but that the increase was larger for nonmembers than for members (2). When we controlled for firm effects, year effects, and age of firm effects, we found a statistically significant 1.4-percentage-point reduction in member firms’ R&D intensity associated with Sematech. This result was not sensitive to the exact sample of firms or time period covered, or to the use of R&D relative to sales versus assets.

Is our estimated impact of Sematech on member firm R&D spending economically significant? In 1991, our sample of semiconductor firms had sales of $31.1 billion with $3.2 billion in R&D expenditures (a ratio of 10.3%). In that year, Sematech members accounted for two-thirds of sales ($20.7 billion) and R&D ($2.2 billion) in our sample, for a ratio of 10.6%. If Sematech reduced this ratio by 1.4 percentage points, then in the absence of the consortium, firms would have spent 12.0% of sales on R&D, or $2.5 billion, or $300 million more. In the absence of Sematech, according to this exercise, the overall R&D/sales ratio of the industry would have been 11.2% rather than 10.3% in 1991. Under this interpretation, Sematech reduced the industry’s R&D spending by 9%. (This whole exercise presumes that Sematech had no overall impact on semiconductor sales or on other firms.)
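The back-of-the-envelope arithmetic in that paragraph can be reproduced directly from the 1991 figures quoted in the text (a sketch; small discrepancies are rounding):

```python
# 1991 figures quoted in the text for the authors' sample of firms.
industry_sales = 31.1e9   # all sample firms
industry_rd = 3.2e9
member_sales = 20.7e9     # Sematech members
member_rd = 2.2e9
effect = 0.014            # estimated reduction in members' R&D/sales ratio

member_ratio = member_rd / member_sales                        # ~0.106 (10.6%)
counterfactual_rd = (member_ratio + effect) * member_sales     # ~$2.5 billion
extra_rd = counterfactual_rd - member_rd                       # ~$0.3 billion

actual_industry_ratio = industry_rd / industry_sales               # ~10.3%
no_sematech_ratio = (industry_rd + extra_rd) / industry_sales      # ~11.2%
share_of_rd_removed = extra_rd / (industry_rd + extra_rd)          # ~0.08-0.09; reported as 9% in the text
```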

To summarize, we estimated a negative, economically significant impact of Sematech membership on R&D spending (2). This accords well with the sharing hypothesis, under which the consortium increases the efficiency of inframarginal member R&D spending. Under this hypothesis, Sematech members should replace any Sematech funding that the government withdraws. The evidence is less easy to reconcile with the commitment hypothesis, wherein Sematech commits members to boost their research on high-spillover R&D. One cannot reject the commitment hypothesis, however, because the two hypotheses are not mutually exclusive. The validity of the sharing hypothesis could be masking the fact that more high-spillover R&D is being carried out as a result of the consortium.

CONCLUSIONS

In a previous study (1), we found that most semiconductor production knowledge remains within the firm. Since semiconductor firms slide down related learning curves whether they produce memory chips or microprocessors, efficiency gains can be reaped from joint ventures. With this in mind, Sematech was formed in 1987. In our study (1), we also found that some semiconductor production knowledge spills over across semiconductor firms. These spillovers could justify government actions to stimulate semiconductor research. With this in mind, the U.S. government has funded almost half of Sematech’s budget. In another study (2), we estimated that Sematech induces member firms to lower their R&D spending. This suggests that Sematech allows more sharing and less duplication of research. Under this interpretation, it is not surprising that Sematech members have stated that they wish to fully fund the consortium in the absence of government financing. Moreover, this evidence is harder (but not impossible) to reconcile with the hypothesis that, through government funding, Sematech induces firms to do more semiconductor research.

1. Irwin, D. & Klenow, P. (1994) J. Polit. Econ. 102, 1200–1227.
2. Irwin, D. & Klenow, P. (1996) J. Int. Econ. 40, 323–344.


3. Semiconductor Industry Association (1993) Databook (SIA, San Jose, CA), p. 41.
4. National Science Foundation (1989) Research and Development in Industry (National Science Foundation, Washington, DC), NSF Publ. No. 92–307, p. 77.
5. Cohen, L. (1994) Am. Econ. Rev. 84, 159–163.
6. Irwin, D. (1996) in The Political Economy of American Trade Policy, ed. Krueger, A. (Univ. of Chicago Press, Chicago), pp. 11–70.
7. Spencer, W. & Grindley, P. (1993) Calif. Manage. Rev. 35, 9–32.
8. Grindley, P., Mowery, D. & Silverman, B. (1994) J. Policy Anal. Manage. 13, 723–758.
9. General Accounting Office (1991) Federal Research: Sematech’s Efforts to Develop and Transfer Manufacturing Technology (U.S. Government Printing Office, Washington, DC), GPO Publ. No. GAO/RCED-91–139FS.
10. Burrows, P. (1992) Electron. Bus. 18, 47–52.
11. General Accounting Office (1992) Federal Research: Lessons Learned from Sematech (U.S. Government Printing Office, Washington, DC), GPO Publ. No. GAO/RCED-92–1238.
12. Romer, P. (1993) Brookings Pap. Econ. Act. Microecon. 2, 345–390.


This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

The challenge of contracting for technological information

RICHARD ZECKHAUSER

John F. Kennedy School of Government, Harvard University, 79 John F. Kennedy Street, Cambridge, MA 02138

ABSTRACT Contracting to provide technological information (TI) is a significant challenge. TI is an unusual commodity in five ways. (i) TI is difficult to count and value; conventional indicators, such as patents and citations, hardly indicate value. TI is often sold at different prices to different parties. (ii) To value TI, it may be necessary to “give away the secret.” This danger, despite nondisclosure agreements, inhibits efforts to market TI. (iii) To prove its value, TI is often bundled into complete products, such as a computer chip or pharmaceutical product. Efficient exchange, by contrast, would involve merely the raw information. (iv) Sellers’ superior knowledge about TI’s value makes buyers wary of over-paying. (v) Inefficient contracts are often designed to secure rents from TI. For example, licensing agreements charge more than marginal cost. These contracting difficulties affect the way TI is produced, encouraging self-reliance. This should be an advantage to large firms. However, small research and development firms spend more per employee than large firms, and nonprofit universities are major producers. Networks of organizational relationships, particularly between universities and industry, are critical in transmitting TI. Implicit barter—money for guidance—is common. Property rights for TI are hard to establish. Patents, quite suitable for better mousetraps, are inadequate for an era when we design better mice. Much TI is not patented, and what is patented sets fuzzy demarcations. New organizational forms are a promising approach to contracting difficulties for TI. Webs of relationships, formal and informal, involving universities, start-up firms, corporate giants, and venture capitalists play a major role in facilitating the production and spread of TI.

Information is often described as a public good.a This assumes that there is nonrivalry in consumption and that, once information is made available to one party, it is readily available to another. For some types of information, particularly consumptive information such as the scores of sporting events, this may be an adequate description. But if our concern is with information affecting technology and the economy, it almost certainly is not. I argue below that the public good classification can be misleading in two respects: (i) for much information, many of the usual characteristics of public goods are not satisfied,b and (ii) focusing on the public good aspect of information has deterred economists and policy analysts from delving more deeply into the distinctive properties of information, including most particularly the challenge of contracting for technological information (TI).

Even if there is no restriction on access to information, it may be extremely costly to acquire. The basics of physics or molecular biology are contained in textbooks, yet people spend years learning to master them. Corporations become tied to a given technology and have vast difficulties changing when a superior one becomes available. Often the physical costs of change, for example to new machines, are small relative to the costs of changing procedures and training personnel. Looking across corporations within the same industry, we often see significantly different levels of productivity. In the classical economic formulation, technological advance merely drops into the production function, boosting levels of outputs or factors. In the real world, improved technology, as represented, say, by new information, may be extremely costly to adopt. Many of the factors that limit the public good status of TI also make it difficult to buy and sell, even as a private good.

Economics has addressed the challenges of contracting, particularly in the context of agency relationships. Inefficiencies arise because it is not possible to observe the agent’s effort, or to verify the state of the world, or because potential outcomes are so numerous (due to uncertainty) that it is not possible to prespecify contingent payments (see refs. 3 and 4).c All these problems arise in contracting for TI. For example, because effort is difficult to monitor, contracts for TI usually pay for outputs (e.g., a royalty), not inputs, even in circumstances where the buyer is much less risk averse than the seller.

THE PECULIAR PROPERTIES OF TECHNOLOGICAL INFORMATION

The primary challenge in contracting for information stems from the bizarre properties of information as a commodity, which are discussed below under five headings: counting and valuation, giving away the secret, bundling and economies of scale, asymmetric knowledge of value, and patterns of rents. For the moment, we focus discussion on TI, a category that is predominantly produced by what we call R&D. TI enters the production function to expand the opportunity set, to get more output, or value of output, for any level of input.

Counting and Valuation. Theorists have proposed a variety of measures for information, which may involve counting bits or considering changes in odds ratios, but such measures could hardly be applied with meaning to information contained in the formulation of a new pharmaceutical or the design of a computer chip. (Tallies of papers, patents, and citations are frequently used as surrogate measures for technological advance.) Even if an unambiguous quantity measure were available for information, we need a metric that indicates the importance of the area to which it is applied.

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.

Abbreviations: R&D, research and development; TI, technological information; JG, Johnson-Grace, Inc.

aThe attendant policy concern is that too little inventive activity will take place when private rates of return fall below public rates. Pakes and Schankerman (1) find private rates to be “disappointing,” suggesting a divergence is a concern.

bWere research and development (R&D) a public good, with consumption of the good provided free of charge, the largest economy should spend the most, with smaller countries riding free. In 1993, in fact, Sweden had the highest national R&D intensity. Leaving defense aside, the United States trailed its major competitors, Japan and Germany (2).

cSee also the extensive literature on research contracting (e.g., ref. 5).


Price plays this role when apples are compared with oranges, but information is sold in markets that are both thin and highly specialized. We do not have price equivalents for units; we even lack clear ways to identify the relative importance of different information arenas.

Given such difficulties, we do not tally quantities of information. Rather, we combine the quantity and importance issues, and, at best, talk of information’s value. That value is most likely to be revealed in contracts between two parties engaged in bilateral bargaining, suggesting that there will be substantial instability in the outcomes. To be sure, there are information services, trade journals and the like, sold at a price. But when TI is sold in raw form, rarely is the same package sold to multiple parties at the same price. Below we observe that information is usually sold in a package with other components; for example, a modern PC chip contains numerous technological innovations. And when patents or licenses are sold, often the buyer already knows the information; the commodity purchased is the right to use it.

Giving Away the Secret. The benefit of TI is extremely difficult to judge. First, it may be difficult to know whether it will work or whether it will expand production capabilities. Second, if it does work, will it facilitate new products? What products will be wanted, and how widespread will be the demand? These questions are exceedingly difficult to answer, as contemplation of the wonders of the Internet makes clear.

This suggests that if some potentially valuable information were displayed on a shelf, it would be a challenge for the seller to price it, or for the buyer to know whether to purchase. However, unless information is securely protected, it rarely gets the equivalent of shelf display. Merely informing a potential buyer about one’s product gives away a great deal of the benefit. Hence, information is shared alongside sheaves of nondisclosure agreements, and, even then, there is selective hiding of critical components. Frequently prototypes are demonstrated, but inner workings may be hidden, much as magic stores demonstrate an illusion but not its working mechanism. But even to make it clear that something is technologically feasible is to give away a great deal; it reveals that innovation is feasible, and someone thought the effort to produce it was worth making.d

When TI is the product, fears of inappropriate use may cause both customers and technology providers to clam up. The experience of Johnson-Grace, Inc. (JG), a small firm located no more than 1 mile from this conference, is instructive. For 2 years, it has had a superior image compression algorithm, which has been prominently employed by America Online. Some potential customers (online services) have been reluctant to provide information that would enable JG to operate on their system. JG has resisted giving out source code, which would permit customers to understand better how their system worked but would also facilitate legal or illegal theft. For a period, for example, JG refused to discuss with Microsoft its product that interleaves compressed sound and video. Knowing such a product could be developed might spur Microsoft to do so.e

For most products, such as cars or television sets, the more consumers the merrier. The early consumers of such products gain as they become more widely used, say because repair facilities will be more convenient. With much TI, however, additional users diminish the value to current users. When the TI is targeted to a particular industry or product, the loss is likely to be great.

Such losses imply that those in possession of TI will be vitally concerned whether it is made available to others and, if so, how widely it will be employed. Here contracting encounters another hurdle. More than being difficult to count, information is impossible to meter; it is often beyond anyone’s capability to state how widely a technology has been disseminated. (To be sure, in some circumstances it can be licensed on a per-unit basis for a limited set of products.)

Firms that utilize their own R&D frequently do not license it to competitors, which leads to inefficiency, since, from a resource standpoint, the marginal cost of use is zero.f The consumers who benefit from the increased competition cannot be charged for their gains. Moreover, it may be impossible to limit the potential licensee to particular noncompetitive uses. Given this difficulty, firms developing TI often sell it to a single entity.

The logical extension of the single entity concept is to create a new firm to produce a particular form of TI. That is why we see so many start-up firms in the high-tech arena. Start-ups have the additional advantage of securing the majority of their benefits for the individuals who actually provide and develop the innovative ideas. Such individuals may be forced to break off from an old, larger firm because they are unable to demonstrate the extraordinary value of their ideas or because compensation policies simply can’t reward the innovators sufficiently.

Finally, R&D cannot be taken to the bank as an asset to be mortgaged. Explaining the product to the bank would be difficult and potentially disadvantageous competitively. Moreover, given the tremendous uncertainties about value, a default is not unlikely, and when there is one the asset is likely to have little salvage value.

Bundling and Economies of Scale. TI has many of the characteristics of an acquired taste. The buyer has to try it before buying. With exotic ice creams or Icelandic sagas, also acquired tastes, a relatively cheap small test can guide us about a potential lifetime of consumption. With information, by contrast, we may have to acquire a significant portion or all of the total product before we know whether we want it. A good idea packaged alone is not enough, since its merits are hard to establish. What is usually required to convince a party to purchase TI is a demonstrated concept or completed product. In effect, there are significantly increasing returns to scale with respect to investment in innovation, and if patent protection is required, there is possibly an indivisibility.

This increasing returns aspect of TI compounds contracting difficulties.g Even if there were no charge for the information, the costs of evaluating it would discourage acquisition, however desirable that would prove ex post. Much information that might be sold is not even displayed for sale. When it is, elaborate legal documents relating to such matters as nondisclosure are required (at times with lawsuits to follow).

dArrow (ref. 6, pp. 5–6) makes this point with respect to the development of the atomic bomb. There were severe concerns about espionage leaks when the Soviet Union produced its own bomb. However, the primary “leak” may have come from the public knowledge that the United States was able to produce a successful weapon.

eIn January of 1996, JG was sold to America Online, its major customer. Moving to common ownership of the buyer and seller of TI is a frequent solution to the problem of contracting for TI. Before the acquisition, as they became increasingly entwined, both JG and America Online became vulnerable to “holdup”—i.e., exploitation because its value can be destroyed—by the other party.

fWhen there are significant network externalities, or other gains from extending the market, licensing is desirable. Witness the recent agreement with Phillips, Toshiba, etc., relating to the next generation of compact disk technology, and the subsidized sales of software products seeking to become the standard. Many commentators believe Apple Computer made a major mistake not licensing its superior Macintosh technologies, which it has only begun to do recently.

gThis increasing returns feature relates to another contentious issue in technology policy. It suggests that government subsidies to R&D, in some circumstances, may enhance and not crowd out private efforts.


Finally, the information may be bundled into products, which can be demonstrated and purchased whole, though the unbundled information may be the commodity truly sought. Beyond this, the very nature of information makes it difficult to peruse the landscape to find out what is available. Despite the miracles of the Internet, Nexis-Lexis, and the like, there is no index of technologies that one might acquire. Much valuable TI, such as trade secrets, is not even recorded. As a consequence, many technologies sit on the shelf; valuable resources lie dormant.

What information is contracted, not surprisingly, often comes in completed bits. A superior video compression algorithm may be placed into an applications program specialized for the information provider. A fledgling biotech firm sells its expertise to the pharmaceutical company as a formulated product. And venture capitalists package their special expertise and connections along with a capital investment. Michael Ovitz, whose pre-Disney monopoly returns derived from his information network, made his money through deal-making, not the direct sale of information.

Such packaging can play a number of useful roles, for example: (i) it may assure the buyer that the information is really valuable, since it works in the product;h and (ii) it may facilitate price discrimination. Such discrimination trades off the inefficiency of a positive charge for a zero cost service against the incentive gain of letting the information developer secure more for his output.

TI may be bundled as one component in a product, or it may be a process or item that is licensed with the protection of patent. The need for a patent before information is readily sold, though understandable, incurs significant liabilities. To begin, it limits and delays what can be sold. (The parallel in the physical product world would require a hard disk manufacturer to produce a whole computer before making a sale.)

Given the difficulties of contracting for information on an arm’s-length basis, frequently it is secured as part of some long-term, often contractual relationship.i One firm provides TI, the other offers complementary products, say, manufacturing or marketing capability. This could be a joint venture, with say a manufacturer joining with a technology firm, with some agreed-upon division of profits. Alternatively, to secure a long-term relationship, one firm—more commonly the complement—makes an equity investment in the other, possibly a complete acquisition. Even one-time contractual relationships may specify an enduring connection.j

Asymmetric Knowledge of Value. However packaged, asymmetries in knowledge will remain when information is sold. Even if the technology is well understood, the parties may differ on valuation. The winner’s curse—when a knowledgeable party allows you to buy something that is worth less than you thought—will (appropriately) inhibit contracting. Consider the possible purchase of a patent that is worth 1.5 times as much to B as to A, its owner. B’s subjective distribution on the value is uniformly distributed on the interval [0,1]; A knows the true value. Any positive bid by B will lose money on expectation; hence (inefficiently), the patent will not be sold.k A parallel argument applies when the acquirer, say a large company with well-developed markets, has more knowledge of the value of a technology than its seller, perhaps a start-up firm. When the patent is sold, it will be sold for too little, a phenomenon that inhibits a potential sale.
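The arithmetic behind this claim (see also footnote k) can be checked with a short calculation. The sketch below is illustrative: for any bid b ≤ 1, A sells only when the true value is at most b, so conditional on a sale the patent is worth b/2 on average to A and 1.5·b/2 = 0.75b to B, which is always less than the price paid.

```python
def buyer_expected_profit(bid, markup=1.5):
    """Expected profit to buyer B from bidding `bid` for a patent whose value
    to seller A is uniform on [0, 1] and worth `markup` times as much to B.

    A accepts only when the true value v <= bid, so conditional on a sale the
    value to A averages bid/2 and the value to B averages markup * bid / 2.
    """
    prob_sale = min(bid, 1.0)
    value_to_buyer_given_sale = markup * prob_sale / 2.0
    return prob_sale * (value_to_buyer_given_sale - bid)

print(buyer_expected_profit(0.6))   # about -0.09: an expected loss
# Conditional on the sale going through, B pays 0.6 for something worth 0.45
# on average (the 0.15 loss of footnote k); every positive bid loses in expectation.
```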

Given difficulties of contracting for information outside the firm, TI may be of greater value in a larger firm, where it can be deployed for a larger volume of products, where marketing skills are superior, brand names are better known, etc. When a small firm possesses TI, or has superior abilities to develop it, a larger firm may seek to acquire the small one so as to reap its technology and capabilities.l Such acquisitions are common, but they are reduced in frequency because of information asymmetries. Small firms may have difficulty demonstrating the superiority of technology they already possess, much less their future ability to generate new knowledge. Moreover, a willingness to contemplate sale hints at self-doubts.

R&D races, a favorite subject for economics study,m are also affected by asymmetries in knowledge of value. The greater is your opponent’s assessment of the payoff from winning, the more likely he is to stay in and the more resources he will devote. Hence, when you win the race, the prize is less valuable. Assuming the participants understand this phenomenon, R&D races will be less profligate.

On the other side, failures of contract exacerbate the costs of R&D races. The challenge of demonstrating a workable technology (e.g., the phenomena that call for bundling) makes it difficult or unwise for the leader to demonstrate her advantage, hoping to induce her opponent(s) to drop out. For example, journal publication, which may deter competitors by demonstrating one’s lead in a race, also reveals secrets.

Patterns of Rents. The use of capital, a stock of resources, earns a rent. Machines thus have a rental price; risk capital earns a return, and skilled humans receive a premium wage. The rent is equal to the increment in output per period offered by the resource, which we can think of broadly as capital.

Information and knowledge are often labeled intellectual capital. But the services of such capital, say, how to conduct a physical process or design a circuit, do not offer a level benefits stream over time. Such capital often offers its primary benefits almost immediately, subject only to constraints such as time to process and understand. The story is told of the great Charles Steinmetz, called to repair a giant General Electric generator after many others had failed. Steinmetz marched around the colossus a couple of times and called for a screwdriver. He turned a single screw, then said: “Turn it on,” and the machine sprang to life. When Steinmetz was questioned about his $10,000 bill, he responded: “10 cents to turn the screw, $9999.90 to know which screw to turn.”

Those who possess intellectual capital, like scientists or lawyers, may even be rewarded with per-period excess wages. However, this arrangement may not reflect the true pattern of productivity, which is extraordinarily high during a brief period of distillation—the colloquial brain picking interlude—and then falls to ordinary levels when the capital is applied to totally new problems. To be sure, firms offer technologies on a per-period basis, but not the information contained in that technology. If they did, 1 day’s purchase would offer an eternal license.n

hEven seeing a successful product may be insufficient. If a product is sufficiently innovative, sales to other parties often serve as the best evidence that it is worthwhile. This may offer protective cover to the purchasing decision maker. Interestingly, even venture capital firms, the touted sleuths of product discovery, often seek confirmation from peers. On average, 2.2 venture capitalists are involved in first-round financing of companies (7). When positive decisions depend on the positive decisions of others, herding is a likely result.

iKogut (8) finds that long-term relationships induce and stabilize joint ventures for R&D, since they create the potential to penalize and reward behavior among partners.

jS. Nichtberger (personal communication), who does product development for Merck, reports that when a pharmaceutical firm contracts for a drug or technique, it traditionally requires exclusive rights to all drugs using the same technique for a category of disease.

kLet us say you bid 0.6. When the seller lets you have it, it will be worth on average 0.3 to the seller; hence, 0.45 to you. In expectation you will lose 0.15. This example is adapted from ref. 9.

lIn effect, this raises bundling to a higher level, with the small firm’s capabilities and personnel ties sold as a unit. Favorable employment contracts are employed to stem leakage.

mSee ref. 10 for a recent treatment.

nThis suggests that architects and technological consultants should offer their services at an initial rapidly declining hourly rate. The first few hours call primarily on their intellectual capital on hand, subsequent hours on their time. Most such professionals design their introductory meetings to establish long-term relationships, and they experience the tension between displaying their capabilities and revealing too much too soon. To be sure, English and economics professors spill their intellectual capital on a per-hour basis, but an engineering professor would hardly do so with commercially valuable proprietary knowledge.


Bundling is a second-best approach to the 1-day-tells-all problem. Even information possessed by a single individual may be unfolded as part of a larger package, where, say, he custom-designs a process or device for a particular company. He can’t merely tell the secret and let the company do the development, because he can’t assure the company in advance that his information will be valuable.

THE PRODUCTION AND TRANSMISSION OF TI

The difficulties in contracting for R&D profoundly affect the way it is produced. Some firms have R&D as their stock in trade; their related activities simply encapsulate the knowledge they produce. But for the vast majority of firms, R&D is not a central activity. Rather, they produce steel, manufacture cars, or sell securities. Superficially, it might seem that those in securities or steel would contract out for R&D. The care and nurturing of engineers and scientists may require a distinctive culture, not well-suited to bartering bonds or churning out ingots. Moreover, if research universities are indicative, there are significant economies of scale in conducting research. Many firms would seem to have R&D divisions below efficient scale.

Surprisingly, a vast range of firms run their own R&D operations. This reflects, I believe, the difficulties of contracting for information. Even if a firm wanted to buy R&D from outside, it would have a difficult time doing so. Moreover, in going to the market for R&D, it would be exposing internal information it would rather keep proprietary.o Cohen and Levinthal (12), highlighting difficulties in transferring TI, talk of a dual role for R&D: generating new information and enhancing “absorptive capacity.” The latter—the ability to identify, assimilate, and exploit information—helps explain why firms undertake basic research and why some ill-equipped firms do R&D at all.

Assuming that contracting challenges foster a tendency to self-reliance, what is lost? In theory, large firms should be able to spread knowledge and information over a much wider base. Hence, other factors equal, they should have higher R&D intensity than small firms. This proves not to be the case. In 1991, firms undertaking R&D with fewer than 500 employees spent $6021 per employee on R&D (excluding federal support), the most for any size category.p The largest firms, those with more than 25,000 employees, were second at $5169, presumably reflecting the public good nature of information (ref. 13, p. 33), at least within the firm.q The high R&D expenditure levels of small firms suggest that whatever disadvantages they have in deploying information are compensated by their advantages in producing it.

Universities, of course, are major producers of TI. Given their nonprofit and public-oriented mission, it might naively be thought that TI might flow more smoothly from them. Van de Ven (16) argues that there is a “stickiness” of such knowledge or, as Zucker et al. (17) phrase it, a “natural excludability.” Specific pieces of information may be less critical than insights and experience; moreover, universities and their researchers have gotten into the business of selling their TI.

Blumenthal et al. (18) report that, for biotechnology companies, university-industry relationships help 83% of them keep abreast of important research (promoting their absorptive capacity), whereas 53% secure licenses for products. Some knowledge may flow in the opposite direction, with 58% of companies suggesting such arrangements “risk a loss of proprietary information.”

Powell et al. (19) document that, in a field of rapid technological advance (biotechnology is their prime example), learning occurs within “networks of inter-organizational relationships.” Firms use ties to learn from each other. They conclude that “much of the relevant know-how is neither located inside an organization nor readily available for purchase.”

Together these authors paint a picture of information exchanged on a nonexplicit basis, in the form of implicit barter arrangements. Companies sponsor university research and receive in return subtle information about what fields and researchers are promising and on what types of technologies might prove feasible. More explicit agreements might give the sponsor privileged access to license technology. Professors train students; at a later date, they work together in a private sector venture. Favors are reciprocated, insights and experiences are exchanged, and information gets passed along webs of relationships. The exchanges may be between employees of different companies, or even within a company, who make each other look good.r Though some information is paid for explicitly, much that could not possibly be contracted—perhaps an opinion on what research areas will prove promising—is offered gratis. Informational gifts may be part of a commercial courtship ritual, perhaps demonstrating one’s capabilities or hoping to start an escalating exchange of valuable knowledge.

Assuming contracting challenges, there are two inefficiencies in R&D locale: it is produced inefficiently, and what is produced is substantially underutilized. The latter problem may not be extreme, since only 17% of R&D is spent in firms with fewer than 5000 employees.s

IMPLICATIONS

Economic analyses of TI usually start with the observation that such information is a public good. Excessive focus on this feature, I argue here, has led us to slight the major class of market failures associated with TI that stems from its amorphous quality. This quality makes information hard to count, value, trade, or contract on in market or nonmarket transactions. The critical features of these two conceptions of TI are summarized in Table 1.

A thought experiment might ask what would happen if information remained a public good but were susceptible to contract. Fortunately, there are public goods that offer relatively easy contracting, such as songs or novels, which provide an interesting contrast with information. Such goods appear to be well supplied to the market, with easy entry by skilled low-cost songwriters and novelists.

few hours call primarily on their intellectual capital on hand, subsequent hours on their time. Most such professionals design their introductory meetings to establish long-term relationships, and they experience the tension between displaying their capabilities and revealing too much too soon. To be sure, English and economics professors spill their intellectual capital on a per-hour basis, but an engineering professor would hardly do so with commercially valuable proprietary knowledge.

o This in-house bias even extends across oceans. Hines (ref. 11, p. 92) reports that for foreign affiliates of U.S. multinationals, 93% of their royalty payments to American companies went to their parents.

p This result is biased; because a much smaller proportion of small firms undertake R&D, the relative R&D of small firms is overstated.

q Perhaps surprisingly, small manufacturing firms do not do more R&D as a percent of net sales than large; both are at 4.1% (ref. 13, p. 19). Mansfield (14) finds that a 1% increase in a firm’s sales is associated with a 1.65% increase in its basic research expenditures and a 0.78% increase in R&D expenditures for process or product innovation. Scherer and Ross (ref. 15, pp. 654–656), in an overview, find R&D outlays are slightly less than proportional to sales, a longstanding phenomenon in the United States. In terms of productivity, they observe: “the largest manufacturers derived fewer patents and significant technological advances from their R&D money than smaller firms.”

r von Hippel (20) assesses “know-how” trading as a benefit to firms and/or their trading employees.

s Figure is for the latest year available, 1989 (ref. 13, p. 17).


Table 1. Two conceptions of technological information

Rivalry
  Public goods: Nonrivalrous
  Challenge to contract: Strong rivalry

Excludability
  Public goods: Nonexcludable
  Challenge to contract: Exclusion mechanisms (sticky to begin, secrecy, patents, lawsuits); transmission through relationships

Good produced
  Public goods: Nuggets of knowledge
  Challenge to contract: Bundled products

Locus of production
  Public goods: Most efficient knowledge producer
  Challenge to contract: Inefficient internal reliance; absorptive capacity investment; webs of relationships

Transmission
  Public goods: Open literature; forums and seminars; Internet and mass media
  Challenge to contract: Human mules; raiding and defection of personnel; academic-industry relationships; personal relationships

Critical concerns
  Public goods: Underprovision; for a second best world, tension between intellectual property and pricing above marginal cost
  Challenge to contract: Underprovision; inefficient production; underexploitation; protection of intellectual property; facilitating contracts for information; backward impact on university (secrecy, conflicts of interest); private benefits from government research expenditures

Policy measures
  Public goods: Substantial government subsidy; required dissemination of government-sponsored results; patents recognizing second best
  Challenge to contract: Government subsidy proportional to leakage; direct government provision to avoid appropriation; government-industry proprietary research relationships; patents recognizing second best; antitrust policy recognizing second best

Given contracting difficulties, information is likely to be produced in the wrong locale, by big firms rather than small, and in duplicative fashion rather than singly by the most efficient producer. These inefficiencies in production, moreover, may significantly reduce the output of TI.t These problems do not arise with songs or novels.

If the public good nature of TI were the sole concern, government could merely secure it from the private sector, as it does with weapons or social science research. To deal with contracting issues, research is undertaken directly by government laboratories, say the National Institutes of Health campus, in preference to the private or nonprofit sector.u Government-funded collaborative research facilities, such as Sematech, are designed to overcome duplicative research efforts. Such ventures are rare, in large part because it is hard to contract even for the production of R&D, say, to get companies to send their best scientists. If the collective inputs were merely dollars and if it were hard to claim private benefits from the output, collaborative efforts would be much easier to organize. That is why trade associations, which for the most part possess these characteristics, are common.

Recognizing that contracting difficulties are a principal impediment to the effective production and exchange of TI should shift our policy attention. The effective definition of property rights becomes a central concern. Our patent system was developed for the era of the better mousetrap and its predominantly physical products, whereas today we are designing better mice. Today’s TI is less contractible because it is less tangible, perhaps an understanding of how computers or genes deal with information. Much TI is not patented, due to both expense and inadequate protection (perhaps a half-million dollars to fight a patent infringement case in front of an ill-informed jury). What is patented sets fuzzy demarcations, as an explosion of litigation attests. Related policies for the protection of intellectual property (e.g., trade secrets and copyright law) also persist from an outdated era.

Market structure significantly affects both the level and deployment of R&D activity. (The two most salient antitrust cases of the modern era—IBM and AT&T—involved the nation’s two technological giants.) Our mainline antitrust policies do not explicitly recognize the R&D link. However, the Department of Justice and Federal Trade Commission (DOJ-FTC) Horizontal Merger Guidelines (April 2, 1992) do allow for an efficiency defense,v and cooperative research efforts receive favored treatment. More important, the general tenor of the contemporary antitrust policy arena, including the DOJ-FTC 1994 guidelines on Intellectual Property Licensing, reflects a high sensitivity to R&D production. The TI explosion has given birth to new organizational forms for confronting contracting difficulties. They range from the traditional—vertical mergers involving media and information companies—to the highly innovative—webs of relationships, formal and informal, involving universities, start-up firms, corporate

t However, if demand is inelastic, more may be spent than in a perfect world.

u Over the past decade, government laboratories have undertaken collaborative research and development agreements with private entities, which receive proprietary TI in exchange for their own R&D efforts. This approach, in effect, sacrifices public good benefits to enhance productivity. See ref. 21 for a discussion of contracting difficulties that remain.

v In relation to TI, probably the most relevant defense cited is achieving economies of scale.


giants, and venture capitalists—and play a major role in facilitating the production and spread of TI. The twenty-first century merits policies, affecting a range of organizational forms, that explicitly take account of the effects of these structures on the production, dissemination, and utilization of TI.

Recognizing the importance of webs of relationships (8, 19) to R&D development suggests that regions, or industries, blessed with social capital (22)—trust, norms, and networks—will have substantial advantages, as Silicon Valley and Route 128 make evident. In recent years, Europe has made explicit efforts to build cooperative approaches to R&D among natural competitors, relying on substantial government subsidies and coordination on research directions (23).

The R&D problem is often framed as one of providing public goods, with Federal funding as the implicit solution. Yet federal funding as a proportion of industrial R&D has fallen precipitously from the 1960s, when it exceeded company spending, to the 1990s, when it has been <40% as large.w Given contemporary political and budget realities, generosity in government funding, whatever its theoretical merits, is unlikely to guarantee the efficient production of R&D.

The second major government function in R&D production is its accepted role as definer and enforcer of property rights. However, bold new frontiers are being crossed in defining technological realities—witness the Internet and genetic engineering. In such unfamiliar territory, appropriate property delineations are much harder to define. This is particularly true since other salient values, such as freedom of speech, privacy, and the sanctity of life, are deeply involved with technological advance.

The nature of TI, I have argued here, severely impedes its purchase and sale. When such inefficiencies are great, the struggle for second best outcomes will lead to new organizational forms to facilitate contracting. This implies that the vast increase in the role of TI, beyond any direct effects in expanding production possibilities, will transform the structure of industry in developed nations, dramatically altering patterns of competition and cooperation.

Chang-Yang Lee provided skilled research assistance. Zvi Griliches, James Hines, Louis Kaplow, Alan Schwartz, and participants in the October 1995 National Academy of Sciences Colloquium on Science, Technology, and the Economy made helpful comments.

1. Pakes, A. & Schankerman, M. (1984) in R&D, Patents, and Productivity, ed. Griliches, Z. (Univ. of Chicago Press, Chicago), pp. 73–88.
2. Organization for Economic Cooperation and Development (1995) Main Science and Technology Indicators (Organization for Economic Cooperation and Development, Paris).
3. Hart, O. & Moore, J. (1988) Econometrica 56, 755–785.
4. Fudenberg, D. & Tirole, J. (1990) Econometrica 58, 1279–1319.
5. Rogerson, W.P. (1994) J. Econ. Perspect. 8, 65–90.
6. Arrow, K. (1994) Information and the Organization of Industry, Rivista Internazionale di Scienze Sociali, Occasional Paper, Lectio Magistralis (Catholic University of Milan, Milan).
7. Lerner, J. (1994) Financ. Manage. 23, 16–27.
8. Kogut, B. (1989) J. Ind. Econ. 38, 183–198.
9. Samuelson, W. (1984) Econometrica 52, 995–1005.
10. Grossman, G.M. & Shapiro, C. (1987) Econ. J. 97, 372–387.
11. Cohen, W.M. & Levinthal, D.A. (1989) Econ. J. 99, 569–596.
12. Hines, J.R., Jr. (1994) in Tax Policy and the Economy, ed. Poterba, J.M. (MIT Press, Cambridge, MA), Vol. 8, pp. 65–104.
13. National Science Foundation (1993) Selected Data on Research and Development in Industry: 1991 (National Science Foundation, Arlington, VA), NSF Publ. No. 93–322.
14. Van de Ven, A.H. (1993) J. Eng. Technol. Manage. 10, 23–51.
15. Zucker, L.G., Darby, M. & Armstrong, J. (1994) Intellectual Capital and the Firm: The Technology of Geographically Localized Knowledge Spillovers (National Bureau of Economic Research, Cambridge, MA), Working Paper No. 4946.
16. Mansfield, E. (1981) Rev. Econ. Stat. 63, 610–615.
17. Scherer, F.M. & Ross, D. (1990) Industrial Market Structure and Economic Performance (Houghton Mifflin, Boston, MA).
18. Blumenthal, D., Gluck, M., Louis, K.S., Stoto, M.A. & Wise, D. (1986) Science 232, 1361–1366.
19. Powell, W.W., Koput, K. & Smith-Doerr, L. (1996) Admin. Sci. Q. 41, 116–145.
20. Von Hippel, E. (1987) Res. Policy 16, 291–302.
21. Cohen, L.R. & Noll, R.R. (1995) The Feasibility of Effective Public-Private R&D Collaboration: The Case of CRADAs (Center for Economic Policy Research, Stanford, CA), Publ. No. 412, Discussion Paper Series.
22. Putnam, R.D., Leonardi, R. & Nanetti, R. (1993) Making Democracy Work: Civic Traditions in Modern Italy (Princeton Univ. Press, Princeton).
23. Watkins, T.A. (1995) Doctoral dissertation (Harvard University, Cambridge, MA).

w In 1991, excluding aircraft and missiles, Federal funds comprised 18% (671/3807) of basic research and 22% [(4918–471)/(24,084–3248)] of applied research (ref. 13, pp. 3, 24–25).


This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.

An economic analysis of unilateral refusals to license intellectual property

RICHARD J. GILBERTa AND CARL SHAPIROb

aDepartment of Economics and bHaas School of Business, University of California at Berkeley, Berkeley, CA 94720

ABSTRACT The intellectual property laws in the United States provide the owners of intellectual property with discretion to license the right to use that property or to make or sell products that embody the intellectual property. However, the antitrust laws constrain the use of property, including intellectual property, by a firm with market power and may place limitations on the licensing of intellectual property. This paper focuses on one aspect of antitrust law, the so-called “essential facilities doctrine,” which may impose a duty upon firms controlling an “essential facility” to make that facility available to their rivals. In the intellectual property context, an obligation to make property available is equivalent to a requirement for compulsory licensing. Compulsory licensing may embrace the requirement that the owner of software permit access to the underlying code so that others can develop compatible application programs. Compulsory licensing may undermine incentives for research and development by reducing the value of an innovation to the inventor. This paper shows that compulsory licensing also may reduce economic efficiency in the short run by facilitating the entry of inefficient producers and by promoting licensing arrangements that result in higher prices.

I. INTELLECTUAL PROPERTY AND THE ANTITRUST LAWS

In the past century, technical progress has continually transformed our society. As economists, in evaluating the role of technology in our society we naturally focus on the funding of research and development (R&D) efforts and the financial rewards to those whose R&D efforts are successful. As specialists in industrial organization, we are keenly interested in the property rights assigned to innovators. As students of antitrust policy, we are especially interested in the interaction between intellectual property law, which rewards innovators by granting them some protection from competition, and antitrust law, which seeks to ensure a competitive market system and limit the creation or maintenance of monopoly power.

Intellectual property refers to creative work protected by patents, copyrights, and trade secrets (including know-how). These three protection regimes grant different rights of exclusion. Patents confer rights to exclude others from making, using, or selling in the United States the invention claimed by the patent for a period of 17 years from the date of issue. (Legislation introduced to comply with the GATT treaty will change the patent term to 20 years from the date at which the patent application is filed.) To gain patent protection, an invention (which may be a product, process, machine, or composition of matter) must be novel, nonobvious, and useful. Copyright protection applies to original works of authorship embodied in a tangible medium of expression. Copyright protection lasts for the life of the author plus 50 years, or 75 years from first publication (or 100 years from creation, whichever expires first) for works made for hire. A copyright protects only the expression, not the underlying ideas.c Unlike a patent, a copyright does not preclude others from independently creating similar expression. Trade secret protection applies to information whose economic value depends on its not being generally known. Trade secret protection is conditioned upon efforts to maintain secrecy, has no fixed term, and does not preclude independent creation by others.

At a deep level, there is no inherent conflict between the two bodies of intellectual property and antitrust law; in the long run, intellectual property rights promote competition by rewarding innovative efforts.d But the long run is an elusive concept, and in practice, great tensions arise between intellectual property and antitrust law. Indeed, efforts by patent and copyright owners to enforce their intellectual property are often met by antitrust counterclaims: the assertion that the intellectual property owner enjoys monopoly power and is illegally protecting or expanding its market position.

Economists and antitrust scholars have long attempted to define an economically efficient tradeoff between the protection of intellectual property and the reach of the antitrust laws (1). At the foundation of this tradeoff is the extent and duration of the grant of intellectual property rights (2, 3). Should a patentee have only the narrow right to prevent the sale of a duplicate work, or should that right extend to works that embody similar ideas, and how long should such protection last? Robust conclusions are difficult to obtain, in part because the optimal patent scope depends not only on the proper level of protection for the first innovator, but also on incentives for subsequent innovations that build on, and potentially infringe on, the first patent (4).e

Closely related to the optimal scope of the grant of intellectual property protection is the question of how that grant may be exploited without running afoul of the antitrust laws. As an example, permitting owners of intellectual property to organize cartels in unrelated markets would increase the

The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.

Abbreviation: R&D, research and development.

c Copyright protection does not extend to “an idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work” [17 U.S.C. §102(b)].

d “[T]he aims and objectives of patent and antitrust laws may seem, at first glance, wholly at odds. However, the two bodies of law are actually complementary, as both are aimed at encouraging innovation, industry and competition” [Atari Games Corp. v. Nintendo of America, Inc., 897 F.2d 1572, 1576 (Fed. Cir. 1990)]. The Federal Circuit has responsibility for appeals of cases involving patent rights.

e Scotchmer (5) and Scotchmer and Green (6) provide a framework for analyzing the incentive effects of patent scope on cumulative innovation. Merges and Nelson (7, 8) offer several historical examples of the effects of the scope of intellectual property protection on innovative performance.


profits from invention and, therefore, enhance incentives for innovation. But such a blanket antitrust exemption for intellectual property would cause unacceptably large competitive distortions in the short run. Bowman (9) analyzes the patent-antitrust tradeoff from the perspective of the one-monopoly-rent theory, which implies that there is a single profit inherent in the patent. Under this theory, efforts to leverage the patent grant into other markets do not provide the patentee with additional returns and would not be pursued except for efficiency gains. However, even under the one-monopoly-rent theory, a patentee may engage in acts that adversely affect competition (such as organizing a cartel in unrelated markets or agreeing with producers of substitute products to divide markets and raise prices).f Kaplow (11) pursues a cost-benefit test, comparing the costs of specific conduct to its benefits in enhancing investment in R&D.

Although each of these approaches generates interesting insights, none has proven adequate to provide a clear prescription for antitrust policy applied to intellectual property that differs significantly from policy in other areas of the economy. Thus, in 1988 the U.S. Department of Justice adopted the following enforcement policy: “[F]or the purpose of antitrust analysis, the Department regards intellectual property (e.g., patents, copyrights, trade secrets, and know-how) as being essentially comparable to any other form of tangible or intangible property” (12). A similar statement appears in the 1995 U.S. Department of Justice/Federal Trade Commission Antitrust Guidelines for the Licensing of Intellectual Property (13).

This paper focuses on one aspect of antitrust law, the so-called “essential facilities doctrine,” which may impose a duty upon firms controlling an “essential facility” to make that facility available to their rivals. The essential facilities doctrine has profound consequences for intellectual property protection and for competition in markets where firms own important inputs that are protected by patent, copyright, or trade secret. In the intellectual property context, an obligation to make property available is equivalent to a requirement for compulsory licensing. Some would argue that the essential facilities doctrine is one respect in which antitrust policy for intellectual property is clearly different from antitrust policy for other forms of tangible and intangible property. There is considerable case law concluding that a patentee is free to choose whether or not to license its intellectual property.g But the case law does not state that a failure to license cannot be the basis of an antitrust offense.h

Even if patents cannot be challenged under the essential facilities doctrine, the case law is much less settled in the area of copyright. Most of the recent legal battles over access to intellectual property have been in the context of computer software that is protected by copyright. Examples include cases where firms have sought access to proprietary vendor-supported diagnostic software for the servicing of the vendor’s hardware. Another example is the copying of computer software to obtain access to proprietary interface codes to facilitate the development of complementary application programs. Thus, the essential facilities doctrine, whether in the form of mandating access or in the related forms of requiring compulsory licensing or permitting copying without infringement, lies squarely at the intersection of antitrust and intellectual property law.

Section II introduces the legal concept of a unilateral refusal to deal, often addressed under the appellation of the essential facilities doctrine. Section III considers why a profit-maximizing firm might choose to deny access to an important input rather than permit open access at a monopoly price. There are many procompetitive justifications for a refusal to deal, such as contractual limitations that make it difficult for the owner of the input to ensure quality and avoid free-riding. A refusal to deal also may increase entry barriers (because competitors have to produce a substitute for the input that they cannot buy) and enhance price discrimination (if the owner of the input cannot discriminate among buyers for the sale of the input). In addition, a refusal to deal may permit higher profits because the owner of an important input may not be able to write a contract with an entrant that would compensate the firm for the loss of profits that would result from competitive entry.

Throughout this discussion, our emphasis is on the consequences of a refusal to deal for economic welfare. We argue that the welfare consequences of a refusal to deal are ambiguous and that the requirement of mandatory access may lower economic welfare in the short run as well as in the long run. Section IV discusses some recent approaches to the evaluation of demands for compulsory access that have been considered in U.S. courts and in the European Community. Section V concludes with the observation that the essential facilities doctrine does not provide a consistent legal or economic justification for the mandatory licensing of intellectual property.

The future battleground over refusals to deal is likely to be in the proper scope of intellectual property protection under the copyright laws, particularly for computer software. Rather than compel the owner of copyrighted software to license that software to others, the legal and economic issues are more likely to focus on the conditions under which the software should be protected by copyright in the first place. This is also likely to be a more productive inquiry than a policy of selective compulsory licensing.i

II. REFUSALS TO DEAL AND THE ESSENTIAL FACILITIES DOCTRINE

Under the antitrust laws, conduct by a firm with market power may be illegal if the effect of that conduct is to tend to create or sustain a monopoly, and if that monopoly is not the consequence of superior skill, foresight, or business acumen, or historical accident. A firm with monopoly power does not violate the antitrust laws merely by charging a monopoly price.j Nonetheless, in some instances, a refusal to deal by a firm or a joint venture with monopoly power may be deemed an antitrust offense. In other words, although antitrust law permits a firm to charge the price it pleases, the firm may be required to set some price at which it will sell to others, including rivals.

The refusal to deal label has been applied to many cases with very different competitive circumstances (19). This discussion focuses on a situation in which an integrated firm (or joint venture) controls a factor of production that is costly to reproduce and competes in another market against one or

f Baxter notes that “a promise by the licensee to murder the patentee’s mother-in-law is as much within the ‘patent monopoly’ as is the sum of $50; and it is not the patent laws which tell us that the former agreement is unenforceable and subjects the parties to criminal sanctions” (10).

g See, for example, SCM Corp. v. Xerox Corp., 645 F.2d 1195 (2d Cir. 1981), cert. denied, 455 U.S. 1016 (1982), and Zenith Radio Corp. v. Hazeltine Research, Inc., 395 U.S. 100 (1969). Furthermore, §271(d) of the 1988 Amendments to the Patent Act specifies that a refusal to license a patent cannot be the basis for a patent misuse claim.

h Indeed, the recent jury verdict in Image Technical Services v. Eastman Kodak Company (Civil No. C-87–1686 BAC, March 1995) finds Kodak’s unilateral refusal to sell patented parts to be an antitrust offense. C.S. testified on behalf of Kodak in this case. See ref. 14 for an analysis of the issues involved in this and related cases.

i Farrell (15, 16), Farrell and Saloner (17), Menell (18), and R.H. Lande and S.M. Sobin (unpublished work) offer useful perspectives on the efficient scope of protection for computer software. A recent case that raises important issues on the scope of copyright protection is Lotus Dev. Corp. v. Borland Int’l, Inc., F.3d 807 (1st Cir. 1995).

j See, for example, United States v. Grinnell Corp., 384 U.S. 563, 571 (1966) and United States v. Aluminum Co. of America, 148 F.2d 416, 430 (2nd Cir. 1945).


more firms that desire access to the factor of production. The factor of production could be a physical input or intellectual property that is owned or controlled by the integrated firm. Examples are a local telephone network, a distribution network, a patented product or process, and the control of proprietary interface standards.k The firms seeking access may be competitors in upstream, downstream, or otherwise complementary markets.

A refusal to deal by a vertically integrated firm appears on its face to adversely affect competition by denying rivals a product or service that is a necessary input for effective competition.l This is hardly a complete analysis, however, because it does not account for the incentives to create the essential input or the price at which that input can optimally be sold. Clearly, the mere fact that a firm controls an input that is valuable to its competitors cannot be sufficient to compel a duty to deal, as a firm can have many innocent reasons for refusing to supply a rival. In MCI Communications Corporation v. AT&T, 708 F.2d 1081 (7th Cir.), cert. denied, 464 U.S. 955 (1983), MCI argued that access to AT&T’s local switching equipment was essential to compete in the long-distance telephone market. The Seventh Circuit upheld a jury verdict on liability. In its decision, the court described the necessary elements of an essential facilities claim: (i) control of an essential facility by a monopolist; (ii) a competitor’s inability practically or reasonably to duplicate the essential facility; (iii) the denial of the use of the facility to a competitor; and (iv) the feasibility of providing the facility.m

These conditions do not characterize the circumstances under which compulsory access to a facility or to intellectual property would be beneficial to economic welfare. A firm may choose to deny access to an actual or potential competitor (at a price that would allow the actual or potential entrant to earn a non-negative return) for many different reasons. These include reasons that are likely to enhance economic efficiency. For example, a hardware vendor may refuse to allow independent firms to service its machines if the independents cannot ensure a desired level of service quality. Furthermore, a refusal to deal can prevent free-riding that would diminish incentives for investment and innovation.

A refusal to deal also may be motivated by the desire of the owner of an important input to prevent the entry of new competition, either in the market for the input or in the market for a product that is produced with the input. When the owner refuses to sell an input, a potential competitor has to compete as a de novo entrant both in the market for the input and in the market for the final product. This “two-level” entry requirement may raise the cost of entry into the final product market.

A refusal to deal also may enable an owner of an input to exploit its market power more effectively by promoting price discrimination in the sale of the final product. For example, a hardware vendor may refuse to accommodate independent service organizations because service is a convenient “metering” device by which the vendor can monitor and charge customers according to their intensity of use. Such price discrimination can lead to increased output by expanding sales to price-sensitive customers.n A detailed factual inquiry is needed in any given case to determine whether a refusal to deal that is based solely on improved opportunities for price discrimination reduces or enhances overall efficiency.

An owner of a necessary input also may refuse to sell or license the input to preserve its market power in the production of the final product. The owner of a necessary input would not benefit from the licensing or sale of the input unless the owner can construct a contract that compensates the owner for the loss of revenue that may result from entry. Such a contract can be difficult to construct in many circumstances. The next section focuses specifically on the incentives of the owner of an essential input to execute a contract to license the input and on the consequences of a refusal to deal for economic efficiency.

III. WHY REFUSE TO DEAL RATHER THAN SET A HIGH PRICE?

To better understand the market-power-preservation rationale for a refusal to deal, we employ the game-theoretic framework developed in Katz and C.S. (25). Firms 1 and 2 compete to sell a homogeneous product with initial constant marginal costs a1 and a2 and zero fixed costs. In the first stage of the game, firm 1 acquires a process innovation that lowers its marginal cost to m1 < a1. In the second stage, firm 1 chooses whether to offer a license for the process to firm 2, and firm 2 chooses whether to accept or reject the license. If firm 1 licenses firm 2 (and firm 2 accepts the license), firm 2’s marginal cost (excluding any licensing fees) falls to m2 < a2. Otherwise, firm 2’s marginal cost remains at a2.
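
The second-stage payoffs behind this game can be made concrete with a minimal numerical sketch. The demand curve and all numbers below are illustrative assumptions, not values from the paper: a linear inverse demand P = A − (q1 + q2) is assumed so that Nash-Cournot profits have a closed form, and the comparisons drawn later in this section (for example, whether industry profit rises when firm 2 is licensed) can be read directly off these payoffs.

def cournot_profits(A, c1, c2):
    # Nash-Cournot equilibrium profits for two firms with constant marginal
    # costs c1 and c2, under the assumed linear inverse demand P = A - (q1 + q2).
    q1 = (A - 2.0 * c1 + c2) / 3.0      # interior best-response quantities
    q2 = (A - 2.0 * c2 + c1) / 3.0
    if q2 <= 0.0:                        # firm 2 priced out: firm 1 acts as a monopolist
        q1 = max((A - c1) / 2.0, 0.0)
        return (A - q1 - c1) * q1, 0.0
    if q1 <= 0.0:                        # symmetric corner case
        q2 = max((A - c2) / 2.0, 0.0)
        return 0.0, (A - q2 - c2) * q2
    p = A - (q1 + q2)
    return (p - c1) * q1, (p - c2) * q2

A = 100.0                # demand intercept (assumed)
a2 = 75.0                # firm 2's marginal cost without a license (assumed)
m1, m2 = 40.0, 55.0      # marginal costs with the innovation / with a license (assumed)

pi1_no, pi2_no = cournot_profits(A, m1, a2)    # firm 1 keeps the process to itself
pi1_lic, pi2_lic = cournot_profits(A, m1, m2)  # firm 2 operates under a fixed-fee license

print(pi1_no, pi2_no)    # 900.0 0.0: firm 2 cannot compete against the innovator
print(pi1_lic, pi2_lic)  # 625.0 100.0: licensing shrinks industry profit to 725

With these particular numbers no fixed fee can make both firms better off, since industry profit falls from 900 to 725 when firm 2 is licensed; other cost configurations reverse the comparison, which is exactly the choice analyzed below.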

Firm 1’s decision to license the new technology to firm 2 is similar to the decision to provide access to a facility. Access is often of particular significance in network industries, for which the facility may be a common interface or proprietary standards that enable the supply of complementary network products and services.o For example, J. Church and N. Gandal (unpublished work) consider competition in a market where firms offer complementary products, such as hardware and software, and have the choice of making their products compatible with the products of a rival. In this framework, intentional incompatibility has effects similar to those of a refusal to deal.

When Is a Facility Essential? The legal opinions that address the subject of essential facilities do not define the technical requirements that must be satisfied for an input to be essential. Consider a production function y = f(xα, xβ), where xα, xβ are inputs into the production of y. Input α is clearly essential if f(0, xβ) = 0 for any xβ, but this is not the only reasonable definition of an essential input. In most circumstances, the determination of whether an input is essential to competition will depend on the price of the input and the price of the product (or products) that use the input. At prevailing prices, an electric utility’s transmission lines may be essential to compete in the sale of electricity. But at much higher electricity prices, it may be profitable to build new transmission lines or to contract for transmission from other sources. The second prong of the MCI essential facilities test, “a competitor’s inability practically or reasonably to duplicate the essential facility,” implicitly includes a presumption about market prices in the determination of what is practical or reasonable.

We propose the following economic definition of an essential input. Consider an integrated firm that owns or controls input 1 and produces an output y. Let pm be the monopoly price of output y when the integrated firm faces prices w for all inputs other than input 1. Let C(y, w) be the total cost of producing output y for an equally efficient firm that faces an infinite price for input 1 and prices w for all other inputs. We say that input 1 is essential if C(y, w) > pm·y for all y.p That is, without input

k Other examples include the control of quality or safety standards [see Anton and Yao (20)].

l See Werden (21). Ordover, Salop, and Saloner (22) and Riordan and Salop (23) each provide an illuminating analysis of related competitive effects in the context of vertical mergers.

m See MCI Communications Corporation v. AT&T at 1132–1134. The Supreme Court recently affirmed this general approach. “It is true as a general matter a firm can refuse to deal with its competitors. But such a right is not absolute: it exists only if there are legitimate competitive reasons for the refusal.” [Image Technical Services v. Eastman Kodak Company, 504 U.S. 541 (1992)].

n See ref. 24 for an analysis of how a refusal to deal, implemented by a price squeeze, can facilitate price discrimination.

o Baker (26), Carlton and Klammer (27), D.W. Carlton and S.C. Salop (unpublished work), and H. Hovenkamp (unpublished work) discuss issues that bear on mandatory access for network joint ventures.

p We take pm as a parameter in the definition, independent of the


1, an equally efficient firm cannot profitably produce any level of output even if the output is sold at the integrated firm’s monopoly price. Under these conditions, without access to input 1, the equally efficient firm cannot exercise any constraint on pricing by the integrated firm.

The definition of essential can be extended to include the price of the input. Firm 2 may not be able to compete against an integrated monopolist unless the input is available at a price that is sufficiently low. But how low should the price be? A particularly inefficient firm may not be able to compete unless it can purchase an input at a price that is less than the input’s marginal cost. Our preferred definition states that an input is essential only if an equally efficient firm cannot compete when the input is not available, or equivalently, when its price is infinite.

In our simple example, we consider the innovation by firm 1 to be essential if firm 2 cannot compete without the innovation when firm 1 sets any price that is less than or equal to its monopoly price. The input in our example is the innovation. Firm 1 is a vertically integrated producer of both the input and the final output. Let p1m(m1) be firm 1’s monopoly price of the final output when its marginal cost is m1. By our definition, firm 1’s technology is essential if a2 > p1m(m1); that is, if firm 1’s technological innovation would eliminate competition when firm 1 sets a monopoly price, if firm 2 does not have access to the innovation.
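
Under the linear-demand assumption introduced in the sketch above (again an illustration, not anything specified in the paper), firm 1’s monopoly price is p1m(m1) = (A + m1)/2, so the essentiality test is a one-line check:

A, m1 = 100.0, 40.0
p1m = (A + m1) / 2.0                 # firm 1's monopoly price under P = A - Q is 70
for a2 in (65.0, 75.0):              # candidate costs for an unlicensed firm 2 (assumed)
    print(a2, "essential" if a2 > p1m else "not essential")
# a2 = 65: not essential -- firm 2 could still undercut a price of 70
# a2 = 75: essential -- firm 2 cannot compete even at firm 1's monopoly price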

When Would Firm 1 Refuse to License the Innovation to Firm 2? Firm i’s profits depend on its cost, the cost of its rival, and on the competitive circumstances of the industry. The terms of a license affect industry costs and may also influence the intensity of competition. For example, firms’ pricing decisions may depend on whether a license calls for a fixed royalty or for royalties that vary with the licensee’s output. There is a mutually acceptable license if, and only if, total industry profits when there is licensing exceed total industry profits when firm 1 does not license. Whether this is the case will depend on m1, m2, and a2, and on the form of the licensing arrangement.

Following Katz and C.S. (25), we explore the implications for licensing under two licensing regimes. In both regimes, firms 1 and 2 are Nash-Cournot competitors in the absence of a licensing arrangement. In the first regime, the firms negotiate a fixed-fee license that imposes no conditions on the firms’ choices of prices and outputs. The license requires only a fixed payment from firm 2 to firm 1. Conditional on the license, the firms compete as Nash-Cournot competitors with marginal costs m1 and m2. In the second licensing regime, the firms negotiate a royalty structure that supports the joint profit-maximizing outputs. We refer to the second regime as licensing with coordinated production.

Alternative 1: Nash-Cournot Competitors with a Fixed Licensing Fee. Katz and C.S. (25) derive the conditions on m1, m2, and a2 that are necessary for firm 1 to enter into a profitable licensing arrangement with firm 2. These conditions are summarized in Fig. 1. In the area bounded by abcdef, firm 1 will refuse to license firm 2. Generally, it is profitable for firm 1 to exclude firm 2 by refusing to offer firm 2 a license when, relative to firm 1’s cost, (i) firm 2’s cost if excluded is large and (ii) firm 2’s cost with a license is not too small. Firm 1 will not choose to license an essential innovation, defined by a2 > p1m(m1), unless firm 2’s marginal cost with the license is very small. Refusals to deal are not limited to essential innovations. Even if firm 2 could compete without a license, firm 1 may choose to exclude firm 2 from the innovation unless it would substantially reduce firm 2’s cost.q

FIG. 1. Outcomes with fixed-fee licensing.

In the Nash-Cournot case with constant marginal costs, if licensing is privately rational, it is also welfare-enhancing. Licensing is privately rational when it increases industry profits. In addition, licensing lowers industry costs. In the Nash-Cournot case with constant marginal costs, total output is higher and price is lower when costs are reduced, so consumers are also better off when firm 1 voluntarily licenses firm 2.

Compulsory licensing in this context is a fixed-fee license that firm 2 will accept; that is, a royalty that is less than firm 2’s profit with the license. Compulsory licensing can increase welfare, but not always. When firm 2’s marginal cost with a license is close to firm 1’s monopoly price, and significantly above firm 1’s marginal cost, a license can decrease welfare because it substitutes high-cost production by firm 2 for lower-cost production by firm 1 (25). The area in Fig. 1 for which compulsory licensing will decrease welfare is defined by the region bounded by agdef. Thus, with fixed-fee licensing, a compulsory licensing requirement will lower welfare even in the short run if the licensee would have high costs in the absence of the license and also relatively high costs with the license compared with the licensor. This is a case in which the license is essential for the licensee to compete, but the licensee would not be a very efficient competitor.
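
A worked comparison, under the same illustrative linear demand and costs used earlier, shows how a compelled fixed-fee license to a relatively high-cost licensee can lower total surplus even though it lowers price; the fixed fee itself is a transfer and drops out of the welfare sum.

A, m1, m2 = 100.0, 40.0, 55.0   # assumed demand intercept and marginal costs

# Firm 1 alone (the license is refused and firm 2 cannot compete):
q = (A - m1) / 2.0                               # monopoly output = 30, price = 70
w_monopoly = q * q / 2.0 + (A - q - m1) * q      # consumer surplus 450 + profit 900 = 1350

# Compelled fixed-fee license: Cournot duopoly at costs (m1, m2):
q1 = (A - 2.0 * m1 + m2) / 3.0                   # 25
q2 = (A - 2.0 * m2 + m1) / 3.0                   # 10
p = A - (q1 + q2)                                # 65
w_license = (q1 + q2) ** 2 / 2.0 + (p - m1) * q1 + (p - m2) * q2   # 612.5 + 625 + 100

print(w_monopoly, w_license)   # 1350.0 versus 1337.5

Price falls from 70 to 65, yet total surplus falls, because ten units of output shift from a marginal cost of 40 to a marginal cost of 55; with a sufficiently efficient licensee (m2 close to m1) the comparison reverses under these same assumptions.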

Alternative 2: Licensing with Coordinated Production. Firm 1 will always license firm 2 if their joint-maximizing profits exceed their stand-alone profits and if the license agreement can enforce the joint-maximizing outcome. Joint profit-maximizing licensing arrangements can be implemented in different ways, including a suitably defined two-part tariff or a forcing contract that requires each firm to pay a penalty if its output deviates from the contracted level (which requires that the firms be able to monitor each other’s outputs). In our example with firms that have constant marginal costs, only the firm with the lower marginal cost will produce in a joint profit-maximizing arrangement. Thus, with m2 > m1, firm 1 in

nonintegrated firm’s output. More generally, price will vary with the nonintegrated firm’s output and can be a further constraint on the nonintegrated firm’s profits.

q These results contrast with the conclusion in Economides and Woroch (unpublished work), who consider a model of interconnecting networks and show that foreclosure (refusal to deal) is not a profit-maximizing strategy. However, in their model, the owner of the essential facility and the potential entrant produce differentiated products. Entry adds value that can be captured by the owner of the essential facility.


effect pays firm 2 to exit the industry.r In more general circumstances with increasing marginal costs, both firms may produce at positive levels in a coordinated licensing arrangement.
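
The collusion concern can also be illustrated numerically, again under the assumed linear demand and purely illustrative costs. With constant marginal costs, coordinated production means only firm 1 produces, so the licensed outcome coincides with firm 1’s monopoly; whether that beats the unlicensed Cournot outcome turns on how efficient firm 2 would have been on its own.

A, m1 = 100.0, 40.0                      # assumed demand intercept and firm 1's cost

def total_surplus(q1, q2, c1, c2):
    # Consumer surplus plus both firms' profits under P = A - (q1 + q2).
    p = A - (q1 + q2)
    return (q1 + q2) ** 2 / 2.0 + (p - c1) * q1 + (p - c2) * q2

# Coordinated (joint profit-maximizing) license: only the low-cost firm produces.
w_coordinated = total_surplus((A - m1) / 2.0, 0.0, m1, m1)       # 1350.0

# No license: Cournot at costs (m1, a2) for two illustrative values of a2.
for a2 in (50.0, 68.0):
    q1 = (A - 2.0 * m1 + a2) / 3.0
    q2 = (A - 2.0 * a2 + m1) / 3.0
    print(a2, round(total_surplus(q1, q2, m1, a2), 1), "vs", w_coordinated)
# a2 = 50: 1394.4 vs 1350.0 -- the coordinated license lowers welfare (collusion effect)
# a2 = 68: 1332.4 vs 1350.0 -- the coordinated license raises welfare (efficiency effect)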

FIG. 2. Outcomes with coordinated production.

A joint profit-maximizing license may increase economic welfare in the short run by achieving more efficient production. However, when firms 1 and 2 produce products that are substitutes, a joint profit-maximizing license also permits collusion. Fig. 2 shows the range of firm 2’s marginal costs for which licensing reduces welfare in the short run. This is the area bounded by a, m1, c, and d in Fig. 2. In this regime, licensing lowers welfare if firm 2’s costs are not too large without the license. Licensing raises welfare in this case if the licensee’s cost without a license is high, but not so high that it cannot compete.

When licensing can achieve coordinated production, the firms will have private incentives to reach an agreement. There is no scope for compulsory licensing in this situation. Nonetheless, compulsory licensing, if available, can be a source of inefficiency. Firm 2 could use compulsory licensing as a threat to wrest more favorable licensing terms from firm 1.

The jointly profit-maximizing licensing arrangement highlights the importance of complementarities, network externalities, and other effects, such as product differentiation, in the evaluation of the benefits of compulsory licensing. In the simple example where firms 1 and 2 produce homogeneous products, licensing that coordinates production reduces welfare for many parameter values. However, if the products produced by firms 1 and 2 are complements, licensing with coordinated production would eliminate double marginalization and result in both greater profits and lower prices for consumers.

Implications for Compulsory Licensing. The incentives for and consequences of licensing differ sharply in the two regimes, which differ only in the contract that the licensor can enter into with the licensee. In the first regime, there are efficient licenses that are not voluntary. For this reason, compulsory licensing can increase welfare in the short run. However, there are also licensing arrangements involving high-cost licensees that are neither voluntary nor efficient. Compulsory licensing under such circumstances leads to lower welfare in the short run. The second regime poses a risk of overinclusive licensing. Thus, in the first regime, the policy dilemma raised by compulsory licensing is how to avoid compelling licenses to high-cost licensees. In the second regime, the dilemma is how to prevent firms from entering into license agreements when those firms would be reasonably efficient competitors on their own.

Compulsory licensing rarely imposes a specific form for the license, other than the requirement that a royalty be “reasonable.” This lack of definition complicates the assessment of compulsory licensing because the consequences of a compulsory license for economic efficiency depend, inter alia, on the form of the licensing arrangement. We have considered the polar cases of a fixed-fee license and a license that achieves coordinated production. A royalty that is proportional to the licensee’s sales has economic effects that are similar to the effects of a fixed-fee license if the royalty rate is small. A fixed-fee license has a zero royalty rate, and the fixed fee itself has no consequence for total economic surplus because it is only a transfer of wealth between the licensee and the licensor. Thus, a small royalty that is proportional to sales is likely to raise the same types of concerns that were identified for fixed-fee licenses. Specifically, even a “royalty-free” license can harm economic efficiency by facilitating the entry of a high-cost firm. Of course, if the royalty rate is large, the costs imposed on the licensee may be large enough to cause de facto exclusion, so that the compulsory license would not provide economically meaningful access.

These results pose obvious difficulties for designing a public policy that may require compulsory access to a firm’s technology. Unless compulsory access policies are designed and implemented with great care, firms will have incentives to misrepresent their costs to obtain a license, and compulsory licensing may not improve economic welfare even in the short run. It is considerably easier to state the theoretical conditions under which a firm will refuse to deal than to determine if compulsory licensing is beneficial in particular market circumstances.

How Does an Obligation to License Affect Incentives for R&D? In general, the effects of licensing on the incentives to invest in R&D are complex and may lead to under- or overinvestment in R&D (25). Conceptualizing investment in R&D as a bid for an innovation produced by an upstream R&D laboratory, we note that a compulsory licensing requirement is likely to reduce the incentives for R&D for two reasons. First, a compulsory license reduces the profits of the winning bidder by forcing the winner to license in situations where it is not privately rational to do so. Second, compulsory licensing is likely to lower the value of the winning bid because it increases the profits of the losing bidder. Under compulsory licensing, the losing bidder is assured that it will benefit from the innovation, assuming the owner of the technology is compelled to license the technology at a price that the licensee would be willing to pay. The size of the winning bid is determined by a firm’s value of owning the technology, less the value to the firm if the technology is in the hands of its rival. Compulsory licensing lowers the first component and raises the second.
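
A stylized numerical illustration of this bidding logic, using assumptions that are not in the paper (two ex ante symmetric firms with cost a bidding for a drastic process innovation that lowers the winner’s cost to m, and the same assumed linear demand as in the earlier sketches):

A, a, m = 100.0, 75.0, 40.0            # assumed demand intercept and marginal costs

pi_monopoly = ((A - m) / 2.0) ** 2     # 900: winner's profit if it can refuse to license
pi_duopoly = ((A - m) / 3.0) ** 2      # 400: each firm's Cournot profit when both have cost m

# Willingness to bid = value of winning minus value of losing.
bid_free = pi_monopoly - 0.0           # 900: without compulsion the high-cost loser is priced out
F = 300.0                              # an assumed compulsory-license fee the loser accepts (F <= 400)
bid_compulsory = (pi_duopoly + F) - (pi_duopoly - F)   # 2F = 600; at most 800 for any acceptable F

print(bid_free, bid_compulsory)

Under these assumptions compulsion lowers the winner’s payoff (from 900 to at most 800, fee included) and raises the loser’s (from 0 to 400 − F), so the winning bid, and with it the reward flowing to upstream R&D, falls.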

Thus, compulsory licensing can have two negative effects on economic welfare. It can reduce welfare in the short run by compelling inefficient licensing. It also can reduce welfare in the long run by reducing incentives for innovation. In general, the effects of compulsory licensing may act to increase or decrease economic welfare in both the short and the long run, depending on specific parameter values and the dynamics of


r The royalty arrangement has to prevent firm 2 from entering as an inefficient producer, which may require a provision restricting firm 2 from using its own technology (28).


competition. It is this indeterminacy that makes compulsory licensing a potentially very costly public policy instrument.

IV. RECENT LEGAL APPROACHES TO ANALYZING REFUSALS TO DEAL

Legal opinions addressing unilateral refusals to deal have attempted to analyze the requirement to provide access either directly as an essential facility by applying the MCI factors or indirectly by evaluating the effects and motivation for the alleged anticompetitive conduct. A recent example of the latter approach is in Data General Corp. v. Grumman Systems Support Corp., 36 F.3d 1147 (1st Cir. 1994). Data General sold mini-computers and also offered a line of products and services for the maintenance and repair of the computers that it sold. Grumman competed with Data General in the maintenance and repair of Data General’s computers. In addition to other antitrust claims, the court addressed whether Data General illegally maintained its monopoly in the market for the service of Data General computers by unilaterally refusing to license its diagnostic software to Grumman and other competitors.

The court considered whether Data General’s refusal to license would “unjustifiably harm the competitive process by frustrating consumer preferences and erecting barriers to competition.” It concluded otherwise because, despite a change in Data General’s policy in dealing with independent service organizations, there was no material effect on the competitive process. Data General was the dominant supplier of repair services for its own machines, both when it chose to license its diagnostic software to independent service organizations and later, when it chose not to license independents.

The court's analysis in Data General failed to address the central economic question, which is whether a policy that requires Data General to license its software to independent service organizations would enhance economic welfare. Moreover, a focus on preserving the competitive process raises obvious difficulties that have been emphasized by several authors. Areeda (29) notes that “lawyers will advise their clients not to cooperate with a rival; once you start, the Sherman Act may be read as an anti-divorce statute.” Easterbrook (30) makes a similar argument and notes the contradiction between policies that promote aggressive competition on the merits, which may exclude less efficient competitors, and a policy that imposes an obligation to deal.

Other countries have not been more successful in arriving at an economically sound rationale for compulsory licensing. An example is the recent Magill decision by the European Court of Appeals. The case was the result of a complaint brought to the European Commission by Magill TV Guide Ltd. of Dublin against Radio Telefis Eireann (RTE), Independent Television Publications, Ltd. (ITP), and BBC. The case involved a copyright dispute over television program listings. RTE, ITP, and BBC published their own, separate program listings. Magill combined their listings in a TV Guide-like format. RTE and ITP sued, alleging copyright infringement.

The Commission concluded that there was a breach of Article 86 of the Treaty of Rome (abuse of dominant position) and ordered the three TV broadcasters to put an end to that breach by supplying “third parties on request and on a non-discriminatory basis with their individual advance weekly program listings and by permitting reproduction of those listings by such parties.” The European Court of First Instance upheld the Commission's decision, as did the European Court of Appeals.

The decision by the European Court of Appeals stated that the “appellants' refusal to provide basic information by relying on national copyright provisions thus prevented the appearance of a new product, a comprehensive weekly guide to television programmes, which the appellants did not offer and for which there was a potential consumer demand. Such refusal constitutes an abuse…of Article 86.” Moreover, the court said there was no justification for such refusal and ordered licensing of the programs at reasonable royalties.

This is an expansive rationale, in the absence of a clear definition of what constitutes a valid business justification. The analysis discussed in this paper could be extended to consider the economic welfare effects of compulsory licensing of a technology that enables the production of a new product. The results described here are likely to apply to the new product case, at least for circumstances in which the licensee's and the licensor's products are close substitutes. Thus, it is likely that compulsory licensing to enable the production of a new product would have ambiguous effects on economic welfare, even ignoring the likely adverse consequences for long-term investment decisions.

The analysis in this paper is unlikely to support the argument that economic welfare, either in the short run or in the long run, is enhanced by an obligation to license intellectual property (or to sell any form of property) whenever such property is necessary for the production and marketing of a new product for which there is potential consumer demand. It should be noted, however, that this analysis focuses on the effects of compulsory licensing on economic efficiency as measured by prices and costs. It does not attempt to quantify other possibly important factors, such as the value of having many decision-makers to pursue alternative product development paths. Merges and Nelson have argued that the combination of organizational failures and restrictive licensing policies has contributed to inefficient development of new technologies in the past, and that these failures could have been ameliorated with more liberal licensing policies (31).

V. CONCLUDING REMARKS

The essential facilities doctrine is a fragile concept. An obligation to deal does not necessarily increase economic welfare even in the short run. In the long run, obligations to deal can have profound adverse effects on incentives for investment and for the creation of intellectual property. Although there is no obvious economic reason why intellectual property should be immune from an obligation to deal, the crucial role of incentives for the creation of intellectual property is reason enough to justify skepticism toward policies that call for compulsory licensing. Equal access (compulsory licensing in the case of intellectual property) is an efficient remedy only if the benefits of equal access outweigh the regulatory costs and the long-run disincentives for investment and innovation. This is a high threshold, particularly in the case of intellectual property.
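That threshold can be stated compactly; the notation below is ours and purely illustrative, not the authors'. Writing ΔW_static for the short-run welfare gain from granting access, C_reg for the cost of the regulatory apparatus needed to set and police the terms of access, and ΔW_dynamic for the long-run welfare loss from weakened incentives to invest and innovate, compulsory licensing is an efficient remedy only if

\[
\Delta W_{\text{static}} > C_{\text{reg}} + \Delta W_{\text{dynamic}} .
\]

For intellectual property in particular, the preceding analysis suggests that the right-hand side is likely to be large.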

It should be noted that in Data General Corp. v. Grumman Systems Support Corp., the court analyzed Data General's refusal to deal as a violation of section 2 (monopolization) without applying the conditions that have been specified in other courts as determinative of an essential facilities claim. Had the court done so, it might well have concluded that Data General's software could not meet the conditions of an essential facility because it could be reasonably duplicated. The purpose of patent and copyright law is to discourage such duplication so that inventors have an incentive to apply their creative efforts and to share the results with society. In this respect, compulsory licensing is fundamentally at odds with the goals of patent and copyright law and should be countenanced only in extraordinary circumstances.

Despite the adverse incentives created by an obligation to deal, whether for intellectual or other forms of property, courts appear to view with suspicion a flat refusal to deal, even while they are wary of engaging in price regulation under the guise of antitrust law.s The fact remains, however, that the courts cannot impose a duty to deal without inevitably delving into the terms and conditions on which the monopolist must deal.t This is typically a hugely complex undertaking. The first case in the United States that ordered compulsory access, United States v. Terminal R.R. Ass'n, 224 U.S. 383 (1912) and 236 U.S. 194 (1915), required a return visit to the Supreme Court to wrestle with the terms and conditions that should govern such access (26).u The dimensions of access are typically so complex that ensuring equal access carries the burden of a regulatory proceeding. F. Warren-Boulton, J. Woodbury, G. Woroch (unpublished work) and P. Joskow (unpublished work) consider alternative institutional arrangements for markets with essential facilities, such as structural divestiture and common ownership of bottleneck facilities. However, none of these institutional alternatives is without significant transaction and governance costs that are difficult to address even in a regulated environment.

With specific reference to intellectual property, the future battleground over a firm's obligation to deal with an actual or potential competitor is likely to concentrate in the domain of computer software. This is where competitive issues have surfaced, issues such as access to diagnostic tools that are necessary to service computers,v access to software for telecommunications switching,w and access to interface codes that are necessary to achieve interoperability.x This debate is more likely to focus on what is protectable under the copyright laws than on what protectable elements are candidates for compulsory licensing. In a utilitarian work such as software, it is particularly difficult to ascertain the boundaries between creative expression that is protectable under copyright law and other, functional, elements. Thus, it is more likely that essential facilities will give way to the prior issue of determining the scope of property over which firms may claim valid intellectual property rights. This seems the more sensible direction for public policy. Our analysis has not identified clear conditions under which the owner of any type of property should, for reasons of economic efficiency, be compelled to share that property with others. A more productive channel of inquiry, it seems to us, is to focus on the types of products that justify intellectual property protection and the appropriate scope of that protection.

We are grateful for comments by Joe Farrell and seminar participants at the University of California at Berkeley.

1. Nordhaus, W. (1969) Invention, Growth and Welfare: A Theoretical Treatment of Technological Change (MIT Press, Cambridge, MA).
2. Gilbert, R. J. & Shapiro, C. (1990) Rand J. Econ. 21, 106–112.
3. Klemperer, P. (1990) Rand J. Econ. 21, 113–130.
4. Kitch, E. (1977) J. Law Econ. 20, 265–290.
5. Scotchmer, S. (1991) J. Econ. Perspect. 5 (Winter), 29–41.
6. Scotchmer, S. & Green, J. (1990) Rand J. Econ. 21, 131–146.
7. Merges, R. P. & Nelson, R. R. (1990) Columbia Law Rev. 90, 839–916.
8. Merges, R. P. & Nelson, R. R. (1994) J. Econ. Behav. Organ. 25, 1–24.
9. Bowman, W. (1973) Patent and Antitrust Law (Univ. Chicago Press, Chicago).
10. Baxter, W. F. (1966) Yale Law J. 76, 277.
11. Kaplow, L. (1984) Harvard Law Rev. 97, 1815–1892.
12. U.S. Department of Justice (1988) Antitrust Enforcement Guidelines for International Operations, Nov. 10.
13. U.S. Department of Justice and the Federal Trade Commission (1995) Antitrust Guidelines for the Licensing of Intellectual Property, April 6.
14. Shapiro, C. (1995) Antitrust Law J. 63, 483–511.
15. Farrell, J. (1995) Stand. View, June, 46–49.
16. Farrell, J. (1989) Jurimetrics J. 30, 35–50.
17. Farrell, J. & Saloner, G. (1987) in Product Compatibility as a Competitive Strategy, ed. Gabel, H. L. (North-Holland Press), pp. 1–21.
18. Mennell, P. (1987) Stanford Law Rev. 39, 1329–1372.
19. Glazer, K. L. & Lipsky, A. B., Jr. (1995) Antitrust Law J. 63, 749–800.
20. Anton, J. J. & Yao, D. A. (1995) Antitrust Law J. 64, 247–265.
21. Werden, G. (1988) St. Louis Univ. Law Rev. 32, 433–480.
22. Ordover, J., Saloner, G. & Salop, S. C. (1992) Am. Econ. Rev. 80, 127–142.
23. Riordan, M. H. & Salop, S. C. (1995) Antitrust Law J. 63, 513–568.
24. Perry, M. K. (1978) Bell J. Econ. 9, 209–217.
25. Katz, M. L. & Shapiro, C. (1985) Rand J. Econ. 16, 504–520.
26. Baker, D. L. (1993) Utah Law Rev., Fall, 999–1133.
27. Carlton, D. & Klammer, M. (1983) Univ. Chicago Law Rev., Spring, 446–465.
28. Shapiro, C. (1985) Am. Econ. Rev. 75, 25–30.
29. Areeda, P. (1990) Antitrust Law J. 58, 850.
30. Easterbrook, F. H. (1986) Notre Dame Law Rev. 61, 972–980.
31. Reiffen, D. & Kleit, A. (1990) J. Law Econ. 33 (October), 419–438.

s Of course, many monopolists, such as local telephone companies, face price regulation, but not under antitrust law.

t For example, in D&H Railway Co. v. Conrail, 902 F.2d 174 (2nd Cir. 1990), the Second Circuit Court of Appeals found that Conrail's 800% increase in certain joint rates raised a genuine issue supporting a finding of unreasonable conduct amounting to a denial of access by Conrail. Compare this, however, with Laurel Sand & Gravel, Inc. v. CSX Transp., Inc., 924 F.2d 539 (4th Cir. 1991), in which the plaintiff, a shortline railroad, received an offer from CSX for trackage rights but alleged that the rate quoted was so high as to amount to a refusal to make those trackage rights available. The Fourth Circuit found on these facts that there could be no showing that the essential facility was indeed denied to the competitor.

u The Terminal Railroad Association was a joint venture of companies that controlled the major bridges, railroad terminals, and ferries along a 100-mile stretch of the Mississippi River leading to St. Louis. Reiffen and Kleit (31) argue that the antitrust issue in Terminal R.R. was not the denial of access but rather the horizontal combination of competitors in the joint venture.

v See, for example, Data General, discussed above, and Image Technical Services, Inc. v. Eastman Kodak Company, 504 U.S. 541 (1992).

w The MCI case is an example.

x See, for example, Atari Games Corp. v. Nintendo of America, Inc., 897 F.2d 1572, 1576 (Fed. Cir. 1990).
