Air University Review, March-April 1969

Forecasting the Progress of Technology

Major Joseph P. Martino

The very nature of the Air Force is strongly influenced by technology. Not only the equipment it uses but also its organization, the skills needed by its members, and the physical facilities it must have depend upon available technology. Likewise, the Air Force of the future will be influenced strongly by the technology available then. Actions which affect the Air Force of the future, including the recruiting of personnel, the training given new personnel, and decisions about new construction, must take into account future technological developments. How can the planner determine what the technology of the future will be like, so that he can take account of it in his plans and decisions? This involves an art and science known as technological forecasting. Just what is technological forecasting? And how is it done?

In a sense, technological forecasting has been going on for centuries. Flying machines, long-distance communications, sound-recording apparatus, and so on have been discussed speculatively by many thinkers. Bacon and Da Vinci are only two of the great names associated with speculation of this kind. Since the beginning of the Industrial Revolution, writers of fiction have frequently made forecasts of advanced technology, as a vehicle for the story they wanted to write. In what way, then, does modern technological forecasting differ from such speculation?

Ralph Lenz, one of the pioneers of technological forecasting within the Air Force, has described it as follows:

Technological forecasting may be defined as the prediction of the invention, characteristics, dimensions, or performance of a machine serving some useful purpose. . . . The qualities sought for the methods of prediction are explicitness, quantitative expression, reproducibility of results, and derivation on a logical basis.

The differences between technological forecasting and speculation, then, lie primarily in the attempts of the forecaster to achieve precision in the description of the useful machine whose characteristics he is forecasting and in his attempts to place the forecast on a sound scientific foundation through the use of logical and explicit methods. A well-done forecast will state the predicted characteristics of the machine being forecast and make clear the means by which the forecast was arrived at.

However, the forecast of a future invention must be distinguished from the act of invention itself. A forecast may predict levels of performance that are well beyond the current state of the art; it may even predict levels of performance that exceed the theoretical or physical limits of currently used devices or machines. The forecast will not specify how these limitations are to be overcome; it will state only that by a certain time in the future the limitations will have been overcome by means as yet unknown, possibly including the invention of a new device not subject to the limitations of current devices. In short, a forecast predicts that an invention will have been made but does not do the inventing.

How is it possible to forecast the detailed characteristics of future machines, especially when these machines may rely on inventions and discoveries not yet made? A wide variety of methods is in use for making these forecasts, five of which will be described. These are intuitive forecasts, consensus methods, analogy, trend extrapolation, and structural models.

intuitive forecasts

Intuitive forecasting is almost certainly the most widely used method. It is the kind of forecast obtained by “asking an expert.” The assumption behind the use of this method is that the expert in some field of technology has a broad background of knowledge and experience upon which he can draw to forecast where his field is going. However, the record shows that the experts have been far from infallible. Arthur C. Clarke, in Profiles of the Future, describes some famous negative predictions, made by unquestioned authorities who were forecasting in their fields of expertise and who turned out to be one hundred percent wrong. Perhaps the most striking example is the forecast implicit in the statement made by the British Astronomer Royal in 1956 that “space travel is utter bilge.” The Library of Congress has compiled a very extensive list of expert predictions, entitled “Erroneous Predictions and Negative Comments Concerning Exploration, Territorial Expansion, Scientific and Technological Development.” As the title implies, this survey includes not only statements about the feasibility of certain technological advances but also statements about the economic value of geographic and scientific exploration. Every one of the predictions in this survey was made by a distinguished authority who should have been well informed in the field in which he made his prediction, and every one of them was proven wrong, the proof often coming not long after the ink was dry on the page of the forecast.

What is the lesson to be drawn from this? That experts are always wrong, and therefore intuitive forecasts are worthless? Not at all. In the first place, there are many examples of the experts’ being right. These examples just don’t make as exciting reading as errors do. Second, the fact that people still do consult experts in preference to people who know nothing about a subject indicates that an expert is more likely to be right than is a non-expert. Putting it another way, even though an expert may be wrong, his intuitive forecast may still be the best forecast available. This, in fact, is the nub of the problem. The real trouble with intuitive forecasting, according to Lenz, is that it is “impossible to teach, expensive to learn, and excludes any process of review.” The real goal is not to get rid of the experts but to devise methods which are teachable and which are less intuitive and more explicit, so that it becomes possible to have a forecast checked by several people, just as any engineering design or calculation can be checked.

consensus methods

One of the simplest methods for overcoming some of the disadvantages of intuitive forecasts is the use of a panel of experts. The notion behind this is that the interaction between several experts is more likely to ensure consideration of aspects which any single individual might overlook. More of the factors bearing on a situation are likely to be considered, and there is a better chance that a hidden bias of one panel member will be offset by a contrary bias in another member. The forecast may be prepared by a panel meeting face to face, or it may be prepared by a panel which never meets but interacts in other ways.

Probably the most common type of consensus forecast is that prepared by a panel which meets together. This method has proven successful in the past. The U.S. federal government, especially the Department of Defense, has made extensive use of this method. One of the largest groups ever assembled for this purpose was the Air Force’s Project Forecast, which had representatives from 30 Department of Defense organizations, 10 non-DOD federal organizations, 26 universities, 70 industrial corporations, and 10 not-for-profit corporations. These people were organized into 12 technology panels and 5 capability panels. They met during a six-month period in 1963 and produced a 14-volume report on the technology required to meet the defense needs of the 1970s.

Despite the widespread use and apparent success of face-to-face panels, they do have a number of disadvantages, all stemming from the well-known problems of committee action. A dominant personality may unduly influence the results. Fatigue of the group as a whole may result in a false consensus. There may be unwillingness on the part of members to abandon a publicly expressed opinion, even after hearing contrary arguments. And there is the opposite possibility of producing a watered-down least common denominator out of a desire to avoid offending anyone.

In an attempt to overcome these difficulties, researchers at the RAND Corporation devised the “Delphi Procedure,” which makes use of a panel of experts to arrive at a consensus but avoids the drawbacks of committee action by using a series of questionnaires instead of having the committee members meet face to face. In the first questionnaire, they are asked to make their forecasts on the topic of interest. The replies are compiled as a composite forecast, which shows the extent of the differences of opinion among the members of the panel but preserves the anonymity of the panelists and their opinions. In the second questionnaire, the panelists are asked to comment on the composite forecast and give reasons why they disagree with the composite result, if they do disagree. In the third and subsequent questionnaires, the panelists are presented with the current composite forecast as well as a summary of the reasons the panelists gave for changing it (i.e., arguments as to why an event would take place earlier or later than the majority of the panel thinks it will).

In each succeeding round of questionnaires, the panelists are expected to consider the arguments of the other panelists and either defend their positions with counterarguments or change their positions to agree with the majority. The anonymity of the procedure makes it easier for the panelists to consider arguments on their merits, without being influenced by their personal opinions of the panelists who originated the argument. In addition, panelists find it easier to abandon their earlier positions without losing face, if they become convinced that their earlier positions were in error. In practice, four or five rounds of questionnaires are sufficient for the panelists to converge on an agreed prediction. Figure 1 shows the behavior of one experimental Delphi panel on a single question: the estimated date of an anticipated event. The three panelists in the middle retained their original opinion. The two “early” and two “late” panelists revised their initial opinions to converge toward the middle. One member, holding an extreme position, neither influenced the remainder of the panel nor was influenced by it. This result is typical of panel behavior in a Delphi sequence. Some experiments at RAND indicate that the Delphi Procedure does improve the accuracy of group forecasts, but the method is too new to have received extensive validation. It does offer considerable promise and undoubtedly will be more widely used in the future. One of the biggest Delphi panels ever formed was used by the corporate planning office of TRW, Inc., to obtain a technological forecast in the areas of technology of most concern to the company.

Figure 1. Converging estimates of the date of an event
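The round-by-round feedback at the heart of the Delphi Procedure is easy to illustrate in code. The sketch below is not drawn from the procedures RAND actually used; it simply assumes a hypothetical seven-member panel estimating the year of an event, summarizes each round with a median and approximate interquartile range (the composite forecast fed back to the panel), and shows how revised estimates narrow the spread in a later round.

```python
# Minimal sketch of Delphi-style aggregation between questionnaire rounds.
# The panel estimates (years in which an event is expected) are hypothetical.
import statistics

def composite(estimates):
    """Summarize a round: median and interquartile range fed back to the panel."""
    s = sorted(estimates)
    n = len(s)
    median = statistics.median(s)
    lower = statistics.median(s[: n // 2])          # first quartile (approximate)
    upper = statistics.median(s[(n + 1) // 2:])     # third quartile (approximate)
    return median, (lower, upper)

# Hypothetical first-round estimates from a seven-member panel
round_1 = [1975, 1978, 1980, 1980, 1980, 1984, 1995]
median, spread = composite(round_1)
print(f"Round 1 composite: median {median}, interquartile range {spread}")

# In round 2 the "early" and "late" panelists, having read the feedback,
# move toward the middle; the extreme holdout keeps his position.
round_2 = [1978, 1979, 1980, 1980, 1980, 1982, 1995]
median, spread = composite(round_2)
print(f"Round 2 composite: median {median}, interquartile range {spread}")
```

In an actual Delphi sequence the feedback would also carry the panelists’ anonymous arguments for earlier or later dates, which this sketch omits.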

The consensus methods go a long way toward overcoming some of the objections to intuitive forecasting. Because of the interaction between the panelists, arguments for and against specific predictions tend to be made explicit, and it is possible for an outsider to review the proceedings of the panel after the forecast is complete, to see what factors the panel considered and how it arrived at its conclusions. However, there is still a large subjective element in forecasts obtained by consensus methods. Other methods of forecasting make a deliberate attempt to reduce this subjectivity.

forecasting by analogy

This method attempts to find analogies between the thing to be forecast and some historical event or well-known physical or biological process. To the extent that the analogy is a valid one, the original event or process can be used to make a prediction about the future development of some area of technology.

The use of historical analogy is actually quite common in everyday life. Expressions such as “We tried something like that once before and here’s what happened” are certainly well known to everyone. The major difference between the ordinary use of historical analogy and its use in technological forecasting is that the technological forecaster uses it consciously and deliberately, examining the “model” situation and the situation to be forecast in considerable detail to determine the extent to which the analogy between them is valid. The introduction and spread of an earlier technological innovation, the social impact of some previous invention, the delay between the introduction of some specific technology in one social situation and its introduction in some other and different situation, the delay between the adoption of a specific technology in a certain industry and the adoption of a successor technology in the same industry: all are illustrations of historical situations that can be used as models for predicting the future progress of some technology under study. Even though history never repeats itself exactly, the use of historical analogies can give considerable insight into the likely course of development of some technology of current interest. An example of an extensive use of this approach is the book The Railroads and the Space Program: An Exploration in Historical Analogy (Bruce Mazlish, ed.). As the title indicates, the contributors to this volume attempted to find similarities between the U.S. space program and the development of the railroads in the nineteenth century and to use these similarities to make predictions about the space program.

Another type of forecast by analogy, much less common in everyday life but in fairly wide use by technological forecasters, is the analogy with physical or biological processes. An especially common approach is the use of growth curves to predict the advance of some technology. Both individuals and populations of many living species have growth curves that follow an S shape. It has been observed that many technological devices follow this same pattern: a slow start, then a rapid rise, followed by a leveling off and obsolescence. Figure 2 shows clear-cut examples of this pattern in the field of illumination technology. Here two specific classes of devices illustrate this growth pattern. There are actually good reasons for the similarity between growth in performance of a technological device and the growth of an individual or population. In both, growth tends to be the cumulative result of a large number of separate accretions or advances, and there are often considerable difficulties to be overcome at the outset, causing the growth to be slow. Once these difficulties are overcome, the stage is set for rapid growth, until some limit is encountered. Biologically, this limit is usually environmental, such as a fixed food supply. Similarly, in technology the limit is usually “environmental” in the sense that it is extrinsic to the technology-generation process. It frequently comes from some natural limit on the performance of some specific class of device. Since technologies do tend to follow the S-shaped growth curve, it appears natural to try to forecast technological progress by using this method. It is especially applicable to technologies where there is some known upper limit to the possible performance, such as the speed of light or the achievement of 100 percent efficiency.

Figure 2. S-shaped growth curves of lighting devices
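Where an upper limit on performance is known, the growth-curve analogy can be turned into a numerical forecast by fitting an S-shaped (logistic) curve to the performance history of a single device class. The following sketch uses SciPy’s curve_fit; the limit L, the data points, and the implied device are illustrative assumptions, not the measurements behind Figure 2.

```python
# Sketch: fitting an S-shaped (logistic) growth curve to the performance history
# of a single device class, given a known upper limit L on performance.
# The data points below are illustrative values, not historical measurements.
import numpy as np
from scipy.optimize import curve_fit

L = 100.0  # assumed upper limit (e.g., a theoretical efficiency ceiling)

def logistic(t, k, t0):
    """Logistic growth toward the limit L: slow start, rapid rise, leveling off."""
    return L / (1.0 + np.exp(-k * (t - t0)))

years = np.array([1900, 1910, 1920, 1930, 1940, 1950])
perf  = np.array([   5,   12,   30,   60,   82,   93])   # illustrative performance values

(k, t0), _ = curve_fit(logistic, years, perf, p0=[0.1, 1925])

# Forecast: where the fitted curve says performance will be in 1960
print(f"Predicted performance in 1960: {logistic(1960, k, t0):.1f} (limit {L})")
```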

The major strength of this method is that it eliminates much of the subjectivity of either intuitive or consensus methods of forecasting. Its major weakness, however, is that the exact extent of the analogy between the model and the thing to be forecast is often not evident until too late to do any good. For instance, the plot of performance versus time for some device often gives no advance warning that the curve is going to change from slow start to rapid growth or pass through the inflection point and slow down. The points at which these changes occurred can often be recognized only in retrospect. Thus, useful as this method is, it does not completely satisfy the needs of the technological forecaster. There is still a need for methods which, like the use of analogies, eliminate the subjectivity of expert opinion but which make better use of past data to develop predictions of when higher levels of performance will be reached.

trend extrapolation

Trend extrapolation is one way of getting around the problem of predicting when the S-curve is going to change direction. Instead of concentrating on a single device and attempting to predict the future course of development of that device, the trend extrapolation method considers a series of successive devices which performed similar functions. These can be considered individual representatives of a broad area of technology. It is then necessary to find a single performance characteristic of these devices that can be expressed numerically. The forecast is made by plotting the performance of each device against the year in which it was achieved. If a trend is apparent, this trend is projected and becomes the forecast. An example is shown in Figure 3, which considers the same area of technology as Figure 2, that is, illumination technology. Instead of plotting the course of development of a single device, however, each successively developed device becomes a single point on the curve. Note that even though not all the devices shown in the figure are electrical in nature, the energy consumption of each of them can be converted into watt-equivalents, so that a uniform ordinate, efficiency in lumens per watt, can be used for all the devices. While several points could be plotted for each device, there is usually no value in doing so. Successive devices usually have major differences in performance, on the order of 100 percent or more, while improvements to a single device usually are on the order of a few percent. If the curve were drawn in detail, with several points for each device, and if an accurate representation were made of the plateaus usually reached by specific devices, the curve would actually be in stairsteps. The straight line shown is the envelope of the true curve. Hence this method of forecasting is sometimes referred to as the use of “envelope” curves.

Figure 3. Trend in improvement of illumination devices
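In code, straightforward trend extrapolation amounts to fitting a line to the logarithms of the best-performance points for successive devices and projecting it forward. The sketch below assumes rough, invented lumens-per-watt values rather than the actual data behind Figure 3.

```python
# Sketch: trend extrapolation across successive devices.  Each device is a
# single point (year of introduction, best performance); an exponential trend
# is fitted by least squares on a logarithmic scale and then projected.
# The figures below are rough illustrative values for lumens per watt.
import numpy as np

years = np.array([1850, 1880, 1910, 1935, 1940, 1960])
lumens_per_watt = np.array([0.2, 2.0, 10.0, 20.0, 50.0, 80.0])  # illustrative

# Fit log(performance) = a * year + b, i.e., a straight line on semi-log paper
a, b = np.polyfit(years, np.log(lumens_per_watt), 1)

def trend(year):
    """Extrapolated performance implied by the fitted envelope trend."""
    return np.exp(a * year + b)

print(f"Projected efficiency in 1980: {trend(1980):.0f} lumens per watt")
```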

Use of trend extrapolation in this way avoids the problem of making detailed predictions about the development of specific devices. On the other hand it provides less information about the actual devices that will make it possible to achieve the predicted performance. The curve says only that the performance will be attained. It does not say anything about whether existing devices can be improved to attain that performance or whether a new device will be invented.

There is a useful variant of the straightforward form of trend extrapolation, known as the precursor method. It involves finding a relationship between two areas of technology with one leading the other by a predictable interval. An example is shown in Figure 4, which compares the top speeds of U.S. combat aircraft with those of U.S. transport aircraft. As the trend lines indicate, combat aircraft appear to be leading transport aircraft by a slowly widening gap. On the assumption that these trends will continue, such a graph could be used to predict the future speed of transport aircraft, based on already-achieved speeds of combat aircraft. The credibility of this type of forecast is higher than that of a straightforward trend extrapolation, especially where there is some logical connection between the two trends. Such a connection is plausible in the case of combat and transport aircraft. However, in some apparently correlated trends, there may be no logical connection whatsoever between the two technologies. Hence the method must be used with care. In any case, the method cannot be used for making projections farther ahead than the lag time between the two technology areas.

Figure 4. Speed trends of combat and transport aircraft
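A minimal sketch of the precursor idea follows. It assumes a fitted exponential trend for combat-aircraft speed and a fixed lead of combat over transport aircraft; both the trend parameters and the ten-year lag are hypothetical stand-ins, and a real application would fit them to the data of Figure 4 (where the gap is in fact slowly widening).

```python
# Sketch of the precursor method: if transport top speed tends to follow combat
# top speed after a lag, the combat trend already achieved can be read off as a
# forecast for transports.  The trend parameters and lag below are hypothetical.
def combat_top_speed(year):
    """Hypothetical fitted trend for combat-aircraft top speed (knots)."""
    return 300 * 1.05 ** (year - 1940)   # illustrative exponential trend

LAG_YEARS = 10  # assumed lead of combat over transport aircraft

def predicted_transport_speed(year):
    """Transport speed forecast: the combat trend value one lag interval earlier."""
    return combat_top_speed(year - LAG_YEARS)

print(f"Forecast transport top speed in 1975: {predicted_transport_speed(1975):.0f} knots")
```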

To digress a little, the data shown in Figure 4 can also be considered as two examples of straightforward trend projection. As such, they contain some interesting features. The highest-speed bomber point on the graph represents the SR-71, which was developed in secret by Lockheed for the Air Force. The date shown for it is the date its existence was publicly announced, and the speed shown is its publicly announced speed. Since it was probably operational before its existence was announced, the point should probably be moved to the left. Also its actual top speed probably exceeds that publicly announced and the point should be moved higher. Applying either or both of these “correction factors” would move the point closer to the trend line for combat aircraft. The lesson here is that even secret technological advances tend to follow the same trends as preceding nonsecret advances.

Now consider the points representing transport aircraft. As they show, there has been essentially no increase in top speeds for new transports throughout the 1960s. This resulted from the following factors: operation at speeds just below mach 1.0 produces difficulties associated with the onset of compressibility and formation of shock waves; operation just above mach 1.0 is highly uneconomic because of high drag penalties; the technology needed to operate in the efficient but high-temperature, high-supersonic regime was not yet available in transport aircraft. As a result, several successive transport designs continued to have top speeds in the neighborhood of 550 knots. If the graph showed all the civilian transports introduced in the 1960s, the impact of these factors would appear even more clearly. However, the highest transport point shown (actually a prediction, since it has not been achieved yet) is for the supersonic transport (SST). Once the technology became available to permit operation at speeds near mach 3.0, where operation is much more efficient than at speeds just above mach 1.0, transport design reverted to the trend line followed by most preceding transports. The factors that dominate transport design were temporarily stymied by the difficulties of transonic operation, but once this barrier was hurdled they again exerted their control.

Trend extrapolation, whether of the straightforward variety or the precursor method, is at once the simplest and most sophisticated method of technological forecasting currently available. In concept it is quite simple. It involves only the plotting of some quantitative characteristic of the technology against time and extrapolating any observable trend. But sophistication can enter this process quite rapidly, one of the first possible sources being the choice of the characteristic to be plotted. In the case of aircraft, for instance, speed is a fairly obvious characteristic. However, for transport aircraft, productivity, measured in tons of payload × miles per hour of cruising speed, is somewhat less obvious but is more directly related to their real function than is top speed. (Such a plot, incidentally, would show that throughout the 1960s the productivity of successive new models of transport aircraft grew steadily, even though the top speed remained relatively static.) In any event, considerable sophistication may be involved in choosing a characteristic that not only truly represents the ability of devices to function as expected but also can be applied to successive devices that may operate on different principles while performing the same function. Likewise, choosing the scale on which to plot the characteristic is not always simple. Probably a logarithmic scale is most frequently used. Others such as cumulative normal distribution, cumulative lognormal distribution, etc., may be used. The usual purpose in choosing a scale is to allow the trend, if any, to show as a straight line. For instance, if the growth of the characteristic being plotted is expected to be exponential, plotting on a logarithmic scale will produce a straight line. If the scale is poorly chosen, the trend may be nonlinear and therefore hard to project. Finally, even with a characteristic and a scale carefully chosen, the points do not usually lie on a smooth curve. Drawing a trend line may involve nothing more than an “eyeball” fit with a straightedge, or it may involve sophisticated mathematical curve-fitting techniques. Thus while trend extrapolation is simple in principle, it can rapidly become sophisticated in use.
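As an illustration of choosing the characteristic, the short sketch below computes transport productivity as payload times cruising speed for a few hypothetical aircraft; the names and numbers are invented for the example, not taken from the historical record.

```python
# Sketch of choosing a more functionally meaningful characteristic: transport
# "productivity" as payload multiplied by cruising speed, plotted instead of
# top speed.  The aircraft names and numbers are rough illustrative values.
transports = [
    ("Piston transport, 1950",  10, 300),   # (name, payload in tons, cruise speed in knots)
    ("Early jet, 1960",         20, 480),
    ("Stretched jet, 1968",     40, 490),
]

for name, payload_tons, cruise_knots in transports:
    productivity = payload_tons * cruise_knots   # ton-knots of productivity
    print(f"{name}: {productivity} ton-knots")
```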

However, whether the extrapolation method is used in its conceptual simplicity or involves some sophisticated mathematical techniques, it is based on an important underlying assumption: that the conditions which prevailed in the past and were responsible for the well-behaved trend observed in the data will continue unchanged into the future, at least as far as the time of the desired prediction. No amount of mathematical sophistication in treatment of the data can make up for the breakdown of this assumption. In many cases, though, the exact nature of the conditions responsible for a trend is not even known, let alone whether they will remain constant. But suppose it is known that some relevant conditions are going to change. Then it may no longer be possible to make a prediction by extrapolating past trends. For instance, suppose a majority of the public decided it simply would not tolerate the operation of an SST over inhabited land areas. Under these changed conditions, a straightforward projection of the past trend in transport aircraft speed would not be justified. Or suppose the government or a major corporation makes a policy decision to accelerate the growth of some technology by deliberately changing some relevant condition, such as the level of resources applied, as the federal government did, for instance, to rocket technology with the decision to put a man on the moon. Trend extrapolation gives little or no hope of providing accurate forecasts of the progress of the accelerated technology. Not only that, it gives no guidance as to which conditions should be altered to achieve a desired rate of progress. In short, if a change in the relevant conditions is big enough, whether it is forecast but not under anyone’s control or is deliberately introduced by someone, extrapolating past trends is of little value as a means of forecasting.

structural models

The structural model represents an attempt to develop a mathematical or analytical model of the technology-generation process. As with mathematical models of any process, the purpose of constructing a model of the technology-generation process is to single out certain elements as being relevant to the process, make explicit some of the functional relationships among these elements, and express these functional relationships in mathematical form. A characteristic feature of such models is that they tend to be abstractions; certain elements are omitted because they are judged to be irrelevant, and the resulting simplification in the description of the situation is intended to be helpful in analyzing and understanding it.

Figure 5 shows an example of a model of the technology-generation process in block diagram form. This represents an attempt to model the flow of knowledge, from discovery through engineering into technology. In principle, a mathematical relationship would specify the rate at which new knowledge is produced, based on the number of scientists at work and the type and extent of scientific research facilities available to those scientists. Similarly, a mathematical relationship would specify the rate of progress of some parameter of technology (such as lumens per watt, used in Figures 2 and 3), based on the rate of production of new knowledge and the engineers and facilities available to exploit the new knowledge.

Figure 5. Structural model of the technology-generation process

Each of the blocks, of course, conceals a submodel. For instance, the number of scientists available to work in a specific field is not a static figure. It is increased by migrations from other fields and by new graduates from colleges. It is decreased by deaths, emigrations, and diversions of scientists to teaching. Diversion to teaching, while it may lead to a short-term reduction in the numbers of scientists working full-time in some field, is essential if the number of new graduates is to be increased. So the number of scientists available, over time, is a result of the interaction of several complex phenomena, some of which are subject to manipulation as a result of policy choices. Furthermore, the blocks are not independent of each other. An increase in the amount of scientific research facilities available can be accomplished only by diverting engineers from the exploitation of new knowledge to the design and construction of new facilities. These submodels and their interactions are typical of what is involved in constructing a model of the technology-generation process and of finding mathematical expressions for the relationships among the elements of the model.
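A toy simulation can make the flavor of such a model concrete. The sketch below is purely illustrative: every coefficient is a hypothetical placeholder, the diminishing-returns form chosen for knowledge production is an assumption, and nothing in it should be read as a validated model of the process Figure 5 describes.

```python
# Purely illustrative sketch of a structural model in the spirit of Figure 5:
# scientists produce knowledge (with diminishing returns per scientist), and
# engineers turn the accumulating knowledge into growth of a technology
# parameter.  Every coefficient here is a hypothetical placeholder, since the
# real relationships are not yet known quantitatively.
import math

scientists = 1000.0
engineers = 500.0
knowledge = 0.0
tech_parameter = 10.0   # e.g., lumens per watt

for year in range(1970, 1981):
    # New scientists graduate; some are diverted to teaching or other fields.
    scientists += 0.04 * scientists - 0.02 * scientists
    # Knowledge production grows less than linearly with the number of scientists
    # (a crude stand-in for the communication problem discussed in the text).
    new_knowledge = 5.0 * math.sqrt(scientists)
    knowledge += new_knowledge
    # Engineers convert a fraction of the new knowledge into technical progress.
    tech_parameter += 0.0005 * engineers * new_knowledge / (knowledge + 1.0)

print(f"Illustrative technology parameter in 1980: {tech_parameter:.1f}")
```

A model of this kind, once its relationships were established empirically, is what would let a planner test the effect of policy levers such as adding research facilities or diverting engineers to their construction.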

What is the current state of the art in constructing structural models of the technology-generation process? Unfortunately, existing models are both quantitatively and qualitatively deficient. In the model of Figure 5, for instance, we simply do not know enough to specify the mathematical relationship among the number of scientists at work in a field, the amount of research facilities available to them, and the rate of production of new knowledge. It is clear that the rate of production of new knowledge increases with an increase in the number of scientists working in a particular field. However, the relationship is not a simple one, and in particular it is not linear. Simply because of communication problems, the average rate of discovery per scientist falls off as the number of scientists in a field increases. Because of these and many other problems, today it is not possible to make quantitative statements about the relationships shown in the model. At best, then, such models can only be qualitative.

Even here, however, the lack of knowledge is a hindrance. The model shown, for instance, implies that technology is produced out of knowledge generated through scientific research. But we know this is not the full story, either. Many instances of new technology arise out of sheer empiricism, with science later providing explanation and understanding. Thermodynamics, which followed rather than preceded the steam engine, is only one example. Not enough is known about the empirical foundations of technology to allow us to construct a model of the technology-generation process that is even qualitatively correct.

Despite the deficiencies of the current models, constructing models is one of the most promising lines of development in improving our capability to do technological forecasting. First of all, it is clear that qualitatively and quantitatively correct models of the technology-generation process will allow us to go beyond any of the other currently used techniques. Second, the research needed to develop the knowledge to improve current models is fairly well defined and is being actively pursued at a number of centers. Hence it is fairly safe to predict that within a few years rudimentary models will be available that will allow us to make quantitative predictions of the impact on technological growth of changes in allocation of resources, construction of new facilities, etc. Instead of being forced to assume that conditions will remain unchanged, we will be able to determine the effect of deliberate changes in conditions.

Within the past decade or so, technological forecasting has progressed from something resembling a black art to the point where it is beginning to look like a science. It will probably never approach being an exact science, since it deals with predicting what human beings will do, and they are a notoriously unpredictable lot. However, it has already reached the point where we can identify meaningful measures of technological progress and use them to predict further progress, provided that the conditions which existed in the past remain unchanged. Under some circumstances, we can even make qualitative predictions about the impact of policy decisions on technological progress. In the reasonably near future we can expect to be able to make quantitative predictions about technological progress, given information about the factors which determine that progress. We should even be able to make deliberate plans to achieve specified rates of progress and know what it will cost in men and resources to achieve those rates. When that day arrives, technological forecasting will be used as regularly in making business and political decisions as economic forecasting is now.

Holloman AFB, New Mexico

Credits

The figures used in this article were obtained from the following sources:

Figure 1, J. P. Martino, “An Experiment with the Delphi Procedure for Long-Range Forecasting,” Air Force Office of Scientific Research Report AFOSR 67-0175.

Figures 2, 3, and 4 (data only), “Report on Technological Forecasting,” Interservice Ad Hoc Committee, published by the Office of the Chief of Research and Development, U.S. Army.


Contributor

Major Joseph P. Martino (Ph.D., Ohio State University) is the Assistant for Research Analysis, Office of Research Analyses, Holloman AFB, New Mexico. Other assignments have been as Project Engineer, Inertial Bombing Systems Section, Armament Laboratory, Wright Air Development Center; as an AFIT student at Ohio State University; in the Mathematics Division, AFOSR; with the Advanced Research Projects Agency in Bangkok, Thailand; and as Assistant for Research Coordination, Air Force Office of Scientific Research, Office of Aerospace Research. Major Martino is a graduate of the Squadron Officer School, Air Command and Staff College, and Armed Forces Staff College. He was chairman of the Special Warfare Working Group of the 15th Military Operations Research Symposium.

Disclaimer

The conclusions and opinions expressed in this document are those of the author cultivated in the freedom of expression, academic environment of Air University. They do not reflect the official position of the U.S. Government, Department of Defense, the United States Air Force or the Air University.

