CONSISTENT UNDERESTIMATION BIAS, THE ASYMMETRICAL LOSS FUNCTION, AND HOMOGENEOUS SOURCES OF BIAS IN STATE REVENUE FORECASTS

William R. Voorhees*

* William Voorhees, Ph.D., is an Assistant Professor, School of Public Affairs, Arizona State University. His publications and research interests include topics in revenue forecasting, governmental accounting, and public finance.

ABSTRACT. One component of revenue forecast error has been attributed to the phenomenon of consistent underestimation bias arising from asymmetrical loss. Because underestimating a revenue forecast results in less loss to forecasters than overestimating it, forecasters appear to be biased toward underestimation. This paper confirms this hypothesis. Additionally, with the greater usage of national forecasting organizations that provide the economic forecasts on which revenue forecasts are based, a secondary source of forecaster bias may be present in many state-level forecasts. This hypothesis is supported by the increase in the number of states using such organizations and a decrease in the standard deviation of the annual mean percentage state forecast error.

INTRODUCTION

Revenue forecast error often is attributed to consistent underestimation bias due to an asymmetrical loss function. Because forecasters are subject to a greater loss when they overestimate revenue than when they underestimate it, there is an incentive for forecasters to underforecast revenues and thus avoid the losses they might incur from overestimation. Forecaster loss is manifested in many forms, including loss of potential salary increases, loss of reputation as a forecaster, and loss of job responsibilities.

Research to date has considered the revenue forecaster as the source of underestimation bias, but the recent usage of external economic forecasts may also be introducing bias into forecasts. Many states utilize a conditional forecasting process in which a forecast is first made for economic conditions and the revenues are then forecast from the economic forecast. If the economic forecast is underestimated, then an accurate revenue-forecasting model will underestimate revenue.

Recent trends indicate that states are increasing their reliance on external forecasts generated by a very limited number of national forecasting consultants. In October of 2002, the two primary firms, Data Resources, Inc. (DRI) and Wharton Econometric Forecasting Associates (WEFA), merged into a single company called Global Insights, further reducing the sources of economic forecasts. Companies providing economic forecasts on a fee basis may well be operating under a similar set of asymmetrical risk factors as revenue forecasters. Renewal of contracts for economic forecasts depends on both the accuracy of the forecast and the impact that its error has on governmental disruption. Because an underforecast will always result in less disruption than an overforecast, third-party economic forecasters are incented toward underestimation bias. As more states utilize a limited number of economic forecasting services, the error of the economic forecast should become homogenized, as indicated by a decreasing variance of error across state revenue forecasts.

This paper first considers the literature on underestimation bias and the effects on forecasts attributed to fiscal stress. Next, state forecasts are examined for a consistent underestimation bias for the years 1979 through 2002.
Finally, aggregate state forecast error variances are examined for consistency across years, which would indicate the introduction of a homogeneous error source.

SOURCES OF BIAS IN REVENUE FORECASTS

In addition to random error, bias also creates an opportunity for the forecast to be in error. Although all forecasts have error, an unbiased, strongly rational forecast has error that is attributable only to randomness. A forecast is said to be strongly rational if the forecast and the actual revenues, conditional upon the influences of a set of full information, are equal (Feenberg, Gentry, Gilroy & Rosen, 1989). In other words, given the set of full information available and taking into account its effects, the difference between the forecast and the actual revenue should be zero. Weak rationality exists when the set of information is incomplete yet the forecasters achieve the correct answer. When bias is introduced into a strongly rational forecast, the error takes on a systematic element in addition to the element of randomness.

Forecast error at the federal level has raised many concerns in recent years. During the 1980s, actual revenues often fell short of the projections, leaving the government with larger than anticipated deficits, while in the latter part of the 1990s, actual revenues exceeded the forecasts. Overly optimistic assumptions were the primary cause of the shortfalls of the 1980s (Shumavon, 1981). David Stockman, Director of the Office of Management and Budget during much of that period, claims that optimistic assumptions were utilized intentionally to justify the Reagan administration's tax reduction package (Stockman, 1985). Howard (1987), who argues there is little difference between Congressional Budget Office (CBO) and Office of Management and Budget (OMB) projections, presents a different picture. The similarity of OMB and CBO projections would suggest that political motivations are not the foundation of a consistent bias.

At the local level, several studies have indicated that there appears to be a conservative or underestimation bias present in forecasts (Larkey & Smith, 1984). In a 1988 study, Frank found that Florida cities tended to underestimate revenues by approximately 8% (Frank, 1988). A 1992 study found that Pennsylvania cities and counties consistently underestimated total revenues but consistently overestimated property tax and intergovernmental revenues (Bretschneider, Bunch & Gorr, 1992).

At the state level, mixed results on the presence of bias have been reported. One study of general sales tax forecasts in twenty-eight states for the years 1981 through 1986 found overestimates with a mean percentage error of 0.06. A 1989 study by Cassidy, Kamlet, and Nagin, covering twenty-three states between 1978 and 1987, found that 59% of the forecasts were underestimations with a mean percent error of 0.51. According to the researchers, the results cast substantial doubt on the prevailing literature that there is an underestimation bias at the state level. They base this on a t-test, finding that the results were only marginally significantly different from zero at p = 0.046. Because no corrections were utilized for serial correlation within states, they argue that the standard error would exceed the "conventional standards" (Cassidy, Kamlet & Nagin, 1989).
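For concreteness, the strong-rationality condition described at the start of this section can be stated compactly. The notation below is an illustrative formalization introduced here, not notation used by the cited studies (amsmath assumed).

```latex
% Illustrative formalization of strong forecast rationality (notation assumed for this sketch).
% F_t : revenue forecast for year t;  A_t : actual revenue;  \Omega_t : full information set
% available when the forecast is made.
\[
  \text{Strong rationality:}\quad
  E\!\left[\,A_t \mid \Omega_t\,\right] = F_t
  \quad\Longleftrightarrow\quad
  E\!\left[\,A_t - F_t \mid \Omega_t\,\right] = 0 .
\]
% Under strong rationality the forecast error A_t - F_t has no component predictable from
% \Omega_t; any systematic (predictable) component of the error therefore indicates bias.
```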
Contradicting these findings are other studies that have found an underestimation bias in revenue forecasting, generally attributed to an asymmetrical loss function of the forecasters. Feenberg et al. (1989) studied three states and found they all had a bias towards underestimation. Another study that investigated forecasting in New Jersey found that forecasts favored underestimation, with the means of the forecast errors being significantly different from zero (Gentry, 1989). A study on forecasting in Illinois also found a conservative bias (Albritton & Dran, 1987). Finally, a 1992 study found that the use of outside consultants resulted in an underestimation bias except when political competition existed (Bretschneider & Gorr, 1992). Taken in total, these results suggest that the evidence for forecast bias must be considered inconclusive.

Political Aspects of Forecast Error Bias

The political environment is clearly one possible source of asymmetrical loss for forecasters. However, the political environment may favor either underestimation or overestimation, depending on the policy objectives of the political actors and their influence over the forecasters. In discussing this relationship, Wachs argues that public administrators must consider both psychological and sociological dimensions of forecasting and that forecasters often are faced with ethical dilemmas. The organizational locus of the forecaster may cause the forecaster to produce forecast outcomes that optimize the policy preferences of the organization's goals (Wachs, 1982). In interviews of state forecasters, it was found that the low-end forecast was usually accepted. As one interviewee stated, "I am a hero when there is more money than I predicted and a villain when there is less. Let me tell you, it is better to be a hero than a villain" (Rodgers & Joyce, 1996).

In a study of forecast accuracy between the Congressional Budget Office (CBO) and the Office of Management and Budget (OMB), Howard (1987) found that the OMB produced consistently optimistic forecast assumptions. Howard attributes this to an optimistic bias of the executive budget. In a similar study, Shumavon found that the CBO consistently produced more accurate forecasts than the OMB. He attributed the increased accuracy to organizational differences that insulated the CBO staff from partisan political pressures (Shumavon, 1981).

Rubin (1987) has suggested that conservative revenue estimates may result not only from a lack of knowledge about the economy and overly simple forecasting techniques but also from a conscious political decision to distort the estimates. The effort to buffer against uncertainty may combine with the fiscal conservativeness of finance officials to create a "normal" bias toward underestimation. This tendency may be exaggerated if there is conservative political leadership trying to reduce the level of taxation and services: revenue underestimates may hold down service levels and cause cuts in services. The eventual revenue surpluses resulting from underestimates of revenue encourage reductions in tax levels. Low revenue estimates may discourage expansionary departmental requests or, failing that, may enhance the power of the city administrator to cut departmental budget requests (Rubin, 1987). As suggested by Rubin, the political argument generally is based on the claim that conservative politicians are more likely to cause the forecast to be minimized in order to reduce the level of services.
Cassidy, Kamlet, and Nagin (1989) make the argument that the direction of forecast error is difficult if not impossible to predict from either party or ideological position. While one may argue that conservative politicians attempt to minimize revenue forecasts in order to reduce spending and liberal politicians attempt to maximize revenue forecasts to increase spending and service levels, the opposite can also be shown. For example, at the federal level, David Stockman (1985) claims that forecasts were overestimated intentionally to pave the way for President Reagan's tax cuts. Thus, while ideology may indeed influence estimation, the direction of the influence is ambiguous.

While the direction of the forecast may not be logically attributable to partisan or ideological influences, the accuracy of the forecast is another question. Accuracy is measured as the absolute value of the deviation from the actual and does not account for direction. Partisan or ideological influences may arise from competition between parties, which challenges the assumptions of the revenue forecast. A dominant party may reduce that competition and create an imbalance in political power, so forecasts may go unchallenged (Bretschneider, Gorr, Grizzle & Klay, 1989). In their 1989 study, Bretschneider et al. found evidence that party dominance was influential in state revenue forecasts. This result is supported partially by Gentry (1989) who, in an extension of Feenberg, Gentry, Gilroy, and Rosen's (1989) study on forecasting rationality, found that party dominance has significant influence on New Jersey inheritance tax forecasts. This result is further supported by the case study of Ohio forecasting by Shkurti and Winefordner. They suggest, "forecast accuracy can be achieved even in a highly partisan political environment, provided that the officials involved perceive the advantages of submitting unbiased forecasts" (Shkurti & Winefordner, 1989). Another study found that unified governments produce more accurate forecasts than do split governments, attributing the accuracy to the efforts of the party in power to avoid a loss of political capital (Voorhees, 2000; Voorhees, 2004). However, contrary to the above results, Cassidy, Kamlet, and Nagin (1989) found that party dominance does not influence forecast accuracy.

Fiscal Stress, Underestimation Bias, and the Asymmetrical Loss Function

Several studies have considered the effects of stress on forecast accuracy. In California, stress induced by Proposition 13 was found to cause underestimates, resulting in greater accuracy (Chapman, 1982). A study of 133 Illinois cities found that cities with growing revenue, low property taxes, and no fiscal stress tended to underestimate revenues. On the other hand, cities with stagnant revenue growth, high property taxes, and fiscal stress tended to overestimate revenues (Rubin, 1987).

These results might be explained by the natural tendency of forecasters towards an underestimation bias. If forecasters perceive a greater loss when the forecast is higher than actual revenues than when it is lower, they might be incented towards underestimation. Shortfalls can dramatically affect the operations of government, requiring cutbacks and tax increases. Both of these options are distasteful to politicians and the public. On the other hand, a surplus from underestimated revenue is not seen as negatively as a shortfall.
In some cases, the public may even see a surplus as a positive indicator of government performing better than expected. This creates an asymmetrical loss curve in which the loss to a forecaster is less when revenues are underestimated than when they are overestimated.

If the asymmetrical loss function is at work during periods of nonstress, then what maladies might occur during periods of fiscal stress? Fiscal stress may cause the asymmetrical loss curve to shift, making overestimation less costly than normal and underestimation more costly. The effect of this shift is to offset some of the underestimation bias. The shift in the asymmetrical loss curve results from additional pressures on forecasters to produce desirable rather than accurate forecasts, resulting in less underestimation bias than normal. At the same time, revenues are likely to be falling in real dollars (or failing to grow at their historical rates). These two effects result in a convergence of the forecast and actual revenue in times of stress. Thus, the shift in the asymmetrical loss curve, coupled with declining revenues, actually improves forecast accuracy.

One interesting topic that has yet to be addressed in the literature is whether overestimation risk exerts an additive or a multiplicative influence. If we assume a linear loss function, the additional risk would be manifested either as an increase in the Y-intercept or as an increase in the slope of the line. In the former case, the risk of overestimation, as compared to underestimation, is raised equally across all levels of overestimation, while in the latter, the excess risk due to overestimation increases with the size of the overestimation.

National Economic Forecasts: A Homogeneous Source of Bias

So far, research has assumed that the asymmetrical loss function is a phenomenon of state and local forecasters. However, research on sales tax forecasting shows that 75% of the states first develop an economic forecast before the revenue forecast (Klay & Grizzle, 1992). In recent years, states have been relying more on external or national forecasting firms such as DRI and WEFA (now Global Insights) to provide them with the economic forecast on which they base their revenue forecast. Presumably, the volume of forecasts made by these firms allows them to utilize the best techniques and hire the most knowledgeable forecasters, something many states may be constrained from doing (Bretschneider & Schroeder, 1988). In addition to utilizing national forecasting firms, 44% of the states utilize a council of economic advisers to help arrive at economic forecasts (Voorhees, 2002). These conditional economic forecasts need to be considered as possible sources of forecast bias.

A survey taken by the author in the spring of 1999 showed that 39 of 46 responding states, or 85%, utilized national forecasting firms in 1997. This compares with a 1990 study that found 29 of 44 responding states, or 66%, utilized national economic forecasts (Federation of Tax Administrators, 1993). This represents a 19 percentage point increase in the use of national forecasting firms over a period of just eight years and indicates a significant change in forecasting policies within state governments, the effects of which have not been adequately measured. If one considers that most state forecasters share information on forecasts, the effective saturation of the national forecasting firms might actually approach 100% of the states.
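Before turning to how bias might compound when the economic and revenue forecasts are produced by different parties, the additive versus multiplicative distinction raised above can be made concrete. The piecewise-linear form below is only an illustrative sketch; the functional form and the constants a, b, and c are assumptions introduced here, not drawn from the cited literature (amsmath assumed).

```latex
% Illustrative piecewise-linear loss functions for the forecast error e = F - A,
% where e > 0 is overestimation and e < 0 is underestimation; a, b, c >= 0 are assumed constants.
\[
  \text{Additive (intercept) penalty:}\qquad
  L(e) = \begin{cases} b\,|e| + a, & e > 0 \\ b\,|e|, & e \le 0 \end{cases}
\]
\[
  \text{Multiplicative (slope) penalty:}\qquad
  L(e) = \begin{cases} (b + c)\,|e|, & e > 0 \\ b\,|e|, & e \le 0 \end{cases}
\]
% In the additive case the extra cost of overestimating is a fixed amount a regardless of the
% size of the error; in the multiplicative case the extra cost c|e| grows with the overestimate.
```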
A single forecaster performing both the economic and revenue forecasts might introduce bias into both, but the threat of a highly inaccurate forecast would constrain the total bias. On the other hand, if the tasks of forecasting the economy and revenue are separated and assigned to two independent forecasters, the constraint on the bias would be expected to weaken. In that case, total error would increase whenever the economic and revenue forecasts are biased in the same direction and no contravening factors are present. Were state forecasters aware of the economic forecast bias, they could incorporate it into the revenue forecast, increasing overall accuracy.

It is reasonable to assume that an asymmetrical loss function exists for national forecasting firms just as it is believed to exist for state forecasters. During 1997–1998, it was found that both state economic forecasters and private forecasters tended to underestimate economic forecasts (Davis & Boyd, 1999). In theory, the asymmetric loss function might be more applicable to the national forecasting firms than to state forecasters. An economic forecast that results in a shortfall would surely raise the attention of state officials and possibly jeopardize future contracts between the government and the forecasting firm. Additionally, individual forecasters for the national firms would be subject to the same asymmetrical risks as a forecaster for a state or local government. In contrast, an economic forecast that resulted in a revenue windfall would not be nearly as serious to state officials, and the contract would not be placed in as much jeopardy.

Testing for Consistent Underestimation Bias

Using secondary data from the Fiscal Survey of States (National Governors Association, 1978–2002), tests for underestimation bias, consistency, and homogeneous sources are performed. The current literature lists several methods for measuring forecast error, including mean absolute percentage error (MAPE), mean absolute deviation (MAD), and root mean squared error (RMSE) (Chase, 1995; Makridakis & Wheelwright, 1989). This study utilizes the mean percentage error (MPE) because it preserves the directionality of the error and thereby identifies bias. Because the other methods utilize either an absolute error or a squared error, directionality is lost and with it the ability to detect bias. The MPE is calculated by subtracting the actual revenue (A) from the forecast revenue (F) for each state (i) and each year (y), dividing the result by the actual revenue for the respective state and year, summing across all forecasts, and then dividing by the number of forecasts (n):

MPE = (1/n) * Σ [ (F_yi - A_yi) / A_yi ]     (1)

This average will typically be close to zero, since the positive random errors will tend to offset the negative random errors. If it is not close to zero, then bias should be suspected. For a given year, this formula results in a single percentage representing aggregate forecast error for all states, with a positive or negative sign. A positive number indicates overestimation bias, a negative number indicates underestimation bias, and a zero indicates no bias. The test for bias consistency is performed by calculating the MPE for the population (fifty states) by year and then comparing overforecast years to underforecast years.
If the MPEs are consistently negative (or consistently positive) across years, the bias can be said to be consistent. On the other hand, if no bias exists, then one would expect approximately half of the errors to exceed zero and half to fall below zero.

Finally, the issue of a homogeneous bias source is considered. Is it possible that a substantial portion of forecast error is being introduced by the economic forecasts of a few economic forecasting firms? While this question cannot be answered without considering the actual economic forecast error, something that is not readily available for most states, one can gain some insight into the influence of these homogeneous sources. One may consider the total variance of the revenue forecast error as consisting of two parts: a variance attributable to the economic forecast and a variance attributable to the revenue forecast. As states make greater use of economic forecasts produced by national forecasting firms, the economic forecast portion of the variance will decrease due to increased homogeneity of data sources and assumptions. However, how the greater use of national firms would affect the accuracy of the economic forecast is ambiguous. Initially it might be presumed that better forecasting techniques would be utilized and would improve the forecast. However, the use of a national firm may be due to personnel resource constraints rather than any attempt to improve accuracy. Because most states utilize a conditional forecast, the portion of forecast variance attributable to the revenue forecast would be a function of the revenue forecasting techniques, including organizational processes. Improvements in revenue forecasting techniques should result in both a decreasing variance and improved accuracy.

RESULTS

When observations for the 24-year period are aggregated, we find a mean percentage forecast error of -1.45%. This seems reasonable, as many states will plan for a two percent error in their budgetary process. The standard deviation of this distribution was 8.6. Table 1 lists the percentage error for the states by year. From this table one finds that for 20 of the 24 years the aggregate forecast error is negative, adding strength to the hypothesis that the bias is consistent. These results confirm the findings of Feenberg et al. (1989) and dissuade us from accepting the previous studies that deny the existence of an underestimation bias. Additionally, in the period 1979 through 2002 there were only 339 (28%) occurrences of positive forecast errors out of 1,194 observations. If only random error were influencing the forecasts, one would expect the mix of overestimations and underestimations to approximate 50%. This further indicates that a consistent underestimation bias is present. Plotting a regression line through the annual mean percentage error, we find that, on average, underestimation error has been increasing by approximately 0.05 percentage points every year (see Chart 1).

CHART 1. State Forecast Mean Percentage Error, 1979–2002. (Annual mean percentage error by year with fitted linear trend y = -0.0485x + 95.234, R² = 0.0087.)

Next, the data are tested to see whether a homogeneous bias source, such as a national forecasting firm, might be present. This is tested by looking at the standard deviation of the annual mean percentage errors.
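As a rough illustration of how these quantities might be computed, the following Python sketch derives the annual MPE of Equation (1), the share of positive errors, the year-by-year standard deviation reported in Table 1, and the linear trends shown in Charts 1 and 2. The record layout (state, year, forecast, actual) and the scaling to percent are assumptions of this illustration, not the actual Fiscal Survey of States data format.

```python
from collections import defaultdict
from statistics import mean, stdev

def percentage_error(forecast, actual):
    """Signed error as a percent of actual revenue (Equation (1) scaled by 100):
    negative = underestimate, positive = overestimate."""
    return 100.0 * (forecast - actual) / actual

def annual_summary(records):
    """records: iterable of (state, year, forecast_revenue, actual_revenue) tuples.
    Returns {year: (mpe, n, std_dev)}, i.e., Equation (1) applied year by year."""
    errors_by_year = defaultdict(list)
    for _state, year, forecast, actual in records:
        errors_by_year[year].append(percentage_error(forecast, actual))
    return {year: (mean(errs), len(errs), stdev(errs))
            for year, errs in sorted(errors_by_year.items())}

def share_positive(records):
    """Fraction of individual forecasts with a positive error (overestimates)."""
    errors = [percentage_error(f, a) for _s, _y, f, a in records]
    return sum(1 for e in errors if e > 0) / len(errors)

def linear_trend(xs, ys):
    """Ordinary least squares slope and intercept of a simple trend line."""
    x_bar, y_bar = mean(xs), mean(ys)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    return slope, y_bar - slope * x_bar

# Hypothetical usage with a list of (state, year, forecast, actual) records:
# summary = annual_summary(records)
# years = sorted(summary)
# mpe_slope, _ = linear_trend(years, [summary[y][0] for y in years])  # cf. Chart 1
# sd_slope, _ = linear_trend(years, [summary[y][2] for y in years])   # cf. Chart 2
```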
Chart 2 illustrates a substantial decrease in the standard deviation over the years, indicating that forecast errors are becoming more homogeneous across states regardless of whether the forecasts are accurate. In other words, the standard deviation of the aggregate state percentage forecast error in any given year has been decreasing over time. This trend correlates negatively with the increase in the use of national economic forecasting firms, which might well explain the decrease in the standard deviation over the years. Consider first that the revenue forecast is a function of the economic forecast and that error in the economic forecast is reflected in the revenue forecast and ultimately in the revenue forecast error. If each state were to produce its own economic forecast, the distribution of the economic forecast error would presumably be wider than if the states utilized a single economic forecast. If, however, states migrate to a single economic forecast, one would expect the variance of the revenue forecast error across states to diminish somewhat. These results support the suggestion that state forecasts are being influenced by a shift to national forecasting firms. However, the reader needs to be cautioned that these are only preliminary indicators and that other factors might influence the reduction in standard deviations.

TABLE 1
Aggregate State Forecast Error by Year

Year   Mean Percentage Error    N    Std. Deviation
1979        -1.9062            48        10.0463
1980        -0.8210            50         9.8720
1981        -3.4124            50         7.8213
1982         5.2109            50        15.0738
1983         9.6861            50        13.3564
1984        -5.6386            50        11.6660
1985        -4.3825            50         5.9091
1986        -0.1945            50         5.0431
1987        -2.4946            49         6.9557
1988        -4.0832            50         6.5679
1989        -5.8261            50         8.0596
1990        -1.0458            50         7.1554
1991         1.3551            50         6.6474
1992        -0.7212            50         5.0665
1993        -2.3000            50         6.5791
1994        -4.0591            50         8.2115
1995        -3.4985            50         6.4665
1996        -1.9350            49         4.5458
1997        -3.5516            50         4.1748
1998        -3.3154            50         4.1026
1999        -1.5193            49         5.2154
2000        -4.2467            49         6.1827
2001        -2.0722            50         6.4397
2002         6.1046            49         5.3425

CHART 2. Standard Deviation of Mean State Forecast Error, 1979–2002. (Annual standard deviation of state forecast error by year with fitted linear trend y = -0.2999x + 604.17, R² = 0.4975.)

CONCLUSION

This paper has considered the problem of underestimation bias and found evidence of it during the twenty-four-year period from 1979 to 2002. The results show that states averaged a 1.45% underestimation error. Consistency of the underestimation was also confirmed by testing the forecast error for each year individually; all but four of the twenty-four years were found to be underestimates. Finally, in the belief that national forecasting firms are influencing revenue forecast error, the standard deviations of the annual mean percentage forecast errors were examined. These standard deviations have been decreasing over time. Although other variables may also be influencing this trend, it is believed that both the increased usage and the consolidation of national economic forecasting firms may be introducing a homogeneous source of bias into the state revenue forecast equation. In terms of underestimation bias, this poses a troubling situation for state forecasters in that both state revenue forecasters and national economic forecasters are introducing underestimation bias into the revenue forecast.
The allocation of the revenue and economic forecasts to different parties, each with incentives to underestimate, also prevents the revenue forecaster from understanding the degree of bias that may be present in the economic forecast. Correcting a situation of double bias is not an easy task. One approach might be to incent forecasters, both revenue and economic, towards accuracy and at the same time reduce their loss exposure when unfavorable forecast outcomes do occur. Naturally, this is easier said than done.

REFERENCES

Albritton, R., & Dran, E. (1987). "Balanced Budgets and State Surpluses: The Politics of Budgeting in Illinois." Public Administration Review, 47 (2): 143–187.
Bretschneider, S., Bunch, B., & Gorr, W. (1992). "Revenue Forecast Errors in Pennsylvania Local Government Budgeting: Sources and Remedies." Public Budgeting & Financial Management, 4 (3): 721–743.
Bretschneider, S., & Gorr, W. (1992). "Economic, Organizational and Political Influences on Bias in Forecasting State Sales Tax Receipts." International Journal of Forecasting, 7: 457–466.
Bretschneider, S., Gorr, W., Grizzle, G., & Klay, E. (1989). "Political and Organizational Influences on the Accuracy of Forecasting State Government Revenues." International Journal of Forecasting, 5: 307–319.
Bretschneider, S., & Schroeder, L. (1988). "Evaluation of Commercial Economic Forecasts for Use in Local Government Budgeting." International Journal of Forecasting, 4: 33–43.
Cassidy, G., Kamlet, M., & Nagin, D. (1989). "An Empirical Examination of Bias in Revenue Forecasts by State Governments." International Journal of Forecasting, 5: 321–331.
Chapman, J. (1982). "Fiscal Stress and Budget Activity." Public Budgeting & Finance, 2 (2): 83–87.
Chase, C. (1995). "Measuring Forecast Accuracy." The Journal of Business Forecasting, 14 (3): 2–25.
Davis, E., & Boyd, D. (1999, March). "States' Economic Assumptions for 1998, 1999 Show Cautious Optimism." Tax Analyst: 1–7.
Federation of Tax Administrators (1993). State Revenue Forecasting and Estimation Practices. Washington, DC: Author.
Feenberg, D., Gentry, W., Gilroy, D., & Rosen, H. (1989). "Testing the Rationality of State Revenue Forecasts." The Review of Economics and Statistics, 71 (2): 300–308.
Frank, H. (1988). Model Utility Along the Forecast Continuum: A Case Study in Florida Local Government Revenue Forecasting (Unpublished Doctoral Dissertation). Tallahassee, FL: Florida State University.
Gentry, W. (1989). "Do State Revenue Forecasters Utilize Available Information?" National Tax Journal, 42 (4): 429–439.
Howard, J. A. (1987). "Government Economic Projections: A Comparison between CBO and OMB Forecasts." Public Budgeting & Finance, 7 (3): 14–25.
Klay, W., & Grizzle, G. (1992). "Forecasting State Revenues: Expanding the Dimensions of Budgetary Forecasting Research." Public Budgeting & Financial Management, 4 (2): 381–405.
Larkey, P. D., & Smith, R. A. (1984). "The Misrepresentation of Information in Governmental Budgeting." In L. S. Sproull & P. D. Larkey (Eds.), Advances in Information Processing in Organizations (pp. 63–92). New York: JAI Press.
Makridakis, S., & Wheelwright, S. (1989). Forecasting Methods for Management. New York: John Wiley & Sons.
National Governors Association/National Association of State Budget Officers (1978–2002). The Fiscal Survey of States. Washington, DC: Author.
Rodgers, R., & Joyce, P. (1996). "The Effect of Underforecasting on the Accuracy of Revenue Forecasts by State Governments." Public Administration Review, 56 (1): 48–56.
Rubin, I. S. (1987). "Estimated and Actual Urban Revenues: Exploring the Gap." Public Budgeting & Finance, 7 (1): 83–94.
Shkurti, W., & Winefordner, D. (1989). "The Politics of State Revenue Forecasting in Ohio, 1984–1987: A Case Study and Research Implications." International Journal of Forecasting, 5: 361–371.
Shumavon, D. (1981). "Policy Impact of the 1974 Congressional Budget Act." Public Administration Review, 41 (3): 339–348.
Stockman, D. (1985). The Triumph of Politics: Why the Reagan Revolution Failed. New York: Harper and Row.
Voorhees, W. (2000). The Impact of Political, Institutional, Methodological and Economic Factors on Forecast (Unpublished Doctoral Dissertation). Bloomington, IN: Indiana University.
Voorhees, W. (2002). "Institutional Structures Utilized in State Revenue Forecasting." Journal of Public Budgeting, Accounting & Financial Management, 14 (2): 175–196.
Voorhees, W. (2004). "More is Better: Consensual Forecasting and State Revenue Forecast Error." International Journal of Public Administration, 27 (8&9): 651–671.
Wachs, M. (1982). "Ethical Dilemmas in Forecasting for Public Policy." Public Administration Review, 42 (5): 562–567.