
Explore ECON

Organised by Parama Chaudhury, Cloda Jenkins,

Christian Spielmann & Frank Witte

This Workshop is sponsored by the Department of Economics at UCL.

Programme

13:15-13:55

North Cloisters

Registration and Coffee

13:55-14:00

Events Marquee,

Front Quad

Welcome Address

Professor Sir Richard Blundell

14:00-15:00

Events Marquee,

Front Quad

Session 1 Presentations: Testing Economic Theories

Session Chair: Dr Marcos Vera-Hernández

• Yihan Dong and Kin Fung: How does Tariff Reduction Affect Wages Across Countries? How Important is the Total Factor Productivity in Wage Determination?

• Mateusz Stalinski: Consequences of Incomplete Employment Contracts in a Laboratory Experiment

• Sukhi Wei: Open Source: In Search of Cures for Neglected Tropical Diseases

• Paul Kimon Weissenberg: The Case for a Fiscal Union in the Eurozone

15:00-15:10

Events Marquee,

Front Quad

First Year Challenge Presentations/Awards

Introduced by Dr Parama Chaudhury

15:10-16:10

Events Marquee,

Front Quad

Session 2 Presentations: Policies and Interventions

Session Chair: Professor Martin Cripps

• Nareen Baktor Sing: Realising the Microfinance Dream: How Can Microfinance Really Help the Poor?

• Daniel Sonnenstuhl: How Much would you Pay for Fair-Trade Clothes?

• Teresa Steininger: Can Early Childhood Intervention Programs be Successful on a Large Scale: Evidence from Head Start

• Leonie Westhoff: Economic Costs of Mental Illness in the United Kingdom: The Case for Intervention

16:10-16:30

North Cloisters

Posters & Multimedia Presentations

• Elliot Christensen, Amin Oueslati and Liese Sandeman: India and Competitive Federalism

• Wenhui Gao and Georgios Kotzamanis: How Has Syriza Affected the Greek Economy?

• Nathaniel Greenwold and Dmitry Pastukhov: Is the Islamic State an Economically Viable State?

• Ali Merali and Ciprian Tudor: Is Your Degree Worthless? An Experiment

• Delon Qiu: Is Deflation to Blame for Japan's Economic Woes?

• Kristin Wende: An Evaluation of Economic Policy in Germany from 1918 to 2016

• Yanhan Cui and Danyun Zhou: Stagnation in Student Performance, Can Performance Related Pay be the Solution?

Coffee Break

16:30-17:45

Events Marquee,

Front Quad

Session 3 Presentations: Dissertation Research

Session Chair: Dr Valerie Lechene

You Lim : The Effect of Low Interest Rates on Household Expectation Formation in the U.S.

Robert Palasik : How to Reform Banking Governance?

Fran Silavong : The Impact of House Prices on Fertility Decision and its Variation based on Population Density

Fangzhou Xu : How to Measure the Economic Impact of UCL’s Newham Campus on the Local Region

17:45-18:30

Events Marquee,

Front Quad

Keynote Address

Sharon White , Ofcom

18:30-20:00

North Cloisters

Reception and Prize-giving

Judging Panel

Janet Henry , Global Chief Economist, HSBC

Gill Hammond , Director of the Centre for Central Banking Studies, Bank of England

Adam Lyons, Economics Advisor, Europe Group, Department for International Development (DFID, UK Government)

Stephen Smith , Deputy Head of Department of Economics, UCL

Delegate List

Dobi, Dockes, Dong, Eaton, Farishta, Feldman, Fung, Christensen, Christoph, Cripps, Crockford, Cui, Darby, Denham, Abitbol, Anzolin, Azmi, Baktor Sing, Bowing, Carlin, Cattaneo, Cea Moore, Chairil, Chaudhury, Fung, Fung, Gao, Gitman, Greenwold, Handel Subbiah, Hoelscher, Hofmann, Ikanade-Agba, Noel, Caroline, Yihan, Rebeca, Rohail, Natasha, Dilly, Elliott, Joel, Martin, Viv, Yanhan, Rachel, Robert, Anthony, David, Syaza, Nareen, Flora, Wendy, Sofia, Camila, Amirah, Parama, Kin, Kenneth, Wenhui, Stefan, Nathaniel, Neerav, Patrick, Arne, Ben, Amin, Karl, Robert, Dmitry, Sophie, Ian, Delon, Lorenzo, Helen, Miriam, Ali, Konrad, Slava, Aishah, Amir, Cloda, Kieron, Pawel, Ye Huan, Andrey, Georgios, Sean, You, Chen Yue, Arturo, Jeff, Romero Yáñez, Rowley, Gyorgy Attila Ruzicska, Leise Sandeman, Balli, Simon, Sarkaria, Schröder, Toluwalase, Shuhaira, Seriki, Shaidan, Lotti, Matthews, Matthiessen, Merali, Mierendorff, Mikhaylov, Omar, Oueslati, Overdick, Palasik, Pastukhov, Peter, Preston, Qiu, Jabarivasal, Jenkins, Jones, Kaminski, Khor, Khvostov, Kotzamanis, Krishnani, Lim, Lok, Fran Silavong, Anthony, Daniel, Guglielmo, Christian, Mateusz, Teresa, Robert, Daniel, Rachel, Jin, Smith, Sonnenstuhl, Spalletti Trivelli, Spielmann, Stalinski, Steininger, Stevens, Szabo, Tan, Tan, Vincent, Hong, Guido, Ciprian, Marcos, Nirusha, Sukhi, Tong, Tran, Tubaldi, Tudor, Vera-Hernández, Vigi, Wei, Paul-Kimon Weissenberg, Kristin Wende, Leonie, Frank, Westhoff, Witte, Fangzhou, Anastasia, Huijing, Xu, Yermakova, Yu, Darya, Danyun, Henry, Zakharova, Zhou, Zhu

MAP

HOW DOES TARIFF REDUCTION AFFECT WAGES ACROSS COUNTRIES? HOW IMPORTANT IS THE TOTAL FACTOR PRODUCTIVITY IN WAGE DETERMINATION?

Kin Kwan Fung
B.Sc Economics, 2nd year
University College London

Yihan Dong
B.Sc Statistics, Economics and Finance, 2nd year
University College London

Explore Econ Undergraduate Research Conference
March 2016

The assistance of our research supervisor, Dr Parama Chaudhury, Senior Teaching Fellow at University College London, is gratefully acknowledged.

How does tariff reduction affect wages in different countries?

Does total factor productivity play a role in wage determination?

Part 1. Introduction

Is trade liberalisation beneficial to workers and the economy as a whole? This paper sets up a model based on the analytical framework of classical labour economics in order to examine the long-standing stance held by the World Bank and the IMF: that opening up to trade is integral to successful economic reform.

1.1 Defining trade liberalisation

The effect of trade liberalisation on the global economy is widely discussed by economists. However, once geographical and chronological effects are taken into account, its true influence seems ambiguous. Most policymakers at international economic organisations, such as the World Bank and the International Monetary Fund, argue that opening up to trade is an integral part of any economic reform; they are unanimous in asserting a positive relationship between social welfare and openness to trade. This paper measures the impact of tariff changes on wage levels in different industries. Many studies have documented that trade reforms increase the efficiency and growth of the economy (Hay 2001; Muendler 2002). Nevertheless, according to Harrison and Hanson (1994), these papers are typically plagued by serious econometric and data problems.

Trade liberalisation is the process of decreasing or eliminating barriers to trade between countries, including the reduction or abolition of tariffs, the removal or augmentation of import quotas, the removal of fixed exchange rates, and the reduction of regulation on imports and foreign exchange controls (Black, Hashimzade and Myles 2012). Our paper focuses on the effects of tariff reduction on key labour market characteristics.

1.2 Previous research

Pavcnik et al. (2004) argue that industry affiliation is an important indicator of the sensitivity of wage levels to changes in trade policy. Therefore, studies that do not control for industry-specific variables may not generate reliable results. This argument is supported by Verhoogen (2008), who investigated panel data on Mexican manufacturing plants and found that more productive plants with higher-quality production were more sensitive to depreciation of the exchange rate and exhibited greater wage changes. This finding explains why there was larger wage inequality in Mexico during the peso-crisis period (1993-1997). However, the quality-upgrading mechanism driven by other forms of trade liberalisation had not previously been examined in detail. This paper analyses the relationship between tariffs and wages taking into account the effect of tariff changes on productivity growth, using panel data for different industries. Amiti and Davis (2008) conducted similar research using Indonesian manufacturing census data for the period 1991 to 2000. By separately examining the impacts of changes in tariffs on final and intermediate goods, they found that a reduction in input tariffs is more likely to raise wages in import-dependent firms than in locally sourcing firms. On the other hand, a reduction in output tariffs boosts the wages of export-competing firms but decreases the wages of importing firms.

The three studies above focused on three specific developing countries (Brazil, Mexico, and Indonesia). It is tempting to generalise these results and claim that the identified relationship between tariffs and wages holds for economies and industries not included in the studies. However, country-specific characteristics, such as population, level of technological development, phase of the business cycle, and labour market policies, have to be taken into account before any conclusion is made. This paper estimates this omitted effect using industry-specific data for multiple countries. In order to focus on estimating the effect of industry- and country-specific variables, we exclude the impact of changes in intermediate-goods tariffs from our model, by assuming that there is no correlation between a firm's input sources and its industry and country affiliation. We argue that the tariff level on intermediate goods makes a negligible difference to the cost of production. Under this assumption, the model is expected to indicate the unbiased and robust effect of industry and country characteristics on the relationship between tariff reduction and wage changes.

1.3 Structure of the paper

This research paper is divided into four parts. Following the introduction in Part 1, Part 2 presents the analytical framework and the model we use in our research, laying out the possible channels through which tariffs affect wages. Part 3 provides a general description of the data and presents the empirical results of the regression models used in the research. Part 4 outlines some directions in which our results can be taken further.

Part 2. Methodology

2.1 Theoretical framework

Adopting a classical economics framework of international economics, the marginal productivity of labour within a specific industry of a specific country (country $i$, industry $j$) can be described by $\lambda_{ij}$ such that:

$$\lambda_{ij} = \frac{\partial F(C_i, N_j)}{\partial N_j}$$

where

$N_j$ := the total number of workers in the industry;

$C_i$ := the measure of national characteristics;

$F$ := the transformation that maps the technology level and the number of workers within the industry to an appropriate level of marginal productivity. As commonly noted, the function $F$ is non-negative, decreasing and concave.

Reiterating that labour income equals the marginal revenue product of labour,

$$W_{ij} = P_{ij}\,\lambda_{ij}$$

where $P_{ij}$ := the local price of the homogeneous output (country $i$, industry $j$).

If labour is homogeneous and perfectly substitutable, which is a strong assumption upon which we build our regression models and analytical framework, we can derive a labour market clearing condition describing the absence of any incentive for workers to move to another industry for higher wages. In a particular country:

$$W_i = P_j\,\lambda_j = P_{-j}\,\lambda_{-j}$$

Here $-j$ denotes all industries other than $j$.

This theoretical framework provides the backbone that supports our research paper.

2.2 Regression models

Our research method is akin to the one adopted by Stone and Cepeda (2011). The differences are that we do not take into consideration heterogeneity of labour and that we control for national characteristics. In Section 2.1 the following condition was obtained by means of the marginal revenue product approach (the subscript $i$ corresponds to the $i$th country, $j$ to the $j$th industry, and $t$ denotes time):

$$W_{ijt} = P_{ijt}\,\lambda_{ijt}$$

Taking natural logarithms on both sides of the equation yields:

$$\ln W_{ijt} = \ln P_{ijt} + \ln\lambda_{ijt}$$

$W_{ijt}$ can be treated as a column vector if more than one factor of production is included (e.g. labour, capital, land). In this case, the elements of the vector $W_{ijt}$ are the prices of the factors of production. Suppose that the vector $s_{ijt}$ denotes the share of every input in the production process. Then our equilibrium condition can be generalised and written in vector form:

$$s_{ijt}^{T}\,\ln W_{ijt} = \ln P_{ijt} + \ln\lambda_{ijt}$$

Taking differences between two subsequent periods:

$$\Delta\!\left(s_{ij}^{T}\,\ln W_{ij}\right) = \Delta\ln P_{ij} + \Delta\ln\lambda_{ij}$$

In order to isolate the effects of changes in trade on factor prices, the effects of general structural variables on prices and productivity have to be disentangled. This can be achieved by conducting the regression in two steps. First, we regress changes in prices and productivity on a set of structural variables. Then, we use the estimation results from the first step to recover the changes in primary factor prices attributable to each structural variable. This procedure is useful because the set of dependent variables for stage two is not directly observable.

$$\text{Stage 1:}\quad \Delta\ln P_{ij} + \Delta\ln\lambda_{ij} = \alpha + \gamma^{T}\Delta z_{ij} + \varepsilon_{ij}$$

The vector $\Delta z_{ij}$ contains structural variables such as the tariff level, the level of imports and exports, the contribution to the economy, etc. In the second step, $n$ regressions are run, one for each significant estimated parameter $\hat{\gamma}_k$:

$$\text{Stage 2:}\quad \hat{\gamma}_k\,\Delta z_{kij} = \mu_k + \delta\,\Delta\ln W_{kij} + \nu_{kij}$$

Although this methodology can be adopted to analyse multiple factors of production, only labour is considered in our research.
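As an illustration only, the two-step procedure could be organised along the following lines in Python; the file name and column names (d_lnP_plus_lnTFP, d_tariff, d_import, d_export, d_lnW) are hypothetical placeholders rather than the variables of our actual dataset.

```python
# Sketch of the two-stage procedure described above (hypothetical column names).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("industry_panel.csv")                      # hypothetical input file

# Stage 1: regress d(ln P) + d(ln lambda) on the structural variables Dz.
structural = ["d_tariff", "d_import", "d_export"]
X1 = sm.add_constant(df[structural])
stage1 = sm.OLS(df["d_lnP_plus_lnTFP"], X1).fit()

# Stage 2: for each significant gamma_k, regress gamma_k * Dz_k on the
# observed change in log wages to recover the factor-price response.
for k in structural:
    if stage1.pvalues[k] < 0.05:
        y2 = stage1.params[k] * df[k]                       # gamma_k * Dz_k
        X2 = sm.add_constant(df["d_lnW"])
        stage2 = sm.OLS(y2, X2).fit()
        print(k, stage2.params["d_lnW"], stage2.pvalues["d_lnW"])
```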

Part 3. Graphic analysis and regression results

3.1 Data Description (Trends in wages and tariffs)

We first present the trends of the two key parameters of interest, wages and tariffs, using the Trade, Production and Protection 1976-2004 dataset by Nicita and Olarreaga (2006). We split the dataset into three parts by country classification: developed countries, developing countries and countries in transition (Table 5). Countries are classified according to the Statistical Annex issued by the United Nations in 2012.

From 1976 to 2002, average wages in the developed countries covered by the data show three episodes of increase (1976-1980, 1984-1994, 1999-2000) and three declines (1980-1984, 1994-1999, 2000-2002), while average wages among the developing countries rose steadily from 1976 to 2001 (Figure 1). It is obvious that the average wages of developed countries fluctuated more dramatically than those of their counterparts. We identify at least three reasons that may contribute to such inaccuracy. First, the average wages, obtained by dividing wage bill data by the total number of employees in the corresponding industries of each country, are nominal and thus not adjusted to real wages; an inflation bias is therefore inherent in the data. Second, we obtain the Figure 1 data by taking a simple average across industries, so the numbers are not properly weighted. Third, the calculated figures are highly dependent on raw data availability, which affects their representativeness.

To reduce the error arising from the simple-average calculation, we compute weighted average wages by dividing the sum of all wage bills within a year by the corresponding total number of employees (Figure 2). Average wages of developed countries show a consistent upward trend, while those of developing countries display a similar trend, albeit rising more slowly. All three lines drop in 2002 due to data deficiencies, as pointed out in the prior paragraph.
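For clarity, the simple and weighted averages differ as in the following sketch (the file and column names are hypothetical):

```python
import pandas as pd

df = pd.read_csv("tpp_wages.csv")      # hypothetical: one row per country-industry-year
df["avg_wage"] = df["wage_bill"] / df["employees"]

# Simple average: unweighted mean of industry average wages (Figure 1).
simple = df.groupby("year")["avg_wage"].mean()

# Weighted average: total wage bill divided by total employment (Figure 2).
weighted = df.groupby("year")["wage_bill"].sum() / df.groupby("year")["employees"].sum()
```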

Although this dataset gives useful information on industry total wage bills and total numbers of employees, it does not show a comprehensive trend in the movement of wages. In the following section we therefore use an alternative dataset, the Occupational Wages around the World (OWW) database by Freeman and Oostendorp (2012), to indicate short-run trends in wages.

Meanwhile, using the first dataset, we examine the trend of tariffs in developed and developing countries (Figures 3, 4 and 5). Tariffs are classified by industry affiliation, indicated by the three-digit ISIC codes in the attachment; an industry description is affixed (Table 6). The average tariff on tobacco manufactures (ISIC 314) is far higher than in other industries across countries: it ranges from 17.0% to 40.6% in developed countries and from 19.1% to 78.7% in developing countries. For other industries in the developed countries, tariffs remain below 10% and show a mild decreasing trend (except 311, 313, 314, 322 and 324), while a more pervasive decreasing trend is observed across industries in developing countries from 1976 to 2004. From Figure 6, where the average tariffs across all industries in developing and developed countries are calculated, it can be deduced that both groups adopted trade liberalisation policies over the past two decades. The average tariff of developed countries was 8.0% in 1991 and declined steadily to 4.4% in 2004. The average tariff of developing countries rose from 18.6% in 1991, peaked at 26.3% in 1994, and fell over the following ten years to 11.5% in 2004.

3.2 Preliminary Regressions – analysis for the short run

We extract the adjusted hourly wage rates for different industries and occupations from the Occupational Wages around the World (OWW) database produced by Freeman and Oostendorp (2012), and merge this information with the trade and tariff data provided by Nicita and Olarreaga (2006), in order to form a more reliable indicator of global trade and the labour market. Note that all wage rates are reported in dollars, and values of imports and exports are reported in thousands of dollars.

At the first stage of the empirical analysis, we choose both a developed country and a developing country to explore the effect of trade liberalisation policies on imports, exports and wage rates. The United Kingdom is chosen as a representative developed country, owing to its long-standing membership of that group. The weighted average tariff of manufacturing industries in the United Kingdom dropped from 5.7% in 1988 to 2.4% in 2003. The data cover the ten main manufacturing industries (311, 321, 332, 342, 351, 352, 371, 382, 383, 384) of the country. Among the developing countries, Pakistan has been a typical, and thus representative, country opening up to trade over the past two decades. Another reason for this country pairing is the relatively sparse trade relationship between Pakistan and the United Kingdom. The weighted average tariff of manufacturing industries in Pakistan decreased from 52.8% in 1995 to 19.7% in 2004. The data cover the seven main manufacturing industries (311, 321, 323, 342, 351, 352, 384) of the country.

As the Occupational Wages around the World (OWW) database only overlaps with the Trade, Production and Protection 1976-2004 dataset for a small proportion of industries and years, at this stage we only consider data collected in two subsequent time periods. We use a first-differences specification, examining the years 2002 and 2003. The model is:

$$1)\;\; \Delta wage\_UK_i = \beta_0 + \beta_1\,\Delta tariff\_UK_i + \beta_2\,\Delta import\_UK_i + \Delta\mu_i$$

$$2)\;\; \Delta wage\_Pakistan_i = \delta_0 + \delta_1\,\Delta tariff\_Pakistan_i + \delta_2\,\Delta import\_Pakistan_i + \delta_3\,\Delta export\_Pakistan_i + \Delta\varepsilon_i$$

Here $i$ denotes occupation types, and $\Delta$ refers to the backward difference operator. The model is based on the assumption that fixed effects (including industry affiliation) have been eliminated by taking differences between paired observations. The values of imports and exports of each industry in the United Kingdom are collinear in both 2002 and 2003 (Tables 1 and 2); therefore, the value of exports is excluded from the model when analysing the UK data. For Pakistan, the total values of imports and exports of each industry show negligible correlation (Tables 3 and 4), so the total value of exports is included as a covariate. The regression results are displayed below (p-values are reported in brackets):

$$\widehat{\Delta wage\_UK}_i = \hat{\beta}_0 + \hat{\beta}_1\,\Delta tariff_i + 5.783\times 10^{-9}\,\Delta import_i$$
$$(1.7\times 10^{-11})\qquad(0.486)\qquad\qquad(0.774)$$

$$\widehat{\Delta wage\_Pakistan}_i = 1.139\times 10^{-2} - 1.085\times 10^{-1}\,\Delta tariff_i - 1.979\times 10^{-8}\,\Delta import_i - 6.657\times 10^{-9}\,\Delta export_i$$
$$(0.137784)\qquad(0.000676)\qquad(0.352747)\qquad(0.432064)$$

There is no significant evidence to reject that $\beta_1 = \beta_2 = 0$ in the case of the UK. This implies that, at least in the short run, trade liberalisation policies do not have a statistically significant effect on wage rates in the UK. In the Pakistan case, on the other hand, the coefficient on tariff changes is statistically significantly negative: for the seven industries in Pakistan that the data cover, trade liberalisation results in an increase in the growth rate of wages. This suggests that wages in developing countries are more sensitive to tariffs than those in developed countries. However, owing to the lack of sufficient data, this deduction cannot be verified statistically.
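A minimal sketch of how the first-difference specifications above could be estimated; the file and column names are hypothetical placeholders for the merged OWW/TPP data.

```python
import pandas as pd
import statsmodels.formula.api as smf

uk = pd.read_csv("uk_occupations.csv")            # hypothetical: wage/tariff/import by year
uk_d = pd.DataFrame({
    "d_wage":   uk["wage03"] - uk["wage02"],
    "d_tariff": uk["tariff03"] - uk["tariff02"],
    "d_import": uk["import03"] - uk["import02"],
})
uk_fit = smf.ols("d_wage ~ d_tariff + d_import", data=uk_d).fit()

pak = pd.read_csv("pakistan_occupations.csv")     # hypothetical, also includes exports
pak_d = pd.DataFrame({
    "d_wage":   pak["wage03"] - pak["wage02"],
    "d_tariff": pak["tariff03"] - pak["tariff02"],
    "d_import": pak["import03"] - pak["import02"],
    "d_export": pak["export03"] - pak["export02"],
})
pak_fit = smf.ols("d_wage ~ d_tariff + d_import + d_export", data=pak_d).fit()
print(uk_fit.pvalues, pak_fit.pvalues)
```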

It is reasonable to suggest that the reduction of tariffs boosts international trade. We analyse the effect of tariff reduction on the value of import to see if the argument is true:

$$\Delta import_{ij} = \alpha_0 + \alpha_1\,\Delta tariff_{ij} + \Delta\gamma_{ij}$$

where $i = 1$ denotes observations in the UK, $i = 2$ denotes Pakistan, and $j$ signifies occupations. Here $\Delta\gamma_{ij}$ is an error term. The results are shown below (p-values are reported in brackets):

$$\widehat{\Delta import}_{1j} = 1037900 + 388496\,\Delta tariff_{1j}$$
$$(0.309)\qquad(0.828)$$

$$\widehat{\Delta import}_{2j} = 157219 - 681857\,\Delta tariff_{2j}$$
$$(0.1278)\qquad(0.0126)$$

There is significant evidence that a reduction in tariffs raises the total value of imports of the manufacturing industries in Pakistan. On the other hand, there is negligible evidence that the total import value of the manufacturing industries in the UK is influenced by tariff levels. This result supports the argument that, as trade liberalisation increases competition for domestic firms in developing countries, less developed countries are more likely to run a trade deficit after lowering barriers to international trade. It is also argued that, under trade liberalisation, the sectoral balance of developing countries is more likely to shift, causing a decrease in labour demand and structural unemployment and, therefore, lower wages and social welfare. The first half of the regression output also supports this argument. These issues will be studied further in Section 3.3.

Table 1. Correlation matrix - the UK 2002

            wage02        tariff02      import02      export02
wage02      1             0.12803238    -0.05076964   -0.04467823
tariff02    0.12803238    1             0.17563327    -0.05871051
import02    -0.05076964   0.17563327    1             0.94721312
export02    -0.04467823   -0.05871051   0.94721312    1

Table 2. Correlation matrix - the UK 2003

            wage03        tariff03      import03      export03
wage03      1             -0.02088016   -0.07681046   -0.04378443
tariff03    -0.02088016   1             0.28050322    0.00225885
import03    -0.07681046   0.28050322    1             0.93488371
export03    -0.04378443   0.00225885    0.93488371    1

Table 3. Correlation matrix - Pakistan 2002

            wage02        tariff02      import02      export02
wage02      1             0.9055436     -0.00149325   -0.1508307
tariff02    0.9055436     1             -0.1953851    0.1094565
import02    -0.00149325   -0.1953851    1             -0.2001654
export02    -0.1508307    0.1094565     -0.2001654    1

Table 4. Correlation matrix - Pakistan 2003

            wage03        tariff03      import03      export03
wage03      1             0.89333272    0.21182433    -0.08875042
tariff03    0.89333272    1             -0.02336664   0.19351617
import03    0.21182433    -0.02336664   1             -0.19876383
export03    -0.08875042   0.19351617    -0.19876383   1
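The collinearity check behind Tables 1-4 can be reproduced along these lines (a sketch with a hypothetical input file):

```python
import pandas as pd

uk02 = pd.read_csv("uk_2002.csv")    # hypothetical: wage02, tariff02, import02, export02
corr = uk02[["wage02", "tariff02", "import02", "export02"]].corr()

# Drop exports from the UK model when imports and exports are nearly collinear.
if abs(corr.loc["import02", "export02"]) > 0.9:
    print("imports and exports are nearly collinear; exclude exports")
```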

3.3 Regression results on structural factors and total factor productivity

After getting an idea of how trade liberalisation affects wages in developing and developed countries in different ways, we move on to investigate how trade-related variables can influence wages in a range of industries. In this section, we follow the methodology outlined in Section 2.2 to analyse total factor productivity (TFP) and wage data for the United States from 1976 to 2004. We combine information on the US from the Trade, Production and Protection dataset by Nicita and Olarreaga (2006) (TPP for short) with the NBER-CES Manufacturing Industry Database (1958-2009) (Becker, Gray and Marvakov, 2013). Recall the Stage 1 regression model defined in Part 2 (Methodology):

$$\Delta\ln P_i + \Delta\ln\lambda_i = \alpha + \gamma^{T}\Delta z_i + \varepsilon_i$$

where $i$ denotes the $i$th industry, $P$ is the local price of homogeneous output, and $\lambda$ is the marginal productivity of labour; $\Delta\ln\lambda$ is assumed to be well approximated by the growth of total factor productivity (TFP). Here $z$ is a vector containing structural variables, $\alpha$ is the intercept, and $\gamma$ is a vector containing the coefficients of the structural variables. The following analysis is conducted using a fixed effects model. Note that, as the local price of homogeneous output of each industry does not vary as much as the marginal productivity of labour and is considerably harder to estimate accurately, we assume that $\Delta\ln P_i \ll \Delta\ln\lambda_i$; therefore the magnitude of $\Delta\ln P_i$ has a negligible effect on $\Delta\ln W_i$. (Recall from Section 2.2 that $\Delta\ln W_i = \Delta\ln P_i + \Delta\ln\lambda_i$; here we only consider one factor of production, labour.) The model we estimate is displayed below:

$$\Delta\ln\lambda_i = \alpha + \gamma_1\,\Delta tariff_i + \gamma_2\,\Delta import_i + \gamma_3\,\Delta export_i + \gamma_4\,\Delta contribution\_to\_GDP_i + \varepsilon_i$$

Here $tariff$ represents the import-weighted average tariff rate applied to goods entering the US, $import$/$export$ represents the value of goods entering/leaving the US in thousands of dollars, and $contribution\_to\_GDP$ represents the proportion of the industry's value added in the country's total GDP. In the NBER-CES dataset, industries are coded with the six-digit 1997 NAICS system. By matching each of these observations with the three-digit ISIC-coded observations in TPP, we obtain average TFP data for 26 manufacturing industries in the US from 1976 to 2004. In NBER-CES, TFP growth for every industry is provided in two different formats; here we simply take the mean of each pair of TFP growth values as our estimate.

The GDP data are provided by the Conference Board Total Economy Database (2015), reported in 1990 US dollars. In a fixed effects model, $\Delta x_j$ is calculated by:

$$\Delta x_j = x_{jk} - \frac{1}{n}\sum_{l=1}^{n} x_{jl}$$

where $j$ denotes the $j$th observation, $k$ denotes the base year and $n$ denotes the total number of years.
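A sketch of this within (fixed-effects) transformation using pandas, with hypothetical file and column names:

```python
import pandas as pd

panel = pd.read_csv("us_industry_panel.csv")   # hypothetical: industry, year, model variables
cols = ["d_ln_tfp", "tariff", "imports", "exports", "contribution_to_GDP"]

# Subtract each industry's time mean: dx_j = x_jk - (1/n) * sum_l x_jl.
demeaned = panel[cols] - panel.groupby("industry")[cols].transform("mean")
```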

The regression results are shown below, with p-values in parentheses:

$$\Delta\ln\lambda_i = -0.0096 - 0.0051\,\Delta tariff_i + 0.0000\,\Delta import_i + 0.0000\,\Delta export_i + 8.5230\,\Delta contribution\_to\_GDP_i$$
$$(0.166)\qquad(0.042)\qquad(0.553)\qquad(0.388)\qquad(0.225)$$

Controlling for $\Delta import_i$, $\Delta export_i$ and $\Delta contribution\_to\_GDP_i$, $\Delta tariff_i$ has a significant effect on $\Delta\ln\lambda_i$ (i.e. $\Delta TFP_i$), and therefore determines $\ln wage_i$ according to our theoretical framework specified in Section 2.1.

Using the result from Stage 1, we now estimate the Stage 2 equation. Recall the model from Section 2.2:

$$\hat{\gamma}_k\,\Delta z_{ki} = \mu_{ki} + \delta\,\Delta\ln wage_{ki} + \upsilon_{ki}$$

As $\hat{\gamma}_1$ is the only significant parameter in Model 1, we now estimate:

$$-0.005083\,\Delta tariff_i = \mu_i + \delta\,\Delta\ln wage_i + \nu_i$$

Regression output:

$$-0.005083\,\Delta tariff_i = 0.003231 + 0.097589\,\Delta\ln wage_i$$
$$(0.146)\qquad(0.110)$$

Here $\hat{\delta}$ is positive, which can be interpreted, following our reasoning in Part 2, as tariff reduction boosting wages, ceteris paribus. However, the parameter is not statistically significant (p-value > 0.1).

As we are also interested in what determines the magnitude of growth in total factor productivity (TFP), we run Model 3, which regresses $\Delta\ln\lambda_i$ on some independent variables that are intuitively related. The estimated equation is shown below:

$$\Delta\ln\lambda_i = \beta_0 + \beta_1\,ifWTO_i + \beta_2\,\Delta population_i + \beta_3\,\Delta GDP_i + \theta_i$$

Here $i$ denotes the $i$th country, and $\Delta\ln\lambda$ represents country-level TFP growth, which is regressed on three country-specific characteristics: WTO membership, population and GDP. $ifWTO_i$ is a binary variable, taking the value 1 if the $i$th country is a member of the WTO and 0 otherwise. A fixed effects model is still applied here. The data are from the Conference Board Total Economy Database (2015), where GDP is reported in 2014 US dollars. Outcome:

$$\Delta\ln\lambda_i = -2.501 + 1.461\,ifWTO_i + 0.000\,\Delta population_i + 0.000\,\Delta GDP_i$$
$$(0.0015)\qquad(0.0676)\qquad(0.6391)\qquad(0.4058)$$

As we see here, whether or not a country is in the WTO has a comparatively significant positive effect on the country's overall TFP growth, ceteris paribus. To analyse the influence of WTO membership on a country's TFP growth rate further, we now exclude the non-WTO countries from the dataset and replace $ifWTO_i$ with $yearWTO_i$, which indicates the year the $i$th country joined the WTO. We get the following results:

$$\Delta\ln\lambda_i = 181.500 - 0.092\,yearWTO_i + 0.000\,\Delta population_i + 0.000\,\Delta GDP_i$$
$$(0.379)\qquad(0.377)\qquad(0.945)\qquad(0.638)$$

In this equation, none of the parameters is significant. Model 3 is highly intuitive and may contain substantial bias in its parameters. One obvious weakness of the model is that growth in TFP is generally considered to have a reverse causal effect on GDP. We include GDP to control for potential omitted variables such as the country's level of development, which can be correlated with both WTO membership and the rate of TFP growth. However, it is very likely that there are other omitted variables or instrumental variables we did not control for. In Part 4, we put forward some directions for investigating the issue further.

Part 4. Discussion and conclusion

At the start of our discussion, we would like to give a fuller definition of the overall remuneration of workers: wages. In reality, where labourers are heterogeneous and differ in skill level, they receive a markup over their basic wages evaluated at the competitive equilibrium. This markup is more generally referred to as the wage premium. Thus workers receive the sum of the basic wage and the wage premium as their overall wage.

This paper estimated the effect of import tariffs on wages, and how it can vary depending on total factor productivity levels. Our empirical approach is a potential way to overcome some weaknesses of the current methodologies used to estimate the effects of trade liberalisation on the wage premium, for example the omission of the influence of TFP growth. In our analysis, we evaluated the impact of trade reforms on basic wages by regressing output prices on relevant structural factors, including the tariff level, in the first stage. After that, the relationship between the tariff level and wages was analysed in the second stage. The results proved insignificant, ambiguous and thus inconclusive. As a potential direction for further research, we could assess the "market price" of the wage premium by solving the firm's profit maximisation problem with respect to the wage premium at the micro level. Adopting the same theoretical framework as Paz (2014), we could strengthen his empirical estimation method by assessing the relationship between tariffs and technological progress (TFP), so that the total effect of a change in the tariff level on the wage premium can be obtained.

However, this one-stage OLS regression approach may give rise to three potential problems.

First, there are possible endogenous variables. One example is the pair terms of trade and TFP. Terms of trade is put into the regression to control for the effect of the tariff level on TFP; however, there is possible reverse causation from TFP to terms of trade. A stylised example: a TFP increase reduces firms' production costs, which then leads to better terms of trade. Possible remedies include incorporating instrumental variables for each endogenous variable and conducting multi-stage OLS regression. Candidate instruments for terms of trade include the home inflation rate, average import prices, and even the home central bank interest rate.
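As a rough illustration of this instrumental-variable remedy (not part of our estimation), two-stage least squares could be set up as follows; the file and column names for the endogenous variable and the candidate instruments are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("country_industry_panel.csv")   # hypothetical columns used below

# First stage: project the endogenous terms of trade on the instruments
# (home inflation, average import prices, central bank rate) and the tariff.
Z = sm.add_constant(df[["inflation", "avg_import_price", "cb_rate", "d_tariff"]])
df["tot_hat"] = sm.OLS(df["terms_of_trade"], Z).fit().fittedvalues

# Second stage: use the fitted values in place of the endogenous regressor.
X = sm.add_constant(df[["tot_hat", "d_tariff"]])
second = sm.OLS(df["d_ln_tfp"], X).fit()
print(second.params)   # note: standard errors need the usual 2SLS correction
```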

Second, the presence of omitted variables may lead to either upward or downward bias in the estimators, causing inconsistency. The regressions we run clearly omit the influence of uneven government policies supporting research and development programmes across countries and over time. Another example is individual self-selection into or out of the labour force in response to wage changes: trade reform driving unskilled workers out of the home economy will affect the estimated effect of tariffs on TFP. That said, the time-specific constant captures some general trends that apply to all countries, including improving education levels and the increasing participation of women in the workforce.

Finally, heteroscedasticity may be an important, though often overlooked, element of our research. It refers to the variance of the error term being non-uniform across at least some of the structural factors. This affects the efficiency of the estimators, so they are no longer BLUE (Best Linear Unbiased Estimators). Simple remedies include running GLS and using heteroscedasticity-robust standard errors.
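For example, heteroscedasticity-robust standard errors (or weighted least squares) can be requested directly in most statistical packages; a sketch using the hypothetical data file from the previous snippets:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("us_industry_panel.csv")        # hypothetical, as in earlier sketches
model = smf.ols("d_ln_tfp ~ d_tariff + d_import + d_export + d_contribution_to_GDP",
                data=df)

robust = model.fit(cov_type="HC1")               # White heteroscedasticity-robust errors
print(robust.bse)

# Alternatively, feasible GLS via weighted least squares with weights ~ 1/variance:
# smf.wls("d_ln_tfp ~ d_tariff + d_import + d_export + d_contribution_to_GDP",
#         data=df, weights=1.0 / est_var).fit()
```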

We should therefore try to refine the model in these three respects if further empirical analysis is carried out.

References

1. Amiti, M. and Davis, D. (2008) "Trade, Firms, and Wages: Theory and Evidence." NBER Working Paper 14106.

2. Becker, R., Gray, W. and Marvakov, J. (2013) NBER-CES Manufacturing Industry Database. http://www.nber.org/nberces/ Accessed 11 March 2016.

3. Black, J., Hashimzade, N. and Myles, G. (2012) A Dictionary of Economics. 4th ed. Oxford University Press.

4. Freeman, R. and Oostendorp, R. (2012) Occupational Wages around the World (OWW). Available at www.nber.org/oww/ Accessed 11 March 2016.

5. Harrison, A. and Hanson, G. (n.d.) "Who Gains from Trade Reform? Some Remaining Puzzles."

6. Hay, D. (n.d.) "The Post-1990 Brazilian Trade Liberalisation and the Performance of Large Manufacturing Firms: Productivity, Market Share and Profits." The Economic Journal, 620-641.

7. Paz, L. S. (2014) "Trade liberalization and the inter-industry wage premium: the missing role of productivity." Applied Economics, 46(4), 408-419. DOI: 10.1080/00036846.2013.848031.

8. Nicita, A. and Olarreaga, M. (2006) "Trade, Production and Protection 1976-2004." World Bank Economic Review, 21(1).

9. Pavcnik, N., Blom, A., Goldberg, P. and Schady, N. (2004) "Trade liberalization and industry wage structure: Evidence from Brazil." The World Bank Economic Review, 18(3), 319-344.

10. Stone, S. and Cavazos Cepeda, R. (2011) "Wage Implications of Trade Liberalisation: Evidence for Effective Policy Formation." OECD Trade Policy Papers, No. 122, OECD Publishing.

11. The Conference Board (2015) The Conference Board Total Economy Database, May 2015. http://www.conference-board.org/data/economydatabase/ Accessed 11 March 2016.

12. "Trade, Technology, and Productivity: A Study of Brazilian Manufacturers, 1986-1998." (2012, March 23). Retrieved October 27, 2015.

13. Verhoogen, E. (2008) "Trade, Quality Upgrading, and Wage Inequality in the Mexican Manufacturing Sector." The Quarterly Journal of Economics, 123(2), 489-530.

Table 5. Country Coverage 1

Developed: Australia (AUS), Austria (AUT), Belgium-Luxemburg (BLX), Bulgaria (BGR), Canada (CAN), Cyprus (CYP), Czech Republic (CZE), Denmark (DNK), Finland (FIN), France (FRA), Germany (76-90 West) (DEU), Greece (GRC), Hungary (HUN), Iceland (ISL), Ireland (IRL), Italy (ITA), Japan (JPN), Latvia (LVA), Lithuania (LTU), Malta (MLT), Netherlands (NLD), New Zealand (NZL), Norway (NOR), Poland (POL), Portugal (PRT), Romania (ROM), Slovakia (SVK), Slovenia (SVN), Spain (ESP), Sweden (SWE), Switzerland (CHE), United Kingdom (GBR), United States (USA).

Developing: Algeria (DZA), Argentina (ARG), Bangladesh (BGD), Benin (BEN), Bolivia (BOL), Botswana (BWA), Brazil (BRA), Cameroon (CMR), Chile (CHL), China (CHN), Colombia (COL), Costa Rica (CRI), Cote D'Ivoire (CIV), Ecuador (ECU), Egypt (EGY), El Salvador (SLV), Ethiopia (ETH), Gabon (GAB), Ghana (GHA), Guatemala (GTM), Honduras (HND), Hong Kong SAR 2 (HKG), India (IND), Indonesia (IDN), Iran (IRN), Israel (ISR), Jordan (JOR), Kenya (KEN), Korea (Republic of) (KOR), Kuwait (KWT), Macau SAR (MAC), Malawi (MWI), Malaysia (MYS), Mauritius (MUS), Mexico (MEX), Mongolia (MNG), Morocco (MAR), Mozambique (MOZ), Myanmar (MMR), Nepal (NPL), Nigeria (NGA), Oman (OMN), Pakistan (PAK), Panama (PAN), Peru (PER), Philippines (PHL), Qatar (QAT), Senegal (SEN), Singapore (SGP), South Africa (ZAF), Sri Lanka (LKA), Taiwan Province of China (TWN), Tanzania (United Republic of) (TZA), Thailand (THA), Trinidad and Tobago (TTO), Tunisia (TUN), Turkey (TUR), Uganda (UGA), Uruguay (URY), Venezuela (VEN), Yemen (YEM).

In Transition: Armenia (ARM), Azerbaijan (AZE), Kyrgyzstan (KGZ), Moldova (Republic of) (MDA), Russian Federation (RUS), Ukraine (UKR).

1 Country coverage varies between regressions depending on data availability. Countries are classified according to the Statistical Annex issued by the United Nations in 2012. Available at: http://www.un.org/en/development/desa/policy/wesp/wesp_current/2012country_class.pdf

2 Special Administrative Region of China.

Table 6. Industry Coverage 3

ISIC Code Industry Description

311 Food products

313 Beverages

314 Tobacco

321 Textiles

322 Wearing apparel except footwear

323 Leather products

324 Footwear except rubber or plastic

331 Wood products except furniture

332 Furniture except metal

341 Paper and products

342 Printing and publishing

351 Industrial chemicals

352 Other chemicals

353 Petroleum refineries

354 Miscellaneous petroleum and coal products

355 Rubber products

356 Plastic products

361 Pottery china earthenware

362 Glass and products

369 Other non-metallic mineral products

371 Iron and steel

372 Non-ferrous metals

381 Fabricated metal products

382 Machinery except electrical

383 Machinery electric

384 Transport equipment

385 Professional and scientific equipment

390 Other manufactured products

3 This paper contains analysis of the 28 manufacturing industries listed above. Source: Trade, Production and Protection 1976-2004 dataset by Nicita and Olarreaga (2006).

CONSEQUENCES OF INCOMPLETE EMPLOYMENT CONTRACTS

IN A LABORATORY EXPERIMENT

Mateusz Stalinski
B.Sc Economics, 2nd year
University College London

Explore Econ Undergraduate Research Conference
March 2016

I am very grateful to Professor Antonio Cabrales for his invaluable guidance throughout the preparations as well as during the experiment. I would like to thank Dr Frank Witte, Dr Cloda Jenkins, and Dr Parama Chaudhury for giving feedback on my experimental design. I would also like to express my sincere gratitude to Noel Dobi (UCL), Ali Merali (UCL), Ciprian Tudor (UCL), and Aleksandra Goch (VILO Bydgoszcz) for helping me with organising the experiment.

1 Introduction

This experiment explored the impact of incomplete employment contracts on wages. The paper relies on data obtained in a series of laboratory experiments held in two locations: University College London (36 participants) and the Upper School no. 6 in Bydgoszcz, Poland (168 participants). The experiment aimed to present an alternative model of the labour market with incomplete employment contracts which provides numerical predictions for different input parameters. The behaviour of workers and firms is modelled separately using techniques of discrete dynamic programming. The experimental design allows for long-term employment contracts, with firms deciding whether to dismiss some of their workers at the end of each period. In the model, shirking is associated with a risk of being dismissed, which reflects real-world features of the labour market. Furthermore, the data collected in this experiment shed light on the determinants of dismissal decisions made by firms (a topic which has not been thoroughly studied thus far).

Finally, the model can be used to display workers' best response functions for different scenarios (parameters in the model).

The inability of employers to directly control workers' effort levels has significant economic consequences: firms are likely to pay a premium over perfectly competitive wages in order to motivate employees to work efficiently. It is also important to check whether employers' decision rules regarding dismissals vary with the degree of contract incompleteness. The research question was: to what extent are the estimates computed by means of the proposed model consistent with the observed empirical results?

The most well-known explanation of involuntary unemployment was proposed by Shapiro and Stiglitz (1984). According to their model, shirking can be prevented by increasing the expected cost of losing a job (by offering higher wages). Firms set wages so that workers never shirk (the non-shirking condition). The model predicts that the lower the chance of being caught shirking (q), the higher the wage required to enforce the non-shirking condition. This hypothesis is supported by Altmann et al. (2013), who obtained estimates for two extreme values of q: 0 (treatment) and 1 (control). One of the main limitations of the Shapiro and Stiglitz framework is that workers have only two actions: work (effort = 1) or shirk (effort = 0). The experiment provides justification for the existence and optimality of the non-shirking condition even with more than two choices available to employees.


2 Methodology

The 204 students were randomly assigned to roles within groups and received printed instructions (Figure 1). All participants were tested to ensure comprehension. Data collection (Figure 2) was facilitated by software programmed in oTree (Chen, Schonger and Wickens, 2016) and z-Tree (Fischbacher, 2007). Participants were permitted to view their results afterwards (Figure 3). Students were entered into a lottery and winners received £50 in cash (Figure 4). The probability of winning the lottery depended on performance during the game. Choosing lottery tickets as the reward medium and linking the reward to total profit obtained during all rounds of the experiment ensured monotonicity, salience, and dominance, the conditions required for a correct experimental design (Friedman, Cassar, and Selten, 2004). Each game consisted of 10 rounds.

Figure 1: All participants received written instructions and a piece of paper for making notes

Figure 2: Z-tree facilitates running sessions via the LAN

Figure 3: Students were very excited to see results of the group available on the experimenter’s computer

Figure 4: Deputy Headmaster of the Upper School no. 6 in Bydgoszcz congratulates the winner of the lottery


3 The experimental design

Each game required 14 participants: 10 sellers (workers) and 4 buyers (employers). Employment contracts could last for more than one period. At the end of each period, employers decided which workers to dismiss. In the treatment condition, quality choices are revealed to employers with q=60% probability. It corresponds to incomplete employment contracts. In the control condition, all workers’ decisions are shown to their employers ( q=100% ). The timeline for each round is presented in the table below.

Table 1: Details of the experimental design (pictures of screens seen by participants provided)

FIRST ROUND

STEP 1:

Buyers make 0, 1, or 2 public offers to sellers, specifying one integer price .

STEP 2:

Each bid appears on the screen of every seller. The first participant who accepts a given offer becomes its creator’s trading partner. Each seller can deliver products for only one buyer. There are always at least two sellers without a trading contract.

STEP 3:

Each seller who has a trading partner chooses their effort level (with precision 0.5).


STEP 4:

Sellers’ quality choices are revealed to buyers. For each trading partner there is a q% probability that their decision will be displayed. A buyer may terminate some of the contracts. All other sellers are automatically the buyer’s trading partners in the next period. It is not possible to terminate a contract with a seller whose effort level was not revealed.

STEP 5:

Payoffs for a given period are calculated and displayed to participants.

ALL OTHER ROUNDS

STEPS 1-2:

Buyers can make new offers if they have less than two trading partners. Buyers specify one price for all of their suppliers (old and new). New offers appear on the screen of every seller without a trading partner. The first participant who accepts a given offer becomes its creator’s trading partner.

STEPS 3-5:

These steps are exactly the same as in the first round.

Payoff function for sellers is defined as follows:

It is important to note that if a contract is terminated at the end of a round, a seller does not receive the price. Buyers’ payoffs are calculated according to the formula below:


4 Discussion of results

4.1 Termination of contracts

One aim of the study was to identify decision factors for terminating a contract. Clearly, effort

(quality) levels play a crucial role in the process. However, more factors should be taken into account. Employers, instead of basing their decisions solely on quality levels, used effort to wage ratio , which can be defined as a percentage of price which is offered as effort:

To check this hypothesis a probit model for specification with a latent variable was used:

Regressions of two types were estimated (in the first one – quality is excluded from the model).

Treatment and control conditions were analysed separately (Table 2).

Table 2: Probit regression excluding 'quality'

Treatments (number of observations = 297, pseudo R2 = 0.4367)

             Coefficient   Std. Error   z       P>|z|
perratio     -0.0989009    0.0108186    -9.14   0.000
_cons        3.475724      0.4231037    8.21    0.000

Controls (number of observations = 396, pseudo R2 = 0.2933)

             Coefficient   Std. Error   z       P>|z|
perratio     -0.1156464    0.0131668    -8.78   0.000
_cons        5.291947      0.6606748    8.01    0.000


For both treatments and controls, the coefficients are statistically significant with p-values lower than 0.0001. The McFadden pseudo-R2 values are high; values of 0.2-0.4 correspond to OLS R2 values of 0.7-0.9 (Louviere, Hensher and Swait, 2000).
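A sketch of how the probit in Table 2 could be estimated and how the predicted termination probabilities in Figure 5 follow from it; the file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("experiment_rounds.csv")   # hypothetical: term, perratio, quality, treatment

treat = data[data["treatment"] == 1]
X = sm.add_constant(treat[["perratio"]])
probit = sm.Probit(treat["term"], X).fit()
print(probit.summary())

# Predicted probability of termination over a grid of effort-to-price ratios (Figure 5).
grid = pd.DataFrame({"const": 1.0, "perratio": np.linspace(0, 100, 101)})
p_term = probit.predict(grid)
```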

Coefficients yielded by the second type of regression are presented in Table 3.

Table 3: Probit regression including 'quality'

Treatments (number of observations = 297, pseudo R2 = 0.5016)

             Coefficient   Std. Error   z       P>|z|
perratio     -0.0688369    0.0128159    -5.37   0.000
quality      -0.7197774    0.1476895    -4.87   0.000
_cons        4.322064      0.5152203    8.39    0.000

Controls (number of observations = 396, pseudo R2 = 0.3653)

             Coefficient   Std. Error   z       P>|z|
perratio     -0.0964069    0.0138554    -6.96   0.000
quality      -0.6997424    0.1324244    -5.28   0.000
_cons        6.452646      0.734403     8.79    0.000

In both treatment and control groups the coefficients on quality are significantly different from 0. This shows that part of the decision came directly from the value of effort, regardless of the offered price. Yet, for the modelling part, it was assumed that the probabilities depend solely on the effort-wage ratio, to keep the model tractable.

Figure 5 shows probabilities of contract termination estimated from the first set of regressions.

The graph shows predicted probabilities for different values of perratio. The firms’ decision rule on contract termination varies with degree of incomplete employment contracts. In the case of perfectly complete contracts, a higher quality/price ratio is necessary for a given probability of the termination.


Figure 5: Probability of contract termination given quality/price ratio (first set of regressions)

The two flat parts of the curve indicate two different effects. For values of perratio lower than 0.33, employers receive negative profit (as their payoff is the difference between quality multiplied by 3 and the price). Thus, it is not surprising that for ratios below 0.33 the probability of a contract's termination is close to 1. The linear segment of the curves ends close to 0.5 in both cases. Furthermore, the ratio of 0.5 occurred for 26% of all choices. This high frequency can be explained by the existence of a 50-50 social norm.

4.2 Modelling firms' and workers' behaviour

In each period workers can be in one of two states: employed or unemployed. For a given individual, the state in period t takes the value 1 (employed) or 0 (unemployed). In general, in each state they have a set of available actions; when employed, this is the set of possible effort levels.

Workers are assumed to maximise their lifetime expected utility. This is equivalent to choosing an optimal action for each state in every period¹ (in this case, the optimal effort level when employed). For each period we define the value of being in each state, where u_t is the within-state utility function. To perform the optimisation it is necessary to find the transition probabilities (Table 4), the probabilities of moving from one state to another given the actions taken. To compute these probabilities, the estimates obtained from the probit regression (Table 2) are used. For simplicity, perratio is denoted by c.

Table 4: Transition probabilities for each possible pair of subsequent states

Transition 1->1: Workers whose effort is not revealed cannot be dismissed; this occurs with probability 1-q. In addition, some workers whose effort level is revealed will be retained.

Transition 1->0: The probability follows from the 1->1 case and the fact that the probabilities of events covering the sample space add up to 1.

Transitions 0->1 and 0->0: The values of a used are the average probabilities of finding employment when unemployed observed in the samples: 0.5 in controls and 0.49 in treatments.

¹ A formal proof of this fact can be found, for example, in Powell (2011), Section 3.10.

² Φ stands for the standard normal cumulative distribution function.

Finally, by substituting the transition probabilities, the values of being in each state can be fully written out. In order to find workers' optimal action(s) in period t, it is necessary to find all effort levels maximising the value of being employed. The optimal effort x* (if between 0 and 4) depends positively on the wage w, the monitoring probability q and the employment rent, and negatively on the re-employment probability a; the exact functional form is given in the footnote. Workers exert more effort when:

• employment contracts are less incomplete (x* increases as q increases),

• the probability of finding new employment is lower, i.e. the expected duration of unemployment is higher (x* increases as a falls),

• the employment rent is higher (x* increases as the rent increases).

The original optimisation problem can be solved by backward induction. For the last round the continuation values are zero, so it is possible to perform the maximisation and find x*. The estimated best response function is presented in Figure 6. The optimal points for employers (who are constrained by the workers' best response curve) occur at the kinks: 6.3 for controls and 8.2 for treatments. Those values are used to calculate the values of being employed and unemployed, which then serve as the continuation values for the second-last period. The process is then repeated. Optimal wages for each period, generated by the model, are displayed in Figure 7.

Figure 6: Best response curves generated by the model for the final round of the game

Figure 7: Optimal wages by period in treatments and controls
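A minimal sketch of the backward-induction procedure described above. Several elements are assumptions for illustration only: the seller's effort cost is taken to equal the effort level, the unemployed payoff is zero, and the dismissal probability uses the treatment probit estimates from Table 2; the exact payoff functions of the experiment are not reproduced here.

```python
from math import erf, sqrt

def phi(z):                                    # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_dismiss(effort, wage, q, b0=3.4757, b1=-0.0989):
    # Dismissal requires effort to be revealed (probability q); the revealed-case
    # termination probability follows the treatment probit in Table 2.
    perratio = 100.0 * effort / wage
    return q * phi(b0 + b1 * perratio)

def optimal_efforts(wage, q, a, rounds=10):
    V_emp, V_unemp, plan = 0.0, 0.0, []        # continuation values after the final round
    for _ in range(rounds):                    # iterate backwards round by round
        candidates = []
        for e in [x * 0.5 for x in range(9)]:  # effort grid 0, 0.5, ..., 4
            p = p_dismiss(e, wage, q)
            # Assumption: the seller pays the effort cost and receives the price
            # only if the contract survives the round.
            value = -e + (1 - p) * (wage + V_emp) + p * V_unemp
            candidates.append((value, e))
        best_value, best_effort = max(candidates)
        plan.append(best_effort)
        V_emp, V_unemp = best_value, a * V_emp + (1 - a) * V_unemp
    return plan[::-1]                          # optimal effort from round 1 onwards

print(optimal_efforts(wage=8.2, q=0.6, a=0.49))   # treatment parameters from the text
```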

According to the model, a higher degree of contract incompleteness is associated with a higher wage being required to motivate workers to exert maximum effort (4.0). A more informal explanation is as follows. In the treatment condition employers face significant risk: workers might try to shirk, believing that they are likely to avoid the termination of their contract. In fact, in four out of ten cases they would be right. Is there a way in which employers could make shirking too risky for employees? They can achieve this by offering sufficiently high wages. Furthermore, the model predicts a significant upward trend in wage rates for treatments and a much flatter increase for controls. This can be explained by the fact that termination of a contract is more costly to workers at the beginning of the game, so less is required to motivate them. As the game progresses, dismissals affect fewer remaining rounds, and therefore a higher incentive is necessary. This effect is one of the most important reasons for using the dynamic programming approach to model workers' behaviour. Otherwise we would assume that agents do not care about the future consequences of their decisions, which is certainly not the case in the labour market. The increase in wages is much weaker in controls, which might be explained by higher turnover, corresponding to a lower risk of long-term unemployment.

Predictions generated by means of the model can be compared to findings from the experimental data. The graph below (Figure 8) shows average wages for each round for both conditions.

6,5

6

5,5

5

4,5

4

9

8,5

8

7,5

7

1 2 3 4 5 6

Round

7 8 9 10

Figure 8: Average wages set by employers in controls and treatments

(bars have length of one standard deviation for each round)

Control

Treatment

11

Average wage is higher in treatments than in controls for all rounds, and the difference grows as the game progresses. Furthermore, in the treatment condition we observe a strong upward trend in wages (up to round 7), as predicted by the model. The subsequent fall might be explained by the fact that in rounds 6 and 7 the average wage was above the optimum, which led to lower profits for buyers, who adjusted prices downwards in the next two rounds. The upward trend then resumes in the final round.

The graph provides support for the research hypothesis that average wages are higher in the treatment condition than in the control. Given the high number of observations (more than 350 per sample), it was justified to use a t-test with unequal variances to test whether the average wage in controls is statistically different from that in treatments. The average wage in treatments (7.42) is higher than the average wage in controls (5.90) by 1.52. The test statistic (-12.82) is large enough in absolute terms that the null hypothesis (the means are equal) can be rejected at the 1% significance level (p < 0.0001).
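The unequal-variance (Welch) t-test used here is a one-liner in most statistical packages; a sketch with a hypothetical data file:

```python
import numpy as np
from scipy import stats

data = np.genfromtxt("wages_by_condition.csv", delimiter=",", names=True)  # hypothetical
treat = data["wage"][data["treatment"] == 1]
control = data["wage"][data["treatment"] == 0]

# Welch's t-test allows unequal variances between the two samples.
t_stat, p_value = stats.ttest_ind(treat, control, equal_var=False)
print(t_stat, p_value)
```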

4.3 Concluding remarks

Incomplete employment contracts are beneficial for workers who are employed: they receive a significant wage premium over the market equilibrium. At the same time, firms' profits are much lower in treatments than in controls. It is also important to note that in the real labour market not all workers benefit from incomplete contracts. The number of people unemployed is higher (due to the higher wage premium) and the mobility of labour is lower. Long-term unemployment is more likely under incomplete employment contracts. Therefore, it can be concluded that incomplete employment contracts are a major source of income inequality. They might negatively affect long-run economic growth by depressing investment (due to lower firm profits) and increasing the number of long-term unemployed. As a result, it is justified to consider policy solutions aiming to offset the negative consequences of incomplete employment contracts.

Word count: 1998 (excluding figures, tables, and references)


5 References

Akerlof, G. (1982). Labor Contracts as Partial Gift Exchange. The Quarterly Journal of Economics, 97(4), p.543.

Altmann, S., Falk, A., Grunewald, A. and Huffman, D. (2013). Contractual Incompleteness, Unemployment, and Labour Market Segmentation. The Review of Economic Studies, 81(1).

Chen, D., Schonger, M. and Wickens, C. (2016). oTree - An open-source platform for laboratory, online, and field experiments. Journal of Behavioral and Experimental Finance, 9, pp.88-97.

Fehr, E. and Falk, A. (1999). Wage Rigidity in a Competitive Incomplete Contract Market. Journal of Political Economy, 107(1), p.106.

Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), pp.171-178.

Friedman, D., Cassar, A. and Selten, R. (2004). Economics Lab. London: Routledge.

Friedman, D. and Sunder, S. (1994). Experimental Methods. Cambridge: Cambridge University Press.

Gneezy, U. (2013). Does high wage lead to high profits? An experimental study of reciprocity using real effort. The University of Chicago GSB, Chicago.

Louviere, J., Hensher, D. and Swait, J. (2000). Stated Choice Methods. Cambridge: Cambridge University Press.

Powell, W. (2011). Approximate Dynamic Programming. Hoboken, N.J.: Wiley.

Shapiro, C. and Stiglitz, J. E. (1984). Equilibrium Unemployment as a Worker Discipline Device. American Economic Review, 74(3), pp.433-444.

Yellen, J. (1984). Efficiency Wage Models of Unemployment. American Economic Review, 74(2), pp.200-205.


OPEN SOURCE: IN SEARCH OF CURES FOR NEGLECTED TROPICAL DISEASES

Sukhi Wei 1

B.Sc Economics with a Year Abroad

2nd year

University College London

Explore Econ Undergraduate Research Conference

March 2016

1 Special thanks to Dr Parama Chaudhury and Dr Frank Witte.

Behind every new drug is an arduous journey. The estimated cost of research and development (R&D) varies, but could amount to $2 billion across 10-15 years (Adams & Bratner, 2006). While the upfront cost is high, the marginal cost of producing medicine is low. Thus, a pharmaceutical company needs patent protection, which grants monopoly power that enables a markup to recover its expenditure. Once the patent expires (typically 10 years after release), generic versions of a drug flood the market, eroding its profitability. The price of the second global best-selling drug, the blood-thinner Plavix, fell from $162 to $10 per month when its patent expired in 2012 (Strawbridge, 2012). Another type of product faces a similar cost structure: software. Depending on its complexity, software development can be an expensive endeavour. The Windows Vista operating system absorbed $6 billion over 5 years of development, yet reproducing it costs virtually nothing. Hence, it is unsurprising that Windows sold each copy for $239 and took painstaking steps to prevent piracy (Protalinski, 2009).

While the inflated price of Windows caused outrage among users, expensive drugs pose a more alarming concern. Pharmaceutical companies direct research towards the most profitable drugs, namely those targeting first-world diseases. This leaves a void in studies on illnesses prevalent in the developing regions of Africa, South Asia and Latin America, where people cannot afford the treatments. The World Health Organisation has identified 17 diseases as Neglected Tropical Diseases (NTDs), which, despite affecting 1 billion people, fail to garner adequate funds for research (WHO, 2016).

Given the similarities between the industries, a solution for NTDs may exist in the IT world. In the mid-90s, the dominant Microsoft faced an unexpected threat: the Linux operating system created a sensational buzz because it was free. If developed via conventional proprietary means, the Linux counterpart to Windows Vista, named Fedora 9, would have cost approximately $10 billion (McPherson, Proffitt, & Hale-Evans, 2008). However, as a product of open source, it came without a price tag.

The open source process resembles a bazaar of ideas in which a sharing culture prevails: a new version of Linux is released roughly every 6 months, and independent hobbyists and programmers work collaboratively to fix the bugs present. The 'open' element is exemplified by:

• open participation - anyone can join

• open knowledge - all code is made public

• open distribution – all code can be freely reused

In contrast, traditional software is built like a cathedral, carefully crafted by an isolated team of programmers, only opening its doors when all visible bugs are fixed – much like Microsoft Windows, which is released every few years.

Figure 1 compares the two approaches. While both begin with a core group of programmers defining the direction of a project, they soon diverge: open source software is made public as a 'kernel' (a small segment or crude version of the code). After a peer review of the kernel, the process enters an iterative cycle of testing and bug-fixing, where anyone may contribute new code to improve upon the original. This process is repeated in the 'parallel debugging' stage, after new code is incorporated by the core team in the 'development release', but before the final 'production release' (Roets, Minnaar, & Wright, 2007).

Figure 1 Traditional vs Open Source development models

Known for its efficiency and cost-effectiveness, the open source model could contain the R&D costs of drug research, thus keeping new drugs cheap. It does so by tackling all four contributors to the sky-high price: outlay, risk, duration and monopoly power.

The average annual wage of a pharmaceutical employee is over $100,000 (PhRMA, 2013). The cost of labour undoubtedly adds up to a sizeable sum given the duration and the large number of employees involved in developing a drug. The open source model, which relies almost entirely on volunteers' donations of time and effort, essentially reduces wages to zero. Further cost savings are achieved with the use of free online platforms and databases, which minimise money spent on private servers and physical facilities. It may be hard to reconcile the images of an IT enthusiast behind his computer and a pharmacist in a sterile laboratory.

Nevertheless, programmers find bugs to fix, while researchers find lead molecules that interact with disease-related proteins. Both involve 'finding and fixing tiny problems hidden in an ocean of code' (Maurer, Rai & Sali, 2004). A number of open research initiatives are already underway, such as the Tropical Disease Initiative, Open Source Drug Discovery and Collaborative Drug Discovery. Adopting the ethos of open source, research is split into bite-size tasks delegated to many contributors, and results are shared online, enabling speedy peer review.

While open research is largely limited to the discovery and pre-clinical stages of drug development, the influence of open source does not end there. Several free software tools facilitating the R&D process have emerged, such as the clinical trial data management system OpenClinica. Historically, clinical trials have involved the daunting tasks of capturing, cleaning, and extracting data, which translate into large volumes of paperwork.

Despite the advent of commercial Electronic Data Capture (EDC) software that digitises this tedious process, many poorly funded research institutions fail to shift to electronic records due to its exorbitant price. OpenClinica is an open source EDC software that lowers the cost of paper-based studies by approximately 47% (Huger, 2013). With such a significant reduction, more parties are empowered to conduct NTD trials that would otherwise be beyond their financial capacity.


As illustrated in Figure 2, a combination of the open source concept and open source software can be incorporated into drug development to reduce outlay. Additionally, open research drives down costs in the early stages, freeing up funds for the phases of product development where they are most needed. This would reverse the recent trend of funds shifting towards basic research, expediting clinical trials of existing, untested drugs (Moran, et al., 2011). Nevertheless, unlike software development, physical equipment and facilities are inevitably needed for wet experiments. Open source research projects can form partnerships with public research institutions for these stages. Aided by rapid technological progress, drug researchers are increasingly using computers as substitutes for lab functions, from virtually screening thousands of compounds for suitable drug candidates to modelling the interactions between chemicals and disease-related proteins (Carlson, 2012). For example, a SARS protein responsible for nearly 800 deaths in 2003 was identified by a virtual scan of proteins encoded by the SARS genome (von Grotthuss et al., 2003). As such, open source has become even more feasible in recent years.

Figure 2 Open source in phases of drug development (open research; open source software)

Aside from the tangible expenditure in drug development, a pharmaceutical company takes into account the opportunity cost of capital when pricing a new drug. Thus, duration and risk contribute to the unaffordability, since investors are attracted only if expected returns exceed the opportunity cost. Given the dismal success rate of the clinical trial phase, only 12% in some studies, the profits made from one successful drug must be large enough to cover the cost of failures as well as the reward and time value for investors. The average opportunity cost of capital across the same 5 studies mentioned above, adjusted for risk, is $534 million per drug, a significant sum that pharmaceutical companies incorporate into the price of a new drug (Morgan, Lexchin, Cunningham, Greyson, & Grootendorst, 2011). Open source research, however, can be viewed as a huge risk-sharing platform with many stakeholders. In this case, failure mainly incurs the cost of lost time, which volunteers have willingly donated to the cause. Thus, there is no obligation for the final product to compensate for the risks.
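As a rough illustration of why a low success rate and the cost of capital inflate the price of each approved drug, the back-of-the-envelope sketch below spreads trial costs over the successes; the per-candidate cost is a made-up placeholder, and the calculation is not taken from the studies cited in the text.

```python
# Back-of-the-envelope sketch: cost per APPROVED drug when most candidates fail.
# Numbers are illustrative placeholders, not figures from the cited studies.
success_rate = 0.12                 # clinical-trial success rate mentioned in the text
cost_per_candidate = 100e6          # hypothetical out-of-pocket cost per candidate ($)
risk_adjusted_capital_cost = 534e6  # risk-adjusted opportunity cost of capital cited in the text ($)

# Each approved drug must also cover the candidates that failed along the way.
expected_trial_cost = cost_per_candidate / success_rate
total_cost_per_approved_drug = expected_trial_cost + risk_adjusted_capital_cost
print(f"Expected cost per approved drug: ${total_cost_per_approved_drug / 1e6:.0f}m")
```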

Moreover, open source may reduce the chances of failure. In the software industry, a famous saying dubbed 'Linus's Law' states that "given enough eyeballs, all bugs are shallow." Just as software goes through rapid prototyping and rapid failures, experiment results are released quickly online (compared with publishing in journals), so the community can identify dead ends and change the direction of a project quickly. Moreover, 'the averaged opinion of a mass of equally expert (or equally ignorant) observers is quite a bit more reliable a predictor than that of a single randomly-chosen one of the observers' (Raymond, 2001). With many participants across the globe, viable solutions are more likely to emerge when a research project encounters roadblocks.
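The 'averaged opinion' claim quoted above is, statistically, the observation that the error of the mean of many independent, equally noisy guesses shrinks roughly as one over the square root of the number of observers. The small simulation below, using purely synthetic guesses, illustrates the point; nothing in it comes from the paper.

```python
# Illustration of the 'averaged opinion' claim: with n independent, unbiased
# observers of equal noise, the error of the mean shrinks roughly as 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(1)
true_value = 10.0
n_observers, n_trials = 100, 10_000

guesses = true_value + rng.normal(0.0, 2.0, size=(n_trials, n_observers))
single_error = np.abs(guesses[:, 0] - true_value).mean()        # one random observer
crowd_error = np.abs(guesses.mean(axis=1) - true_value).mean()  # average of all observers
print(f"mean abs error: single observer {single_error:.2f}, crowd of {n_observers} {crowd_error:.2f}")
```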

Although open source could increase the success rate, there is no evidence on whether it quickens the pace of research. Unlike private pharmaceutical research, the open source approach imposes an opportunity cost on volunteers' time. Following the argument on risk, volunteers do not expect monetary returns for their donated time; they are rewarded with immediate recognition and the satisfaction of attaining a goal. The opportunity cost of funds could also be set aside, since money spent on conventional research methods has not proven effective in delivering treatments for NTDs.

To sum up, the costs in the discovery and pre-clinical phases are minimised by open research, by open source software like OpenClinica, and by disregarding the opportunity cost, as shown in Figure 3.

Figure 3 How open source reduces R&D cost (pie chart: profits forgone, open source software, open research; segments of 50%, 33% and 17%; total: $1064)

Pharmaceutical companies often cite high R&D costs to justify the high prices of new drugs, but profit remains the driving reason. As Table 1 shows, spending on sales and marketing often exceeds that on R&D, sometimes by a factor of two, and the top drug companies make billions in profit each year (Anderson, 2014). In an open source project, on the other hand, the composition of a new drug is made public. Once approved, any generic manufacturer can freely obtain the information and produce it. Competition will then drive prices down close to marginal cost, eliminating profits. Without the need to please investors, reward management and pay employees, affordable drugs for NTDs come within reach with open source.

Company                     Total revenue ($bn)   R&D spend ($bn)   Sales and marketing spend ($bn)   Profit ($bn)   Profit margin (%)
Johnson & Johnson (US)      71.3                  8.2               17.5                              13.8           19
Novartis (Swiss)            58.8                  9.9               14.6                              9.2            16
Pfizer (US)                 51.6                  6.6               11.4                              22.0           43
Hoffmann-La Roche (Swiss)   50.3                  9.3               9.0                               12.0           24
Sanofi (France)             44.4                  6.3               9.1                               8.5            11
Merck (US)                  44.0                  7.5               9.5                               4.4            10
GSK (UK)                    41.4                  5.3               9.9                               8.5            21
AstraZeneca (UK)            25.7                  4.3               7.3                               2.6            10
Eli Lilly (US)              23.1                  5.5               5.7                               4.7            20
AbbVie (US)                 18.8                  2.9               4.3                               4.1            22

Table 1 Profile of top pharmaceutical companies in 2014, BBC News
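For most rows of Table 1, the reported profit margin is simply profit divided by total revenue; the short check below recomputes a few margins from the table's own figures.

```python
# Recompute selected profit margins in Table 1 as profit / total revenue.
companies = {
    # name: (total revenue $bn, profit $bn)
    "Johnson & Johnson": (71.3, 13.8),
    "Pfizer": (51.6, 22.0),
    "GSK": (41.4, 8.5),
}
for name, (revenue, profit) in companies.items():
    print(f"{name}: margin = {100 * profit / revenue:.0f}%")
```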

As of today, open source has yet to deliver a drug to the market. Its potential has not been fully unleashed, with projects utilising either the concept of open research or free facilitating software, but not both. Merging these two elements could reduce costs more substantially, providing an impactful push along the drug pipeline. Furthermore, some companies relying on open source have made profits with innovative business models: Red Hat Inc, the largest commercial distributor of Linux, forecast a profit of $1.80 per share on its $2 billion revenue last year (Reuters, 2015). With suitable license schemes, open source drug development could be self-sustainable in the long run. Like a chef who shares his recipe but charges a franchise fee if it is used by restaurants, open source initiatives could license their drugs to generic manufacturers on profit-sharing terms. A small percentage of profit could be donated back to the initiatives to fund further research.

Since manufacturers must obtain approval from health and safety regulators before production, a complete list of obligated firms exists, making compliance enforceable. Even if this cannot fund future research completely, it further reduces the burden of NTDs on the public sector and NGOs.

The main concerns with open source drug development are incentives, safety and the availability of technology. Incentives are self-evident in the thriving open source software community; economists have also delved into this topic (Tirole & Lerner, 2002). The safety of a drug developed with less qualified volunteers may come into question, but experienced project managers, rising computational power and rigorous peer review should mitigate the risks. Lastly, some regions may not have decent internet connectivity or lab facilities to conduct open source research. Nevertheless, open source is not bound by national borders. While this approach is more likely to thrive in more technologically advanced countries such as India, dispersed institutions in other countries can collaborate online, simultaneously raising local research standards. Governments may also be more likely to fund such projects if they appear promising.

Above all other strengths, the true power of open source lies in the fact that programmers want to scratch a 'personal itch': free software inevitably comes with bugs, which they are eager to fix for their own utility. Similarly, in drug research, open source enables local institutions that lack the financial capacity to tackle a local disease themselves, instead of hopelessly waiting for help from profit-driven MNCs that never comes.

(2000 words)

BIBLIOGRAPHY

Adams, C. P., & Bratner, V. V. (2006). Estimating The Cost Of New Drug Development: Is It Really $802 Million? Health Affairs, 25(2), 420-8.

Anderson, R. (2014, November 6). Pharmaceutical industry gets high on fat profits. Retrieved March 9, 2016, from BBC News: http://www.bbc.co.uk/news/business-28212223

Carlson, E. (2012, August 29). 5 Ways Computers Boost Drug Discovery. Retrieved March 9, 2016, from Live Science: http://www.livescience.com/22786-computers-drug-designnigms.html

Huger, J. W. (2013, August 26). Why open source is the future of clinical trials. Retrieved March 9, 2016, from Opensource.com: https://opensource.com/health/13/8/clinovo-clinicaltrials-interview

Levine, S. S., & Prietela, M. J. (2014). Open Collaboration for Innovation: Principles and Performance. Collective Intelligence 2014 (pp. 1-4). Massachusetts: MIT Sloan School of Business.

Maurer, S. M., Rai, A., & Sali, A. (2004). Finding Cures for Tropical Diseases: Is Open Source an Answer? PLoS Med, 1(3): e56. doi:10.1371/journal.pmed.0010056

McPherson, A., Proffitt, B., & Hale-Evans, R. (2008, October). Estimating the Total Development Cost of a Linux Distribution. Retrieved March 9, 2016, from The Linux Foundation: http://www.linuxfoundation.org/sites/main/files/publications/estimatinglinux.html

Morgan, S. G., Lexchin, J., Cunningham, C., Greyson, D., & Grootendorst, P. (2011, April). The Cost of Drug Development: A Systematic Review. Health Policy, 4-17.

PhRMA. (2013). 2013 Biopharmaceutical Research Industry Profile. Washington DC: Pharmaceutical Research and Manufacturers of America.

Protalinski, E. (2009, June 25). Windows 7 pricing announced: cheaper than Vista. Retrieved March 9, 2016, from Ars Technica: http://arstechnica.com/informationtechnology/2009/06/windows-7-pricing-announced-cheaper-than-vista/

Raymond, E. S. (2001). The Cathedral and the Bazaar. O'Reilly Media, Inc.

Releases/Historical Schedules. (n.d.). Retrieved March 9, 2016, from Fedora: http://fedoraproject.org/wiki/Releases/HistoricalSchedules

Reuters. (2015, March 25). Red Hat profit forecast matches estimates despite strong dollar. Retrieved March 9, 2016, from Reuters: http://www.reuters.com/article/us-red-hat-resultsidUSKBN0ML2HP20150325

Roets, R., Minnaar, M., & Wright, K. (2007). Open Source: Towards successful systems development projects in developing countries. 9th International Conference on Social Implications of Computers in Developing Countries. Sao Paulo.

Strawbridge, H. (2012, May 21). Wallets rejoice as Plavix goes generic. Retrieved March 9, 2016, from Harvard Health Publications: http://www.health.harvard.edu/blog/wallets-rejoiceas-plavix-goes-generic-201205214727

von Grotthuss, M., W. L. (2003). mRNA cap-1 methyltransferase in the SARS genome. Cell, 113(6), 701-2.

WHO. (2003, December 31). Summary of probable SARS cases with onset of illness from 1 November 2002 to 31 July 2003. Retrieved March 9, 2016, from World Health Organisation: http://www.who.int/csr/sars/country/table2004_04_21/en/

WHO. (2016). Neglected tropical diseases. Retrieved March 9, 2016, from World Health Organisation: http://www.who.int/neglected_diseases/diseases/en/

Winegarden, W. (2014). The Economics of Pharmaceutical Pricing. San Francisco: Pacific Research Institute.

THE CASE FOR A FISCAL UNION IN THE EUROZONE

Paul Kimon Weissenberg

BA Philosophy and Economics

2nd Year

University College London

Explore Econ Undergraduate Research Conference

March 2016

In the wake of the sovereign debt crisis, the Eurozone is plagued with low growth, low inflation, high unemployment and high debt ratios, not to mention the political turmoil. In this paper, I will argue that a Eurozone fiscal union is desirable from an economic point of view, thereby explicitly ignoring all political considerations that may get in the way of such an argument. I will argue that a fiscal union would put an end to two major problems facing the Eurozone: (i) conflicting priorities regarding the exchange and interest rate; (ii) fiscal indiscipline as demonstrated by countries like Greece during the Eurozone crisis. I will argue that a Eurozone fiscal union, if it is to be successful, must involve fiscal redistribution amongst euro-members and sounder ex ante measures to avoid fiscal excesses.

Conflicting needs within the single currency

One of the costs involved when joining a common currency area is the surrender of monetary policy and therewith exchange rate policy. This means that in times of economic struggle, the only countercyclical means available to governments is fiscal policy. Whilst the national government can run a deficit to compensate for the fall in aggregate demand, the usual market feedback in the form of a depreciated exchange rate (because wariness of financial markets regarding the country's solvency means investors move out of the currency) does not apply to that country alone. Rather, it is spread over the entire currency union. This leads to the common exchange rate depreciating as a whole, which may benefit the country in need but not the other countries. Of course, this assumes that the country's trade is large enough relative to the union to affect the entire currency.

In the case of the Eurozone, this is arguably true only for large trading countries such as Germany. Germany's economy and trade record are faring well compared with peripheral states such as Greece, which took a blow in the aftermath of the sovereign debt crisis. Greece is facing a prolonged recession with high levels of unemployment (25% 1) and declining output (-2% 1). Greece cannot use monetary policy to stimulate its economy, nor indeed force the Euro to depreciate enough for its demand to pick up through net exports, since Greece's trade balance is negligible compared to the rest of the Eurozone (a mere 5% 1). This is also because other states, such as Germany, need a different exchange rate, one consistent with the state of their own economies. Some argue that Germany is beyond output equilibrium, with inflationary pressures arising (unemployment is at 4.5% 1, growth at 0.3% 1, inflation at 0.5% 1), and that it therefore needs an appreciation of the Euro (Springford et al. 2014, Meier 2004), and perhaps even an increase in the rate of interest (this is set by the European Central Bank and is currently close to zero). This goes contrary to what is needed for the Greek economy to pick up: a depreciated exchange rate to discourage imports and make exports more competitive, and a low interest rate.

Thus, it seems that the idea of sharing a single currency poses a problem if countries sharing it are at different stages of the business cycle.

1 http://fr.tradingeconomics.com/

German exports account for the largest share of EU exports. Source: European Commission (2015)

Fiscal indiscipline: the case of Greece

A big part of the issue in the case of the Eurozone is structural. Peripheral countries like Greece have, since joining the Euro, benefited from low interest rates as well as a strong currency. This has had two results: lavish spending on imports through increased purchasing power; and spending financed through borrowing at a low rate, both at household and government level. Once it became clear in 2009 that Greece's debt-to-GDP ratio was unsustainable (a deficit of 15% of GDP 1), markets reacted and the yield on 10-year Greek government bonds rose to 30% in 2012 (see graph below). Since Greece has no control over either monetary policy or the exchange rate, and since it could not refinance its debt at lower rates elsewhere, fiscal austerity was the only way to regain market confidence. Thus, in the case of the Eurozone, countercyclical fiscal policy is no longer available to a government facing high unemployment and declining growth. The issue is that the debt-to-GDP ratio can only be brought down through reducing the debt or through inflation. Contractionary fiscal policy puts downward pressure on inflation and growth, thus undermining the primary objective of trimming the debt-to-GDP ratio.

Source: European Central Bank
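The claim that the debt-to-GDP ratio can only fall through debt reduction or through inflation (or growth) can be made precise with the standard debt-dynamics identity sketched below; the notation is introduced here for illustration and is not taken from the paper.

```latex
% Standard government debt dynamics (notation introduced here for illustration):
% b_t = debt-to-GDP ratio, i = nominal interest rate, g = real growth, \pi = inflation,
% pb_t = primary balance as a share of GDP.
\[
  b_{t+1} = \frac{1+i}{(1+g)(1+\pi)}\, b_t - pb_{t+1}
  \quad\Longrightarrow\quad
  \Delta b \approx (i - g - \pi)\, b_t - pb_{t+1}.
\]
% With i set by the market, a falling b requires a larger primary surplus,
% higher inflation \pi, or higher growth g, which is the tension described in the text:
% austerity raises pb but depresses g and \pi.
```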

The Eurozone was aware of this and came to the rescue of Greece with financial assistance. However, as noted by Wolff et al. (2011), this financial assistance is complex, ad hoc, largely intergovernmental and veto-based. This is illustrated in the graph below. The numerous financial instruments used to assist Greece differed 'in terms of size, guarantee structure, target group and governance' (ibid: 3). In effect, it may be argued that such complex, short-term financial assistance was made deliberately technical and opaque, so as to avoid the political costs of acknowledging what effectively reflects the urgent need for a fiscal union in the Eurozone.

2 ibid

There are various ways to conceive of a fiscal union. The proposal I set out below aims to tackle the problems set out in the first part of this paper.

More ex ante

A stronger ex ante approach to ensuring fiscal discipline is needed if the Eurozone is to avoid another Greek-like problem. The Stability and Growth Pact (SGP) at Maastricht was designed to ensure fiscal discipline amongst euro-countries, effectively ensuring that one country's fiscal excesses do not become a negative externality for others. Interestingly enough, Germany and France were the first to breach the rules, which state that the budget deficit must remain below 3% of GDP and the public debt ratio below 60% of GDP (European Commission 2015b). The reasons why these rules were repeatedly violated by member states without serious consequences have been explored in the literature and go beyond the scope of this paper (Busemeyer 2004, Economist 2003, Maris et al. 2015).

The failures of the ex ante approach to tackling debt crises point to the need for a new approach, delegating more of a say to the EU rather than individual member states. Proposals such as the creation of an EU ministry of finance, responsible for assessing the liquidity and solvency of governments and possessing a veto right over national budgets, should be considered (Wolff et al. 2011). This would be similar to other federations such as Germany and Brazil, which impose stricter fiscal discipline through stronger central oversight; that is, the federal government has more of a say when it comes to individual regions' budgets. Others have gone further, suggesting the creation of a euro-area budget to be managed by the EU, independently assessing the needs of various member states and allocating resources accordingly (Allard et al. 2013). Both measures would ensure that no country is allowed to run unsustainable public debts.

At this point, it is worth noting that this paper has simplified matters by assuming that a solution for what is effectively a Eurozone problem must come from the EU (e.g. an EU finance ministry). This is merely a simplification in light of the current institutional structure of the EU, and with reference to the Treaty of Amsterdam, which states that all EU members have to join the euro once the necessary conditions are fulfilled (except for the UK and Denmark, which have opt-outs) (European Commission 2014b).

A redistributive fiscal union

It must be noted at this point that a sound ex ante policy as described above would, if perfectly designed and respected, prevent crises such as that experienced in the Eurozone. However, history has shown that rules are not always well implemented (remember the SGP). Thus, I argue that there is a need for a safety net in the form of effective ex post policy, as outlined below, should ex ante measures fail.

We have seen that a fiscal boost is impossible for a country facing market uncertainty, such as Greece. Under a fiscal union, resources would be distributed from countries with high growth, low unemployment and rising inflation to economies facing recession. In our previous example, Germany's surplus would be redistributed to Greece, allowing the Greek economy to pick up more quickly and thus eliminating the need for a depreciated exchange rate. Thus, the only way to finance a fiscal boost is through redistribution. In other words, a redistributive system would enable the Euro's value to better reflect the needs of all member states.

The EU does not currently possess any such redistributive system, as its budget constitutes a mere 1% of Member States' GDP (European Commission 2015a). Note that the EU's regional policy, which aims to harmonise living standards across the EU, has a budget for 2014-2020 of 351.8 bn euro, a tiny fraction of the EU's total GDP (13.9 trillion euro) (European Commission 2014a). It does not qualify as a redistributive system as defined above in the sense that it does not compensate according to business cycle fluctuations.

For there to be a redistributive Eurozone fiscal union, there ought to be a way for the EU to raise revenue. Again, there are various approaches. Wolff et al. (2011) suggest a federal tax such as in the US, where progressive federal taxes have averaged 17% over the past 50 years (Feyrer and Sacerdote 2013). This budget is in turn spent progressively to fund federal expenses such as healthcare, defence and social security. This effectively means that there is fiscal burden sharing, albeit restricted solely to federal expenditure. As a side effect, a federal Eurozone tax would in many ways reduce tax rate discrepancies, be it for VAT or corporate tax, amongst euro-countries, mitigating the issue of tax competition within the Eurozone (Rademacher 2013, Mendoza et al. 2003). Another option, as suggested by Allard et al. (2013), would be for each state to contribute to the EU budget according to its current economic situation, effectively creating something like euro-wide automatic stabilisers.

The euro-area finance ministry would be able to borrow on the market at lower rates than a country facing liquidity issues and act as a lender of last resort, because of the increased perceived credibility of the Eurozone as a bloc. This might involve the creation of Eurobonds, that is, bonds issued by the EU finance ministry in order to finance the expenditure of all countries at the same rate (Muellbauer 2013). This would ensure that situations such as those experienced in Greece are avoided, by guaranteeing lower interest rates for Greece. It also means that larger, more credible member states may see their position on financial markets worsen, in terms of the less advantageous rates at which they can borrow. But in the long term, Eurobonds would ensure intertemporal risk sharing: should Greece become Europe's economic superpower in a few years' time, it may be of help to Germany. Such a fiscal union would appease financial markets with regard to vulnerable euro-countries such as Greece and would virtually eliminate the possibility of a state default.

There are drawbacks to these proposals, involving problems of moral hazard. Debt mutualisation, as achieved through the issuing of common Eurobonds, may reduce incentives to restore competitiveness and fiscal sustainability. Supply-side reforms within the Eurozone ought not to be neglected, although they go beyond the scope of this paper. In the end, it is about making sure that it is not always the same strong countries financing the weaker ones.

This paper has made the case for a fiscal union in the Eurozone. First, I explained that sharing a common currency presents potentially contradictory priorities for euro-countries at different stages of the business cycle. Second, I outlined the problem of fiscal indiscipline in the Eurozone as a negative externality on other euro-states. I then outlined how a fiscal union would resolve these problems through redistribution and a stronger ex ante approach. In the end, although it makes economic sense to have a fiscal union in the Eurozone, the politics behind it may prove much more contentious.

Word Count: 2024

Bibliography:

 Allard, C. et al. (2013), 'Toward a Fiscal Union for the Eurozone', https://www.imf.org/external/pubs/ft/sdn/2013/sdn1309.pdf (retrieved 32.02.16)
 Busemeyer, M. (2004), 'Chasing Maastricht: The Impact of the EMU on the Fiscal Performance of Member States', https://ecpr.eu/Filestore/PaperProposal/4adce2c0-088e-4a8e-8030-582ac0d24a2d.pdf (retrieved 23.02.2016)
 European Central Bank (2016), 'Statistical Data', http://sdw.ecb.europa.eu/browseTable.do (retrieved 08.03.16)
 European Commission (2012), 'Overview of competitiveness in 27 Member States', http://europa.eu/rapid/press-release_MEMO-12-760_en.htm?locale=en (retrieved 06.03.16)
 European Commission (2014a), 'Available Budget 2014-2020', http://ec.europa.eu/regional_policy/en/funding/available-budget/ (retrieved 23.02.2016)
 European Commission (2014b), 'Adopting the Euro', http://ec.europa.eu/economy_finance/euro/adoption/index_en.htm
 European Commission (2015a), 'EU Annual Budget', http://ec.europa.eu/budget/annual/index_en.cfm?year=2015 (retrieved 23.02.2016)
 European Commission (2015b), 'Stability and Growth Pact', http://ec.europa.eu/economy_finance/economic_governance/sgp/index_en.htm (retrieved 9.03.16)
 Feyrer and Sacerdote (2013), 'The US may show the EU the way forward on fiscal integration', http://bit.ly/1754QSi
 Maris et al. (2015), 'France, Germany and the New Framework for EMU Governance', http://www.nup.ac.cy/wp-content/uploads/2014/09/France-Germany-and-the-New-Framework-for-EMU-Governance.pdf (retrieved 23.02.2016)
 Meier (2004), 'Investigating the Impact of an Appreciation of the Euro in a Small Macroeconometric Model of Germany and the Euro Area', https://www.ifwkiel.de/ifw_members/publications/investigating-the-impact-of-an-appreciation-of-theeuro-in-a-small-macroeconometric-model-of-germany-and-the-euro-area/kap1204.pdf (retrieved 23.02.2016)
 Mendoza et al. (2003), 'Winners and Losers of Tax Competition in the European Union', http://www.nber.org/papers/w10051.pdf (retrieved 23.02.2016)
 Muellbauer (2013), 'Conditional Eurobonds and the Eurozone Sovereign Debt Crisis', http://www.economics.ox.ac.uk/materials/papers/13078/paper681.pdf (retrieved 23.02.2016)
 Rademacher (2013), 'Tax Competition in the Eurozone', http://www.mpifg.de/pu/mpifg_dp/dp13-13.pdf (retrieved 23.02.2016)
 Springford et al. (2014), 'Why Germany's trade surplus is bad for the Eurozone', http://www.cer.org.uk/sites/default/files/publications/attachments/pdf/2013/bulletin_93_js_st_article2-8164.pdf (retrieved 23.02.2016)
 The Economist (2003), 'Loosening those Bonds', http://www.economist.com/node/1928604 (retrieved 23.02.2016)
 Wolff, B., Sapir, A., Marzinotto, B. (2011), 'What Kind of Fiscal Union?', http://bruegel.org/wp-content/uploads/imported/publications/111124_pb_2011-06__.pdf (retrieved 23.02.2016)

REALISING THE MICROFINANCE DREAM: HOW CAN MICROFINANCE REALLY HELP THE POOR?

Nareen Kaur Sidhu Baktor Sing

B.Sc Economics

2nd year

University College London

Explore Econ Undergraduate Research Conference

March 2016

Realising the microfinance dream: How can microfinance really help the poor?
by Nareen Kaur Sidhu Baktor Sing
Second year, UCL B.Sc Economics

Modern microfinance was pioneered in the 1970s with the objectives of empowering the poor and alleviating poverty. Today, there are around 10,000 microfinance institutions (MFIs) in the world, but evidence suggests that the microfinance dream of eradicating poverty is still far from being realised. This paper focuses on examining the pitfalls of microfinance from an economic point of view, and on analysing specific solutions to address them. In Section I, I motivate the potential of microfinance to help the poor and the economy. In Section II, I discuss the flaws in the execution of microfinance that have hampered its effectiveness, and in Section III, I outline policy recommendations to tackle these issues. Finally, Section IV concludes and summarises my main arguments.

Section I: The microfinance potential

Poverty persistence is mainly due to external factors that limit the opportunities available to the poor (Gorski; 2010). One such factor is the lack of access to basic financial services, which perpetuates poverty by restricting investments in physical and human capital. A survey conducted in 2014 revealed that access to long-term finance among the poor in developing countries remains a serious problem (World Bank; 2015).

The permanent income hypothesis states that households prefer to spread consumption over time (Friedman; 1957), but consumption smoothing is elusive for poor households. Their shortage of collateral renders them credit-constrained and inadequate savings render them savings-constrained. Microfinance circumvents these problems by offering microcredit (low interest rate loans without collateral requirements) and microsavings (small deposit accounts) to low-income clients. These services help the poor meet their daily needs and invest to grow their income.
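The consumption-smoothing logic above can be illustrated with a minimal two-period sketch; the income figures, zero interest rate and equal-consumption benchmark are illustrative assumptions, not data or modelling from the paper.

```python
# Two-period consumption-smoothing sketch: a poor household expects higher income
# tomorrow but cannot borrow against it. Numbers are illustrative only.
def consumption_path(y1, y2, r=0.0, can_borrow=True):
    """Split lifetime resources evenly across two periods if borrowing is allowed;
    otherwise period-1 consumption is capped at period-1 income."""
    lifetime = y1 + y2 / (1 + r)
    smooth_c1 = lifetime / (1 + 1 / (1 + r))   # equal consumption in both periods
    c1 = smooth_c1 if can_borrow else min(smooth_c1, y1)
    c2 = (y1 - c1) * (1 + r) + y2
    return c1, c2

print(consumption_path(10, 30, can_borrow=True))   # smoothed: (20.0, 20.0)
print(consumption_path(10, 30, can_borrow=False))  # credit-constrained: (10, 30)
```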

The potential of microfinance in battling poverty is also shown by its focus on financially equipping and empowering women. In 2013, 81% of microfinance clients were female borrowers, most of whom used microfinance to purchase capital for entrepreneurial ventures (MIX; 2014). Microfinance can effectively reduce poverty by targeting impoverished women, as women contribute a larger proportion of their income towards investments and productive expenditures to improve household welfare, compared to men (ILO; 2008).

On the aggregate level, microfinance can positively impact long-term economic growth by increasing financial inclusion (Sahay et al.; 2015). This impact comes particularly through increases in consumption and investment by the poor, which raise output. Agricultural productivity may also benefit from microfinance, as poor farmers can use microcredit to purchase better farming equipment (Tenaw, Islam; 2009).

Section II: Where has microfinance gone wrong?

The theoretical construct of microfinance may have great potential for reducing poverty. However, varying microfinance success rates show that the execution of microfinance is not fully exploiting this potential. Below are some major reasons for this setback:

Profit-focused MFIs

The global percentage of profit-oriented MFIs was 43% in 2011, a nine percentage point increase from 2001 (MIX; 2013). Stronger profit orientation is associated with significantly higher interest rates being levied on microcredit clients (Roberts; 2013). This dark side of microfinance was evident in Andhra Pradesh, India, where several local MFIs engaged in over-lending, imposed exorbitant interest rates, and used coercion to recover loans. Such misconduct resulted in a wave of suicides among the poor, as they could not bear the burden of over-indebtedness (Shylendra; 2006). Alarmingly high interest rates on micro-loans are also causing similar problems in Latin America and the Caribbean, where debt accumulation among the poor is intensifying the vicious cycle of poverty (Campion et al.; 2010).

Profit-seeking MFIs also tend to have a lopsided focus on providing microcredit, as it is more profitable. This has resulted in the neglect of other services, particularly microsavings (Vos et al.; 2015). Microsavings services enable the poor to better cope with income shocks and prevent asset depletion when repaying debts. So, less emphasis on microsavings than on microcredit can worsen poverty, especially for extremely poor clients.

Non-productive use of microfinance by the poor

There is also a critical problem on the receiving end of microfinance. Many microfinance clients underuse microsavings services because they are not well informed about the importance and effective usage of savings. This has been shown by a recent behavioural diagnosis, which found that low savings behaviour among microfinance clients is due to the lack of an intention or plan for how to use savings accounts (Fiorillo et al.; 2014).

Additionally, the bulk of microcredit goes towards funding consumption: in South Africa, 94% of microcredit is used for consumption instead of productive investment (Hickel; 2015). Among those who take up micro-loans for entrepreneurship purposes, only seasoned entrepreneurs tend to reap the benefits of successful investments, as they apply better business strategies (Banerjee et al.; 2014). On the contrary, those without adequate knowledge of business management are unable to use microcredit effectively to boost their income.

Limited reach of target clients

The outreach of microfinance has improved since the 1970s, but its reach of target clients in some parts of Africa, the Middle East and South Asia remains limited. Microfinance is almost non-existent in many poverty-stricken and war-torn areas, largely due to political hegemony and terrorism. For example, the provision of microfinance services to the rural poor in Afghanistan is hampered by security challenges (Kantor, Andersen; 2010). Geographical constraints also play an important role. Establishing microfinance services in remote rural areas involves large infrastructural and operational costs, which make it financially unfeasible for many MFIs (Daley-Harris, Awimbo; 2011).

Moreover, the cultural suppression of women's rights in orthodox societies restricts MFIs from helping poor women achieve financial independence. For instance, microcredit services in Nepal have failed to reach poor rural women who suffer from gender-based discrimination in their societies (Basnet; 2007). This prolongs the poverty problem, as the empowerment of women is necessary for a nation's economic development (Chadha; 2006).

Financial problems for MFIs

The inability to stomach operational costs in rural areas highlights a structural problem within some MFIs. While most developed MFIs are financially stable, emerging and donor-dependent MFIs in poor regions, such as those in Sub-Saharan Africa, often lack financial funds. Weak portfolio management, substandard governance and insufficient human resources contribute to the undercapitalisation of these MFIs (Mersland, Strom; 2009).

Cost inefficiencies also tend to jeopardize the financial status of MFIs. A Director at the Central Bank of Nigeria, Alhaji Ahmed Abdullahi, has identified non-performing loans and high overheads as major issues for Nigerian MFIs (Vanguard; 2015). If such financial problems continue to plague the microfinance industry, its provision of financial support to the poor will eventually be unsustainable.

Section III: How can microfinance be improved?

Increased regulation of MFIs

To increase the effectiveness of microfinance, constant monitoring of MFIs is vital. Currently, MFIs in most countries are supervised either by the central bank or by a government ministry (Staschen; 2003). This is problematic, as central banks and ministries tend to have specific objectives that do not solely focus on economic development. MFIs should instead be regulated independently by national and regional governing bodies led by developmental economists. Additionally, a clear regulatory framework promoting transparency, frequent audits, and surprise on-site visits by regulatory bodies would help curb misconduct among MFIs (Berenbach, Churchill; 2011).

Internal regulation is also necessary and can be achieved by giving poor clients a significant stake in MFIs. This will help prevent the misappropriation of resources and encourage their use to benefit the poor. The structure of Grameen Bank is a good example of this: 94% of the bank's equity is owned by its borrowers, which has resulted in relatively good management of funds and high transparency (Rahman, Nie; 2011).

Separate divisions in MFIs

As discussed in Section II, profit-oriented MFIs are inclined to focus on microcredit services and neglect microsavings. Hence, within MFIs, it would be beneficial to have separate divisions that focus on the different branches of microfinance. Having separate divisions with specific goals will increase the balance of services provided and raise welfare through adequate focus on microsavings.

A successful example of this is Bolivia's BancoSol, which consists of two different wings, one focusing on microcredit and the other on microsavings (Gonzalez-Vega et al.; 1996). BancoSol's well-managed portfolios of savings and loans have resulted in the growth of the incomes and assets of its borrowers (Mosley; 2001).

Financial education and skill workshops

All MFIs should also incorporate education on financial planning and entrepreneurial skill workshops to complement their main services. Financial literacy among microfinance clients is essential to prevent exploitation and improve financial inclusion (Microfinance Africa; 2010). The Aspire programme by XacBank in Mongolia has explored this aspect by providing financial education to teenage girls from poor families. The programme resulted in the girls showing increased savings behaviour and higher confidence in questioning bank practices (Tower, McGuinness; 2011).

Skill workshops are particularly helpful in creating a sustainable income path for microfinance clients. During TEDxBoston 2010, Vibha Pingle spoke about her organisation Ubuntu at Work's initiative of building Baobab workspaces where poor women in rural areas learn business skills and make products. Professional volunteers then help these women identify business opportunities and retail their products internationally (TEDxBoston; 2010). MFIs, especially those lacking manpower, could collaborate with social enterprises like Ubuntu at Work to equip their low-income clients with the knowledge and skills to gradually escape poverty.

Combined effort from MFIs, private sector organisations and governments

Extending microfinance to those in need requires effort from all sectors. More MFIs should collaborate with private sector organisations to develop digital finance technologies in order to reach a wider range of poor communities. In Sub-Saharan Africa, digital finance in the form of mobile money has contributed significantly towards the financial inclusion of the poor (Kunt et al.; 2015). As shown in the graph below, around one-third of financially included Sub-Saharan African adults use mobile money accounts. Digital finance can overcome geographical constraints and security issues, as it does not require physical visits to MFIs. Hence, its adoption should be replicated in other regions that lack financial access, especially the Middle East, which had the lowest account penetration in 2014.

Kenya is one of the few developing countries to have significantly ventured into mobile banking (M-PESA) for the poor. A case study on the impact of M-PESA on microfinance has demonstrated that M-PESA enables Kenyan MFIs to reach the unbanked in very remote areas, where operational costs and risks are too high (Nzioka; 2010). Apart from improving the outreach of microfinance, digital finance technologies like M-PESA also reduce operational costs for MFIs, as most transactions can be done online. However, such technologies should be monitored closely by microfinance regulatory bodies to ensure financial security.

To reach poor women in culturally restricted communities, MFIs and local governments should co-organise marketing campaigns on gender empowerment (ILO; 2008). These campaigns would enlighten men and, more importantly, women about their right to be financially independent. This would then pave the way for microfinance programmes to positively impact women's lives and improve poverty levels.

Governments and private commercial banks should also allocate a reasonable portion of their budgets towards financing MFIs, so that MFIs have a solid flow of income to operate in remote and high-risk areas. This funding could also be used to train MFI staff and increase the efficiency of their services. An example of such an initiative is by ICICI Bank, which provides financial assistance in the form of term loans to selected MFIs in India. It also offers cash-management services, customised current accounts and treasury products for MFIs to invest their liquid funds in and earn returns (ICICI Bank).

Section IV: Conclusion

The concept of microfinance has the potential to eradicate global poverty. However, there are many loopholes in the execution of microfinance programmes by MFIs. Since microfinance can benefit both poor individuals and the economy as a whole, MFIs, private commercial banks, and governments should work together to ensure that microfinance really helps the poor.

There may be unavoidable issues such as terrorism and corruption that obstruct the proper implementation of microfinance, but what matters is that we leave no stone unturned in realising the microfinance dream.

(1998 words)

Bibliography

1. Banerjee et al. (2014). Does Microfinance Foster Business Growth? The Importance of Entrepreneurial Heterogeneity: National Bureau of Economic Research. (pdf) Available at: <http://web.business.queensu.ca/faculty/jdebettignies/docs/Breza14.pdf> (Accessed: 17 February 2016)

2. Basnet, X. (2007). Microcredit Programs and their Challenges in Nepal: Duke University. (pdf) Available at: <https://econ.duke.edu/uploads/assets/dje/2007_Symp/Basnet.pdf> (Accessed: 1 March 2016)

3. Berenbach, S. & Churchill, C. (2011). Regulation and Supervision of Microfinance Institutions: Experience from Latin America, Asia, and Africa. The Microfinance Network Occasional Paper No. 1. (pdf) Available at: <https://centerforfinancialinclusionblog.files.wordpress.com/2011/10/regulation-and-supervision-of-microfinance-institutions.pdf> (Accessed: 14 February 2016)

4. Campion et al. (2010). Interest Rates and Implications for Microfinance in Latin America and the Caribbean. IDB Working Paper Series: Inter-American Development Bank. (pdf) Available at: <http://idbdocs.iadb.org/wsdocs/getdocument.aspx?docnum=35121757> (Accessed: 19 February 2016)

5. Chadha, S. (2006). Innovative Strategy for Developing Women Entrepreneurship & Gender Equality in Nepal. Innovation at work: national strategies to achieve gender equality in employment. (pdf) Available at: <http://www.un.org/en/ecosoc/meetings/2006/hls2006/documents/Chadha's%20paper.pdf> (Accessed: 5 March 2015)

6. Daley-Harris, S. & Awimbo, A. (2011). New Pathways Out of Poverty. USA: Kumarian Press.

7. Fiorillo, A. et al. (2014). Applying Behavioural Economics to Improve Microsavings Outcomes: ideas42. (pdf) Available at: <http://www.ideas42.org/wp-content/uploads/2015/05/Applying-BE-to-Improve-Microsavings-Outcomes-1.pdf> (Accessed: 28 February 2016)

8. Friedman, M. (1957). A Theory of the Consumption Function. National Bureau of Economic Research: Princeton University Press.

9. Gonzalez-Vega et al. (1996). BANCOSOL: The Challenge of Growth for Microfinance Institutions. Economics and Sociology Occasional Paper No. 2332: Ohio State University.

10. Gorski, P.C. (2010). The myth of the 'culture of poverty'. In K. Finsterbusch (Ed.), Annual Editions: Social Problems. Boston, MA: McGraw-Hill.

11. Hickel, J. (2015). The Microfinance Delusion: who really wins? The Guardian. Retrieved from: <http://www.theguardian.com/global-development-professionalsnetwork/2015/jun/10/the-microfinance-delusion-who-really-wins>

12. ICICI Bank. Financial Assistance to Microfinance Customers. Retrieved from: <http://www.icicibank.com/rural/microbanking/microcredit.page>

13. ILO. (2008). Small change, Big changes: Women and Microfinance. International Labour Organization, Geneva. (pdf) Available at: <http://www.ilo.org/wcmsp5/groups/public/@dgreports/@gender/documents/meetingdocument/wcms_091581.pdf> (Accessed: 19 February 2016)

14. Kantor, P. & Andersen, E. (2010). Building a Viable Microfinance Sector in Afghanistan. Briefing Paper Series: Afghanistan Research and Evaluation Unit. (pdf) Available at: <http://www.areu.org.af/Uploads/EditionPdfs/1001E-Building%20a%20Viable%20Microfinance%20Sector%20in%20Afghanistan%20BP%202010.pdf> (Accessed: 1 March 2016)

15. Kunt et al. (2015). The Global Findex Database 2014: Measuring Financial Inclusion around the World. Policy Research Working Paper 7255: World Bank, Washington, DC.

16. Mersland, R. & Strom, R. (2009). Performance and governance in microfinance institutions. Journal of Banking and Finance 33: 662-669. (pdf) Available at: <http://brage.bibsys.no/xmlui/bitstream/handle/11250/135966/Mersland_Performance_2009.pdf?sequence=1> (Accessed: 2 March 2016)

17. Microfinance Africa. (2010). The Need for Financial Literacy in Microfinance and Its Impact. Available at: <http://microfinanceafrica.net/news/the-need-for-financial-literacy-in-microfinance-and-its-impact/>

18. MIX Premium Market Intelligence. (2013). Retrieved from <http://www.mixmarket.org/MIX_Premium_Market_Intelligence_Reports>

19. MIX Premium Market Intelligence. (2014). Retrieved from <http://www.mixmarket.org/MIX_Premium_Market_Intelligence_Reports>

20. Mosley, P. (2001). Microfinance and Poverty in Bolivia. Journal of Development Studies, Vol. 37, No. 4, pp. 101-132.

21. Nzioka, D. K. (2010). Impact of Mobile Banking on Microfinance Institutions: A Case Study of Small and Micro Enterprise Program (SMEP), Kenya: Southern New Hampshire University. (pdf) Available at: <http://academicarchive.snhu.edu/bitstream/handle/10474/1646/sced2010nzioka.pdf?sequence=2> (Accessed: 29 February 2016)

22. Rahman, R. & Nie, Q. (2011). The Synthesis of Grameen Bank Microfinance Approaches in Bangladesh. International Journal of Economics and Finance, Vol. 3, No. 6: Canadian Center of Science and Education. (pdf) Available at: <http://ccsenet.org/journal/index.php/ijef/article/viewFile/12701/8904> (Accessed: 2 March 2016)

23. Roberts, P.W. (2013). The Profit Orientation of Microfinance Institutions and Effective Interest Rates. World Development, Vol. 41, pp. 120-131: Elsevier Ltd. (pdf) Available at: <http://goizueta.emory.edu/faculty/socialenterprise/documents/profit_orientation_of_microfinance.pdf> (Accessed: 3 March 2016)

24. Sahay et al. (2015). Financial Inclusion: Can it Meet Multiple Macroeconomic Goals? IMF Staff Discussion Note: IMF. (pdf) Available at: <https://www.imf.org/external/pubs/ft/sdn/2015/sdn1517.pdf> (Accessed: 18 February 2016)

25. Shylendra, H.S. (2006). Microfinance Institutions in Andhra Pradesh: Crisis and Diagnosis. Volume 41, No. 20: Economic and Political Weekly.

26. Staschen, S. (2003). Regulatory Requirements for Microfinance: A Comparison of Legal Frameworks in 11 Countries Worldwide. Division 41, Economic Development and Economic Promotion: GTZ. (pdf) Available at: <http://www.bu.edu/bucflp/files/2012/08/Regulatory-Requirements-for-Microfinance.pdf> (Accessed: 18 February 2016)

27. TEDxBoston. (2010). Vibha Pingle – Beyond Microfinance. TEDxTalks. Available at: <https://www.youtube.com/watch?v=fACagX2Etxo>

28. Tenaw, S. & Islam, K.M.Z. (2009). Rural Financial Services and Effects of Microfinance on Agricultural Productivity, and Poverty. SARD-Climate D9: University of Helsinki. (pdf) Available at: <http://www.helsinki.fi/taloustiede/Abs/DP37.pdf> (Accessed: 19 February 2016)

29. Tower, C. & McGuinness, E. (2011). Savings and Financial Education for Girls in Mongolia: Impact Assessment Study: Microfinance for Opportunities.

30. Vanguard. (2015). Undercapitalisation threatening Microfinance Banks. Available at: <http://www.vanguardngr.com/2015/09/undercapitalisation-threateningmicrofinance-banks/>

31. Vos, R. et al. (2015). Financing for Overcoming Economic Insecurity. United Nations: Bloomsbury Publishing Plc.

32. World Bank. (2015). Global Financial Development Report 2015-2016: Long-Term Finance. Global Financial Development Report. Washington, D.C.: World Bank Group. (pdf) Available at: <http://www-wds.worldbank.org/external/default/WDSContentServer/WDSP/IB/2015/09/02/090224b0830b28d1/2_0/Rendered/PDF/Global0financi0000long0term0finance.pdf> (Accessed: 15 February 2016)

HOW MUCH WOULD YOU PAY FOR FAIR-TRADE CLOTHES?

An Empirical Investigation

Daniel J. Sonnenstuhl

B.A. Philosophy and Economics

2nd year

University College London

Explore Econ Undergraduate Research Conference

March 2016

How much would you pay for Fair-Trade Clothes? An Empirical Investigation
by Daniel J. Sonnenstuhl

Working standards as well as environmental standards in many so-called developing countries are significantly below those that apply in Europe or North America. As a result, many imported products commonly consumed in western countries, such as clothes, exotic fruits, seafood or electronic gadgets, are often produced under difficult conditions with major impacts on humans and the environment. The 'Fairtrade Foundation' tries to address this problem by setting minimum standards for the working conditions of the people involved and for the environmental compatibility of the production process. If these standards are met in the production process, the product in question may be labelled as a 'fair-trade' product. However, these higher minimum standards are not without cost, which is usually reflected in a price premium for fair-trade products. The aim of this paper is to examine the willingness to pay (WTP) such price premiums for fair-trade products, which I investigate by means of a self-designed survey. In the following, I first explain the structure of this survey as well as the process of data collection. Subsequently, I present and comment on the results of my examination.

Survey Structure

Before beginning with the actual design of the survey, it is necessary to specify the target group at which the survey will be aimed. By choosing students as my target group 1 before starting with the survey design, I was able to customise the survey appropriately. Moreover, in order to determine the WTP such price premiums for fair-trade products, I applied the 'stated preference technique' 2 and designed the survey in this light using the following structure.

First, I asked the participants several demographic questions in order to determine whether these affect their WTP. The demographic parameters included gender, age, the highest level of completed education, main current occupation and personal income. Since my target group consists of students, I decided to ask the participants about their personal income rather than their household income, as most students are in situations in which their personal income, not their household income, is of primary importance for their consumption decisions. In fact, asking students about their household income is likely to distort the results, as students commonly live in flat shares, so that their household income can be significantly higher than their personal income yet irrelevant for their personal consumption decisions. Even if students live with their parents, it is more likely that their personal budget, rather than the aggregated household income, primarily determines their personal consumption decisions.

1 The underlying reasons will be evaluated in the course of this paper.

2 SPT means that the participants of a survey are faced with a constructed yet concrete example of a good, which is usually not traded on the market. The participants are then asked to express their valuation of the good in question, usually in monetary terms. For a detailed description see: Bateman, 2002

Second, I included in the survey four different sets of information about fair-trade standards and their implications, one of which was randomly shown to each participant. The first set of information included general, unspecific information about fair trade and served as a base category. The second set of information emphasized the aspects of working conditions being improved by fair-trade standards. 3 The third set of information emphasized how participants would themselves be affected by buying fair-trade clothes, as these contain, if at all, merely a small fraction of the toxic substances commonly used, for example, during the dyeing process, which affect the wearer's health. 4 The fourth set illustrated the effect of fair-trade clothes' environmental compatibility compared to conventional clothes. 5
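The paper does not state how this randomisation was implemented; as a purely illustrative sketch (with set names chosen to mirror the treatment dummies reported in Table 2 and everything else assumed), one information set per respondent can be drawn with equal probability as follows:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

INFO_SETS = ["general", "worker", "chemicals", "environment"]

# Each respondent is shown exactly one information set, drawn with equal probability
assignments = [random.choice(INFO_SETS) for _ in range(1390)]

# With 1,390 respondents the realised shares settle near 25% per set,
# which is roughly what the treatment shares in Table 2 below suggest happened here.
for info_set in INFO_SETS:
    print(info_set, round(assignments.count(info_set) / len(assignments), 3))
```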

Third, I asked participants about their shopping habits, including frequency and spending patterns, in order to be able to account for these when estimating students' WTP price premiums. A person who shops for clothes frequently might, for example, be less willing to pay price premiums, as these could in aggregate amount to significantly more than for a person who shops for new clothes just once a year.

Fourth, the participants had to state their WTP for a pair of jeans. I chose the concrete example of a pair of jeans for the application of SPT because many people have a reasonably clear idea of how much they are willing to pay for one. Moreover, people value a pair of jeans very differently, which produces variation in valuations in the dataset. In order to reinforce this variation, which allows a more accurate estimation of whether the initial price of the jeans matters for the size of the price premium people are willing to pay, I distinguished between a no-name 6 and a branded 7 pair of jeans in the survey, asking the participants the same question for each.

Following their valuations of the initial items, the participants were asked whether they would be willing to pay more for these jeans if they were fair-trade jeans. Those participants who affirmed this question were then asked to state their exact WTP if the pair of jeans were a fair-trade pair.

3 See: ILO 2014 and ILO 2015

4 See: EFJ, 2007

5 See: Chapagain, 2005

6 I.e. any pair of jeans not from a well-known shopping label.

7 I.e. any pair of jeans sold under the name of a well-known fashion label.

Finally, I asked the participants several questions intended to make them reflect on their attitude towards fair-trade clothes. These questions allow me to account for attitudinal differences regarding fair-trade clothes when estimating their WTP a premium for the fair-trade items.

Data Collection

The process of data collection is crucial for the quality of the outcome of any survey and relates to the previously mentioned determination of the target group. The reason behind my decision to aim this survey at students is that students are usually more price-concerned than professionals. The results of this survey of students are thus likely to generalise, as students belong to the most price-concerned consumer group.

In order to obtain a reasonably large dataset, I decided to conduct the survey online, primarily by contacting randomly selected university departments across the country and asking them to circulate the link to the survey among their students. I contacted around 150 university departments and would estimate that around every second department forwarded my request to its students. In this way I was able, on the one hand, to reach a substantial number of students; on the other hand, since I randomised the contacted departments, I minimised the bias affecting the results of the survey. It is never possible to fully eliminate bias when conducting a survey, as there will always be some selection bias: one cannot randomly select who will answer.

Descriptive Statistics

I have been able to draw on a dataset initially containing 1,471 observations. However, I had to exclude 81 of the observations, as they were either incomplete or incoherent (for example, denying any willingness to pay a price premium for fair-trade products and then stating a positive amount they would be willing to pay as a premium). Hence, I have used a dataset containing 1,390 observations. Apart from the fact that significantly more female than male respondents are included, the dataset is quite likely to be representative of the British student body, as I randomly contacted various university departments throughout the country. It includes undergraduate students, graduate students and even students who have already completed a graduate degree in appropriate proportions. For a complete description of the dataset see Table 1 as well as Table 2.
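As an illustration only — this is not the author's code, and the file and column names (which follow the variable labels in Tables 1 and 2) are assumptions — the exclusion rule described above could be applied along these lines:

```python
import pandas as pd

# Hypothetical file; column names follow the variable labels in Tables 1 and 2.
raw = pd.read_csv("survey_responses.csv")   # 1,471 raw observations in the paper

complete = raw.dropna()                     # drop incomplete questionnaires

# Incoherent answers: no stated willingness to pay a premium, yet a positive premium amount
incoherent = (
    ((complete["paymore_noname"] == 0) & (complete["fair_noname"] > 0))
    | ((complete["paymore_brand"] == 0) & (complete["fair_brand"] > 0))
)
clean = complete[~incoherent]               # roughly 1,390 observations remain in the paper
clean.to_csv("survey_clean.csv", index=False)
print(len(raw) - len(clean), "observations dropped")
```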

Out of all observations, 64.2% stated that they would be willing to pay a price premium for no-name clothes, were these fair-trade clothes. However, only 56.6% stated that they would be willing to pay a price premium for fair-trade branded clothes, which is a notable difference.

Graph 1: WTP for a branded fair-trade pair of jeans

Graph 2: WTP for a no-name fair-trade pair of jeans

In addition to personal attitudes, whether the item in question is a no-name or a branded item seems to influence the consumer's WTP a price premium for it, if it were a fair-trade item. Considering that the average WTP for a branded pair of jeans amounts to £40.14, compared to an average WTP of merely £23.72 for a no-name pair, a possible explanation for the significant difference in the share of participants expressing a positive WTP might be the original price of the good in question. The higher average WTP for branded items reflects the fact that branded items are usually more expensive, and the higher price of branded clothes might reduce the willingness to pay an additional premium on top of the already more expensive item.

The second noteworthy aspect concerns a further way in which an item's initial price might affect consumers' WTP a price premium. While a higher price decreases the share of people expressing a positive WTP a premium at all, it increases the amount that those who do express a positive WTP are willing to pay. The WTP price premiums tends to increase with the price of the product in question, as the comparison between no-name and branded clothes indicates: the average premium amounts to £8.46 for no-name items (see Graph 3, red line) and to £9.72 for branded items (see Graph 3, green line).

Graph 3: Means and range of WTP for branded and no-name items

This effect might be ascribed to the fact that consumers determine their WTP price premiums for fair-trade clothes in relation to the initial price of the item. Most likely, the size of the premium consumers are willing to pay is determined by personal characteristics, but the initial price of the item is taken into account. A clearer picture of the aspects determining consumers' WTP is given in the next section by means of econometric analysis.
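The descriptive figures above can be reproduced from such a dataset with a few group summaries; the sketch below is again only illustrative and reuses the hypothetical cleaned file from the previous sketch.

```python
import pandas as pd

clean = pd.read_csv("survey_clean.csv")     # hypothetical cleaned file from the sketch above

# Shares expressing any positive WTP a premium (roughly 64.2% and 56.6% in the paper)
print(clean["paymore_noname"].mean(), clean["paymore_brand"].mean())

# Average initial WTP for the jeans themselves (roughly £23.72 vs £40.14 in the paper)
print(clean[["noname", "brand"]].mean())

# Average premium among those who expressed a positive WTP a premium (about £8.46 vs £9.72)
print(clean.loc[clean["paymore_noname"] == 1, "fair_noname"].mean())
print(clean.loc[clean["paymore_brand"] == 1, "fair_brand"].mean())
```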

Econometric Analysis

Analysing the data econometrically, it was notable that many of the included variables seem not to add further explanatory power and are not statistically significant. At this point the bias within the dataset becomes particularly clear: many variables that would intuitively help explain the differences in WTP in more detail, such as income, are not statistically significant in the analysis. Beyond the usual selection bias, this is likely because many participants may not know their exact WTP price premiums for fair-trade clothes and hence state biased values. For this reason I will focus on the analysis of selected factors determining the amount participants are willing to pay as a price premium (see Regression Table 1).

Regression Table 1

                    (1) ln_fair_noname        (2) ln_fair_brand
noname              0.0131***  (7.42)
female              0.124**    (3.00)         0.115*     (2.45)
age                 0.0244***  (4.92)         0.0200***  (3.42)
spend2              0.00290    (0.06)         0.0250     (0.50)
spend3              -0.198**   (-3.27)        -0.143     (-1.95)
spend4              0.0446     (0.55)         0.0907     (1.01)
spend5              0.184      (1.95)         0.325**    (3.06)
spend6              0.0403     (0.41)         0.0519     (0.43)
purchase_price      -0.120**   (-2.90)        -0.215***  (-4.62)
unfashion           -0.154***  (-3.34)        -0.174**   (-3.24)
brand                                         0.00760*** (7.09)
_cons               1.160***   (9.50)         1.432***   (10.57)
N                   896                       788
F                   F(10, 885) = 14.17        F(10, 777) = 16.44
R-squared           0.1380                    0.1746
Adj R-squared       0.1283                    0.1640

t statistics in parentheses
* p < 0.05, ** p < 0.01, *** p < 0.001

This table shows an OLS 8 regression of ln_fair_noname as well as ln_fair_brand and highlights some aspects of the analysis.

8 For an explanation of the variables used in the regression see Table 1 and Table 2. I used the log in this regression in order to interpret the coefficients as percentages.

First, the thesis that the initial price of the good in question influences the amount participants stated to be willing to pay as price premiums for fair-trade items is supported in this regression. The WTP increases with the initial willingness to pay, as the statistically significant coefficients on 'noname' and 'brand' clearly show. The amount individuals are willing to pay for fair-trade items increases by around 0.7% to 1.0% with every pound sterling they are initially willing to spend on the good in question. Moreover, the regression also supports the observation that very price-concerned individuals, and individuals who associate fair-trade clothes with unfashionable clothes, are likely to express a very small WTP. Two other findings are noteworthy. Age tends to increase the WTP, which is likely due to individuals becoming more concerned about their environment as they age; every additional year of age tends to increase the WTP by around 2%. In addition, females tend to express a WTP that is an astonishing 11% to 12% higher than the average WTP expressed by males. This is a huge gender-specific difference, which suggests that fair trade and a careful selection of clothes are issues of greater importance for females than for males. Finally, contrary to the hypothesis that individuals who usually spend more on clothes might be less willing to pay an additional price premium, the results suggest that the WTP such price premiums for fair-trade clothes rather increases with higher spending on clothes.
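A minimal sketch of a log-linear OLS of this kind, assuming a cleaned data file and variable names as in Tables 1 and 2 (and, for brevity, a single categorical `spend` column in place of the paper's spend2–spend6 dummies), might look as follows; it is not the author's original estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_clean.csv")                  # hypothetical cleaned file
df = df[df["fair_noname"] > 0].copy()                 # only positive premiums can be logged
df["ln_fair_noname"] = np.log(df["fair_noname"])

# Assumes a categorical `spend` column; the paper instead enters the dummies spend2-spend6.
model = smf.ols(
    "ln_fair_noname ~ noname + female + age + C(spend) + purchase_price + unfashion",
    data=df,
).fit()
print(model.summary())

# With a logged outcome, a small coefficient b on `noname` means roughly a 100*b percent
# higher premium per extra pound of initial WTP (e.g. b = 0.013 is about 1.3%).
```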

Conclusion

Although the dataset has not been as rich as I hoped, particularly in terms of the extent to which the different variables can explain the differences in WTP, it has been rich enough to draw some very interesting conclusions. First, we have seen that females express, on average, a much higher WTP for fair-trade clothes than males. Moreover, we have seen that a higher initial price of the good in question is likely to decrease the probability that individuals express a positive WTP at all, but that among those who do, the WTP is likely to increase further with an increasing price.

Table 1 – Descriptive Statistics of the Dataset

Gender (female)
Explanation: 0 if male (32.0%); 1 if female (68.0%)
Obs.: 1390   Mean: 0.68   Std. Dev.: 0.46   Min.: 0   Max.: 1

Age (age)
Explanation: in years
Obs.: 1390   Mean: 20.94   Std. Dev.: 3.60   Min.: 18   Max.: 49

Personal income in £ (inc1 till inc3)
Explanation: 1 if income ≤ 500 (71.3%); 2 if 500 < income ≤ 1,000 (20.1%); 3 if 1,000 < income (8.4%)
Obs.: 1390   Mean: 1.37   Std. Dev.: 0.63   Min.: 1   Max.: 3

Education (school, college, grschool)
Explanation: 1 if grad. from secondary school (60.6%); 2 if undergraduate degree (33.9%); 3 if graduate degree (5.6%)
Obs.: 1390   Mean: 1.46   Std. Dev.: 0.70   Min.: 1   Max.: 3

Shopping frequency (shop1 till shop4)
Explanation: 1 if frequency > once a month (20.5%); 2 if more than once a month > frequency > once every three months (31.5%); 3 if once every three months > frequency > twice a year (29.1%); 4 if frequency < twice a year (18.7%)
Obs.: 1390   Mean: 2.45   Std. Dev.: 1.01   Min.: 1   Max.: 4

Monthly spending on new clothes in £ (spend1 till spend6)
Explanation: 1 if spending ≤ 20 (41.2%); 2 if 20 < spending ≤ 40 (28.4%); 3 if 40 < spending ≤ 60 (13.1%); 4 if 60 < spending ≤ 80 (6.9%); 5 if 80 < spending ≤ 100 (5.8%); 6 if spending > 100 (4.3%)
Obs.: 1390   Mean: 2.20   Std. Dev.: 1.41   Min.: 1   Max.: 6

Willingness to pay for a no-name jeans (noname)
Explanation: the willingness to pay for a no-name pair of jeans
Obs.: 1390   Mean: 23.72   Std. Dev.: 11.19   Min.: 0   Max.: 100

Willingness to pay for a branded jeans (brand)
Explanation: the willingness to pay for a branded pair of jeans
Obs.: 1390   Mean: 40.14   Std. Dev.: 23.19   Min.: 0   Max.: 200

Willingness to pay a price premium for a no-name jeans (paymore_noname)
Explanation: whether the participant is willing to pay a price premium for no-name fair-trade clothes
Obs.: 1390   Mean: 0.64   Std. Dev.: 0.47   Min.: 0   Max.: 1

Willingness to pay a price premium for a branded jeans (paymore_brand)
Explanation: whether the participant is willing to pay a price premium for branded fair-trade clothes
Obs.: 1390   Mean: 0.56   Std. Dev.: 0.49   Min.: 0   Max.: 1

Amount of price premium willing to pay for a no-name jeans (fair_noname)
Explanation: the willingness to pay a price premium among the observations which expressed a positive willingness to pay the premium (64.2%)
Obs.: 896   Mean: 8.46   Std. Dev.: 6.04   Min.: 1   Max.: 50

Amount of price premium willing to pay for a branded jeans (fair_brand)
Explanation: the willingness to pay a price premium among the observations which expressed a positive willingness to pay the premium (56.6%)
Obs.: 788   Mean: 9.72   Std. Dev.: 6.77   Min.: 1   Max.: 30

Table 2 – Descriptive Statistics of attitudinal Variables
(each variable is based on 1390 observations; figures are the percentage share of participants)

Considers clothes a status symbol (status): 31.7%
Has at least once bought fair-trade clothes (bought): 46.9%
Knows how the clothes they have bought had been produced (know_prod): 16.6%
Thinks their own consumption decision changes the way clothes are produced (make_diff): 60.0%
Thinks the aggregate consumption decisions of all consumers change the way clothes are produced (ag_make_diff): 87.4%
Associates fair-trade clothes with over-expensive prices (overexp): 47.8%
Associates fair-trade clothes with unfashionable clothes (unfashionable): 24.1%
Is sceptical whether fair-trade clothes are indeed fair-traded and not just an advertising plot (ad_plot): 50.1%
Had been familiar with fair trade before participating in the survey (aquainted): 87.1%
Purchase decision determined by price (purchase_price): 69.4%
Purchase decision determined by quality (purchase_quality): 46.8%
Purchase decision determined by ethics (purchase_ethics): 6.1%
Purchase decision determined by look/style (purchase_look): 72.6%
Purchase decision determined by brand (purchase_brand): 5.4%
Purchase decision determined by practical aspects (purchase_pract): 15.1%
Shown information about chemicals at the beginning of the survey (chemicals): 22.7%
Shown information about the environment at the beginning of the survey (environment): 26.6%
Shown information about working standards at the beginning of the survey (worker): 25.0%
Shown general information at the beginning of the survey (general): 25.6%

Bibliography

Bateman, I.J. et al. 2002. Economic Valuation with Stated Preference Techniques: A

Manual, Edward Elgar Publishing, London.

Chapagain, A. K. et al. 2005. The Water Footprint of Cotton Consumption. Value of Water

Research Report Series No. 18. Unesco-IHE. Delft.

Accessed via (09/03/16):

<http://waterfootprint.org/media/downloads/Report18.pdf>

Carlsson, J. and Vikner, A. 2011. Consumer Preferences for Eco and Fair Trade Clothes in

Gothenburg. University of Gothenburg. School of Economic Business and Law. Gothenburg.

Accessed via (09/03/16):

<https://gupea.ub.gu.se/bitstream/2077/28863/1/gupea_2077_28863_1.pdf>

Dudley, R. et al. 2013. The hidden Cost of fast Fashion: Worker Safety. Bloomberg Business.

Accessed via (09/03/16):

<http://www.bloomberg.com/bw/articles/2013-02-07/the-hidden-cost-of-fast-fashion-workersafety>

EJF. 2007. The Deadly Chemicals in Cotton. Environmental Justice Foundation in collaboration with Pesticide Action Network UK, London.

Accessed via (09/03/16):

<http://www.pan-uk.org/attachments/125_the_deadly_chemicals_in_cotton_part1.pdf> and

<http://www.pan-uk.org/attachments/125_the_deadly_chemicals_in_cotton_part2.pdf>

ILO. 2015. Insights into working conditions in India’s garment industry. International Labour

Office, Fundamental Principles and Rights at Work (FUNDAMENTALS), Geneva

Accessed via (09/03/16):

<http://www.ilo.org/wcmsp5/groups/public/---ed_norm/--declaration/documents/publication/wcms_379775.pdf>

ILO. 2014. Wages and working hours in the textiles, clothing, leather and footwear industries.

Geneva

Accessed via (09/03/16):

<http://www.ilo.org/wcmsp5/groups/public/@ed_dialogue/@sector/documents/publication/wcms_300463.pdf>

CAN EARLY CHILDHOOD INTERVENTION PROGRAMS BE SUCCESSFUL ON A LARGE SCALE: EVIDENCE FROM HEAD START

Teresa Steininger

B.Sc Economics

2nd year

University College London

Explore Econ Undergraduate Research Conference

March 2016

Can early childhood intervention programs be successful on a large scale: Evidence from Head Start

Introduction

The question of when to invest in children to yield the highest return has received much attention in the past two decades. A common opinion that investments in disadvantaged young children have the potential to be the most productive has been established (e.g.

Heckman 2000). Small scale, experimental early intervention programs, usually consisting of enriched preschool centres sometimes coupled with home visits, have shown substantial returns to society. The flagship intervention, the Perry Preschool Project, conducted 1962-65 in Michigan, US, is frequently cited to have an eight-to-one benefit-cost ratio, although there have been some downward revisions in recent years (e.g. Anderson 2008 and Heckman et al

2010). These estimates include higher earnings and public benefits such as reduction in crime cost and welfare savings. Head Start, henceforth HS, a large scale, “watered down” version of the experimental interventions, has been unable to replicate these results. In this essay I will argue that it can nonetheless be considered a successful intervention.

I will outline the program, summarize the literature on HS’s short and long-term effects and discuss what makes an early intervention program successful. I will do so by addressing “HS fade-out” and comparing benefit-cost analyses of the program. Finally, I will investigate the issues associated with the time gap between investment and observation of the net benefit and suggest possible solutions.

Short and long-term effects of Head Start

HS is a federal program targeted at promoting the school readiness of disadvantaged preschool children from families at or below the Federal Poverty Guidelines across the United

States. Other than educational services, the program provides health, nutrition and social services. In the literature summary below, the estimates will concern three to five-year-olds.

The program was launched in 1965 and has since seen large increases in enrolment, funding and intensity. In 2014, 930,000 children were enrolled in HS and it was appropriated $8.6 billion (2014 dollars). 43% of those enrolled in the preschool centre-based option were in care for at least six hours a day, five days a week (Office of Head Start 2014).

Currie and Thomas (1995), henceforth CT (1995), find a significant positive impact of HS on schooling attainment and grade repetition of white children. White HS children aged nine or above were 47% less likely to have repeated a grade than other white children, although

similar effects are not reported for African-American children. These racial differences may lie in the fact that African-American children disproportionately come from more disadvantaged homes, in poorer neighbourhoods, and are systematically more likely to enter poor schools, causing a quicker fade-out. The underlying hypothesis, which they find support for in Currie and Thomas (2000), is that the effects of the intervention largely depend on children’s experience after leaving the program.

Garces et al (2002) estimate longer-term effects and report that white HS children are 20% more likely to graduate from high school and 28% more likely to attend college than their siblings who attended no preschool. While HS participation significantly reduces the probability of committing a crime in adulthood for African-Americans, no such effect was found for whites. Finally, they find evidence of positive spillover effects of HS children to their younger siblings regarding the likelihood of committing a crime in adulthood, indicating that their crime estimates should be treated as a lower bound.

Deming (2009) finds an initial strong positive impact on test scores which fades out to 0.05 standard deviations at ages 11–14. This fade-out is largest for African-American children, yet they obtain slightly larger long-term effects, including graduation rates, crime and health status. Long-term effects are similar by race averaging at 0.23 standard deviations, which is approximately equal to a third of the outcome gap between HS and other children in the sample. Deming reports no significant effect for crime and limited, inconsistent evidence for spillover effects.

Carneiro and Ginja (2014), henceforth CG, estimate the mid- to long-term effects of HS on males. They find large and significant effects of HS on behaviour and health which persist into the late teenage years and early adulthood. They find no such effects for cognitive outcomes, consistent with the Head Start Impact Study (HSIS 2010). At ages 12-13 HS reduces obesity and the prevalence of chronic conditions and leads to a decrease in the behaviour problems index. At ages 16-17 HS reduces the probability of suffering from depression and obesity. At ages 20 or 21, the only significant effect is that on criminal activity: HS participation reduces the probability of being sentenced for a crime by 22%.

HSIS (2010) is the first study to examine the effects of a large scale pre-school intervention program on an experimental basis, using data collected from 2002-2006. The study finds a short-term positive effect on cognitive measures which becomes insignificant by the end of

1st grade for most measures. Behavioural and social-emotional outcomes are also initially

positively impacted, yet, again “[b]y 1st grade, these impacts were limited to outcomes related to parent-child relationships and parenting practices” (HSIS 2010, p. xxxvii).

Determining Head Start’s success

If short-term gains of HS do not persist into first grade, as reported in HSIS (2010), what would lead us to expect long-term gains? We have repeatedly observed short-term fade-out coupled with significant long-term gains. To reiterate, Deming (2009) finds that black participants experience the strongest fade-out and yet the largest long-term gain. Carneiro and

Heckman (2003) suggest that the current emphasis on the cognitive impact of early interventions is misplaced as “[i]t appears that early childhood programs are most effective in changing noncognitive skills” (Carneiro and Heckman 2003, p.53). The key driver of HS’s lasting outcomes is likely to be its impact on the accumulation of non-cognitive skills; midterm test scores are at most a lower bound for the program’s success.

Support for this argument can be found in the emphasis HS places on engaging families in their children’s learning. They aim to promote positive parent-child relationships, keep a high quality parent-staff relationship, help with behavioural problems and reduce parental stress

(Office of Head Start 2015). Parenting practices are crucial in children’s social-emotional development. If HS creates a lasting positive impact on these practices, as suggested by the findings of HSIS (2010), we may expect the program to affect non-cognitive skills beyond program participation. Since improved parenting practices will affect all children, this provides an explanation for spillover effects of HS children onto their siblings. Current evidence of spillover effects is inconclusive across studies; nonetheless, HS’ impact on parenting practices possibly has an important role in the program’s success and is worthy of further exploration.

From an economic perspective, the success of an intervention can be studied using benefitcost analysis. Several issues arise with this strategy. (1) Certain costs to society are hard to measure, those measures will have low accuracy and different estimates may lead to different conclusions. (2) Costs change over time – including costs of the average Head Start child and the costs to society. (3) Different cohorts may experience different impacts from HS, making any estimations based on past effects an interesting statistic, but not a complete guide for policy.

Deming (2009) calculates the predicted internal rate of return for HS participants from 1984-1990 based on a projection of adult wages from the NLSY1979. His results imply that the projected benefits per HS participant, a yearly wage gain of $1,500, exceed program costs of $6,000 (the average program cost over the time period, 2007 dollars), yielding an internal rate of return of 7.19%. This suggests that wage gains alone would make HS cost-effective.
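Deming's projection is considerably more involved, but the arithmetic behind an internal rate of return is easy to illustrate. The stylised sketch below takes the $6,000 cost and $1,500 annual wage gain from the text and assumes, purely for illustration, that the gain runs from roughly age 20 to retirement; the resulting rate is therefore only indicative of the order of magnitude rather than a replication of the 7.19% figure.

```python
def npv(rate, cashflows):
    """Net present value of (year, amount) pairs discounted back to year 0."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

def irr(cashflows, lo=1e-6, hi=1.0, tol=1e-8):
    """Find the rate where NPV crosses zero by bisection (one sign change assumed)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# One-off cost at program entry, then a constant annual wage gain once the
# participant is of working age; the amounts are from the text, the timing is assumed.
cashflows = [(0, -6000)] + [(t, 1500) for t in range(16, 62)]
print(f"implied internal rate of return ~ {irr(cashflows):.1%}")
```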

Anderson et al. (2010) study the effect of HS participation on smoking. They find that the present value of a reduction in smoking (in 2003 dollars) through program participation represents 36-141% of the program costs of an average HS participant (around $7,000 in 2003 dollars), depending on the discount rate.

CG (2014) estimate the net benefit of HS by approximating the social benefits of reduced criminal activity and obesity. They choose the cohort which entered HS in 1992 and use a discount rate of 4%. A modest estimate of crime costs yields a net benefit of $280 whereas a higher estimate yields a net benefit of $3,545 (all in 2009 dollars), suggesting an internal rate of return of at least 4%. They also calculate the net benefits of HS in 2003, assuming that the impact of HS on its participants was unchanged. Using modest crime costs, this yields a negative net benefit of $433 due to the increase in average costs of a HS child.
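The structure of such a net-benefit calculation — discounting benefits that accrue years after program entry back to the year of entry at 4% and subtracting the program cost — can be sketched as follows; the dollar amounts and lags are placeholders, not CG's actual inputs.

```python
DISCOUNT = 0.04   # CG's discount rate

def present_value(amount, years_after_entry, rate=DISCOUNT):
    return amount / (1 + rate) ** years_after_entry

# Placeholder figures: a per-child cost paid at entry and benefits that accrue years later.
program_cost = 6000
benefits = [
    ("reduced crime cost at roughly age 20", 9000, 16),   # (label, amount, years after entry)
    ("reduced obesity cost at roughly age 16", 2500, 12),
]

pv_benefits = sum(present_value(amount, lag) for _, amount, lag in benefits)
print("net benefit per child:", round(pv_benefits - program_cost, 2))
```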

Evidently, the impact of HS depends on which cohort of participants is analysed and the quality of the program, and will thus vary over time and across socioeconomic groups. The estimates, on which the benefit-cost analyses are based, come from participation in different periods. Moreover, they are calculated using econometric models which limit the sample for which the effect of HS is observed to specific cohorts: e.g. Anderson et al (2010) and Deming

(2009) calculate the effect of HS on children with siblings who did not attend any form of preschool. Hence, general statements about HS’s success require a high level of caution.

Yet, the observation of positive net benefits throughout different time periods and for different groups is a strong indicator that HS has created long-term gains for its participants and society and has passed the benefit-cost test in several time periods. This conclusion is strengthened by the fact that the analyses focus on specific positive effects of HS, thereby creating a lower bound for the total net benefit. Inferring about HS’s success today is more difficult: the program has consistently produced positive effects in the past, however average costs have continued to increase and we do not know how positively HS will impact the current cohort – the quality of the program is increasing but so is that of its alternatives (see

Ludwig and Phillips 2007). In short, the magnitude of HS’s current net effect is unclear.

However, benefit-cost analysis is insufficient for an adequate assessment of HS’ success. As suggested by Carneiro and Heckman (2003), from a policy maker’s perspective “[t]he substantial gap in time between the payment in terms of costs and the harvest of benefits requires that these benefits be substantial to justify early intervention programs” (p.55). For a

policy maker, who is in the business of staying in business, it may be hard to justify a program such as HS with voters given that (a) the short-term effects appear to be small and prone to fade-out, (b) average costs are increasing and (c) the long-term gains cannot be observed yet and may not be of the same magnitude as prior estimates. This may lead to an inefficient allocation of resources to the program.

A possible solution to the outlined policy time-gap issue may be to approximate future long-term effects by looking at current short-term effects. Ludwig and Phillips (2007) try to estimate how large a short-term effect of HS would have to be to produce sufficiently large long-term effects. They base their estimates on the correlation between early math and reading scores put forward by Krueger (2003). They conclude that, at a cost of $7,000 per child, as was approximately the case in 2007, the evidence suggests that "the program would pass a benefit-cost test if the short-term impacts on achievement test scores were equal to around 0.1 to 0.2 standard deviations, or maybe even much smaller still" (p.3).

However, this seemingly simple "recipe" is complicated by the different nature and magnitude of short-term and long-term impacts for early intervention programs; basing estimates of long-term gains on empirical correlations between early measures of cognitive ability and life outcomes is likely to lead to incorrect calculations of the impact of early interventions. Ultimately, we require a better understanding of how cognitive and non-cognitive skills at a young age translate into life outcomes. To generalise even further – what we need to know in order to accurately predict whether an early intervention will be successful is (a) which skills acquired at a young age actually matter for life outcomes such as crime rates, obesity and economic success, and (b) how preschool programs can produce these skills, including channels such as their impact on parenting practices. This is a call for further research on the human capital production function and for strengthened interdisciplinary research between economics and behavioural and cognitive psychology to better understand the mechanisms by which skills translate into life outcomes.

Conclusion

There is strong evidence that in the past HS has been a successful intervention in terms of improving life outcomes of the disadvantaged and creating a net benefit to society. To determine the success of an early childhood intervention it is insufficient to look at short to mid-term measures of cognitive ability. This is because of the discrepancy between short-term and long-term impacts, and the support for the hypothesis that HS is most effective in altering non-cognitive skills. Due to the time lag between the investment and the observation of long-

term effects, it is hard to say how cost-effective HS currently is, which is what we ultimately care about. Therefore, further research on the translation of early cognitive and non-cognitive skills into life outcomes should be conducted.

Word count: 2193

References

Anderson, M. L., 2008. Multiple Inference and Gender Differences in the Effects of Early

Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training

Projects. Journal of the American Statistical Association , 103 (484), pp.1481-1495.

Anderson, K., Foster, J. and Frisvold, D., 2010. Investing In Health: The Long-Term Impact

Of Head Start On Smoking. Economic Inquiry, Western Economic Association International ,

48(3), pp. 587-602, 07.

Carneiro, P. and Ginja, R., 2014. Long Term Impacts of Compensatory Pre-School on Health and Behavior: Evidence from Head Start, American Economic Journal: Economic Policy ,

6(4).

Carneiro, P., and Heckman, J. 2003. Human Capital Policy. IDEAS Working Paper Series from RePEc.

Currie, J. and Thomas, D., 1995. Does Head Start Make a Difference? American Economic

Review , 85(3): 341-364.

Currie, J. and Thomas, D., 2000. School Quality and the Longer-Term Effects of Head

Start. Journal of Human Resources , 35 (4), pp.755-774.

Deming, D., 2009. Early childhood intervention and life-cycle skill development: Evidence from Head Start. American Economic Journal: Applied Economics , pp. 111-134.

Garces, E., Thomas, D., and Currie, J., 2002. Longer Term Effects of Head Start. American

Economic Review , 92(4), pp.999-1012.

Heckman, J., 2000. Policies to foster human capital. Research in economics , 54 (1), pp.3-56.

Heckman, J., Moon, S., Pinto, R., Savelyev, P., and Yavitz, A., 2010. A New Cost-Benefit and Rate of Return Analysis for the Perry Preschool Program: A Summary. IDEAS Working

Paper Series from RePEc.

Krueger, A. B., 2003. Economic considerations and class size. Economic Journal .

Ludwig, J. and Phillips, D. A., 2007. The benefits and costs of Head Start (No. w12973).

National Bureau of Economic Research.

Office of Head Start. 2014. United States Department of Health and Human Services. https://eclkc.ohs.acf.hhs.gov/hslc/data/factsheets/2014-hs-program-factsheet.html (last accessed on 20 Feb 2016).

Office of Head Start. 2015. United States Department of Health and Human Services. http://eclkc.ohs.acf.hhs.gov/hslc/tta-system/family/rtp-series.html (last accessed on 2 March

2016).

Puma, M., Bell, S., Cook, R., Heid, C., Shapiro, G., Broene, P., Jenkins, F., Fletcher, P.,

Quinn, L., Friedman, J. and Ciarico, J., 2010. Head Start Impact Study. Final

Report. Administration for Children & Families .

Rolnick, A. and Grunewald, R., 2003. Early childhood development: Economic development with a high public return. The Region, 17(4), pp.6-12.

ECONOMIC COSTS OF MENTAL ILLNESS IN THE UNITED KINGDOM

THE CASE FOR INTERVENTION

Leonie Westhoff

BA Philosophy and Economics

Second Year

University College London

Explore Econ Undergraduate Research Conference

March 2016

Of all people suffering from mental illness in the United Kingdom, three quarters are currently not being treated. In this essay, I argue that this treatment gap has significant economic costs. First, I give an overview of the mental health treatment gap in the United

Kingdom. Second, I lay out the specific economic costs associated with mental illness, focusing mainly on costs to care systems and costs associated with productivity loss. Third, I suggest policy measures to be taken to reduce these negative effects. I conclude that policy measures to combat mental illness are not only a necessity from a moral point of view, but are also economically efficient.

The term “mental illness” is often used ambiguously – In this essay, I take it to encapsulate a range of health issues, from mild to moderate disorders such as anxiety and depression to more severe conditions such as schizophrenia, as well as substance abuse issues. Following this definition, in the United Kingdom, nearly one person in four suffered from a psychiatric disorder in 2007 (Adult Psychiatric Morbidity Survey 2007). This number is projected to be increasing - the annual growth rate of mental health disorders since 2000 is 5.4% (Oxford

Economics 2007: 3). Of those suffering from mental illness, only around a third of those with mild to moderate disorders are currently being treated, while only half of those with severe disorders are in treatment (OECD 2014: 14). The most common form of treatment is prescription of anti-depressants, which applies to 70% of patients; only 29% of those in treatment receive therapy, or therapy and anti-depressants. This mental health treatment gap also extends to children: nearly 10% of all children in the UK suffer from a clinically diagnosable mental health condition. However, 60-70% of these do not receive appropriate intervention sufficiently early to prevent a negative impact of the disorder on their adult life

(Centre for Mental Health Care 2014: 5).

Mental illness is often more debilitating than physical illness: a person with depression is on average 50% more disabled, in terms of quality of life, than somebody with angina, arthritis or diabetes. It also accounts for 23% of the burden of disease, taking into account both morbidity of diseases and their causing premature death, in the UK. Despite these facts, only

13% of NHS budget is spent on mental illness treatment (LSE CEP 2012: 10). While spending on mental health care has been increasing at a rate of 5.8% since 2001/02, this does not correspond to the overall increase in spending on health care at 7.1% (Oxford Economics

2007: 16).

Considering that people with mental illness die, on average, 10-20 years earlier (Chesney et al 2014: 153), the fact that so many of them are left untreated certainly seems like a large social and ethical injustice. However, this essay will focus on the economic costs of the treatment gap. In total, mental illness costs the UK economy an estimated 105 billion pounds a year – this amounts to 4.5% of GDP (OECD 2014: 13). Figure 1 shows that this cost is not only immense in absolute terms, but also large when compared to other European economies.

Figure 1: Costs of Mental Illness in the UK Compared to Other European Countries

Source: OECD (2014), “Mental Health and Work: United Kingdom”, Mental Health and

Work, OECD Publishing

The economic costs of mental illness can broadly be divided into two categories: costs to care systems and costs associated with productivity loss. I examine each of these in turn.

With regard to the cost of mental illness to care systems, I will first consider direct treatment costs to the health care system. In the United Kingdom, the extra health care necessitated by mental illness costs the NHS 10.4 billion pounds a year. 3.1 billion pounds of these are associated with primary care (GP consultations, prescriptions); the remaining 10.9 billion pounds can be attributed to secondary and tertiary care, mostly for those with more severe disorders and for the elderly (LSE CEP 2012: 10). However, beyond these direct costs, research shows that mental illness is inextricably linked to physical illness: in fact, for a wide range of conditions, it increases the cost of physical health care by an estimated 45%, amounting to a further cost of 8 billion pounds a year (LSE CEP 2012: 12).

Further costs of care systems due to mental illness occur in the form of disability benefits.

The OECD (2014) reports that 1% of the UK working age population claim disability benefits – this is double the OECD average and a higher number than in any other OECD country. Of these claims, 41% are attributed to mental health issues – the number of

Incapacity Benefit recipients due to mental and behavioural disorders is similar to the number of Job Seekers Allowance recipients (Oxford Economics 2007: 7). In addition, mental illness also incurs significant costs to other social care systems. Research has shown statistically significant relationships between mental illness and various social issues, including homelessness, substance dependence, child abuse and neglect, divorce and motor vehicle accidents (Frank 1999: 7). Most striking perhaps is the relation between mental illness and criminal activity. Fergusson et al. (2005) show that there is a significant relation between childhood conduct problems and the level of crime committed later in life – in their research, children in the most disturbed 5% of the cohort had outcome rates in terms of criminal activity between 1.5 and 19 times higher than those of the 50% least disturbed. With regard to the UK, this relation appears to be confirmed: 30% of all crime is committed by people who had a clinically diagnosable conduct disorder as a child or adolescent (LSE CEP 2012: 11).

Considering the growth rate of mental illness in the UK, it should not be surprising that the costs of mental health care are projected to increase significantly. Figure 2 compares the current costs to health and social care systems of specific mental health disorders with those projected to 2026.

Figure 2: Current and Projected Future Costs of Mental Health by Disorder to 2026

Source: Knapp, M. and McDaid, David and Parsonage, M. (2011): “Mental Health

Promotion and Prevention: The Economic Case“ based on data in King’s Fund (2008):

“Paying the Price: The Cost of Mental Health Care to 2026

Beyond costs to care systems, further economic costs of mental illness can be attributed to productivity loss. Mental illness in children negatively impacts their ability to accumulate human capital later in life (Currie and Stabile 2004: 2). More generally, research by Hamilton et al. (1997) shows that good mental health has a statistically significant positive effect on employment; they also link it to a reduction in the number of sick days and increases in market wages offered to employees. With regard to the UK context, the OECD (2014) reports that unemployment rates for people with severe mental disorders are five times as high as for those without. To employers, mental illness causes severe economic costs due to reduced productivity of workers, negative externalities imposed on other workers, high staff turnover and sickness absence, 40% of which is caused by mental illness (Royal College of Psychiatrists 2008: 13). Figure 3 relates the total number of working days lost to the number accounted for by mental illness.

Figure 3: Working Days Lost Due to Mental Illness Relative to All Working Days Lost

Source: Oxford Economics (2007): “Mental Health in the UK Economy”

In total, the cost of mental illness to employers is estimated to be 26 billion pounds per year

(Sainsbury Centre for Mental Health 2007: 1) – of these, impaired work efficiency accounts for 15.1 billion pounds, or alternatively, 605 pounds per employee (Royal College of

Psychiatrists 2008: 35). It therefore seems to be long overdue to recognize that mental health is in fact a major factor of production.

Having thus outlined the economic costs incurred by mental illness, I now turn to outlining possible policy intervention measures. There has been extensive research conducted on a wide range of mental health interventions in the UK to demonstrate their economic efficiency

– see, for instance, Knapp et al (2011). The scope of this paper does not allow for a discussion of all such measures; below, I point out the ones I deem to be most effective.

First, it is evident that there remains a stigma attached to mental illness. For instance, the

Royal College of Psychiatrists (2008) reports that mental illness has negative effects on employment prospects and may also lead to discrimination in the workplace. I have clearly shown that mental illness is a serious issue with adverse effects on not just well-being but also productivity, yet it remains to be treated as secondary to physical illness. As long as people suffering from mental illness feel that they cannot come forward, no policy can be completely effective. The stigma needs to be attacked through various means, ranging from education on mental illness and public awareness campaigns to a clearly communicated mental health policy from the side of the government. This will contribute to a gradual shift from treatment of mental illness to its prevention.

One policy measure that is clearly among the most effective is early investment in children’s mental health. I have previously outlined the negative effects of childhood mental illness on adult life, ranging from lost workplace productivity to criminal activity. Investment in mental health for children has immense economic benefits, including savings in future public spending, reduced use of the NHS and other health care institutions and increased earnings, associated with the impact of improved mental health on education (Centre for Mental Health

2014: 20). In cases of children with moderate mental health problems prevention of conduct disorders could save 75,000 pounds per child; In the most severe cases, it could save up to

150,000 pounds (Northern Ireland Association for Mental Health 2007: 5). These savings are largely related to savings in crime, but also to future reduced treatment costs and increased lifetime earnings. A variety of child support programmes have been shown to be successful.

For instance, programmes involving parenting initiatives and social or emotional support for children have been estimated to have a return of 8 euros per euro spent on them, due to savings in crime and education (Knapp et al 2011: 40).

Another instance where intervention will be most efficient is in the workplace. A first step must be to increase awareness of the impact of working conditions on mental health. Figure 4 illustrates the high correlation between workplace factors and mental health: those who felt that their work was particularly stressful or demanding very often suffered from mental illness.

Figure 4: The Correlation between Workplace Factors and Mental Health

Source: OECD (2014), “Mental Health and Work: United Kingdom”, based on Health and

Social Care Information Centre (2007): “Adult Psychiatric Morbidity Survey”

Making employers aware of how workplace factors influence mental illness and hence productivity, and working to decrease the stigma associated with mental illness in the workplace will increase productivity in the long run. Beyond that, more concrete measures have to be taken to help people with mental illness reintegrate into the labour market and to reduce instances of anxiety or depression in the working environment. Here, a focus on individual psychological treatment and therapy has been shown to be most effective: an average spending of a mere 2,500 pounds enables someone with anxiety or stress to feel well enough to start looking for and obtaining a job. In turn, over their lifetime, the economic benefits of reducing the number of sick days an individual takes due to anxiety or depression could amount to nearly 100,000 pounds in economic output. (Oxford Economics 2007: 20-

27). In relation to this, another priority must be the training of GPs, who are often the first point of contact for mental illness sufferers. GP training does not include a compulsory mental health rotation and as such, GPs often fail to recognize a mental illness when patients do not directly report that they have one, or only talk of symptoms relating to physical illness

(LSE CEP 2012: 18).

Finally, there is arguably a lack of economic research on the topic of mental health economics (Evers et al 1997; Zechmeister et al 2008). Researchers need to both increase the scope of their research as well as evaluate the methodology of its analysis. A lot of areas of mental health issues have not yet been examined thoroughly enough from an economic perspective. For instance, it is quite likely that the estimates of the impact of mental illness on the UK economy are flawed because they rely on self-reported measures of mental health. It may be that the prevailing stigma on mental health means that sufferers are not able to recognize symptoms of mental illness, or do not feel comfortable reporting these, leading to misreporting of mental health status, and hence a downward bias on the estimate of the effect of mental illness.

In conclusion, this paper has laid out the mental health treatment gap, focusing on the United

Kingdom. Mental illness incurs significant economic costs, with regard to both costs generated by care systems and costs associated with productivity loss. I have suggested a number of policy measures to be taken to reduce these costs, specifically focusing on early investment in children and improvements in the workplace. In conclusion, mental health interventions will not only increase general well-being, but will also lead to large savings in economic cost.

Word Count: 2104

BIBLIOGRAPHY

Centre for Mental Health (2014): “Investing in Children’s Mental Health” http://www.centreformentalhealth.org.uk/investing-in-children-report (retrieved 23/02/2016)

Chesney, E., Goodwin, G. M. and Fazel, S. (2014), Risks of all-cause and suicide mortality in mental disorders: a meta-review.“, World Psychiatry , 13, 153–160

Currie, Janet and Stabile, Mark (2004): “Child Mental Health and Human Capital

Accumulation: The Case of ADHD”, Journal of Health Economics , 25(6), 1094-1118

Evers, S. M. A. A., Van Wijk, A. S. and Ament, A. J. H. A. (1997), “Economic Evaluation of

Mental Health Care Interventions. A Review.“ Health Economics , 6, 161–177

Fergusson, D.M., Horwood, L.J. and Ridder, E.M. (2005): “Show me the child at seven: the consequences of conduct problems in childhood for psychosocial functioning in adulthood“,

Journal of Child Psychology and Psychiatry , 46, 837-849

Frank, Richard G. and McGuire, Thomas G. (2000): "Economics and mental health,"

Handbook of Health Economics , in: A. J. Culyer & J. P. Newhouse (ed.), 893-954

Hamilton, V. H., Merrigan, P. and Dufresne, É. (1997), „Down and out: estimating the relationship between mental health and unemployment.“, Health Economics , 6, 397–406

Health and Social Care Information Centre (2007): “Adult Psychiatric Morbidity Survey”, http://www.hscic.gov.uk/pubs/psychiatricmorbidity07 (retrieved 23/02/2016)

King’s Fund (2008): “Paying the Price: The Cost of Mental Health Care to 2026, http://www.kingsfund.org.uk/sites/files/kf/Paying-the-Price-the-cost-of-mental-health-care-

England-2026-McCrone-Dhanasiri-Patel-Knapp-Lawton-Smith-Kings-Fund-May-

2008_0.pdf (retrieved 23/02/2016)

Knapp, M. and McDaid, David and Parsonage, M. (2011): “Mental Health Promotion and

Prevention: The Economic Case“ http://www.lse.ac.uk/businessAndConsultancy/LSEEnterprise/pdf/PSSRUfeb2011.pdf

(retrieved 23/02/2016)

London School of Economics, Centre for Economic Performance (2012): “How Mental

Illness loses out in the NHS”, http://cep.lse.ac.uk/pubs/download/special/cepsp26.pdf (retrieved 23/02/2016)

Northern Ireland Association for Mental Health (2007): “Mental Health Promotion: An

Economic Case” http://www.chex.org.uk/media/resources/mental_health/Mental%20Health%20Promotion%2

0-%20Building%20an%20Economic%20Case.pdf (retrieved 23/02/2016)

Royal College of Psychiatrists (2008): “Mental Health and Work“, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/212266/hwwbmental-health-and-work.pdf (retrieved 23/02/2016)

Sainsbury Centre for Mental Health (2008): “Mental Health at Work: Developing the

Business Case”, http://www.centreformentalhealth.org.uk/mental-health-at-work (retrieved 23/02/2016)

OECD (2014), “Mental Health and Work: United Kingdom”, Mental Health and Work,

OECD Publishing

Oxford Economics (2007): “Mental Health in the UK Economy”, http://web.oxfordeconomics.com/FREE/PDFS/MENUKEC.PDF (retrieved 23/02/2016)

Zechmeister, I. and Kilian, R. and McDaid, D (2008): “Is it worth investing in mental health promotion and prevention of mental illness? A systematic review of the evidence from economic evaluations“ http://bmcpublichealth.biomedcentral.com/articles/10.1186/1471-2458-8-20 (retrieved

23/02/2016)

The Effect of Low Interest Rates on Household

Expectation Formation in the U.S.

You Jin Lim

B.Sc Economics

3rd year

University College London

Explore Econ Undergraduate Research Conference

March 2016

1 Introduction

Interest rates affect many key decisions in the public, commercial and private sectors of the economy. The public and commercial sectors have the capacity and resources to conduct studies on how interest rates are expected to affect them in the future, but individuals in the private sector usually do not. It therefore warrants attention to understand how households form interest rate expectations, especially when many of their assets, debts and decisions revolve around interest rate choices with varying maturity dates.

Figure 1: Historical Federal Funds Rate Data (Source: New York Fed)

Since the Global Financial Crisis in 2008, major countries like the U.S. have adopted low and unchanged short-term interest rate targets from 2009 onwards, as shown in Figure 1. In this paper, my main focus is to study how this extended period of fixed interest rate targets affects the decisions households make when forming their interest rate outlook.

I study households in the United States (U.S.). The primary data source is the Survey of Consumer Finances (SCF), sponsored by the Federal Reserve Board, a triennial cross-sectional survey of households in the United States. I use the 2010 and 2013 waves.

The survey contains rich information on households' pensions, balance sheets and full income (such as wage income and normal income), as well as important demographic characteristics.

Finally, in the conclusion, I share my concern about the Federal Open Market Committee's (FOMC) apparent lack of consideration of households' interest rate expectations, discuss possible extensions of the results in this paper, and outline the wider applications that will become feasible as more data become available.

2 Duration of Unchanged Rate Level and its Effect on Household Interest Rate Outlook

In this paper, my primary objective is to study how the duration of an unchanged Federal Funds Rate target level has affected the way heads of household in the United States form their interest rate outlook. The Federal Funds Rate, as referred to throughout, is the target level for the overnight fed funds rate set by the Federal Open Market Committee (FOMC).

The target levels of the Fed Funds Rate set by the FOMC have a large impact on the general economy. The target level affects the loan and investment decisions of financial institutions, particularly commercial banks' lending decisions to businesses, individuals and non-domestic institutions. Intuitively, financial institutions base their lending or borrowing decisions on a comparison of the federal funds rate with the periodic yields on other investments. The New York Fed highlights that interest rates paid on other short-term financial securities, such as commercial paper and Treasury bills (maturities ranging from 13 to 52 weeks), move roughly in tandem with the Fed Funds Rate. Further, yields on long-term assets such as Treasury Notes (maturities ranging from 1 to 10 years) and corporate bonds are determined by expectations of the future Fed Funds Rate (New York Fed, 2013).
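The link between the policy rate and longer-maturity yields described above is often summarised by the expectations hypothesis of the term structure; the expression below is a standard textbook approximation added here for clarity, not a formula taken from the paper or from the New York Fed source:

    \[
      i_t^{(n)} \;\approx\; \frac{1}{n} \sum_{k=0}^{n-1} \mathbb{E}_t\!\left[ i_{t+k}^{(1)} \right] \;+\; \phi_t^{(n)},
    \]

where \(i_t^{(n)}\) is the n-period yield, \(i_{t+k}^{(1)}\) the expected one-period (fed funds) rate, and \(\phi_t^{(n)}\) a term premium.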

2.1 Motivations for Studying the Extended Period of Unchanged Federal Funds Rates

Since 16 December 2008, the Federal Reserve has set the target Fed Funds Rate between 0.00% and 0.25%. This target was only recently raised, in December 2015, which means that the Fed Funds Rate had remained unchanged for an unprecedented seven years. Such an approach to monetary policy formulation has not occurred before in the history of most central banks and national financial institutions.

One of the key explanatory variables studied in this paper is "YrsSinceLastFedFundRateChg". It is defined as the number of years for which the Federal Reserve had maintained its target Fed Funds Rate of 0.00%-0.25% as of the date the interview was conducted. The variation in this variable comes from the different interview dates in the 2010 and 2013 SCF survey data.
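A minimal sketch of how this variable could be constructed from the SCF interview dates (the interview_date column and data frame are hypothetical; the paper does not show its exact construction):

    import pandas as pd

    # The 0.00%-0.25% target range was adopted on 16 December 2008 (see Section 2.1)
    last_change = pd.Timestamp("2008-12-16")

    # One row per household, with the SCF interview date recorded
    scf = pd.DataFrame({"interview_date": pd.to_datetime(["2010-07-15", "2013-09-02"])})
    scf["YrsSinceLastFedFundRateChg"] = (scf["interview_date"] - last_change).dt.days / 365.25
    print(scf)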

I study the statistical relationship between this key explanatory variable, "YrsSinceLastFedFundRateChg", and the dependent variable "intexpin5", which is one of the variables collected in the Survey of Consumer Finances:

intexpin5 : Five years from now, do you think interest rates will be higher, lower, or about the same as today?
(1) *Lower
(2) *About the Same
(3) *Higher

2.2 Econometric Analysis of Heads of Households' Interest Rate Expectation Formation in the United States

In this section, I focus on the statistical relationship between the dependent variable "intexpin5" (the head of household's interest rate expectation for the next five years) and selected independent variables, including the key variable of interest "YrsSinceLastFedFundRateChg". Primarily, I am interested in how interest rate expectations ("intexpin5") change with well-established social and economic factors such as age, gender, household income and education. As stated in Section 2.1, the New York Fed indicates that short-term interest rates move in tandem with the Federal Funds Rate target level.

\[
intexpin5_i = \alpha + \boldsymbol{\beta}_{A}^{T}\,(\text{Head of Household Characteristics})_i + \boldsymbol{\beta}_{B}^{T}\,(\text{Attitudinal and Related Variables})_i + \boldsymbol{\beta}_{C}^{T}\,(\text{Household Finances})_i + \beta\,(\mathit{YrsSinceFedFundRateLastChg})_i + u_i
\]

Table 1: Independent variables under the respective categories

Head of Household Characteristics:
A1. hhsex   A2. age   A3. educ   A4. married   A5. kids   A6. lf   A7. race   A8. lgincome

Attitudinal and Related Variables:
B1. wsaved   B2. spendmor   B3. bnkruplast5   B4. bshop   B5. ishop   B6. econexpin1

Household Finances:
C1. lgliq   C2. lghouses   C3. lgnliq   C4. lgnhnfin   C5. lgdebt   C6. levratio

The definition of all the variables that are used in the framework can be found under the

Appendix section of the paper.
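For concreteness, the sketch below shows how the ordered logit and ordered probit specifications above could be estimated with statsmodels; the variable names follow Table 1 (lgnliq appears as lgnliqfin in Table 3), the data file name is hypothetical, and the paper's actual estimation additionally uses survey weights and the SCF's multiple imputation, which are not reproduced here.

    from statsmodels.miscmodels.ordinal_model import OrderedModel
    import pandas as pd

    # Hypothetical extract of the pooled 2010/2013 SCF containing the Table 1 variables
    df = pd.read_csv("scf_2010_2013.csv")

    xvars = ["hhsex", "age", "educ", "married", "kids", "lf", "race", "lgincome",
             "wsaved", "spendmor", "bnkruplast5", "bshop", "ishop", "econexpin1",
             "lgliq", "lghouses", "lgnliqfin", "lgnhnfin", "lgdebt", "levratio",
             "YrsSinceLastFedFundRateChg"]

    # intexpin5 takes the values 1 (lower), 2 (about the same), 3 (higher)
    logit_res = OrderedModel(df["intexpin5"], df[xvars], distr="logit").fit(method="bfgs", disp=False)
    probit_res = OrderedModel(df["intexpin5"], df[xvars], distr="probit").fit(method="bfgs", disp=False)
    print(logit_res.summary())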

3 Statistical Results and Interpretations

3.1 The Statistical Significance of the Key Variable

The primary objective of this paper is to establish whether there is any relationship between households' interest rate expectations and the duration of the unchanged Federal Funds Rate. The coefficients, p-values and other statistics quoted below are extracted from Table 3.

From the statistical output shown in Table 3, under both the ordered logit and the ordered probit regression the p-value for the variable "YrsSinceLastFedFundRateChg" is below 0.001. This is very strong statistical evidence for the central hypothesis that the duration of the unchanged Federal Funds Rate affects households' future interest rate expectations. As in ordinary logistic or probit regression, the coefficients cannot be interpreted directly (Allison, 1999), so I proceed to study the marginal effects.

3.2 Marginal Effects of the Key Variable

Below is a summary of the marginal effects of the key variable "YrsSinceLastFedFundRateChg", drawn from Tables 4.1 to 4.3.

Table 2: Excerpts of Marginal Effects of Variable "YrsSinceLastFedFundRateChg"

dy/dx of Key Variable                                                Ordered Logit       Ordered Probit
Pr[intexpin5 = 1] (respondent expects lower i/r in 5 years)          -0.0037 (0.0008)    -0.0046 (0.0010)
Pr[intexpin5 = 2] (respondent expects i/r to remain the same)        -0.0095 (0.0021)    -0.0086 (0.0018)
Pr[intexpin5 = 3] (respondent expects higher i/r in 5 years)          0.0132 (0.0029)     0.0132 (0.0028)

Standard errors in parentheses.

Here we see the marginal effect of "YrsSinceLastFedFundRateChg" on the probability that respondents expect higher interest rates in five years. In both the ordered logit and the ordered probit regression the marginal effect is 0.0132. This means that each one-unit increase in "YrsSinceLastFedFundRateChg", i.e. one additional year (365 days) of unchanged Federal Funds Rate, raises the probability of respondents expecting higher interest rates in five years by 1.32 percentage points.

The statistical significance (P<0.001) and the marginal effects give a clear-cut result: the longer the interest rate target remains unchanged, the higher the interest rate outlook that households in the United States tend to hold over the medium term (5 years).
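As a rough cross-check of this interpretation (my own arithmetic, not taken from the paper), the ordered-logit marginal effect on the top outcome can be written as the coefficient times the logistic density at the upper cut-point, which equals beta * F * (1 - F), where F is the fitted probability of the "higher" outcome. Plugging in the logit coefficient from Table 3 and the fitted probability from Table 4.3 lands close to the reported 0.0132; a small gap is expected from rounding and from how the effects are averaged across observations and imputations.

    # Illustrative arithmetic only, using figures reported in Tables 3 and 4.3
    beta = 0.0723        # ordered-logit coefficient on YrsSinceLastFedFundRateChg
    p_higher = 0.7852    # fitted Pr(intexpin5 = 3)

    # dPr(y = higher)/dx = beta * f(cut2 - x'b), and the logistic density f = F * (1 - F)
    dydx_top = beta * p_higher * (1 - p_higher)
    print(round(dydx_top, 4))   # about 0.012, in the neighbourhood of the reported 0.0132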

3.3 Other Noteworthy Determinants

The richness of the data also reveals an interesting result for the variable "bshop", which measures how much searching the head of household does when making major decisions such as borrowing money or obtaining loans.

bshop : When making major decisions about borrowing money or obtaining credit, some people search for the very best terms while others don’t.

On a scale from one to five, where one is almost no searching, three is moderate searching, and five is a great deal of searching, where would

(you/your family) be on the scale?

1. *Almost No Searching
3. *Moderate Searching
5. *A Great Deal of Searching

Noticeably, "bshop" is statistically significant at the 5% level. Although significant, its significance is weaker than that of variables such as "YrsSinceLastFedFundRateChg" and "econexpin1". This tells us that the degree of searching for loan products does affect how heads of household view future interest rates, but it is surprising that it is not among the statistically strongest determinants: there is strong, but not overwhelming, evidence that the effort a head of household puts into searching for interest-rate-sensitive loan products affects their prediction of future interest rates.

The most probable reason for this comparatively weak significance is that many more people recognise that the low federal funds rate, introduced by policymakers to jump-start the economy in 2008, has remained unchanged for seven years. This widespread recognition, and the sheer length of the period, may have led people to question the sustainability of such a trend, especially when deciding on big-ticket items such as property purchases. By contrast, the degree of search effort varies widely across households and is difficult to pin down precisely.

Table 3: Parameter Estimates for Ordered Logit and Probit Regressions with Multiple Imputation for Selected Variables in the Survey of Consumer Finances, Weighted and Adjusted to 2013 Dollars (dependent variable: intexpin5)

                                   Ordered Logit Regression                             Ordered Probit Regression
                                   (Mean Treatment for Clustered Dates)                 (Mean Treatment for Clustered Dates)
Variable                           Coef.     Odds Ratio  Robust SE   P>|Z|              Coef.     Robust SE   P>|Z|

Head of Household Characteristics
hhsex                              -0.0751   0.9276      (0.0727)    0.301              -0.0501   (0.0418)    0.231
age                                 0.0009   1.0009      (0.0020)    0.654               0.0013   (0.0012)    0.278
educ                                0.0128   1.0129      (0.0102)    0.208               0.0079   (0.0058)    0.177
married                            -0.0230   0.9772      (0.0700)    0.742              -0.0106   (0.0402)    0.793
kids                               -0.0156   0.9845      (0.0226)    0.490              -0.0098   (0.0129)    0.448
lf                                  0.0518   1.0532      (0.0685)    0.449               0.0282   (0.0390)    0.469
race                               -0.0555   0.9460      (0.0238)    0.020*             -0.0349   (0.0136)    0.010*
bnkruplast5                         0.0157   1.0158      (0.1265)    0.902               0.0131   (0.0723)    0.856
lgincome                            0.0481   1.0492      (0.0207)    0.020*              0.0318   (0.0127)    0.012*

Attitudinal & Related Variables
wsaved                             -0.0359   0.9648      (0.0346)    0.300              -0.0228   (0.0199)    0.252
spendmor                            0.0189   1.0191      (0.0188)    0.313               0.0099   (0.0108)    0.359
bshop                               0.0392   1.0400      (0.0197)    0.046*              0.0237   (0.0113)    0.036*
ishop                              -0.0004   0.9996      (0.0191)    0.985              -0.0020   (0.0110)    0.855
econexpin1                          0.0882   1.0922      (0.0278)    0.001***            0.0495   (0.0159)    0.002**

Household Finances
lgliq                              -0.0085   0.9915      (0.0104)    0.410              -0.0044   (0.0060)    0.465
lghouses                            0.0059   1.0059      (0.0054)    0.268               0.0030   (0.0031)    0.324
lgnliqfin                           0.0245   1.0248      (0.0059)    0.000****           0.0139   (0.0034)    0.000****
lgnhnfin                            0.0382   1.0390      (0.0076)    0.000****           0.0226   (0.0044)    0.000****
lgdebt                              0.0081   1.0081      (0.0058)    0.164               0.0047   (0.0033)    0.156
levratio                            0.0001   1.0001      (0.0001)    0.498               0.0000   (0.0000)    0.466

Key Variable of Interest
YrsSinceLastFedFundRateChg          0.0723   1.0749      (0.0158)    0.000****           0.0422   (0.0090)    0.000****

cut1 (_cons)                       -1.2068               (0.3103)    0.000****          -0.5943   (0.1811)    0.001***
cut2 (_cons)                        0.5049               (0.3071)    0.100               0.3218   (0.1804)    0.075

N                                   12497                                                12497
Pseudo R2, McFadden                 0.019                                                0.020
Pseudo R2, McFadden (adjusted)      0.019                                                0.020
Prob > chi2                         0.000                                                0.000

Notes: Standard errors are in parentheses. For all regression models: * statistically significant at the 5% level (P between 0.01 and 0.05); ** at the 1% level (P between 0.001 and 0.01); *** at the 0.1% level (P equal to 0.001); **** below the 0.1% level (P below 0.001).

Table 4.1: Marginal Effects of Parameters for Outcome == 1 (Pr[intexpin5 = 1]: respondent expects lower interest rates in 5 years)

Variable                           Ordered Logit dy/dx (SE)   P>|Z|        Ordered Probit dy/dx (SE)   P>|Z|        Mean Value
Head of Household Characteristics
hhsex                               0.0039 (0.0038)           0.302         0.0054 (0.0046)            0.232        1.2778
age                                 0.0000 (0.0001)           0.655        -0.0001 (0.0001)            0.279        50.8505
educ                               -0.0007 (0.0005)           0.209        -0.0009 (0.0006)            0.178        13.4609
married                             0.0012 (0.0036)           0.743         0.0011 (0.0044)            0.793        1.4240
kids                                0.0008 (0.0012)           0.490         0.0011 (0.0014)            0.449        0.8210
lf                                 -0.0027 (0.0036)           0.454        -0.0031 (0.0043)            0.474        0.7227
race                                0.0029 (0.0012)           0.021*        0.0038 (0.0015)            0.011*       1.5417
bnkruplast5                        -0.0008 (0.0064)           0.901        -0.0014 (0.0077)            0.854        0.0385
lgincome                           -0.0025 (0.0011)           0.021*       -0.0034 (0.0014)            0.013*       10.7402
Attitudinal & Related Variables
wsaved                              0.0018 (0.0018)           0.299         0.0025 (0.0022)            0.252        2.3643
spendmor                           -0.0010 (0.0010)           0.313        -0.0011 (0.0012)            0.358        3.5401
bshop                              -0.0020 (0.0010)           0.047*       -0.0026 (0.0012)            0.036*       3.2222
ishop                               0.0000 (0.0010)           0.985         0.0002 (0.0012)            0.855        3.0741
econexpin1                         -0.0045 (0.0014)           0.002**      -0.0054 (0.0017)            0.002**      2.3399
Household Finances
lgliq                               0.0004 (0.0005)           0.411         0.0005 (0.0007)            0.466        7.5216
lghouses                           -0.0003 (0.0003)           0.267        -0.0003 (0.0003)            0.323        7.9613
lgnliqfin                          -0.0013 (0.0003)           0.000****    -0.0015 (0.0004)            0.000****    6.9850
lgnhnfin                           -0.0020 (0.0004)           0.000****    -0.0025 (0.0005)            0.000****    9.0261
lgdebt                             -0.0004 (0.0003)           0.164        -0.0005 (0.0004)            0.157        7.9484
levratio                            0.0000 (0.0000)           0.497         0.0000 (0.0000)            0.466        14.9985
Key Variable of Interest
YrsSinceLastFedFundRateChg         -0.0037 (0.0008)           0.000****    -0.0046 (0.0010)            0.000****    3.0981
Probability(intexpin5 = 1)          0.0544                                  0.0533

Notes: Standard errors are in parentheses. For all regression models: * statistically significant at the 5% level (P between 0.01 and 0.05); ** at the 1% level (P between 0.001 and 0.01); *** at the 0.1% level (P equal to 0.001); **** below the 0.1% level (P below 0.001).

Table 4.2: Marginal Effects of Parameters for Outcome == 2 (Pr[intexpin5 = 2]: respondent expects interest rates to stay about the same in 5 years)

Variable                           Ordered Logit dy/dx (SE)   P>|Z|        Ordered Probit dy/dx (SE)   P>|Z|        Mean Value
Head of Household Characteristics
hhsex                               0.0099 (0.0096)           0.301         0.0102 (0.0085)            0.231        1.2778
age                                -0.0001 (0.0003)           0.654        -0.0003 (0.0002)            0.277        50.8505
educ                               -0.0017 (0.0013)           0.208        -0.0016 (0.0012)            0.177        13.4609
married                             0.0030 (0.0092)           0.742         0.0022 (0.0082)            0.793        1.4240
kids                                0.0021 (0.0030)           0.490         0.0020 (0.0026)            0.448        0.8210
lf                                 -0.0069 (0.0091)           0.451        -0.0058 (0.0080)            0.470        0.7227
race                                0.0073 (0.0031)           0.020*        0.0071 (0.0028)            0.010*       1.5417
bnkruplast5                        -0.0021 (0.0166)           0.901        -0.0027 (0.0147)            0.856        0.0385
lgincome                           -0.0063 (0.0027)           0.020*       -0.0065 (0.0026)            0.012*       10.7402
Attitudinal & Related Variables
wsaved                              0.0047 (0.0046)           0.300         0.0047 (0.0041)            0.253        2.3643
spendmor                           -0.0025 (0.0025)           0.314        -0.0020 (0.0022)            0.359        3.5401
bshop                              -0.0052 (0.0026)           0.046        -0.0048 (0.0023)            0.036*       3.2222
ishop                               0.0000 (0.0025)           0.985         0.0004 (0.0022)            0.855        3.0741
econexpin1                         -0.0116 (0.0037)           0.001***     -0.0101 (0.0033)            0.002**      2.3399
Household Finances
lgliq                               0.0011 (0.0014)           0.410         0.0009 (0.0012)            0.464        7.5216
lghouses                           -0.0008 (0.0007)           0.268        -0.0006 (0.0006)            0.324        7.9613
lgnliqfin                          -0.0032 (0.0008)           0.000****    -0.0028 (0.0007)            0.000****    6.9850
lgnhnfin                           -0.0050 (0.0010)           0.000****    -0.0046 (0.0009)            0.000****    9.0261
lgdebt                             -0.0011 (0.0008)           0.164        -0.0010 (0.0007)            0.157        7.9484
levratio                            0.0000 (0.0000)           0.498         0.0000 (0.0000)            0.467        14.9985
Key Variable of Interest
YrsSinceLastFedFundRateChg         -0.0095 (0.0021)           0.000****    -0.0086 (0.0018)            0.000****    3.0981
Probability(intexpin5 = 2)          0.1874                                  0.1893

Notes: Standard errors are in parentheses. For all regression models: * statistically significant at the 5% level (P between 0.01 and 0.05); ** at the 1% level (P between 0.001 and 0.01); *** at the 0.1% level (P equal to 0.001); **** below the 0.1% level (P below 0.001).

Table 4.3: Marginal Effects of Parameters for Outcome == 3 (Pr[intexpin5 = 3]: respondent expects higher interest rates in 5 years)

Variable                           Ordered Logit dy/dx (SE)   P>|Z|        Ordered Probit dy/dx (SE)   P>|Z|        Mean Value
Head of Household Characteristics
hhsex                              -0.0138 (0.0133)           0.301        -0.0157 (0.0131)            0.231        1.2778
age                                 0.0002 (0.0004)           0.654         0.0004 (0.0004)            0.278        50.8505
educ                                0.0023 (0.0019)           0.208         0.0025 (0.0018)            0.177        13.4609
married                            -0.0042 (0.0128)           0.742        -0.0033 (0.0126)            0.793        1.4240
kids                               -0.0029 (0.0041)           0.490        -0.0031 (0.0040)            0.448        0.8210
lf                                  0.0096 (0.0127)           0.452         0.0089 (0.0123)            0.471        0.7227
race                               -0.0102 (0.0044)           0.020*       -0.0109 (0.0043)            0.010*       1.5417
bnkruplast5                         0.0029 (0.0230)           0.901         0.0041 (0.0224)            0.855        0.0385
lgincome                            0.0088 (0.0038)           0.020*        0.0099 (0.0040)            0.012*       10.7402
Attitudinal & Related Variables
wsaved                             -0.0066 (0.0063)           0.300        -0.0071 (0.0062)            0.252        2.3643
spendmor                            0.0035 (0.0034)           0.313         0.0031 (0.0034)            0.358        3.5401
bshop                               0.0072 (0.0036)           0.046         0.0074 (0.0035)            0.036        3.2222
ishop                              -0.0001 (0.0035)           0.985        -0.0006 (0.0034)            0.855        3.0741
econexpin1                          0.0162 (0.0051)           0.001***      0.0155 (0.0050)            0.002**      2.3399
Household Finances
lgliq                              -0.0016 (0.0019)           0.410        -0.0014 (0.0019)            0.465        7.5216
lghouses                            0.0011 (0.0010)           0.267         0.0009 (0.0010)            0.323        7.9613
lgnliqfin                           0.0045 (0.0011)           0.000****     0.0043 (0.0011)            0.000****    6.9850
lgnhnfin                            0.0070 (0.0014)           0.000****     0.0071 (0.0014)            0.000****    9.0261
lgdebt                              0.0015 (0.0011)           0.164         0.0015 (0.0010)            0.156        7.9484
levratio                            0.0000 (0.0000)           0.498         0.0000 (0.0000)            0.466        14.9985
Key Variable of Interest
YrsSinceLastFedFundRateChg          0.0132 (0.0029)           0.000****     0.0132 (0.0028)            0.000****    3.0981
Probability(intexpin5 = 3)          0.7852                                  0.7574

Notes: Standard errors are in parentheses. For all regression models: * statistically significant at the 5% level (P between 0.01 and 0.05); ** at the 1% level (P between 0.001 and 0.01); *** at the 0.1% level (P equal to 0.001); **** below the 0.1% level (P below 0.001).

4 Concluding Remarks

4.1 Worrying Thoughts

Using the variable "YrsSinceLastFedFundRateChg", I find that the duration of the unchanged Federal Funds Rate target level has a highly statistically significant association (P<0.001) with how heads of household form their interest rate expectations for the next five years.

Going a step further, I estimate that the probability of respondents expecting higher interest rates in five years increases by 1.32 percentage points for each additional year (365 days) of unchanged Federal Funds Rate. Similarly, the probability of respondents expecting interest rates to stay about the same as at the date of interview decreases by 0.86-0.95 percentage points for each additional year.

This strong result reveals something particularly worrying. When reviewing the transcript of the FOMC meeting at which the federal funds rate target was decided,[1] I found no indication that its twelve members take household interest rate expectations into consideration, even though households form one of the three cores of a functioning economy (government, corporates and households). Unlike corporates, which typically have better access to in-house economic views and whose outlook is more readily taken into account by the central bank, households are arguably less able to voice their opinions collectively. It is through household surveys such as the SCF that we notice that households hold statistically strong opinions on how interest rates are likely to change in the near future.

Therefore, I believe this paper highlights the need for central bankers to consider household expectations, because households represent the largest group in the economy in terms of population. If central bankers are indeed interested in steering the economy, they need to take a close look at how household expectations have changed, especially after introducing such unprecedented monetary policy tools. In John Taylor's words, "Deviations from good economic policy have been responsible for the very poor performance. Such deviations are the highly discretionary monetary policy (such as Quantitative Easing) that has generated distortions and uncertainty." If central bankers do not start factoring households' collective expectations into policy making, it will lead to greater distortions and uncertainty within the economy.

[1] http://www.federalreserve.gov/mediacenter/files/FOMCpresconf20151216.pdf

4.2 Future Extensions

The Federal Open Market Committee (FOMC) announced on 16 December 2015 that it would raise the Fed Funds Rate target from 0.00%-0.25% to 0.25%-0.50%, ending seven years of an unchanged low interest rate target.[2] At the time of writing, the Bank of England's Monetary Policy Committee (MPC) has yet to announce any plans to raise its Official Bank Rate. In the future, with the availability of wider datasets, readers will be able to make more robust estimates of the potential change in individuals' interest rate outlook in the event of low and unchanged official interest rate targets. Applying these concepts beyond the U.S., countries such as Canada, which has faced gradually decreasing short-term interest rate targets in recent years and has contemplated quantitative easing, could study the effect that long periods of unchanged short-term interest rate targets have on households' interest rate expectations in the medium term.

[2] Source: http://www.federalreserve.gov/newsevents/press/monetary/20151216a.htm

REFERENCES

Allison, P.D., 1999. Comparing logit and probit coefficients across groups.Sociological methods & research, 28(2), pp.186-208.

Angrist, J.D., Imbens, G.W. and Rubin, D.B., 1996. Identification of causal effects using instrumental variables. Journal of the American statistical Association, 91(434), pp.444-455.

Bernanke, B.S. and Gertler, M., 1995. Inside the black box: the credit channel of monetary policy transmission (No. w5146). National bureau of economic research.

Bernanke, B.S., 2000. Japanese monetary policy: a case of self-induced paralysis?. Japan’s financial crisis and its parallels to US experience, pp.149-166.

Bernanke, B.S. and Reinhart, V.R., 2004. Conducting monetary policy at very low short-term interest rates. The American Economic Review, 94(2), pp.85-90.

Board of Governors of Federal Reserve System (Washington, D.C.) on behalf of the Federal

Reserve System (June 2005). The Federal Reserve System: Purposes & Functions. 9th ed.

Washington, D.C.: The Board of Governors of the Federal Reserve System Publications

Committee. p4-p146.

Brant, R., 1990. Assessing proportionality in the proportional odds model for ordinal logistic regression. Biometrics, pp.1171-1178.

Christensen, J.H. and Rudebusch, G.D., 2012. The Response of Interest Rates to US and UK

Quantitative Easing*. The Economic Journal, 122(564), pp.F385-F414.

Eggertsson, G.B., 2003. Zero bound on interest rates and optimal monetary policy. Brookings

Papers on Economic Activity, 2003(1), pp.139-233.

Ehrmann, Michael , Damjan Pfajfar, and Emiliano Santoro (2015). “Consumers’ Attitudes and Their Inflation Expectations,” Finance and Economics Discussion Series 2015-015.

Washington: Board of Governors of the Federal Reserve System, http://dx.doi.org/10.17016/FEDS.2015.015.

Federal Reserve Bank of New York. Federal Funds Historical Data. Available: https://apps.newyorkfed.org/markets/autorates/fed%20funds. Last accessed 15th Jan 2016.

Fujiki, H. and Shiratsuka, S., 2002. Policy Duration Effect under the Zero Interest Rate

Policy in 1999-2000: Evidence from Japan’s Money Market Data. Monetary and Economic

Studies, 20(1), pp.1-31.

Gürkaynak, R.S., Sack, B. and Swanson, E., 2005. The sensitivity of long-term interest rates to economic news: Evidence and implications for macroeconomic models. American economic review, pp.425-436.

Hamilton, J.D. and Jorda, O., 2000. A model for the federal funds rate target(No. w7847).

National Bureau of Economic Research.

Harrell, F.E., 2013. Regression modeling strategies: with applications to linear models, logistic regression, and survival analysis. Springer Science & Business Media.

Hausman, J.A. and Ruud, P.A., 1987. Specifying and testing econometric models for rankordered data. Journal of econometrics, 34(1), pp.83-104.

Taylor, J., 2013, December. Causes of the Financial Crisis and the Slow Recovery: A 10-

Year Perspective. In joint Conference of the Brookings Institution and the Hoover Institution on the “US Financial System–Five Years after the Crisis,” at the panel “Causes and Effects of the Financial Crisis,” October (Vol. 1).

Kennickell, A.B., 1998, September. Multiple imputation in the Survey of Consumer Finances.

In Proceedings of the Section on Survey Research Methods.

Krishnamurthy, A. and Vissing-Jorgensen, A., 2011. The effects of quantitative easing on interest rates: channels and implications for policy (No. w17555). National Bureau of

Economic Research.

Oda, N. and Ueda, K., 2007. The effects of the Bank of Japan’s zero interest rate commitment and quantitative monetary easing on the yield curve: A Macro-Finance approach*. Japanese

Economic Review, 58(3), pp.303-328.

Okina, K. and Shiratsuka, S., 2004. Policy commitment and expectation formation: Japan’s experience under zero interest rates. The North American Journal of Economics and

Finance, 15(1), pp.75-100.

Rubin, D.B., 1987. The calculation of posterior distributions by data augmentation: Comment:

A noniterative sampling/importance resampling alternative to the data augmentation algorithm for creating a few imputations when fractions of missing information are modest:

The SIR algorithm. Journal of the American Statistical Association, 82(398), pp.543-546.

Rubin, D.B., 1996. Multiple imputation after 18+ years. Journal of the American statistical

Association, 91(434), pp.473-489.

Rubin, D.B., 2004. Multiple imputation for nonresponse in surveys (Vol. 81). John Wiley &

Sons.

Rudebusch, G.D., 1995. Federal Reserve interest rate targeting, rational expectations, and the term structure. Journal of monetary Economics, 35(2), pp.245-274.

Taylor, J.B., 1999. The robustness and efficiency of monetary policy rules as guidelines for interest rate setting by the European Central Bank. Journal of Monetary Economics, 43(3), pp.655-679.

Walker, S.H. and Duncan, D.B., 1967. Estimation of the probability of an event as a function of several independent variables. Biometrika, 54(1-2), pp.167-179.

Williams, R., 2006. Generalized ordered logit/partial proportional odds models for ordinal dependent variables. Stata Journal, 6(1), pp.58-82.

APPENDIX

Full definitions of the variables extracted from the Survey of Consumer Finances for 2010 and 2013. Questions are phrased as in the Survey of Consumer Finances 2013 Codebook.[3]

Head of Household Characteristics

hhsex : The gender of the Head of Household being interviewed.
age : The age of the Head of Household being interviewed.
agesq : The squared age of the Head of Household being interviewed: agesq = age × age.
educ : The education in years of the Head of Household being interviewed.
married : Dummy variable indicating whether the Head of Household is currently married.
lf : Dummy variable indicating whether the Head of Household is currently participating in the labour force.
race : Dummy variable representing the racial group of the Head of Household.
lgincome : Income.

Attitudinal and Related Variables

wsaved : Over the past year, would you say that your (family's) spending exceeded your (family's) income, that it was about the same as your income, or that you spent less than your income?

(Spending should not include any investments you have made.)

IF DEBTS ARE BEING REPAID ON NET, TREAT THIS AS

SPENDING LESS THAN INCOME.

1. *SPENDING EXCEEDED INCOME

2. *SPENDING SAME AS INCOME

3. *SPENDING WAS LESS THAN INCOME

(Source: SCF Codebook 2013)

spendmor : Variable indicating whether the Head of Household would spend more assuming that assets (both financial and non-financial) appreciated in value.
1. *Agree Strongly
2. *Agree Somewhat
3. *Neither Agree nor Disagree
4. *Disagree Somewhat
5. *Disagree Strongly

[3] Source: SCF 2013 Codebook, http://www.federalreserve.gov/econresdata/scf/files/codebk2013.txt

bnkruplast5 : Have you (the Head of Household) or your spouse filed for bankruptcy in the last 5 years?
1. *YES
5. *NO

bshop : When making major decisions about borrowing money or obtaining credit, some people search for the very best terms while others don't. On a scale from one to five, where one is almost no searching, three is moderate searching, and five is a great deal of searching, where would (you/your family) be on the scale?
1. *Almost No Searching
3. *Moderate Searching
5. *A Great Deal of Searching

ishop : When making savings and investment decisions, some people search for the very best terms while others don't. On a scale from one to five, where one is almost no searching, three is moderate searching, and five is a great deal of searching, where would (you/your family) be on the scale?
1. *Almost No Searching
3. *Moderate Searching
5. *A Great Deal of Searching

Household Finances (Financial Assets, Non-Financial Assets and related variables)

lgliq : The logarithmic value of liquid assets.
  LIQ: All types of transaction accounts (liquid assets): LIQ = CHECKING + SAVING + MMA + CALL
    CHECKING: Value of checking accounts
    SAVING: Value of savings accounts
    MMA: All types of money market accounts: MMA = MMDA + MMMF
      MMDA: Money market deposit accounts, including money market accounts used for checking and other money market accounts held at commercial banks, savings and loans, savings banks, and credit unions
      MMMF: Money market mutual funds
    CALL: Value of call accounts at brokerages

lghouses : The logarithmic value of the Primary Residence.

lgnliqfin : The logarithmic value of total financial assets excluding liquid assets: NLIQFIN = FIN - LIQ
  FIN: Total financial assets: FIN = LIQ + CDS + NMMF + STOCKS + BONDS + RETQLIQ + SAVBND + CASHLI + OTHMA + OTHFIN
    LIQ: Total value of liquid assets
    CDS: Certificates of deposit
    NMMF: Total value of directly-held mutual funds, excluding money market mutual funds (MMMFs): NMMF = STMUTF + TFBMUTF + GBMUTF + OBMUTF + COMUTF (stock, tax-free bond, government bond, other bond, and combination/other mutual funds, respectively)
    STOCKS: Total market value of stocks owned by the household (including the family's publicly-traded stock)
    BONDS: Total market value of all bonds, not including bond funds or savings bonds: BONDS = NOTXBND + MORTBND + GOVTBND + OBND (state/municipal or other tax-exempt bonds, mortgage-backed bonds, U.S. Government bonds or Treasury bills, and corporate, foreign and other bonds, at face value)
    RETQLIQ: Total value of quasi-liquid assets: the sum of IRAs, thrift accounts, and current and future pensions
    SAVBND: Face value of all savings bonds
    CASHLI: Current cash value of whole life insurance policies
    OTHMA: Total value of other managed assets (trusts, annuities, and managed investment accounts in which the household has an equity interest)
    OTHFIN: Total value of other financial assets, including loans from the household to someone else, future proceeds, royalties, non-public stock, deferred compensation and cash

lgnhnfin : The logarithmic value of total nonfinancial assets excluding principal residences: NHNFIN = NFIN - HOUSES
  NFIN: Total nonfinancial assets: NFIN = VEHIC + HOUSES + ORESRE + NNRESRE + BUS + OTHNFIN
    VEHIC: Total prevailing retail value of all vehicles (including autos, motor homes, RVs, airplanes, boats) owned by the Head of Household and others in the family, valued (if sold) as of fall 2013 according to the industry guidebook (NADA)
    HOUSES: Total value of the Primary Residence
    ORESRE: Total value of other residential real estate, including land contracts/notes the household has made, properties other than the Primary Residence, time shares and vacation homes
    NNRESRE: Net equity in non-residential real estate, i.e. real estate other than the Primary Residence, time shares and vacation homes, net of mortgages and other loans taken out for investment real estate
    BUS: Business interests. For businesses in which the household has an active interest, the value is net equity if the business were sold today, plus loans from the household to the business, minus loans from the business to the household not previously reported, plus the value of personal assets used as collateral for business loans reported earlier. For businesses in which the household does not have an active interest, it is the market value of the interest.
    OTHNFIN: Other non-financial assets, defined as the total value of miscellaneous assets minus other financial assets; this includes gold, silver (including silverware), other metals, jewellery, gem stones, classic or antique cars, antiques, furniture, art objects (paintings, sculpture, textile art, ceramic art, photographs), (rare) books, coin and stamp collections, guns, miscellaneous real estate (excluding cemetery), cemetery plots, china, figurines, crystal/glassware, musical instruments, livestock, horses, crops, oriental rugs, furs, other collections (including baseball cards, records and wine), oil/gas/mineral leases or investments, computers, equipment/tools, association or exchange memberships, and other miscellaneous assets.
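As a minimal sketch of how the logged asset aggregates above could be built from the SCF components (the file name is hypothetical, and handling zero balances with log(1 + x) is my own assumption rather than the paper's stated transformation):

    import numpy as np
    import pandas as pd

    def log1p_pos(x):
        # log(1 + x), keeping zero-asset households defined; an assumption, not from the paper
        return np.log1p(x.clip(lower=0))

    scf = pd.read_csv("scf_components.csv")   # hypothetical file with the SCF component variables

    scf["LIQ"] = scf[["CHECKING", "SAVING", "MMA", "CALL"]].sum(axis=1)
    scf["FIN"] = scf[["LIQ", "CDS", "NMMF", "STOCKS", "BONDS", "RETQLIQ",
                      "SAVBND", "CASHLI", "OTHMA", "OTHFIN"]].sum(axis=1)
    scf["NFIN"] = scf[["VEHIC", "HOUSES", "ORESRE", "NNRESRE", "BUS", "OTHNFIN"]].sum(axis=1)

    scf["NLIQFIN"] = scf["FIN"] - scf["LIQ"]
    scf["NHNFIN"] = scf["NFIN"] - scf["HOUSES"]

    scf["lgliq"] = log1p_pos(scf["LIQ"])
    scf["lghouses"] = log1p_pos(scf["HOUSES"])
    scf["lgnliqfin"] = log1p_pos(scf["NLIQFIN"])
    scf["lgnhnfin"] = log1p_pos(scf["NHNFIN"])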

Word Count (excluding Tables, Graphs, References and Appendices) – 1992 Words

-End of Paper-

HOW TO REFORM BANKING GOVERNANCE?

Robert Palasik [1]

B.Sc Economics

3rd year

University College London

Explore Econ Undergraduate Research Conference

March 2016

[1] I would like to thank Cloda Jenkins, Parama Chaudhury, Frank Witte and Antonio Guarino at the UCL Economics department for their helpful comments, suggestions and encouragement. I would also like to thank my colleagues at the Institute for Training and Consulting in Banking, and foremost Imre Balogh, for providing insight into how real-life executive decision making takes place in a bank and a consultancy, experiences which ultimately inspired this paper.

Introduction

Following recent scandals such as the collapse of Lehman Brothers and the manipulation of LIBOR, many commentators have called for reforms in banking governance. This paper provides an overview of reform proposals, embedded in a systematic treatment of the differences between banking and nonfinancial governance.

Section I establishes the rationale for the existence of governance structures and presents the main mechanisms through which governance operates. Section II presents the arguments explaining why these mechanisms work differently due to the nature of banking and argues how these differences contributed to the 2008 crisis. In Section III, I provide an overview of the suggested reforms and relate them to the mechanisms identified in Sections I and II. Section IV concludes and summarizes the main findings.

Section I: Why do we need corporate governance?

In the Arrow-Debreu world of perfect competition, agents can write contracts costlessly and instantaneously (Zingales, 1998). However, as company owners typically lack the time, interest or expertise necessary to run a company themselves, they hire managers, resulting in an agency relationship between a "principal" (owners) and an "agent" (managers) (Shleifer & Vishny, 1997). The economic problem of governance, then, is choosing the incentive structure which selects the best managers and makes them accountable to owners (Tirole, 2001).

What tools do owners have to discipline managers? The four main mechanisms are concentrated ownership, managerial incentive contracts, (the threat of) takeovers, and board oversight.

Concentrated ownership is a “fallback” option for investors. Data shows that countries with poor investor protection typically exhibit more concentrated control of firms (La Porta, et al., 2000).

Large shareholders can monitor firms directly and govern via their voting rights, reducing the scope for fraud. If, however, the legal framework is strong enough to protect smaller owners, then the incentive for concentrated ownership is reduced.

Secondly, bonuses based on accounting data encourage managers to behave in accordance with owner interests (Tirole, 2001). Jensen and Murphy (1990) estimate that for US firms, CEO wealth changes by $3.25 for every $1,000 change in shareholder wealth. However, as most CEOs can influence how their pay is set, they will try to introduce asymmetries in benchmarking that penalise them less for bad performance than they are rewarded for good performance, reducing the mechanism's effectiveness. Garvey and Milbourn (2006) show in US data that 25% to 45% less pay is lost to bad luck than is gained from good luck.
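As a rough illustration of these magnitudes (my own arithmetic, using only the figures cited above), the Jensen-Murphy estimate implies a pay-performance sensitivity of roughly 0.3%, and the Garvey-Milbourn asymmetry can be read as pay falling by only 55-75 cents for each dollar of "bad luck" pay that a symmetric benchmark would remove:

    # Jensen & Murphy (1990): CEO wealth changes $3.25 per $1,000 of shareholder wealth
    sensitivity = 3.25 / 1000                    # = 0.00325, i.e. about 0.3%

    # Garvey & Milbourn (2006): 25%-45% less pay is lost to bad luck than is gained from good luck
    gain_per_unit_good_luck = 1.0                # normalised
    loss_per_unit_bad_luck_low = (1 - 0.45) * gain_per_unit_good_luck    # 0.55
    loss_per_unit_bad_luck_high = (1 - 0.25) * gain_per_unit_good_luck   # 0.75
    print(sensitivity, loss_per_unit_bad_luck_low, loss_per_unit_bad_luck_high)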

Thirdly, takeovers represent another important tool, especially in Anglo-Saxon countries where concentrated ownership is less prevalent. Ruback and Jensen (1983) argue that stockholders in a well-functioning market choose the "highest dollar value" manager. As takeovers usually target badly performing firms, and synergies from mergers create redundancies in managerial positions, job security and reputation concerns will motivate managers to perform better when faced with a takeover threat.

The final tool available to owners is board oversight. "Earnings management" allows managers to distort reported income via the use of accruals in accounting, to maximise their bonuses or to provide false information to owners. If the board and the audit committee possess the financial sophistication to notice this, then oversight can prevent managerial fraud (Xie, et al., 2003). The empirical literature shows that board independence is associated with improved decision-making, as independent directors have more incentive to monitor truthfully for earnings management (Bebchuk & Weisbach, 2009). However, a recent strand in the accounting literature argues that, as human choice is subject to biases and heuristics, numerical measures employed by auditors are not a reliable guide to governance, vividly demonstrated by Enron's failure in the early 2000s (Marnet, 2007). It is therefore a challenge to identify what the nature of board oversight should be, given biased decision making along the decision tree, for all firms.

Section II: Why is banking different and what went wrong?

Due to banking's regulated nature and its tendency to take on excessive risks during booms, the mechanisms mentioned in Section I function differently. Regulation affects governance through both the "too-big-to-fail" de facto government guarantee (Farhi & Tirole, 2012) and the desire to prevent bank runs via deposit insurance (Diamond & Dybvig, 1983). These incentivize large, insured banks to take excessive risks, pushing them toward corner solutions in which they take as much risk as they can (Boyd & Runkle, 1993). Secondly, high-risk, hard-to-value banking portfolios are unattractive for long-term equity financing, and even when executives could increase equity, they might not do so due to the implicit government subsidy on debt financing from deposit insurance (Becht, et al., 2012). While the Basel Capital Accords provide an international

framework of minimum leverage and capital ratios, there are concerns that to comply during crises, banks will shrink their balance sheets and reduce lending instead of increasing equity, deepening recessions (Drumond, 2009). In terms of governance, this results in less oversight from owners during a period when more would be needed.

Consequently, many commentators argue that banking's nature limits the efficiency of classical governance mechanisms. Concentrated ownership is limited by regulation due to the "source of strength" doctrine, stating that large owners should be ready to use their own resources to support banks financially to offset "too-big-to-fail". This is a significant obstacle for most investors in the highly leveraged banking industry (Laeven, 2013). While "source of strength" has been historically prevalent in US and UK banking, the introduction of limited liability and deposit insurance meant that owners no longer needed large reserves to foot the bill in the event of a failure, leading to a spread of small ownership and increased fragmentation of shareholders.

The trend has been especially pronounced in the pre-crisis years: average holding periods for US and UK banks fell from 3 years in 1998 to 3 months in 2008, with the result that investors prioritized excess stock volatility over stability to maximize their short-term payoffs (Haldane, 2011; Aspen Institute, 2009). As shareholders lost interest in monitoring long-term performance, managers who focused on maximizing immediate returns without regard for risk had free rein.

Performance bonuses suffer from a similar disadvantage. The typical structure of remuneration packages (rewarding absolute bank performance by stock packages) incentivizes executives to take on portfolios with a high degree of systemic volatility (β) in booms rather than trying to outperform markets (α), thereby increasing banking’s pro-cyclicality (Becht, et al., 2012).

Evidence shows that banks where CEOs had large stock holdings performed substantially worse during the crisis (Fahlebrach & Stulz, 2009). The list of the top 5 US firms with the highest CEO equity stakes (Lehman Brothers, Bear Stearns, Merrill Lynch, Morgan Stanley and Countrywide) serves as an illustrative example (Haldane, 2011). This suggests that the pre-2008 boom broke down the incentive link between long-term performance and remuneration, stripping this governance mechanism of its value.

Evidence on takeovers is more mixed. Banking mergers were prevalent post-crisis, especially in

Germany, where consolidation between 1990 and 2013 saw a 44% reduction in the number of

banks (Michler & Thieme, 2013). Spain experienced a similar reduction in the number of

"cajas", non-profit savings institutions with communal ownership. Crespí, García-Cestona and Salas (2004) show that caja mergers occurred because the diffuse ownership of cajas made enforcing owner interests difficult through other mechanisms. This tradeoff has been highlighted in discussions of the "stakeholder society" concept (Tirole, 2001). While consolidation in banking may serve smaller institutions well due to improved risk management, mergers of banking giants, such as that of JPMorgan Chase and Bear Stearns, have further increased systemic risks (Balogh, 2015). In terms of governance, the gains resulting from mergers can be outweighed by the distortionary effects of too-big-to-fail.

Finally, evidence shows consistent differences between the structure and effectiveness of financial and nonfinancial boards. While it is recognised that “the overall effectiveness of the board, […] tends to vary inversely with its size” (Walker, 2009), average board size of major

European banks is around 18, above the optimal size of 8-12 suggested by psychological literature of group decision-making (Walker, 2009) (de Haan & Vlahu, 2015). Bank boards also face more severe attendance problems than nonfinancial firm boards (Adams & Ferreira, 2008).

These phenomena point toward the presence of free-riding and "groupthink" on bank boards, painfully underlined by RBS's near-unanimous support for the disastrous ABN Amro acquisition (FSA, 2012). Thus, in the case of banks, the excessive size of boards prevents them from performing their oversight duties sufficiently.

Section III: How to fix it?

While the post-crisis period saw widespread reform, including the establishment of macroprudential regulators and revisions to Basel, less has been done to address banking governance. In the UK, changes focused on the UK Combined Code in response to the Walker Review, such as extending the powers of remuneration committees and changes to the Code emphasising greater board commitment (Linklaters, 2009). More importantly, the Prudential Regulation Authority introduced rules requiring 50% of compensation to be paid out in non-cash instruments tied to long-term bank performance (Angeli & Gitay, 2015), while its approach to governance supervision recognises the different roles of executive and non-executive directors and requires that executives inform the board in a way that allows effective challenge without expecting excessive technical expertise (Bailey, 2015).

More radical proposals include changes to ownership structure. The period after the crisis saw the conception of contingent convertible bonds (CoCos), hybrid securities that absorb losses when the issuer's capital falls below a certain level. CoCos thus provide "emergency ownership" powers for buyers, with a clear incentive for direct monitoring to reverse the bank's equity deterioration. Between 2009 and 2013, $70bn of these bonds were issued globally (Avdjiev, et al., 2013). There have been suggestions to introduce a form of weighted voting for these bondholders, as well as to abolish the tax deductibility of debt interest, which distorts bank financing (Haldane, 2011). This mirrors considerations in the externalities literature about the introduction of full/punitive liability in connection with environmental damage (Balkenborg, 2004). A similar case could be made for banks, considering their importance as systemically important institutions. This could fix structural issues of bank ownership stemming from excessive gearing and strengthen governance by applying more capital market pressure.
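A stylised sketch of the CoCo mechanism described above (my own simplification; real contingent convertibles have more varied triggers and conversion terms, and the example trigger and prices are illustrative only):

    def coco_conversion(cet1_ratio, trigger, principal, conversion_price):
        # If the issuer's CET1 capital ratio falls below the trigger, the bond's
        # principal converts into new equity at the pre-set conversion price.
        if cet1_ratio < trigger:
            new_shares = principal / conversion_price
            return {"converted": True, "new_shares": new_shares, "remaining_principal": 0.0}
        return {"converted": False, "new_shares": 0.0, "remaining_principal": principal}

    # Example: a 5.125% trigger CoCo with $1m principal converting at $2 per share
    print(coco_conversion(cet1_ratio=0.048, trigger=0.05125, principal=1_000_000, conversion_price=2.0))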

Remuneration reform has been another suggestion. Bénabou and Tirole (2014) argue in their competitive pay model that when the regulator is able to differentiate between performance-based and fixed parts of compensation, a cap is an effective policy tool. Other suggestions include remuneration that depends on the difference between the bank's credit default swap spread and the market average (Bolton, et al., 2011) and a change of CEO bonus indexation to return on assets (Haldane, 2011). These would provide incentives undistorted by market fluctuations, reducing asymmetry and introducing risk management directly into managers' objective functions.
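A stylised sketch contrasting bonus indexation to the equity return with indexation to return on assets, plus a simple cap (parameter values are illustrative only and are not drawn from the proposals cited above):

    def bonus(base_salary, performance, payout_rate=10.0, cap_multiple=2.0):
        # Bonus proportional to the chosen performance measure, capped at a multiple of base salary
        raw = payout_rate * performance * base_salary
        return min(max(raw, 0.0), cap_multiple * base_salary)

    # With 20x leverage (ignoring funding costs), a 1% return on assets looks like a 20% equity return,
    # so an equity-indexed bonus mechanically rewards leverage while an ROA-indexed bonus does not.
    roa, leverage = 0.01, 20
    roe = roa * leverage
    print(bonus(1_000_000, roe))   # equity-indexed: hits the cap (2,000,000)
    print(bonus(1_000_000, roa))   # ROA-indexed: 100,000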

Better supervision should also play a role. Game theory suggests that a supervisor's knowledge of banks' greater propensity to violate certain rules can help in adapting the monitoring strategy to lower the number of violations (Smojver, 2012). The importance of efficient monitoring was underlined by the failure of Northern Rock, which had three separate FSA Heads of Department supervising it in the three years leading up to its collapse (FSA, 2008). Crucially, supervisors should concentrate on the presence of a strong and independent CRO or CFO on the board, as evidence strongly points to sound risk management practices being a predictor of better bank performance (Ellul & Yerramilli, 2013).

Section IV: Conclusions

As asymmetric information necessitates the existence of corporate governance, four main mechanisms have been suggested to prevent managerial fraud: concentrated ownership, incentive contracts, takeovers and board oversight. These mechanisms function differently, or with less efficiency, in banking because of regulation and banks' tendency to take on excessive risk during booms. Inefficiencies could be reduced by measures such as changing the structure of bank liability by providing contingent debtholders with a weighted vote, limiting equity-based compensation through caps or indexation changes, and revising board oversight by reducing board sizes and strengthening risk executives. Regulators will need to work hard to ensure national regulations are consistent with each other to manage impacts on international competitiveness.

1998 words

Citations

Adams, R. B. & Ferreira, D., 2008. Regulatory Pressure and Bank Directors’ Incentives to

Attend Board Meetings. ECGI - Finance Working Papers, Volume 203.

Aspen Institute, 2009. Overcoming Short-termism: A Call for a More Responsible Approach to

Investment and Business Management, s.l.: s.n.

Avdjiev, S., Kartasheva, A. & Bogdanova, B., 2013. CoCos: a primer. BIS Quarterly Review.

Balkenborg, D., 2004. On extended liability in a model of adverse selection, s.l.: s.n.

Balogh, I., 2015. Crisis management, bank rehabilitation and NPL management: Hungarian and

Slovenian solutions in a regional context.

BCBS, 2004. Bank Failures in Mature Economies, Basel: Basel Commitee on Banking

Supervision, Working Papers.

Bebchuk, L. A. & Weisbach, M. S., 2009. The State of Corporate Governance Research. NBER

Working Papers, Volume 15537.

Becht, M., Bolton, P. & Röell, A., 2012. Why bank governance is different. Oxford Review of

Economic Policy, 27(3), pp. 437-463.

Bénabou, R. & Tirole, J., 2014. Bonus Culture: Competitive Pay, Screening, and Multitasking.

Princeton University William S. Dietrich II Economic Theory Center Research Paper, Volume

66.

Besley, T. & Ghatak, M., 2003. Incentives, choice and accountability in the provision of public services. Oxford Review of Economic Policy, 19(2).

Bolton, P., Mehran, H. & Shapiro, J. D., 2011. Executive Compensation and Risk Taking. FRB of New York Staff Report, Volume 456 .

Boyd, J. H. & Runkle, D. E., 1993. Size and performance of banking firms: testing the predictions of theory. Journal of Monetary Economics, Volume 31, pp. 47-67.

Chey, H.-k., 2015. International Harmonization of Financial Regulation? - The Politics of

Global Diffusion of the Basel Capital Accord. Tokyo: Routledge.

Claessens, S., Djankov, S. & Lang, L. H. P., 2000. The separation of ownership and control in

East Asian Corporations. Journal of Financial Economics, 58(1-2), pp. 81-112.

Crespí, R., García-Cestona, M. A. & Salas, V., 2004. Governance mechanisms in Spanish banks. Does ownership matter? Journal of Banking & Finance, 28(10), pp. 2311-2330.

de Haan, J. & Vlahu, R., 2015. Corporate governance of banks: a survey. Journal of Economic

Surveys, 00(0), pp. 1-50.

Diamond, D. W. & Dybvig, P. H., 1983. Bank Runs, Deposit Insurance, and Liquidity. The

Journal of Political Economy, 91(3), pp. 401-419.

Downs, A., 1957. An Economic Theory of Political Action in a Democracy. Journal of Political

Economy, 65(2), pp. 135-150.

Drumond, I., 2009. Bank capital requirements, business cycle fluctuations and the Basel Accords: a synthesis. Faculdade Economia Porto (FEP) Working Papers, Volume 277, pp. 798-830.

Ellul, A. & Yerramilli, V., 2013. Stronger Risk Controls, Lower Risk: Evidence from U.S. Bank

Holding Companies. The Journal of Finance, 48(5).

Fahlebrach, R. & Stulz, R. M., 2009. Bank CEO Incentives and the Credit Crisis. NBER Working

Paper Series, Volume 15212.

Farhi, E. & Tirole, J., 2012. Collective Moral Hazard, Maturity Mismatch, and Systemic

Bailouts. American Economic Review, 102(1), pp. 60-93.

Flannery, M. J., 1998. Using Market Information in Prudential Bank Supervision: A Review of the U.S. Empirical Experience.. Journal of Money, Credit, and Banking, 30(3), pp. 273-305.

FSA, 2008. The supervision of Northern Rock: a lessons learned review, s.l.: s.n.

FSA, 2012. The FSA's report into the failure of RBS, Westminster: House of Commons Treasury

Committee.

Haldane, A. G., 2011. Control rights and wrongs. London: Wincott Annual Memorial Lecture.

Haldane, A. G., 2015. Who owns a company?. Edinburgh: University of Edinburgh Corporate

Finance Conference, 22 May 2015.

Jensen, M. C. & Murphy, K. J., 1990. Performance Pay and Top-Management Incentives.

Journal of Political Economy, 98(2), pp. 225-264.

La Porta, R., Lopez-de-Silvanes, F., Shleifer, A. & Vishny, R., 2000. Investor protection and corporate governance. Journal of Financial Economics, Volume 58, pp. 3-27.

Laeven, L., 2013. Corporate Governance: What’s Special About Banks?.

Annual Review of

Financial Economics , Volume 5, pp. 63-92.

Linklaters, 2009. Final recommendations of Walker review published, s.l.: s.n.

Marnet, O., 2007. History repeats itself: The failure of rational choice. Critical Perspectives on

Accounting, Volume 18, p. 191–210.

Michler, A. F. & Thieme, H., 2013. Rehabilitation of Financial Sector - The case of German banks. Bancni Vestnik, 62(11), pp. 16-24.

Miller, G. & Babiarz, K. S., 2013. Pay-for-performance incentives in low- and middle-income country health programs. NBER Working Papers, Volume 18932.

Neal, D., 2011. The design of performance pay in education. NBER Working Papers, Volume

16710.

Ruback, R. S. & Jensen, M. C., 1983. The Market for Corporate Control: The Scientific

Evidence. Journal of Financial Economics, Volume 11, pp. 5-50.

Shleifer, A. & Vishny, R. W., 1997. A Survey of Corporate Governance. The Journal of

Finance, LII(2), pp. 737-783.

Smojver, S., 2012. Analysis of Banking Supervision via Inspection Game and Agent-Based

Modeling. Central European Conference on Information and Intelligent Systems, pp. 355-361.

Stimpert, J. L. & Laux, J. A., 2011. Does Size Matter? Economies Of Scale In The Banking

Industry. Journal of Business & Economics Research, 9(3).

Tirole, J., 2001. Corporate Governance. Econometrica, 69(1), pp. 1-35.

Walker, D., 2009. A review of corporate governance in UK banks and other financial industry entities: final recommendations, s.l.: s.n.

Xie, B., Davidson, W. N. & DaDalt, P. J., 2003. Earnings Management and Corporate

Governance: The Role of the Board and the Audit Committee. Journal of Corporate Finance,

9(3), p. 295–316.

Zingales, L., 1998. Corporate Governance. The New Palgrave Dictionary of Economics and the

Law.

 

THE IMPACT OF HOUSE PRICES ON FERTILITY DECISION

AND ITS VARIATION BASED ON POPULATION DENSITY

FRAN SILAVONG 1

B.Sc Economics

3rd year

University College London

Explore Econ Undergraduate Research Conference

March 2016

                                                                                                               

1 Special thanks to Prof. Aureo de Paula, Kieran Larkin, Parama Chaudhury and Prof. Frank Witte


1. Introduction

Numerous studies have explored the economic determinants of fertility, but only a few have examined the impact of house prices and population density on the fertility rate. Population ageing and rapid urbanisation are happening across the globe simultaneously, raising concerns over rising property prices and falling fertility rates. The causal relationship between the fertility rate and house prices, and whether that relationship varies with population density, is therefore crucial for policy analysis.

A wide variety of socioeconomic variables influence fertility rates, but the cost of housing is a significant part of the cost of children, according to the Expenditures on Children by Families Survey (2013). Fig 1 shows that the correlation between house prices and the fertility rate is higher than the correlation between the fertility rate and the unemployment rate, which indicates the importance of house prices for the demand for children. Dettling and Kearney (2013) used OLS regressions of fertility rates on MSA-level housing prices to argue that the income effect differs by homeownership: for homeowners, a rise in house prices results in an increase in birth rates through two channels, a traditional wealth effect and/or an equity extraction effect.

Figure 1: Fertility Rates and Macro Indicators

Source: Dettling and Kearney (2013)

Building on this hypothesis, my paper examines the impact of house prices on the fertility decision and its variation based on population density, using individual-level data from the British Household Panel Survey merged with regional house price indices. Fertility behaviour is analysed using neoclassical economic theory: the model originates from Becker (1960), and children are assumed to be normal goods.


Rapid urbanization also plays a role in the fertility decision-making process, as a negative relationship between population density and the fertility rate is observed. Wanamaker (2011) argued that industrialization distorted fertility behaviour in the 19th century, as it led to an increase in urbanization and in the cost of raising children. Population density is therefore later introduced into the empirical regression model to investigate how it alters fertility preferences and causes regional differences in the fertility rate.


2. Literature Review and Conceptual Framework

The commonly held view is that fertility behaviour can be analysed using neoclassical economic theory. The model originates from Becker (1960), which is also the foundation of this thesis: applying the theory of the consumer, Becker explained the observed negative correlation between income and fertility as the result of variation in household incomes and in the opportunity cost of children.

To estimate the effect of a determinant, children are assumed to be normal goods², an assumption supported by two leading findings: the quality-quantity trade-off model introduced by Becker (1960) and the theory of the allocation of time developed by Becker (1965).

Given that children are normal goods, obtaining an unbiased and consistent estimate of the influence of house prices on fertility requires the regression model to capture other exogenous variation in the fertility decision process. Following the Becker and the Cain and Weininger models³, the utility function is

U = U(n, q, s)

where n denotes the number of children, q the quality per child and s the standard of living of the household (parents). These variables are essential in the decision-making process, as they influence the cost and benefit of having a child; for example, the happiness children bring and the earnings parents receive from them depend heavily on the number and quality of children.

                                                                                                               

2   See Appendix 1  

3 Cain and Weininger (1973) used aggregated data for standard metropolitan statistical areas in 1960 and for cities in 1940 to estimate the effect of socioeconomic variables on fertility rates. Their findings suggest that female market wages have a significant negative impact on the fertility rate, while male income has a small but positive effect on fertility.


Households are also subject to a budget constraint: a function of household income, the price of goods and services devoted to children (such as childcare and education costs), the price of goods and services consumed by the household, substitutes for children, personal tastes and technological constraints. Thus, the demand for children can be written as

n = N(π_n, w, e, I; π_0, θ)

where π_n represents the own-price effect of fertility, w and e represent the wage and education level of the mother respectively, I represents household income, π_0 is a vector of all other prices that affect demand, and θ is a vector of other household-specific parental preferences and constraints.

Based on the household production model advanced by Becker (1965), the total effect of a change in the variables of the demand function can be decomposed into a pure income effect and a substitution effect. Building on Dettling and Kearney's (2013) findings, the impact of house prices can be decomposed in a similar way. The income effect does not necessarily mean an actual change in wealth or income; it can refer to the perceived change in wealth for homeowners, and it can also act as an increase in home equity for credit-constrained homeowners. The income effect is therefore the combination of a 'home equity effect' and a 'wealth effect', which from now on will be referred to as the home equity effect. The substitution effect refers to the need for a bigger house following the addition of a child, and to the other goods and services that could be consumed instead when house prices change.

Fig 2 demonstrates how a change in house prices affects the fertility decision using a simple consumption model. For simplicity, assume the increase in house prices only affects child services, through the need for a bigger house for an additional child, and does not affect the parents' own consumption. The price effect of such a change leads to a movement from point a to point c, resulting in a fall in the probability of deciding to have a child. This price effect applies to both homeowners and renters. For homeowners, however, house prices affect the fertility decision through one more channel: the home equity effect. The perceived increase in housing wealth gives individuals an incentive to have a child, as they can now afford both higher quality and a higher quantity of children; in the diagram, the optimal point moves from c to d. The change in child services remains ambiguous, as it depends on which effect dominates. This paper therefore aims to determine whether the home equity effect or the substitution effect dominates, using individual-level data from the United Kingdom.


Figure 2: Effect of an increase in house price

Lovenheim and Mumford (2011) support the wealth effect on the fertility rate: their results show that a $100,000 increase in housing wealth among homeowners leads to a 16-18 percent increase in the probability of having a child. In terms of the overall effect of changes in house prices, the empirical results of Dettling and Kearney (2013) suggest that the income effect dominates for homeowners: their IV estimates imply that a $10,000 increase in home prices leads to a 5 percent increase in fertility rates among owners and a 2.4 percent decrease among non-owners⁴.

Rapid urbanization has altered fertility preferences in high population density regions and distorted the impact of house prices on fertility. Building on the studies summarised below, the hypothesis of this paper is that high population density regions tend to attract individuals and households with a stronger preference for work, who are hence less mobile; opportunity costs such as higher education levels carry more weight in high-density regions, lowering the probability of having a child. Individuals in low-density regions tend to have a more elastic fertility demand with respect to house prices, since they have a weaker preference for high-wage jobs and are more willing to move away from regions with a high cost of living. An increase in house prices therefore produces a larger negative price effect on the fertility decision for both renters and homeowners in low-density regions. To determine the overall sign of a change in house prices for homeowners, the relative magnitudes of the home equity effect and the price effect are crucial, so empirical estimates are needed to establish them.

                                                                                                               

4   See Appendix 2 for details  


Table 1: Summary of the related studies discussed below (Wanamaker, 2011; Sato, 2006)

Such distortion is similar to the way industrialisation caused the 19th century fertility decline. Wanamaker (2011) addressed the question of increased urbanization and the costs of raising children, including the opportunity cost of female time. Of the five mechanisms described in that paper through which industrialisation might have distorted fertility outcomes, two could be used today to explain the regional variation in fertility rates: (1) the movement towards centralised production, together with more restrictive child labour laws, reduced the economic return to children and thus lowered parental demand and the fertility rate; (2) industrialisation was associated with increased urbanization, which may have raised the cost of raising children through higher housing and food costs without an associated increase in benefit.

Sato (2006) analyses the relationship among economic geography, fertility and migration using a two-period overlapping generations model of endogenous fertility that incorporates n regions, agglomeration economies and congestion diseconomies. The argument behind the negative relationship between population density and the fertility rate is that agglomerated regions offer higher wages, so the opportunity cost of childbearing is higher.


3. Data and Methodology

The main empirical approach of this paper is to identify the causal relationship between house prices and fertility. Controlling for the prices of other goods and services associated with childbearing, fathers' income, women's education level and other time-varying and regional fixed effects reduces bias in the regression estimators and lets them better represent the causal relationship.

3.1 Data

Data on household characteristics and births come from combining individual-level and household-level data from the British Household Panel Survey (BHPS)⁵. To avoid the housing market bust periods of 1990-1996 and 2007-2010, the sample comprises women aged 20-44 observed in the BHPS between 1997 and 2007. Data on house prices come from the Land Registry House Price Index background tables. The variables used in this paper are summarised in Table 2. The variables should be measured in the year of conception⁶ rather than the year of birth, since the lagged values are what determined the decision to have a child. All income and price variables are deflated by the consumer price index (CPI).
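For concreteness, here is a minimal sketch of the data preparation just described, merging BHPS woman-year records with the regional house price index and deflating by the CPI; the file and column names are hypothetical placeholders, not the actual BHPS variable names.

import pandas as pd

# Hypothetical inputs: a woman-year extract from the BHPS, the Land Registry
# regional index (region, year, house_price) and an annual CPI series.
bhps = pd.read_csv("bhps_women_20_44.csv")
hpi = pd.read_csv("land_registry_regional_hpi.csv")
cpi = pd.read_csv("cpi_annual.csv")

# Attach the regional house price and the CPI to each woman-year observation.
panel = (bhps.merge(hpi, on=["region", "year"], how="left")
             .merge(cpi, on="year", how="left"))

# Deflate nominal prices and incomes by the CPI (base year = 100).
for col in ["house_price", "female_income", "partner_income"]:
    panel[col] = panel[col] * 100.0 / panel["cpi"]

# The regressors are later lagged one year so that they refer to the year of
# conception rather than the year of birth (see footnote 6).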

In Dettling and Kearney's (2013) paper, a 5 percent sample of the 1990 decennial census was used, and applying a crosswalk procedure to the sample enabled them to map out the ownership rate for each MSA group, although that variable is time-invariant. Unlike their paper, there are no data restrictions on homeownership status here, which allows the impact of house prices on homeowners, and the wealth and home equity effects, to be estimated more directly.

                                                                                                               

5   It is a long-running panel study that follows the same representative sample of individuals over a period of years. The original sample group began in 1991 and additional samples of households were added over the years.

 

6 Year of conception is approximated as the year of the interview minus the age of the youngest child, minus one year.

 


Table 2: Summary of variables

Variable | Source | Description
House Price | Land Registry House Prices | Average regional home prices, seasonally adjusted (SA), from January 1995
Ownership Status | BHPS | Homeownership status of household i: equals one if household i owns its house and zero if the house is rented
Female Income | BHPS | Annual female income
Partners income | BHPS | Weekly income
Whether has a partner or not | BHPS | Equals one if the individual has a partner and zero otherwise
Education Level | BHPS | Equals one if the individual holds a higher degree, first degree or teaching qualification, and zero otherwise
Regions | BHPS | The sample consists of 16 regions: Greater London, South East, South West, East, East Midlands, West Midlands, Greater Manchester, Merseyside, North West, South Yorkshire, West Yorkshire, Yorks and Humber, Tyne and Wear, Rest of North and Wales; individuals can only live in one of the regions

3.2 Empirical Specification

Conditional (fixed-effects) logistic regressions at the individual level are used as the empirical foundation for identifying the relationship. Using individual-level data raises concerns over whether the regressors are endogenously determined with the childbirth outcome. The use of conditional logistic regression removes individual preferences and fixed effects, as the likelihood is maximised relative to each group, i.e. each individual over time.


The fixed-effects logit model is as follows:

Pr(Child_itm = 1 | X_i(t-1)m) = F(α + β′X_i(t-1)m)

F(α + β′X_i(t-1)m) = exp(α + β′X_i(t-1)m) / [1 + exp(α + β′X_i(t-1)m)]

where F is the cumulative logistic distribution and i denotes the independent individuals interviewed over time t. Child_itm is a binary variable equal to one if the woman in household i, in time period t and region m, has a child, and zero otherwise. In this paper, only the first child is considered.

β′X_i(t-1)m = β1·HousePrice_m(t-1) + β2·(HousePrice × OwnStatus)_i(t-1)m + β3·OwnStatus_i(t-1) + β4·FemaleAnnualIncome_i(t-1) + β5·PartnerWeeklyIncome_i(t-1) + β6·Spouse_i(t-1) + β7·(PartnerWeeklyIncome × Spouse)_i(t-1) + β8·EducationLevel_i(t-1) + ε_itm

Since we are interested in the causal relationship, it is important to control for other variables that could affect the fertility decision in the year of conception. Thus, annual female income, women's education level and the spouse/partner's weekly income are included in the vector X_i(t-1)m alongside the housing price and homeownership status, as shown above. 2,847 groups in the sample were dropped because of all-positive or all-negative outcomes, resulting in a total of 5,979 observations.
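As an illustration of the estimation step (not the author's actual code), a specification of this form could be fitted with statsmodels' conditional logit, which maximises the likelihood within each individual's group of observations; the column names below are hypothetical and the interactions are built after lagging.

import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

df = pd.read_csv("bhps_merged.csv")   # hypothetical woman-year panel (see Section 3.1)

# Lag the regressors so they refer to the year of conception (t-1).
base = ["house_price", "own_status", "female_income",
        "partner_income", "has_partner", "education_level"]
df[base] = df.groupby("pid")[base].shift(1)
df["price_x_own"] = df["house_price"] * df["own_status"]
df["partner_income_x_partner"] = df["partner_income"] * df["has_partner"]
df = df.dropna(subset=base + ["first_birth"])

X = df[["house_price", "price_x_own", "own_status", "female_income",
        "partner_income", "has_partner", "partner_income_x_partner",
        "education_level"]]

# Women whose outcome never varies over the panel contribute nothing to the
# conditional likelihood, which is why whole groups drop out of the sample.
model = ConditionalLogit(df["first_birth"], X, groups=df["pid"])
result = model.fit()
print(result.summary())   # odds ratios are the exponentials of these coefficients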

The coefficients in the logit model are interpreted as follows, where the odds ratio is equal to the exponential of the actual coefficient:

Table 3: Interpretation of the coefficients

Impact of | Odds ratio is the exponential of
House price, for renters | β1
House price, for homeowners | β1 + β2
Women's income | β4
Women's education level | β8
Spouse/partner's income, for non-single mothers | β5 + β7
Having a spouse/partner | β6


To estimate the effect of population density, the same logistic model was estimated on two different samples: high population density⁷ and low population density regions. Coefficients are interpreted in the same way as in the initial logistic regression.

Separating the sample into two groups gives less biased estimators than including adjusted population density as a regressor, since density correlates with the income variables, house prices and education level. The resulting estimates can then be used to explain the regional variation in the fertility rate and the differences in the impact of house prices for both owners and non-owners.

                                                                                                               

7 Based on the adjusted population density (with the 0-16 age group removed from the population count), the high population density regions are Greater London, West Midlands, Greater Manchester, Merseyside, South Yorkshire, West Yorkshire, and Tyne and Wear; the remaining regions are grouped as the low population density regions.


4. Empirical Analysis

4.1 Impact of house price on fertility decision

Table 4 presents the conditional logistic regression results, where each column represents a separate regression with a different set of variables, and Table 5 shows the corresponding odds ratios. In column 1, only house prices and ownership status are used, to establish a baseline against which the change in the estimated impact of house prices from including other variables (i.e. upward or downward bias) can be compared.

Column 2 adds the female income and partner's income variables, and all the estimates are statistically significant. The odds ratio is 0.9999764 for β1 and 1.000026 for β2, i.e. if house prices increase by £1,000, the odds of having a child fall by about 2.36% for renters but rise by about (2.6 - 2.36) = 0.24% for homeowners. This implies a negative price effect of house prices of 2.36% and a home equity effect of 2.6%.

Changes in house prices affect renters only through the price effect, but affect homeowners through two channels: the home equity effect and the price effect. In this case the home equity effect dominates, as it exceeds the price effect by 0.24%, so an increase in house prices exerts a positive effect on homeowners' fertility decision. The positive coefficient on female income indicates an overall positive effect on the fertility decision, i.e. income effects dominate: β4 has an odds ratio of 1.00003, meaning that if female annual income increases by £1,000, the odds of having a child rise by about 3%. House prices change by larger amounts than female annual income and fluctuate more, so under normal circumstances an increase in house prices will cancel out the positive effect of an increase in female income. Whether the individual has a spouse or partner plays an important role in the fertility decision, as its odds ratio of 1.01e+10 shows: having a spouse multiplies the odds by 1.01e+10. The income level of these spouses/partners also has a significant impact, with each additional pound of weekly income multiplying the odds by about 40 (an increase of 39.01).


Because childbearing is a time-consuming activity, a higher education level raises the opportunity cost of leaving the labour force, and education level and income are positively correlated. In columns 3 and 4, the mother's education level is therefore used instead to represent the opportunity cost in the fertility decision process, while column 5 includes female income, partner's income and female education level together. If an individual has a high education background, the odds of having a child are reduced by 87.8%. The results from columns 3 to 5 regarding house prices continue to support the finding that the income effect dominates among owners and the price effect dominates among renters.

Table 4: Conditional logit coefficients (standard errors in parentheses)

Variable | (1) | (2) | (3) | (4) | (5)
House Price | -7.42e-06 (2.26e-06) | -0.0000236 (2.58e-06) | -0.0000112 (2.34e-06) | -0.0000258 (2.63e-06) | -0.0000272 (2.67e-06)
House Price * Ownership Status | 0.0000222 (2.61e-06) | 0.0000264 (2.81e-06) | 0.0000233 (2.66e-06) | 0.0000273 (2.86e-06) | 0.0000274 (2.87e-06)
Ownership Status | -0.7701265 (0.2243) | -1.428549 (0.2514) | -0.9240886 (0.2300031) | -1.572567 (0.2565555) | -1.589693 (0.2579674)
Female Income | | 0.00003 (7.32e-06) | | | 0.0000297 (7.34e-06)
Partners income | | -3.688168 (0.363678) | | -3.671248 (0.3669266) | -3.508275 (0.3708973)
Whether has a partner or not | | 23.03805 (2.188809) | | 23.02988 (2.207935) | 22.05602 (2.230712)
Ipinc (Interaction Term) | | 3.68915 (0.3636819) | | 3.672181 (0.3669311) | 0.0000297 (7.34e-06)
Education Level | | | -2.108204 (0.2203855) | -2.099981 (0.2330146) | -2.094265 (0.2338242)
Number of Observations | 5979 | 5979 | 5979 | 5979 | 5979
R2 | 0.0388 | 0.1589 | 0.0640 | 0.1764 | 0.1804


Table 5: Odds ratios (standard errors in parentheses)

Variable | (1) | (2) | (3) | (4) | (5)
House Price | 0.9999926 (2.26e-06) | 0.9999764 (2.58e-06) | 0.9999888 (2.34e-06) | 0.9999742 (2.63e-06) | 0.9999728 (2.67e-06)
House Price * Ownership Status | 1.000022 (2.61e-06) | 1.000026 (2.81e-06) | 1.000023 (2.66e-06) | 1.000027 (2.86e-06) | 1.000027 (2.87e-06)
Ownership Status | 0.4629545 (0.1038608) | 0.2396565 (0.0602424) | 0.396893 (0.0912866) | 0.2075118 (0.0532383) | 0.2039883 (0.0526223)
Female Income | | 1.00003 (7.32e-06) | | | 1.00003 (7.34e-06)
Partners income | | 0.0250178 (0.0090984) | | 0.0254447 (0.0093363) | 0.0299485 (0.0111078)
Whether has a partner or not | | 1.01e+10 (2.22e+10) | | 1.00e+10 (2.22e+10) | 0.0299485 (0.0111078)
Ipinc (Interaction Term) | | 40.01082 (14.55121) | | 39.33761 (14.43419) | 33.42264 (12.39649)
Education Level | | | 0.1214559 (0.0267671) | 0.1224587 (0.0285347) | 0.1231608 (0.028798)
Number of Observations | 5979 | 5979 | 5979 | 5979 | 5979
R2 | 0.0388 | 0.1589 | 0.0640 | 0.1764 | 0.1804

The correlation between ownership status and the income variables in the model might produce a false impression that the home equity effect dominates the price effect. Table 6 therefore shows the results from running two separate regressions for homeowners and renters respectively, using the same logistic model but excluding the house price * ownership status and ownership status variables. The estimates in Table 6 show the same pattern observed in Table 4, which confirms that the home equity effect dominates.


Table 6: Separate regressions for homeowners and renters (standard errors in parentheses; * p = 0.015, ** p = 0.013)

Homeowners | (1) | (2) | (3) | (4)
House Price | 0.0000167 (1.89e-06) | 6.28e-06 (2.06e-06) | 8.70e-06 (2.04e-06) | 5.57e-06 (2.09e-06)
Female Income | 0.0000186* (7.62e-06) | | | 0.0000189** (7.65e-06)
Partners income | -3.401612 (0.5241262) | | -3.291265 (0.5334735) | -3.1688 (0.5361469)
Whether has a partner or not | 21.41448 (3.150082) | | 20.87837 (3.200628) | 20.14573 (3.216069)
Ipinc (Interaction Term) | 3.402665 (0.52413) | | 3.292275 (0.5334783) | 3.169824 (0.536151)
Education Level | | -2.706246 (0.353712) | -2.677684 (0.3669571) | -2.676141 (0.3673847)
Number of Observations | 4028 | 4028 | 4028 | 4028
R2 | 0.1616 | 0.0727 | 0.1861 | 0.1804

Renters | (1) | (2) | (3) | (4)
House Price | -0.0000379 (3.80e-06) | -0.0000231 (3.01e-06) | -0.0000365 (3.68e-06) | -0.000041 (3.98e-06)
Female Income | 0.0000902 (0.0000233) | | | 0.0000887 (0.0000231)
Partners income | -3.975729 (0.6150158) | | -4.343241 (0.6150071) | -3.927732 (0.627075)
Whether has a partner or not | 24.829 (3.721888) | | 27.04916 (3.724991) | 24.59026 (3.793266)
Ipinc (Interaction Term) | 3.97624 (0.6150347) | | 4.343721 (0.615027) | 3.928243 (0.6270933)
Education Level | | -1.659656 (0.3860994) | -1.695237 (0.4108306) | -1.703291 (0.414954)
Number of Observations | 1328 | 1328 | 1328 | 1328
R2 | 0.1669 | 0.0748 | 0.1676 | 0.1852

4.2 Variation based on population density

As discussed in Section 2, the formation of agglomeration economies and congestion diseconomies is closely associated with population density and causes regional differences in the properties of fertility demand. To examine how the impact of house prices varies with population density, the same logistic model was estimated on two separate samples: high population density regions and low population density regions.


Table 7: Regression results by population density (standard errors in parentheses)

Variable | High Population Density | Low Population Density
House Price | -8.92e-06 (4.33e-06) | -0.000039 (3.54e-06)
House Price * Ownership | 0.0000216 (4.60e-06) | 0.0000374 (2.81e-06)
Ownership Status | -1.864742 (0.5525972) | -1.942316 (0.3164219)
Partners income | -5.18085 (0.9172823) | -3.52495 (0.421296)
Whether has a partner or not | 32.43272 (5.513249) | 21.87343 (2.534131)
Ipinc (Interaction Term) | 5.181352 (0.9172886) | 3.52612 (0.4213039)
Education Level (dummy for high education) | -2.585942 (0.5749972) | -2.070871 (0.2774392)

Focusing on the set of regression variables used in column 4 of Table 4 as the main point of interest, the results are presented in Table 7. The coefficient on house price is smaller in absolute terms in high population density regions than in low-density regions, i.e. renters are more affected by a change in house prices in low-density areas. This indicates that renters' fertility demand is less elastic with respect to house prices in high-density areas than in low-density areas. For homeowners, the impacts of a £1,000 increase in house prices are shown in Table 8 below.

Table 8: Impact of a £1,000 increase in house prices on homeowners, by population density

Region | Income (home equity) effect | Substitution (price) effect | Overall effect
High Density | 2.183% | -0.886% | 1.297%
Low Density | 3.81% | -3.825% | -0.015%

For homeowners, the income effect is higher in low-density regions, but it is difficult to determine whether this variation is caused by differences in the income elasticity of fertility demand between the two sets of regions, because a change in partner's income has a larger impact on the fertility decision in high-density regions and the income effect of a change in house prices works in a similar way to a change in partner's income. The question of whether the income/home equity elasticity of fertility demand is more inelastic in high-density regions is therefore inconclusive. However, it is clear that, unlike in high-density areas, the substitution effect dominates in low population density areas. Since the price/substitution effect applies to both renters and homeowners, the lower house price elasticity of fertility demand in high-density areas results in a positive overall effect there.
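The Table 8 figures appear to follow from the Table 7 coefficients as approximate odds changes per £1,000, with the overall effect reported as the sum of the two components; writing β1 for the house price coefficient and β2 for the house price * ownership coefficient, the decomposition would be:

\begin{align*}
\text{Substitution (price) effect} &= e^{1000\beta_1} - 1,\\
\text{Income (home equity) effect} &= e^{1000\beta_2} - 1,\\
\text{Overall effect for homeowners} &\approx (e^{1000\beta_2} - 1) + (e^{1000\beta_1} - 1).
\end{align*}

For the low-density sample, for example, exp(0.0374) - 1 ≈ 3.81% and exp(-0.039) - 1 ≈ -3.83%, giving an overall effect of roughly -0.015%.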


Moreover, the effect of a high female education level is larger in high-density regions. This result is consistent with the argument regarding agglomeration economies and congestion diseconomies that women in high-density regions have a stronger preference for work than those in low-density regions. The empirical results above therefore support the hypothesis set out in Section 2 that the regional variation in the impact of house prices is caused by differences in the elasticity of fertility demand.


5. Conclusion

Fixed-effects logistic regressions at the individual level are used as the empirical foundation for identifying the causal relationship between house prices and the fertility decision. The results suggest that if house prices increase by £1,000, the odds of having a child are reduced by 2.36% for renters but increased by 0.24% for homeowners. The estimates of the price effect and the home equity effect are statistically significant, and the pattern remains the same when different sets of variables are used. It is therefore reasonable to conclude that rising property prices have contributed to the decline in the fertility rate. The results show that the home equity effect dominates for homeowners and the price effect dominates for renters, which is consistent with the findings of Lovenheim and Mumford (2011) and Dettling and Kearney (2013).

A significant variation in the magnitude of the price effect is observed when the sample is separated into two groups based on population density. In low-density regions the price effect is much larger than in high-density regions, giving homeowners a negative overall effect: a £1,000 increase in house prices leads to a 0.015% decline in the odds of having a child. The reason behind this outcome is the difference in the elasticity of fertility demand: individuals living in low-density regions are more sensitive to changes in house prices, causing the price effect to dominate for both renters and homeowners. These findings are consistent with the formation of agglomeration economies and congestion diseconomies suggested by Sato (2006).

The density-based variation found in this paper has profound policy implications. To encourage individuals to have children, governments usually intervene in the housing market and limit increases in house prices, since such measures benefit both homeowners and renters. In high-density areas such as London and Manchester, however, the relatively inelastic fertility demand will reduce the effectiveness of such a policy; there, governments should instead encourage individuals to become homeowners, for example by increasing the supply of social housing, which would have a positive and more effective impact on the fertility decision. In low-density regions, rather than encouraging homeownership, governments should focus on limiting changes in house prices, since the price effect dominates, or on other measures such as tax cuts. In conclusion, since population density and house prices are shown to be a crucial part of the demand for children, they should also be considered as economic determinants of fertility in future studies.


6. References

[1]: Yasuhiro Sato, Economic geography, fertility and migration, 2006
[2]: Lisa J. Dettling and Melissa S. Kearney, House Prices and Birth Rates: The impact of the real estate market on the decision to have a baby, 2013
[3]: Alan Greenspan and James Kennedy, Sources and Uses of Equity Extracted from Homes, 2007
[4]: Guy Laroque and Bernard Salanie, Does fertility respond to financial incentives?, 2008
[5]: Glen G. Cain and Adriana Weininger, Economic Determinants of Fertility: Results from cross-sectional aggregate data, 1973
[6]: Masakatsu Mizuno and Akira Yakita, Elderly labour supply and fertility decisions in aging-population economies, 2013
[7]: Eddie C. M. Hui, Xian Zheng and Jiang Hu, Housing price, elderly dependency and fertility behaviour, 2011
[8]: V. Joseph Hotz, Jacob Alex Klerman and Robert J. Willis, The economics of fertility in developed countries
[9]: S. K. Happel, J. K. Hill and S. A. Low, An economic analysis of the timing of childbirth, 1984
[10]: Michael F. Lovenheim and Kevin J. Mumford, Do Family Wealth Shocks Affect Fertility Choices? Evidence from the Housing Market, 2011
[11]: Dan A. Black, Natalia Kolesnikova, Seth G. Sanders and Lowell J. Taylor, Are Children “Normal”?, 2011
[12]: Gary S. Becker, An Economic Analysis of Fertility, 1960
[13]: Gary S. Becker, A Theory of the Allocation of Time, 1965
[14]: Mark Lino, Expenditures on Children by Families, 2013
[15]: T. Paul Schultz, The fertility transition: economic explanations, 2001
[16]: Marianne H. Wanamaker, Industrialization and the 19th Century American Fertility Decline: Evidence from South Carolina
[17]: E. Mörk, A. Sjögren and H. Svaleryd, Childcare costs and the demand for children: evidence from a nationwide reform, 2011


7. Appendix

Appendix 1

The negative correlation between income and fertility observed in both time-series and cross-sectional data raises concerns over whether children are normal goods or inferior goods. There are two leading hypotheses supporting children as normal goods. First, the quality-quantity trade-off model introduced by Becker (1960) suggests that an increase in income leads households to substitute away from the number of children towards quality per child, given the assumption that the income elasticity of demand for quality exceeds the income elasticity of demand for the number of children; this assumption is considered empirically plausible. A recent paper by Black et al. (2011) used cross-sectional data on non-Hispanic white married couples in the U.S. to examine Becker's (1960) model, and comparisons of similarly educated women living in similarly expensive locations show that completed fertility is positively correlated with the husband's income. Second, the theory of the allocation of time by Becker (1965) addresses the empirical puzzle by associating higher income with the higher cost of female time experienced by high-income households: assuming childbearing is a time-intensive activity, the opportunity cost of children rises with income, leading to a substitution effect against children, which can explain the observed negative correlation even if children are normal goods.


Appendix 2

Lovenheim and Mumford (2011) support the wealth effect on fertility. Using data from the Panel Study of Income Dynamics, they show that a $100,000 increase in housing wealth among homeowners leads to a 16-18 percent increase in the probability of having a child. In terms of the overall effect of changes in house prices, the empirical results of Dettling and Kearney's (2013) paper suggest that the income effect dominates for homeowners. They investigated the impact of the real estate market on the decision to have a baby. Their OLS regressions of fertility rates on MSA-level housing prices suggest that an increase in house prices has a negative price effect on the fertility rate in the current period, while the sign of the income effect depends on whether the household owns its home: for homeowners, higher prices can raise birth rates through two channels, a traditional wealth effect and/or an equity extraction effect. Their empirical analysis controls for time-varying economic conditions, fertility timing decisions, the local area unemployment rate and measures of local wages, and adds an instrumental variable, housing supply elasticity, to exploit exogenous variation in house price movements. The IV estimates imply that a $10,000 increase in home prices leads to a 5 percent increase in fertility rates among owners and a 2.4 percent decrease among non-owners. Moreover, changes in house prices exert a larger effect on current-period birth rates than changes in unemployment rates.

[WORD COUNT: 2800 words]

HOW TO MEASURE THE ECONOMIC IMPACT OF UCL'S NEWHAM CAMPUS ON THE LOCAL REGION

Fangzhou Xu

B.Sc Economics

3rd year

University College London

Explore Econ Undergraduate Research Conference

March 2016

Introduction

2018 will be remembered as the year when the world saw the FIFA World Cup hosted by Russia, the establishment of a unified African Central Bank, and the first ever Spike S-512 supersonic jet take its first commercial flight over the Atlantic, from London to New York, in just 3.5 hours. But more significantly, 2018 will mark the birth of “UCL East”, the new campus based in the London Borough of Newham, envisaged as a radical new model of how a university campus can be embedded in the local community and businesses, as well as providing world-leading research, education, entrepreneurship and innovation. (UCL News, 2014)

The purposes of this paper are to share a brief history of UCL East's development so far, to describe the common methodological approaches to economic impact studies, to identify the key lessons learnt from the methodological pitfalls of previous studies, and ultimately to derive a more advanced and tailored model that measures the economic impact of UCL East, the largest single expansion of UCL since its founding nearly 200 years ago.

A Note from the Author

This paper is a condensed version of my current thesis on this topic. Unlike the full thesis, it neither includes sufficient examples for particular observations nor explains many concepts in depth, owing to the word limit. Nevertheless, it provides a thorough overview of the core methodology of economic impact studies, the RIMS II multiplier, and the ways I have suggested the model be improved to suit UCL's Newham campus.

Brief Background of UCL Newham Campus

UCL's initial interest in the Borough of Newham dates from 2011, with negotiations over a possible new campus in the area following in the subsequent months. By mid-2013, the £1bn plan was axed owing to fierce opposition from the Carpenters Against Regeneration Plan (CARP) campaigners, and no commercial agreement could be reached (York, 2013). Pace and momentum picked up again at the end of 2014, this time with heavy-duty backing from the government as part of its regeneration strategy, Olympicopolis, which focuses on rebranding and globally positioning the site in a symbolic relationship to national economic and imaging priorities (Melhuish, 2015). The government's support also comes with generous funding of £141 million for the new cultural and education quarter, of which a significant proportion will be awarded to UCL. In the words of UCL's Provost, Michael Arthur:

“The development represents one of the most important moments in UCL’s history and will, for the first time since the development of the Bloomsbury campus, allow us to consider how best to plan a university fit for future generations of our community”. (Arthur, 2014)

However, since construction work began on the Carpenters Estate, the new campus development has faced continuing challenges and controversies. In particular, there is a clear divide between the stated aims of Newham Council and UCL and the impacts of the regeneration process on residents of the Carpenters Estate (Frediani et al, 2013). This makes it even more important to conduct an economic impact study that justifies the changes and redevelopments to the local region.

Common Methodologies Used in Economic Impact Studies:

There are three major approaches commonly found in the literature for examining and assessing the impacts of universities on regional economic development: regional input-output model systems, production-function estimations, and cross-sectional or quasi-experimental designs. The first is the most mainstream methodology in impact studies of individual universities; the latter two are more commonly seen at a larger spatial scale (the national level), studying the aggregate economic impact of all universities in a country. The most appropriate model for studying the economic impact of UCL's Newham Campus is the regional input-output model system.

But before we explore each of the three methodologies any further, it is crucial to define the boundaries of the “local region”. Two principles govern the choice of geographic boundaries: first, the boundaries should fit the purpose of the study; second, the area should be consistent throughout the investigation (J.J. Siegfried et al, 2007). Taking the two principles into account, we define the “local region” as the London Borough of Newham for the rest of this paper.

i) Regional Input-Output Model System

The most popular regional economic model used to estimate the local impact of new expenditures in an area is the Regional Input-Output Modelling System (RIMS II), developed by the US Bureau of Economic Analysis. RIMS II can objectively assess the potential economic impacts of a development by calculating multipliers, which are used in economic impact studies to estimate the total impact of a project on a region (BEA RIMS II multiplier). The basic framework can be expressed in the following equation:

X_i = z_i1 + z_i2 + z_i3 + ... + z_in + Y_i

Input-output accounts organize producers into n industries. Each industry i produces gross output X_i (measured in dollars). This output is sold to industries j as intermediate inputs, z_ij, or to final users, Y_i. (ibid) Some of these inputs include: direct employment and payroll, less federal taxes; expenditures for equipment, supplies and services; construction costs; spending in the local community by faculty members, administrative staff and students; public and private support of research grants and contracts; tuition and fees paid by students from outside the local area and by local students who would alternatively have attended college elsewhere; and expenditures by visitors, including alumni, who visit the campus for academic and/or athletic events. (J.J. Siegfried et al, 2007)

The above equation assumes that production takes place under strict linear conditions. A set of relationships called “technical coefficients”, a_ij, is defined as:

a_ij = z_ij / X_j

Each coefficient shows how much of industry i's output is needed to produce a dollar of output in industry j. These coefficients show how I-O models assume that industries always use the same proportions of inputs to produce output. (BEA RIMS II multiplier)
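To make the mechanics concrete, here is a minimal sketch of how total-requirements multipliers fall out of a matrix of technical coefficients via the Leontief inverse; the three-industry matrix is invented purely for illustration and is not taken from any RIMS II table.

import numpy as np

# Invented technical coefficients a_ij = z_ij / X_j for three stylised industries.
A = np.array([
    [0.10, 0.05, 0.20],   # construction inputs used per pound of each industry's output
    [0.15, 0.10, 0.05],   # services
    [0.05, 0.20, 0.10],   # education
])

# Total (direct + indirect) requirements: the Leontief inverse (I - A)^-1.
total_requirements = np.linalg.inv(np.eye(3) - A)

# Column sums give simple output multipliers (around 1.5 for this toy matrix).
output_multipliers = total_requirements.sum(axis=0)

# Total output change from a final-demand shock, e.g. £m of new campus spending.
final_demand_shock = np.array([50.0, 10.0, 100.0])
total_output_change = total_requirements @ final_demand_shock

print(output_multipliers)
print(total_output_change)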

The RIMS II also imposes the following six assumptions:

Backward linkages

Fixed purchase patterns

Industry homogeneity

No supply constraints

No regional feedback

No time dimension

ii) Production-Function Model

The latest econometric research in the area of knowledge creation is configured by the Griliches-Jaffe production-function model, which uses a measure of innovation, such as patents or new product introductions, as the dependent variable, with industry and university R&D expenditures as the two independent variables (Jaffe, 1989). The equation can be expressed as follows:

ln(P) = α_0 + α_I ln(RD_I) + α_U ln(RD_U) + ε

where
P = measure of innovation
RD_I = industrial R&D
RD_U = university R&D
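A hedged sketch of estimating a log-log specification of this kind is shown below; the data are simulated and the coefficients are placeholders, so it illustrates only the regression form, not any study's results.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Simulated R&D expenditures and an innovation measure generated from them.
rd_industry = rng.lognormal(mean=3.0, sigma=1.0, size=n)
rd_university = rng.lognormal(mean=2.0, sigma=1.0, size=n)
log_patents = (0.5 + 0.6 * np.log(rd_industry) + 0.2 * np.log(rd_university)
               + rng.normal(scale=0.3, size=n))

# ln(P) = a0 + aI ln(RD_I) + aU ln(RD_U) + e, estimated by OLS.
X = sm.add_constant(np.column_stack([np.log(rd_industry), np.log(rd_university)]))
result = sm.OLS(log_patents, X).fit()
print(result.params)   # intercept and the two R&D elasticities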

The aim of this model is to examine the effects of university research on corporate patents, particularly in pharmaceuticals, optics and electronics. Although the results of knowledge-production studies are more easily generalizable than single-university studies, the Griliches-Jaffe model concentrates on the technological innovation outputs of research universities and neglects other ways in which universities contribute to regional economic development. (Goldstein & Drucker, 2007)

iii) Cross-Sectional and Quasi-Experimental Research Designs

Cross-sectional analysis is a regression-based approach that analyses the empirical relationships between variables at one point in time. In the context of economic impact studies, it has been used, along with the US Census Bureau's Longitudinal Establishment & Enterprise Microdata, to demonstrate that university research does spur the creation of new firms (Kirchhoff et al, 2002).

Quasi-experimental designs mimic the conditions of true experiments in field settings (Shadish et al, 2002) and have only been used explicitly in one economic impact study. That study contrasts the growth rates of average wages between two time periods across the United States. The results suggest that universities have a significant influence on regional economic development, especially in the later period, when entrepreneurial activity in universities had become widespread. (Goldstein & Renault, 2004)

On the whole, the RIMS II multiplier takes a more holistic approach to measuring economic impact than the other two methodologies. In order to assess the true merits of a particular model, it is essential to consider all the contributing measurement factors. Each of these methods presents its own advantages and drawbacks, and each continues to be commonly used to investigate the impacts of universities. (Goldstein & Drucker, 2007)

8 Lessons to Improve the RIMS II Multiplier

A review of the past literature on economic impact studies makes clear that these historical methodologies still have many pitfalls. The following 8 lessons are key takeaways from case studies across the literature. They are missing factors that are critical to conducting a better-optimised economic impact study, and they should therefore be explicitly included in future applications of the RIMS II multiplier.

Lesson 1: Counterfactual valuation

To measure economic impact more accurately, it is crucial to contrast the effect on the region under the status quo with the counterfactual scenario, i.e. the economic impact of the original Carpenters Estate on the region. These comparisons should include the differences in externalities, welfare effects, multipliers and expenditures.

In historical cases, the counterfactual scenario has proven too hard to estimate, since many of the universities studied have been established for decades if not centuries. It is therefore a huge challenge to make explicit, defensible assumptions about the alternative state of the world, and many studies have opted not to include counterfactual comparisons. (Beck et al, 1995)

The counterfactual valuation for UCL's Newham campus is much easier to achieve, because the physical counterfactual case was demolished only a few years ago and data can still be retrieved from Newham Council to project a realistic counterfactual scenario.

However, this may not be an accurate counterfactual, because in no scenario would the Carpenters Estate have remained in its original form: the Newham campus was part of a wider government scheme to regenerate deprived areas of London, so either the Newham campus or another project would have been constructed on the Carpenters Estate. The counterfactual should therefore compare against a new scenario rather than the original estate, which would be much more challenging to estimate.

Lesson 2: Offsetting effect

The benefits gained from the new university campus may be offset by losses incurred in another region. For example, the Newham campus attracts students who would otherwise enrol at another institution, so a marginal gain in economic welfare to region A equals the marginal loss in welfare to region B, with little effect on the national aggregate. This lesson teaches us not only to incorporate offsetting effects into future impact-study methodologies but also, more importantly, how to take advantage of existing changes and reallocate current resources more effectively to improve aggregate welfare. (J.J. Siegfried et al, 2007)

Lesson 3: Classification of new residents (in-migrants) versus local residents (those who have remained after regional developments)

Previous economic impact studies have failed to separate the effect of the institution on residents attracted to the area by the university's development from its effect on those who would have resided there anyway (J.J. Siegfried et al, 2007). We cannot assume that the new campus development will generate more jobs than the net in-migration of residents. Most in-migrants will, of course, reside there because of the new jobs created by the university, but some, such as family members of faculty, could make local residents worse off by competing directly with them for jobs. It is therefore important to classify the sub-populations of the region clearly in order to take into account the net economic impact of in-migrants.

Lesson 4: Role of the university in attracting ancillary businesses to the region

As shown by the quasi-experimental example, ancillary businesses commonly set up shop in the region following university developments, although the quantity and value of the businesses each university attracts will vary. It is therefore worth including their effects in the impact study, as they also improve job opportunities for the local residents who would have lived in the area absent the college. We should assume that the level of skill required for these jobs can be matched by the skill level of the locals.

Lesson 5: Avoid double-counting of expenditures

The most practical way to ensure that the circulation of funds is counted only once is to add the expenditures made by the college to the financial flows to other local vendors. Student expenditures can then be neglected, because the majority of student spending goes to the university or to local vendors. If a student works for the university, their payroll should also be excluded, as it would otherwise be double-counted. Furthermore, donations from faculty and staff back to their employer should be excluded from the count, as these funds are transferred to the university as revenues. (J.J. Siegfried et al, 2007)

Lesson 6: The magnitude of regional multipliers should be around two (Elliot et al, 1988)

For an impact study to remain credible, it is essential to obtain results close to this magnitude: too many historical cases have generated implausibly large multipliers, which suggests the results were serving other social agendas rather than a reliable impact study, and this also damages public trust in higher education officials.

Lesson 7: Effect of local property tax exemptions

Property tax exemption for “not-for-profit” institutions such as universities is another key factor that is rarely considered in previous impact studies. It matters because it creates a burden on local public services and regional taxpayers, a burden that may be offset by rising property values due to university developments in the region. Further investigation of property tax is required in future impact studies, as it involves large monetary values. (J.J. Siegfried et al, 2007)

Lesson 8: Consideration of indirect human capital impacts

Although these intangible benefits are difficult to quantify, there is growing evidence that they are very meaningful in economic impact studies (Lochner & Moretti, 2004). Such impacts include enhanced productivity, reduced crime, improved public health and greater civic responsibility.

Conclusion and Recommendations

This paper has explored some of the most common methodologies used in economic impact studies of a university campus on its local region. By clearly defining the 8 lessons, it has also examined and explained the biggest pitfalls of previous impact studies, which may simply have been neglected in earlier methodologies. The 8 lessons are: counterfactual valuation, the offsetting effect, classification of sub-populations, attracting ancillary businesses, avoiding double-counting, defining the magnitude of regional multipliers, including property taxes and considering indirect human capital impacts. Combined, these lessons yield a more advanced model for impact studies that can be tailored to the Newham Campus.

If the opportunity to conduct an up-to-date impact study arises once data become available, the following three suggestions are highly recommended. First, some components of the model must remain constant across all future studies in order for them to stay comparable; these components include the boundaries of the “local region” and sources that provide consistent input factors. Second, impact studies of one campus should be conducted continually over a period of time (10-20 years), so as to take into account both the short-term and the long-term impacts (Beck et al, 1995). Third, the RIMS II multiplier should be gradually adjusted according to the maturity of the Newham campus, because new factors will need to be considered in the model at a later stage; for instance, the incremental future income of graduates who stay in the region after graduation needs to be taken into account. These three suggestions will further enhance the model over time and will therefore produce more realistic results from future economic impact studies.

References

Arthur, M. (2014). Provost's View: UCL East – a new model for the university of the future.

[online] Ucl.ac.uk. Available at: https://www.ucl.ac.uk/news/staff/staffnews/1114/04122014-provosts-view-ucl-east-a-new-model-for-the-university-of-the-future

Beck, R., P. Curry, D. Elliott, J. Meisel, S. Levin, R. Vinson, and M. Wagner. (1993). The economic impact of Southern Illinois University. Edwardsville: Southern Illinois University.

Beck, R., Elliott, D., Meisel, J., & Wagner, M.. (1995). Economic impact studies of regional public colleges and universities. Growth and Change, 26 (2), 245-260.

Berger, Mark C. and Dan A. Black. (1993). The long run economic impact of Kentucky public institutions of higher education: Final report. University of Kentucky Center for

Business and Economic Research.

Blackwell, M., Cobb, S., & Weinberg, D. (2002). The economic impact of educational institutions: Issues and methodology. Economic Development Quarterly, 16(1), 88-95.

Bluestone, B. (1993). UMASS/Boston: An economic impact analysis.

Duke University Office of Public Affairs. (2003). Durham and Duke: An analysis of Duke

University’s estimated total annual economic impact on the City and County of Durham.” At

⟨ http://www.dukenews.duke.edu/2004/02/economics_0204.html/ ⟩ .

Frediani, A. A., S. Butcher, P. Watt. (2013). Regeneration and Well-Being in East London:

Stories from Carpenters Estate. MSc Social Development Practice Student Report

Goldstein, H. A., and J. Drucker. (2006). The economic development impacts of universities on regions: Do size and distance matter? Economic Development Quarterly 20: 22–43.

Goldstein, H. A., and C. S. Renault. (2004). Contributions of universities to regional economic development: A quasi-experimental approach. Regional Studies 38: 733–46.

Jaffe, A. B. (1989). Real effects of academic research. The American Economic Review 79:

957–70.

Kirchhoff, B. A., C. Armington, I. Hasan, and S. Newbert. (2002). The influence of R&D expenditures on new firm formation and economic growth. Washington, DC: National

Commission on Entrepreneurship, 27.

Lochner, L., & Moretti, E. (2004). The effect of education on criminal activity: Evidence from prison inmates, arrests and self-reports. American Economic Review, 94(1), 15518

Melhuish, C. (2015). The role of the university in urban regeneration. Architecture Research

Quarterly, 19(1) 5-7

Moretti, E. (2004). Estimating the social return to higher education: Evidence from longitudinal and cross-section data. Journal of Econometrics, 121(1-2), 175-212.

RIMS II. Bureau of Economic Analysis: User’s Guide. At

⟨ https://www.bea.gov/regional/pdf/rims/RIMSII_User_Guide.pdf

⟩ .

Sedway Group. (2001). Building the Bay Area’s future: A study of the economic impact of the University of California, Berkeley

Siegfried, J.J., A.R. Sanderson, P. McHenry. (2007) The economic impact of colleges and universities. Economics of Education Review 26, 546-558.

UCL News (2014). UCL has announced a second campus – UCL East – on Queen Elizabeth

Olympic Park. [online] Available at: https://www.ucl.ac.uk/news/newsarticles/1214/021214_UCL_East_govt_funding_announcement

York, M. (2013). UCL and Newham Council axe £1bn campus deal for Carpenters Estate,

Stratford. [online] Newham Recorder. Available at: http://www.newhamrecorder.co.uk/news/politics/ucl_and_newham_council_axe_1bn_campu s_deal_for_carpenters_estate_stratford_1_2183978
