BANK FAILURES IN NIGERIA:
THE CASE FOR EARLY WARNINGS
BY
Adeyemi, S. L. and
Adefila, J. J.
ABSTRACT
This paper develops a classification method that should aid bank supervisory
authorities in the early detection of problem banks. It is particularly useful when the
number of banks under supervision is relatively small. The method relies on a
graphical device known as the biplot, which depicts a two-dimensional approximation of
a data matrix. The rows of this matrix are banks and the columns consist of seven simple
financial ratios which we assume capture the various aspects of risks taken by banks
(operating, financial, etc.). The attractive feature of the biplot technique is that it enables
one to detect groups in a body of data by sight. On the basis of these groups, the paper
develops a two-stage classification procedure and tests the extent to which it fits an a
priori division of objects, which might be of interest to the researcher. The application of
the technique described in this paper concerns a two-group classification of Nigerian
banks (failures and non-failures).
INTRODUCTION
Statistical techniques have been applied more frequently to the analysis of economic
failure, its prediction and its prevention. One of the more popular techniques, which has
been frequently employed in studies of corporate bankruptcy (see Altman [1968,
1973]), credit evaluations (see Lane [1972]), etc., is discriminant analysis (D.A.). This
method, which relies on predetermined groups known to the researcher in advance,
applies to cases in which homogeneous groups exist. In systems which consist of a small
number of heterogeneous observations, the effectiveness of the D.A. technique is reduced.
Dr (Mrs.) S.L Adeyemi is an Associate Professor in the Department of Business
Administration, University of Ilorin, Ilorin, Nigeria.
This paper develops an alternative method for the early detection of problem banks. The method is
based on multivariate classification, which unlike discriminant analysis, does not rely on
predetermined groups known to the researcher in advance and can be applied to small
systems. We assume that problem banks are those which take upon themselves excessive
risks. We also assume that the various aspects of the risks taken by banks (operating,
financial, etc.) can be captured by some simple financial ratios.
Since the failure of a bank is the termination of a process over time, information about a
bank in previous periods is of utmost importance for early detections of problem banks.
Thus, we employ financial ratios at various points in time. At each point our data consist
of a matrix whose rows are banks and whose columns are the financial ratios.
Our proposed method uses a graphical device known as the biplot, which depicts a
two-dimensional approximation of a data matrix (see Gabriel 1971 and Rubin and
Friedman 1967). Its usefulness is that it enables one to detect groups in a body of data by sight. On
the basis of these groups, we will develop a two-stage classification procedure and test
the extent to which it fits an a priori division of objects, which might interest the
researcher. The application of the technique described in this paper concerns a two-group
classification of banks (failures and non-failures). The rest of the paper is organized as
follows: the next section describes the setting of the problem, examining the
various ratios in terms of the types of risk they are meant to express. The third section
discusses, with the aid of an example, the technical aspects of the biplot, which aids
in the classification method. In section four we develop a two-stage criterion for the
classification of problem banks based on the information available on a group of Nigerian
banks during the period 1988-1997. The results of an empirical validation of the
classification method are given in section five. The final section gives a summary and
conclusions from the study.
THE NATURE OF THE PROBLEM
Every bank is characterized by its own risk structure, which consists of several risk
components. These components are given different weights in different banks, reflecting
the relative importance of the various aspects of risk in each bank. Thus, for example, an
increased degree of liquidity in a bank may compensate for an increase in the riskiness of
its assets or a change in the distribution of its deposit flows, but the compensation varies
from bank to bank. The assumption underlying this study is that the banking system as a
whole does not take upon itself excessive risks. Thus, while differences among the risk
structures of banks do exist, they are not very significant among banks which are basically
“healthy”. Our hypothesis is that the riskiness of a bank is reflected in a risk structure
which is “significantly different” from the rest. In the following sections we shall give
precise meaning to the notion of a “significantly different risk structure”.
Risk, in this study, is not something that can be measured even in ordinal terms,
but rather associatively. In other words, we are able to determine only whether a bank is
risky or not, but are unable to tell how risky it is. The study considers three components of
the overall risk undertaken by banks. The first is the operating risk associated with the
lending activities of banks. The second is the financial risk, related to a bank’s
liabilities, and the third stems from the interdependency between a bank’s assets and
liabilities.
We try to capture these three components of the overall risk by selecting seven crude
financial ratios. This selection was informed by the quality and availability of the data, as
well as by the number of the investigated banks. Ideally perhaps, more disaggregated
information and more refined data should have been used, but this was not possible due
to data problems. It should be stressed that the selection of appropriate financial ratios is
an art more than it is a science. Thus, it should be judged against the result it yields. We
turn now to an analysis of the three components of risk and to the financial ratios with
which we express these components.
Operating Risk
The operating risk of a bank is associated with its lending activities and stems from the
possibility that loans will turn into partial or complete losses. The extent to which a bank
is exposed to operating risk is measured by the ratio “loans/total assets” (variable no.2).
In addition, we separated regular loans from loans and investments in subsidiaries. The
reason for doing so is the belief that the criteria for granting loans to subsidiaries (or
investing in them) are often different from the criteria for extending regular loans (with
respect to collateral requirements, for example). Loans and investments in subsidiaries
reflect, to some extent, self-serving interests. In order to capture the operating risk that
stems from such activities, we use the additional ratio of loans and/or investments in
subsidiaries to total assets (variable no.7). Other important elements of operating risk,
such as the quality of loans and their degree of concentration are not captured by these
two ratios. The use of additional data is, in part, intended to fill this gap.
Financial Risk
The financial risk in banking is related to a bank’s liabilities, i.e. its deposits. The bank’s deposits
are external sources of finance, and they present a risk because they may either
be withdrawn at will (demand deposits) or be subject to early withdrawal or non-renewal
(time and earmarked deposits). We express the financial risk by using the following three
ratios:
a) Demand deposits/total liabilities (variable 4);
b) Time deposits/total liabilities (variable 5) and
c) Earmarked deposits/total liabilities (variable 6).
Liquidity and Capital Risk
Banks need to maintain a certain level of liquid reserves for their day-to-day activities.
The degree of required liquidity depends on the following three factors:
1. The risk structure of their assets, that is, the distribution of loan repayments for a given maturity date of these loans.
2. The structure of their liabilities, reflected in the distribution of the inflows and outflows of deposits and/or unexpected changes in the cost of renewing deposits or attracting new ones.
3. The time structure of the assets relative to the liabilities of banks.
The degree of liquidity is measured by the ratio “liquid assets/total assets”
(variable 1).
The extent to which banks can withstand losses, including losses resulting from
insufficient liquidity, depends to a large extent on their capital positions. There are
several approaches to the problem of measuring capital adequacy. In this study we
use the ratio “capital/total assets” (variable 3).
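As a concrete sketch, the seven variables can be assembled from ordinary balance-sheet items. The field names and figures below are illustrative assumptions, not data from the study:

```python
# Sketch: computing the seven financial ratios (variables 1-7) for one bank.
# All balance-sheet field names and amounts are illustrative assumptions.
def risk_ratios(b):
    ta = b["total_assets"]
    tl = b["total_liabilities"]
    return {
        1: b["liquid_assets"] / ta,       # liquidity
        2: b["loans"] / ta,               # operating risk
        3: b["capital"] / ta,             # capital position
        4: b["demand_deposits"] / tl,     # financial risk
        5: b["time_deposits"] / tl,       # financial risk
        6: b["earmarked_deposits"] / tl,  # financial risk
        7: b["subsidiary_loans"] / ta,    # loans/investments in subsidiaries
    }

bank = {"total_assets": 100.0, "total_liabilities": 90.0,
        "liquid_assets": 30.0, "loans": 55.0, "capital": 10.0,
        "demand_deposits": 40.0, "time_deposits": 35.0,
        "earmarked_deposits": 15.0, "subsidiary_loans": 5.0}
print(risk_ratios(bank))
```

Stacking one such ratio vector per bank, and standardizing each column, yields the n × k data matrix used in the next section.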
BIPLOT: THE TECHNIQUE
For each period there is an (n × k) standardized data matrix X. The n rows of this matrix
consist of n observations (banks), and the k columns are the financial ratios that express
the various risk components. It is possible to factor every such matrix X of rank S into two
matrices G and H of dimensions n × S and k × S respectively (see, for example, Gabriel
1971). One way to obtain such a factorization is to use the characteristic vectors of XX’
and X’X respectively. According to Householder and Young (1938), if the columns of G and
H are arranged in decreasing order of the corresponding characteristic roots of XX’
(i.e., the characteristic vector corresponding to the largest root λ1 appears first,
and so on for λ2 > λ3 > λ4 …), we obtain two matrices such that the product of
their first l columns, G(l)H’(l), yields the best rank-l approximation to
the matrix X. That is:
min {||X − M||² : rank of M is l} = λ²(l+1) + λ²(l+2) + … + λ²(S)

This minimum is obtained for:

M = G(l) H’(l)
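Since the characteristic vectors of XX’ and X’X are the left and right singular vectors of X, both the factorization and the best rank-l approximation can be sketched with a singular value decomposition. The data here are random and purely illustrative:

```python
import numpy as np

# Sketch: best rank-l approximation of a standardized data matrix X via the
# SVD (the Householder-Young result cited in the text). Data are random,
# purely for illustration.
rng = np.random.default_rng(0)
X = rng.standard_normal((18, 7))          # 18 banks x 7 ratios
X = (X - X.mean(0)) / X.std(0)            # standardize the columns

U, s, Vt = np.linalg.svd(X, full_matrices=False)
l = 2
G = U[:, :l] * s[:l]                      # bank (row) coordinates
H = Vt[:l].T                              # ratio (column) coordinates
M = G @ H.T                               # best rank-l approximation of X

# The squared approximation error equals the sum of the discarded
# squared singular values.
err = np.linalg.norm(X - M) ** 2
print(err, np.sum(s[l:] ** 2))
```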
The goodness of fit in approximating X by M may be measured (see Gabriel [1971]) by

ρ² = ( Σ_{i=1}^{l} λ_i² ) / ( Σ_{i=1}^{S} λ_i² )
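This goodness-of-fit measure follows directly from the singular values; a minimal sketch, again on illustrative random data:

```python
import numpy as np

# Sketch: goodness of fit rho^2 of the rank-l approximation, computed from
# the singular values of X (illustrative random data).
rng = np.random.default_rng(1)
X = rng.standard_normal((18, 7))
s = np.linalg.svd(X, compute_uv=False)

def rho2(s, l):
    # share of the total squared singular values captured by the first l
    return float(np.sum(s[:l] ** 2) / np.sum(s ** 2))

print([round(rho2(s, l), 2) for l in (1, 2, 3)])
```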
The goodness of fit of the approximation for a given l is an indicator of the extent of
homogeneity between the risk structures of the various banks. The more homogeneous
the risk structures, the higher the goodness of fit.
Any multivariate classification system is based on forming scores while, at the same
time, reducing the dimensionality of the original matrix. The reduction of dimensions
may result in a loss of information so that there is always a tradeoff between simplicity
from dimension reduction and the possible loss of information resulting from it. In this
study, we use a two-dimensional discriminating score (l = 2). This choice was made by
testing the contribution to the goodness of fit of the approximation obtained from
using one, two and three dimensions respectively. The results of this test appear in Figure
1.
Figure 1. The Relative Contribution of the Three Dimensions to the Approximation
of the Original Data Matrix (per cent).

Year      First Dimension   Second Dimension   Third Dimension
1989      68                16                 5
1990      65                16                 6
1991      64                19                 6
1992      76                10                 2
1993      60                20                 8
1994      70                13                 5
1995      66                22                 10
1996      36                29                 13
1997      39                28                 12
1998      40                30                 10
1999      34                29                 12
2000      37                28                 7
2001      44                25                 11
Average   53.71             23.07              8.36
From observing the results, we note that adding a second dimension to the first improves
the goodness of fit of the approximation (by 23% on average). The goodness of fit that is
obtained from a two-dimensional approximation of X is 77%. The addition of a third
dimension adds, however, only 8% to the goodness of fit, and each additional dimension
will add less. We do not know on a priori grounds the significance of the information loss
from an approximation of 77% as compared to 95% (for example); but if such an
information loss turns out to be important, it should ultimately be reflected in the quality
of the classification.
The two-dimensional approximation of X by G(2) and H(2) is shown in Figure 2.
Following Gabriel (1971), the rows of G(2) are presented as points, and the rows of H(2) as
vectors from the origin. There is no direct interpretation of any single point on the graph
when considered by itself. Its meaning becomes clear only when taken in conjunction
with other points. The proximity of points in the diagram reflects the (relative) degree of
similarity between the corresponding row vectors of M. The closer two points are to each
other, the greater the similarity of their corresponding two rows in M. Since the rows of
M are an approximation to the rows of X, the closeness of two points in the diagram
implies similarity of their corresponding rows in X, and hence similarity of their risk
structures.
The columns of H(2) are an approximation of the column effects of X, and thus represent
the effects of the financial ratios which express the three risk components discussed
above.
Their graphical presentation as vectors from the origin is natural, since it can be shown
(see Gabriel [1971]) that (a) the length of the ith vector approximates the standard
deviation of the ith column of X and (b) the angle between two vectors approximates the
correlation between the corresponding financial ratios.
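Both properties can be checked numerically. The sketch below uses the column-metric-preserving scaling H = V·diag(λ)/√n (an assumed scaling, not specified in the text); at full rank the two properties hold exactly, and the rank-2 biplot inherits them approximately:

```python
import numpy as np

# Sketch: checking the two biplot properties. Scaling H = V * diag(s)/sqrt(n)
# is an assumption; at full rank (a) and (b) below hold exactly, and the
# rank-2 biplot inherits them approximately. Data are illustrative.
rng = np.random.default_rng(2)
X = rng.standard_normal((18, 7))
X = X - X.mean(0)                      # center the columns
n = X.shape[0]

U, s, Vt = np.linalg.svd(X, full_matrices=False)
H = Vt.T * s / np.sqrt(n)              # ratio (column) vectors

lengths = np.linalg.norm(H, axis=1)    # (a) vector length = column std dev
cosines = (H @ H.T) / np.outer(lengths, lengths)  # (b) cosine = correlation

print(np.allclose(lengths, X.std(0)),
      np.allclose(cosines, np.corrcoef(X, rowvar=False)))
```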
It should be stressed that the relative proximity of two points (and hence the similarity
between the risk structure of the corresponding two banks) is determined by the entire
vector of financial ratios, and thus it is possible that two banks are similar with respect to
one risk component and yet the distance between their corresponding points is a large
one. If we project the points (that correspond to the various banks) on to one of the
vectors, the distance between the projections expresses approximately the difference
between banks with respect to the risk component to which that vector corresponds.
As an example, we will analyze Figure 2. The goodness of fit obtained by a two-dimensional approximation is 0.83. By drawing a circle of radius r around each point in
the diagram (the way r is determined will be explained later), it is possible to divide the
banks under study into two groups. The first consists of banks whose corresponding
circles intersect each other, while in the second group are banks whose circles do not
intersect any other circle. As an example, the circle around the bank designated by the
number 12 does not intersect any other circle, while the banks numbered 2, 11
and 8 all have circles intersecting each other and form a “homogeneous group”. In a
similar way, it is possible to obtain a number of “homogeneous groups” (such as 8, 11, 2
and the group 6, 9, 1, 5 and 10).
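The circle-intersection grouping can be sketched as a connected-components computation: two circles of common radius r intersect exactly when their centres are at most 2r apart. The coordinates below are illustrative, not taken from Figure 2:

```python
import math

# Sketch: grouping banks whose circles of radius r intersect on the biplot.
# Two equal circles of radius r intersect iff their centres are <= 2r apart.
# Coordinates are illustrative.
points = {1: (0.1, 0.2), 2: (0.15, 0.25), 5: (0.12, 0.18),
          8: (0.9, 0.8), 11: (0.95, 0.85), 12: (-0.8, -0.7)}
r = 0.1

def groups_and_outliers(points, r):
    parent = {j: j for j in points}   # union-find over bank labels
    def find(j):
        while parent[j] != j:
            parent[j] = parent[parent[j]]
            j = parent[j]
        return j
    ids = list(points)
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            if math.dist(points[ids[a]], points[ids[b]]) <= 2 * r:
                parent[find(ids[a])] = find(ids[b])
    clusters = {}
    for j in ids:
        clusters.setdefault(find(j), []).append(j)
    groups = [sorted(c) for c in clusters.values() if len(c) > 1]
    outliers = sorted(c[0] for c in clusters.values() if len(c) == 1)
    return groups, outliers

print(groups_and_outliers(points, r))
```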
Figure 2. A Representation of the Nigerian Banking System, Year 1991 (ρ² = .83).
[Biplot omitted: numbered points are the banks, numbered vectors from the origin are
the seven financial ratios; F stands for banks which failed in subsequent years.]
Observing the various vectors, we note that they too can be grouped according to the
three risk components, which they represent. Thus, vectors 2 (loan/total assets) and 7
(loans and investment in subsidiaries/total assets), which represent operating risk, are
highly correlated and form one group. Similarly, vectors 1 (liquid assets/total assets) and
3 (capital/total assets) represent the third risk component, which, as expected, is
correlated with both the operating and the financial risk components.
For illustration, let us compare the two banks which correspond to points 13 and 17.
When we project these two points onto the various vectors, we note that the differences
between the corresponding banks are mainly with respect to the vectors which represent
the financial risk components (vectors 4, 5 and 6). In contrast, the difference between the
same two banks with respect to operating risk (vectors 2 and 7) is relatively small. The
same type of comparison between the banks corresponding to points 17 and 14 reveals
that they differ with respect to both financial and operating risk.
When the banks are mapped according to the proposed method, we expect to find a
number of clusters of banks and a number of banks, which do not belong to any cluster.
Each cluster consists of banks with a “similar risk structure”. That is, banks that belong to
the same cluster have a similar internal balance with respect to the three risk components.
Banks that do not belong to any cluster are said to be outliers.
A TYPICAL TWO-STAGE CRITERION
In this section we develop a classification criterion which is shown to be useful for early
detection of problem banks. The criterion consists of two parts. In the first, we detect
outlier banks and test whether these banks are in the group of banks that failed. In the
second, an additional measure is used to examine whether banks which were not found
to be outliers in the first part are also risky (and belong to the group of banks that failed).
Let mj be the number of periods in which bank j appears in the sample. Around
each bank j in the ith period, a circle of radius ‘r’ is drawn (constant for all banks in the
ith period). We define the jth bank in the ith period as being “different” if the circle
around its corresponding point does not intersect any other circle.
In every period , ri, is determined by the proportion of banks that it was decided to
examine. If, for instance, it was decided (by the approximate bank supervisory
authorities) to examine 25% of all banks in the ith period, then ri will be determined in
such a way as to yield 25% of the examined banks “different”.
Next define:

dji = 1 if the jth bank was “different” in the ith period, 0 otherwise.

Let dj be the number of times the jth bank was found to be “different,” that is,

dj = Σ_{i=1}^{mj} dji

For every bank j, consider the ratio dj/mj, which is the proportion of time that the bank was
found to be “different”. The larger this proportion is, the more likely it is that the jth bank
is “different” not by chance only.
Next, we define

d̄ = Σ_{j=1}^{n} dj/mj

and to every bank j we assign the ratio

kj = (dj/mj) / d̄

This ratio measures the relative contribution of the jth bank to the proportion of times that
all banks investigated were different. The next step is to arrange the banks in decreasing
order of kj and choose a critical value α (0 < α < 1) so that it satisfies the condition:

Σ kj < α (summed over the banks with the largest kj)

All those banks whose kj is large enough to enter the summation are classified as
“outliers”.
The larger the value of α, the larger the number of banks that will be classified as
outliers, ceteris paribus. The choice of a particular α is determined by the considerations
of the bank supervisory authorities. It essentially depends on the cost that the
supervisory authorities assign to failing to detect a problem bank against the cost
of wasting resources on examining a non-problem bank.
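A minimal sketch of this first-stage selection, with illustrative counts dj (times “different”) and mj (periods in the sample):

```python
# Sketch of the first-stage criterion: form k_j = (d_j/m_j)/d_bar and flag as
# outliers the banks with the largest k_j whose cumulative sum of k_j stays
# below alpha. The counts below are illustrative.
d = {1: 0, 2: 1, 5: 0, 8: 2, 12: 7, 17: 6}   # times bank j was "different"
m = {1: 9, 2: 9, 5: 8, 8: 9, 12: 9, 17: 8}   # periods bank j is in the sample

d_bar = sum(d[j] / m[j] for j in d)
k = {j: (d[j] / m[j]) / d_bar for j in d}

def outliers(k, alpha):
    total, flagged = 0.0, []
    for j in sorted(k, key=k.get, reverse=True):   # largest k_j first
        if total + k[j] >= alpha:                  # keep sum of k_j below alpha
            break
        total += k[j]
        flagged.append(j)
    return flagged

print(outliers(k, 0.9))
```

By construction the kj sum to one over all banks, so α directly bounds the share of "differentness" concentrated in the flagged banks.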
It should be pointed out that the determination of ri does not affect the number of banks
classified as outliers. As long as ri is neither so large as to leave no outliers nor as small
as zero, the use of information over time offsets the influence of any particular value of ri
chosen.
To see this, we note that α (and thus the number of banks that ought to be examined) is
given exogenously. An attempt to force a particular bank unnaturally into the outliers
group by manipulating ri will result in one of the following:
a. Other banks which previously were not classified as outliers will now be so
classified prior to the bank in question. This will cause the restriction
Σ kj < α to be violated.
b. The bank in question will be the only additional one to enter the outliers
group, but this too will result in a violation of the above restriction.
c. Third and least likely is the case where the bank in question is the only
bank to enter the outliers group without violating the restriction.
We stress, however, that ri should be chosen independently of the graphical
representation.
So far we have developed a definition of an “outlier bank”. Our hypothesis is that all
outliers are problem banks. This, however, does not mean that banks that according to
our definition were not found to be outliers are not problem banks. Thus, our
classification criterion consists of a second part in which we detect those problem banks
which are not “outliers”. B. Lev (1969) suggests that firms with large changes in the
composition of their balance sheets should be suspected of being unstable. Following this
approach, we argue that significant changes in the location of banks on the graph imply
unstable risk structures, and hence such banks are suspected of being problem banks.
We define a “path of development over time”: In each period, we obtain the location of
every bank in relation to the rest. If we connect all these points (that is, we pass a line
through the position of the bank in period i and period i + 1, and do this for
every year), we obtain a “path of development” for every bank over time. The longer this
path, the more the corresponding bank is suspected of being unstable and thus classified
as a problem bank.
In order to examine the path each bank passes, we have to take into account the distance
“traveled” by the entire system during the same period. This is so because we are
interested in the fluctuation of each bank relative to the fluctuation of the entire system.
The length of the paths is determined by the Euclidian distance. If we arrange the length
of the paths in decreasing order, we shall obtain a list of their respective path lengths and
classify as problem banks all those with path lengths greater than the average.
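The second-stage criterion can be sketched by summing the Euclidean distances between consecutive positions of each bank and flagging banks whose path exceeds the average; the positions below are illustrative:

```python
import math

# Sketch of the second-stage criterion: the "path of development" of a bank
# is the sum of Euclidean distances between its biplot positions in
# consecutive periods; banks with paths longer than the system average are
# flagged. Positions are illustrative.
positions = {
    2:  [(0.10, 0.10), (0.12, 0.11), (0.11, 0.13), (0.13, 0.12)],  # stable
    8:  [(0.50, 0.20), (0.45, 0.25), (0.50, 0.30), (0.48, 0.27)],
    17: [(0.00, 0.90), (0.60, 0.10), (-0.50, 0.40), (0.70, -0.20)],  # erratic
}

def path_length(pos):
    return sum(math.dist(a, b) for a, b in zip(pos, pos[1:]))

lengths = {j: path_length(p) for j, p in positions.items()}
avg = sum(lengths.values()) / len(lengths)
suspects = sorted(j for j, L in lengths.items() if L > avg)
print(suspects)
```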
Before reporting on the result, it is useful to demonstrate how to use the classification
criterion developed above in determining whether to classify a bank as a problem bank.
This is done with the aid of a graph, which represents five banks from the investigated
banks in the period 1989-2001 (see Figure 3.).
Figure 3. Relative Position and Movement of Five Banks.
[Yearly biplot panels for 1989-2000 omitted; each panel shows the relative positions of
banks 2, 8, 12, 15 and 17.]
Observing the location of each bank through time, we get a picture of the path of each
bank. Large fluctuations in a bank’s position (bank 17, for instance) result in a
relatively long path (see Figure 4). From this we infer that its risk structure is unstable
and thus the bank is classified as a problem bank. As can be seen, bank 2 does not fluctuate
much from year to year. Based on the second stage of our criterion, it will not be classified
as a problem bank.
Figure 4. The Path of Development of Five Banks (Stretched Out) for the Period
1989-1996. [Paths omitted; banks shown: 17, 15, 12, 2 and 8.]
EMPIRICAL RESULTS
Because of the small number of banks that were considered (17-23 banks during
1989-2001), there are large jumps between changes in α and corresponding changes in the
number of banks classified as problem banks.
It was argued earlier that the failure of a bank is the culmination of the process of its
being a problem bank. Thus, the validation of our classification procedure can be carried
out only by comparing the problem banks with those that actually failed. As was stressed
throughout this work, our classification procedure is based on information over time.
Each bank that failed and was detected by our method was classified as a problem bank
several years before its failure.
It should be stressed, however, that this model cannot claim to have predicted these
failures. In order to do so, one would need to apply this classification technique (as well
as any other technique) to an unbiased holdout sample. Such a procedure, however, could
not be carried out in the present study because of insufficient data. It should be
remembered, nevertheless, that bank regulators are not so much interested in predicting
bank failures as in identifying banks that are on the road to failure while they can still be
saved. This means that bank supervisory authorities are interested in establishing early
warning systems to identify banks that warrant closer scrutiny by bank examiners. It is
toward this end that our efforts are directed.
Our validation test was carried out against five different values of α. These values of α
were not chosen arbitrarily but were selected in order to express the discontinuities in the
relation between α and the number of banks that failed.
Figure 6. Empirical Results

α = 0.35                 Classified
Actual           Failures   Non-failures
Failures            3           2
Non-failures        2          16

α = 0.6                  Classified
Actual           Failures   Non-failures
Failures            4           1
Non-failures        4          14

α = 0.65                 Classified
Actual           Failures   Non-failures
Failures            4           1
Non-failures        6          12

α = 0.7                  Classified
Actual           Failures   Non-failures
Failures            4           1
Non-failures        7          11

α = 0.95                 Classified
Actual           Failures   Non-failures
Failures            4           1
Non-failures       14           4
Figure 6 shows the results corresponding to the five different values of α.
We shall examine whether the classification results obtained by the proposed method are
significantly different from those obtained by chance. Two different chance classification
methods were tried. The first method assumes that there are two populations – failures and
non-failures. It assigns to every bank from each population the same probability of being
classified as a failure. Our approach is to test whether, given the predetermined
populations, the classification obtained is sufficient to reject the null hypothesis of
chance classification. To do this, we use Fisher’s exact test of the equality of
proportions between two populations.
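A one-sided version of Fisher’s exact test can be sketched directly from the hypergeometric distribution; the 2 × 2 table in the example is the α = 0.35 classification (3, 2 / 2, 16):

```python
from math import comb

# Sketch: one-sided Fisher exact test on a 2x2 table [[a, b], [c, d]]
# (rows: actual failures / non-failures; columns: classified failures /
# non-failures), via the hypergeometric distribution.
def fisher_one_sided(a, b, c, d):
    # P(at least `a` failures among the a + c banks classified as failures,
    # drawing without replacement from a + b failures and c + d non-failures)
    n, row1, col1 = a + b + c + d, a + b, a + c
    p = 0.0
    for x in range(a, min(row1, col1) + 1):
        p += comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)
    return p

print(round(fisher_one_sided(3, 2, 2, 16), 3))
```

A small p-value indicates that a classification this accurate is unlikely to arise by randomly assigning banks to the "classified failures" group.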
The second method classifies observations to groups with probabilities equal to group
frequencies. In this case, we test to what extent the fraction of banks correctly classified
as failures is due to chance. The results of these comparisons are presented in Figure 7.
In general, the results indicate a low probability of obtaining our classification results by
chance.
When using the second component of the classification criterion, the “path of
development of a bank through time,” we find that there is a large degree of overlap
between the two components of the classification criterion.
There are, however, some banks which were outliers by the first component, but whose
path of development was shorter than the average, or whose “path” was longer but they
were not outliers according to the first component. The results obtained by taking the
union of the two components are given in Figure 8.
COMPARISON WITH DISCRIMINANT ANALYSIS.
A widely used alternative classification method is discriminant analysis (for a detailed
discussion of the applications of discriminant analysis to problems in economics and
finance, see Sinky [1975]). The starting point of discriminant analysis is discrete groups,
known to the researcher in advance.
The underlying assumption of this technique is that each group is known to be
homogeneous, that is, the variance within each group is small at each point in time.
Table 7. The Probability of Obtaining the Results Under the Two Alternative
Classification Methods.

                         α = 0.35   α = 0.6   α = 0.65   α = 0.7   α = 0.95
                         N = 5      N = 9     N = 10     N = 11    N = 18
Alternative method I     0.06       0.09      0.12       .0        0.71
Alternative method II    0.08       0.11      0.18       0.06      0.22
Table 8. The Results Obtained From the Union of the Two Components of the
Classification Criterion.

                   α = 0.6                   α = 0.35
                   Classified                Classified
Actual             Failures  Non-failures    Failures  Non-failures
Failures              4          1              3          2
Non-failures          1         17              2         16
Since the failure of a bank is the termination of a process over time, and this process may
differ from bank to bank, the groups may not be homogeneous. In such a situation it is
better to begin with unknown groups.
A different approach, used by Sinky (1975), relies on the multivariate chi-square
technique. The one-dimensional score obtained by this technique relies on the
mean of a specific (control) group of banks and classifies as outliers those banks
which differ significantly from the mean. It thus assumes that this control group is
sufficiently homogeneous and representative. Hence, with respect to heterogeneous
groups, it suffers from the same drawbacks as does discriminant analysis, and a
two-dimensional score is then preferable.
SUMMARY AND CONCLUSION
In this Paper, we developed a classification method whose goal is the early detection of
problem banks. We assume that the process of becoming a problem bank is the result of
excessive risk taking, and bank failure is the end of this process. The various stages of
this process and the dynamics that cause banks to fail are held constant. We also assumed
that each bank is characterized by a risk structure composed of several components,
which reflect the various types of activities undertaken by banks.
We also developed a comparative method to evaluate risk taking in banks, based on a
few financial ratios chosen to express the various risk components.
In contrast to the widely used method of discriminant analysis, our method does not rely
on predetermined groups. Rather, it uses time series information to follow the development
of a bank through time, and it is applicable to small samples.
The classification method developed here is based on a two-dimensional score and is
drawn on a graph in order to visualize the location of each bank relative to the rest of the
system. The fact that we succeeded in detecting early four out of the five banks that failed
since 1985 demonstrates the relative effectiveness of the method as an early warning
system.
REFERENCES
Altman, Edward I., “Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy,” Journal of Finance (September 1968) pp. 589-609.
Altman, Edward I., “Predicting Railroad Bankruptcies in America,” The Bell Journal of Economics and Management Science, vol. 4, no. 1 (Spring 1973).
Edmister, R. O., “An Empirical Test of Financial Ratio Analysis for Small Business Prediction,” Journal of Financial and Quantitative Analysis (March 1972) pp. 1477-1493.
Eisenbeis, R. A., “Pitfalls in the Application of Discriminant Analysis in Business, Finance and Economics,” F.D.I.C. Executive Summary No. 75-6.
Gabriel, K. R., “The Biplot Graphical Display of Matrices with Special Applications to Principal Component Analysis,” Biometrika, vol. 58 (1971) pp. 453-467.
Gabriel, K. R., “Canonical Decomposition and Factorization of Matrices and Its Application to Multivariate Statistical Method,” Hebrew University Mimeograph (1971).
Householder, A. S. and G. Young, “Matrix Approximation and Latent Roots,” American Mathematical Monthly, vol. 45 (1938) pp. 165-171.
Lane, Sylvia, “Sub-marginal Credit Risk Classification,” Journal of Financial and Quantitative Analysis, vol. 7 (January 1972) pp. 1379-1386.
Lev, B., Financial Statement Analysis: A New Approach, Prentice Hall, Englewood Cliffs (1969).
Meyer, P. and H. Pifer, “Prediction of Bank Failures,” Journal of Finance, vol. 25 (September 1970) pp. 853-868.
Rubin, J. and H. P. Friedman, “On Some Invariant Criteria for Grouping Data,” Journal of the American Statistical Association (December 1967) pp. 1159-1178.
Sinky, J. P., “Multivariate Statistical Analysis of the Characteristics of Problem Banks,” Journal of Finance (March 1975) pp. 21-35.
Sinky, J. P., “Identifying Problem Banks: How Do the Banking Authorities Measure a Bank’s Risk Exposure?,” Journal of Money, Credit and Banking (May 1978) pp. 184-193.
Swary, I., “On the Substitutability of Liquid Assets and Capital in Commercial Banks,” Unpublished Paper, Examiner of Banks, Bank of Israel (1976).
Theil, H., Introduction to Economics, Wiley, New York (1971).