Kubelka-Munk or Neural Networks for Computer Colorant Formulation?
Stephen Westland, Laura Iovine and John M Bishop
Introduction
Traditionally, Computer Colorant Formulation (CCF) has been implemented using a theory of radiation transfer known
as Kubelka-Munk (K-M) theory [1-4]. Kubelka-Munk theory allows the prediction of spectral reflectance for a mixture
of components (colorants) that have been characterised by absorption K and scattering S coefficients. It has been shown
that the Kubelka-Munk coefficients K and S are related to, but not equal to, the fundamental optical coefficients for
absorption  and scattering  [4]. More recently it has been suggested that Artificial Neural Networks (ANNs) may be
able to provide alternative mappings between colorant concentrations and spectral reflectances [5-7] and, more
generally, are able to provide transforms between colour spaces [8-9]. This paper addresses two key issues; firstly, it
presents a quantitative comparison of K-M theory and ANNs for a given problem domain; secondly, it suggests that
significant advances may be made by combining both K-M and ANNs in form a hybrid ANN-KM model.
Theory
The Kubelka-Munk theory characterises colorants according to two coefficients, K and S, the absorption and scattering
coefficients respectively. The K-M theory is a two-flux version of a multi-flux method for solving radiation transfer
problems. Although more exact theories exist [10], the continued use of the K-M theory is due to its simplicity and the
ease with which the coefficients K and S can be measured. The application of K-M theory varies depending upon whether
it is used for the prediction of the colour of textiles, (opaque) paints or (translucent) printing inks. The version
of the theory considered in this research is that for opaque paints, whereby the scattering of a white pigment is fixed
at 1 at every wavelength and the absorption and scattering coefficients of the other colorants are computed relative to it
[11]. This so-called relative two-constant approach is possible because the reflectance R of an opaque colorant layer is
related to the ratio of the K and S coefficients thus
K/S = (1 - R)² / (2R)    (Eqn 1)
and the inverse relationship
R = 1 + K/S - ((1 + K/S)² - 1)^0.5.    (Eqn 2)
The coefficients Ki(λ) and Si(λ) are obtained normalised for unit concentration and unit film thickness for each colorant
i and at each wavelength λ. In this research a single estimate of Ki(λ) and Si(λ) was made using two opaque samples (a
masstone and a mixture with white) for each colorant. The coefficients are assumed to be linearly related to colorant
concentration so that for a colorant mixture or recipe c (where c is a vector of colorant concentrations) the ratio K/S can
be computed at each wavelength and Eqn 2 used to predict the reflectance r. In fact, the K-M theory does not account for
reflections that take place at the interface between the colorant layer and air, and therefore appropriate corrections need
to be applied. The Saunderson correction equation has been used to correct reflectance values before computing K and S,
and similarly to correct predicted reflectance values [12]. The Saunderson correction requires estimates of the external
and internal surface reflectance; values of 0.04 and 0.60 respectively were used.
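To make the pipeline above concrete, the following sketch (not the authors' code) implements Eqns 1 and 2, the Saunderson correction and the relative two-constant prediction of a mixture's reflectance; the function names, array shapes and the exact form of the Saunderson equation (which depends on measurement geometry) are assumptions made for illustration.

```python
# Illustrative sketch of the relative two-constant K-M pipeline described above;
# not the authors' implementation. Variable names and shapes are assumptions.
import numpy as np

K_EXT, K_INT = 0.04, 0.60   # external and internal surface reflectance (Saunderson)

def ks_from_r(r):
    """Eqn 1: K/S from the (internal) reflectance of an opaque layer."""
    return (1.0 - r) ** 2 / (2.0 * r)

def r_from_ks(ks):
    """Eqn 2: (internal) reflectance of an opaque layer from K/S."""
    return 1.0 + ks - np.sqrt((1.0 + ks) ** 2 - 1.0)

def saunderson_inverse(r_meas):
    """Correct a measured reflectance to the internal reflectance of the film [12]."""
    return (r_meas - K_EXT) / (1.0 - K_EXT - K_INT + K_INT * r_meas)

def saunderson_forward(r_int):
    """Correct a predicted internal reflectance back to a measured reflectance [12]."""
    return K_EXT + (1.0 - K_EXT) * (1.0 - K_INT) * r_int / (1.0 - K_INT * r_int)

def estimate_ks(r_masstone, r_tint, c_tint, ks_white):
    """Estimate unit K_i(lambda), S_i(lambda) for one colorant from its masstone and a
    tint with white, with the white pigment's scattering fixed at 1 (relative approach).
    Reflectances are Saunderson-corrected arrays over wavelength; ks_white is the white
    pigment's K/S (equal to its K, since its S is 1)."""
    ratio_mass = ks_from_r(r_masstone)   # = K_i / S_i
    ratio_tint = ks_from_r(r_tint)       # = (c K_i + (1-c) K_w) / (c S_i + (1-c))
    c, cw = c_tint, 1.0 - c_tint
    S_i = cw * (ks_white - ratio_tint) / (c * (ratio_tint - ratio_mass))
    K_i = ratio_mass * S_i
    return K_i, S_i

def predict_reflectance(conc, K, S):
    """Predict the measured reflectance of a recipe conc (n_colorants,) from unit K and S
    arrays of shape (n_colorants, n_wavelengths), assumed linear in concentration."""
    ks_mix = (conc @ K) / (conc @ S)
    return saunderson_forward(r_from_ks(ks_mix))
```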
The K-M theory thus allows a mapping between a colorant vector c and a reflectance vector r; this mapping defines the
colour-prediction problem. A class of ANNs known as multi-layer perceptrons (MLPs) has been shown to be capable of
approximating any continuous function to any degree of accuracy [13]. An MLP is a layered structure of simple
processing units. The units of the input layer take their input from a real-world vector and the outputs of the units of
the output layer form the output of the network. There may be one or more hidden layers of units between the input and
output layers. Most MLPs are fully connected; that is, each unit provides a weighted input to each unit in the next layer.
Information is thus processed from the input layer to the output layer in order to perform a mapping from an input
vector i to an output vector o. MLPs can learn to perform an arbitrary mapping if they are presented with sufficient
examples of the mapping i → o. Learning, in an MLP, is a process of optimisation (during which changes are made to
the weights in the network) to minimise the RMS error between the desired output vector o_t and the actual output vector o.
MLPs thus require a training set of (i, o) pairs. Once suitably trained, however, the network can perform the mapping
i → o for input vectors i that were not used during the training of the network; this important property is known as
generalisation. A single hidden layer of units is sufficient to approximate any continuous mapping [13]. Thus, a
three-layer network can learn the colour-prediction problem c → r. However, although the dimensionality of the input
and output layers is determined by the problem (specifically the lengths of the vectors c and r), the number of hidden
units that are required can only be determined empirically.
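As an illustration of the architecture just described, the following is a minimal sketch of a fully connected, single-hidden-layer MLP for the mapping c → r; the layer sizes, sigmoid activations and learning rate are assumptions, and the momentum term used in the actual experiments is omitted for brevity.

```python
# Minimal single-hidden-layer MLP sketch for the mapping c -> r; not the
# implementation used in this work. Reflectances are assumed scaled to [0, 1].
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))    # input -> hidden weights
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))   # hidden -> output weights
        self.b2 = np.zeros(n_out)

    def forward(self, c):
        """Map a concentration vector c to a predicted reflectance vector."""
        self.h = sigmoid(c @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2)

    def train_step(self, c, r_target, lr=0.1):
        """One back-propagation step on the squared error for a single training pair."""
        r = self.forward(c)
        delta_out = (r - r_target) * r * (1.0 - r)                  # output-layer error term
        delta_hid = (delta_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= lr * np.outer(self.h, delta_out)
        self.b2 -= lr * delta_out
        self.W1 -= lr * np.outer(c, delta_hid)
        self.b1 -= lr * delta_hid
        return np.sqrt(np.mean((r - r_target) ** 2))                # RMS error, for monitoring

# Example dimensionality: a 4-colorant recipe in, reflectance at 31 wavelengths out.
net = MLP(n_in=4, n_hidden=8, n_out=31)
```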
Experimental
A relative two-constant Kubelka-Munk model was implemented and used to characterise a paint system and a set of
known mixtures was used to quantify the colour prediction properties of the model. The known mixtures were not used
to derive the K and S coefficients of the colorants and thus represent an independent test set. The spectral reflectance R
was predicted for each of the samples in the test set and the colour difference was computed between each predicted
spectrum and the measured reflectance for the sample using the CMC(2:1) colour-difference equation under the D65
illuminant and the CIE 1964 standard observer.
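For reference, a sketch of the CMC(l:c) colour-difference computation with l = 2 and c = 1 is given below; it assumes the predicted and measured spectra have already been converted to CIELAB under D65 and the 1964 standard observer (that conversion, which needs colour-matching-function and illuminant data, is not shown), and the function name is illustrative.

```python
# Sketch of the standard CMC(l:c) colour-difference formula applied to a pair of
# CIELAB values; the reflectance-to-CIELAB step is assumed done elsewhere.
import numpy as np

def delta_e_cmc(lab_std, lab_trial, l=2.0, c=1.0):
    """CMC(l:c) colour difference; lab_std is the measured (standard) sample."""
    L1, a1, b1 = lab_std
    L2, a2, b2 = lab_trial
    C1, C2 = np.hypot(a1, b1), np.hypot(a2, b2)
    dL, dC = L1 - L2, C1 - C2
    dH2 = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)   # delta-H squared

    h1 = np.degrees(np.arctan2(b1, a1)) % 360.0
    if 164.0 <= h1 <= 345.0:
        T = 0.56 + abs(0.2 * np.cos(np.radians(h1 + 168.0)))
    else:
        T = 0.36 + abs(0.4 * np.cos(np.radians(h1 + 35.0)))
    F = np.sqrt(C1 ** 4 / (C1 ** 4 + 1900.0))

    SL = 0.511 if L1 < 16.0 else 0.040975 * L1 / (1.0 + 0.01765 * L1)
    SC = 0.0638 * C1 / (1.0 + 0.0131 * C1) + 0.638
    SH = SC * (F * T + 1.0 - F)

    return np.sqrt((dL / (l * SL)) ** 2 + (dC / (c * SC)) ** 2 + dH2 / SH ** 2)
```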
A number of MLPs were trained, using the back-propagation with momentum learning algorithm, to perform the
mapping c → r for a set of known paint samples (referred to as the training set). Each of the MLPs used a single hidden
layer of processing units and the number of units in that layer was varied between 3 and 15. Each MLP was trained
using the full training set and also using alternative training sets, each a sub-set of the full training set, so that the
performance of the networks could be assessed for different training-set sizes. Furthermore, two types of MLP were
employed. The first type was a standard, fully connected MLP. The second type was a hand-crafted, partially connected
system that was specially designed for the task of colour prediction; this second type of MLP was inspired by a
consideration of the K-M model. The performance of all of the trained networks was assessed by computing CMC(2:1)
colour differences, under the D65 illuminant and the CIE 1964 standard observer, between the predicted reflectances
and the measured reflectances for the test set.
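The protocol for the fully connected networks can be sketched as follows, using scikit-learn's MLPRegressor as a stand-in for the original back-propagation-with-momentum implementation; the data arrays and subset sizes are hypothetical placeholders and the hand-crafted, partially connected network is not reproduced here.

```python
# Sketch of the training protocol described above; MLPRegressor is used only as a
# convenient stand-in, and C_train/R_train/C_test/R_test are assumed to be supplied
# arrays of concentrations and reflectances.
import numpy as np
from sklearn.neural_network import MLPRegressor

def run_protocol(C_train, R_train, C_test, R_test, subset_sizes=(25, 50, 100)):
    results = {}
    for n_hidden in range(3, 16):            # hidden units varied between 3 and 15
        for n in subset_sizes:                # several training-set sizes
            idx = np.arange(min(n, len(C_train)))
            net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="logistic",
                               solver="sgd", momentum=0.9, max_iter=5000,
                               random_state=0)
            net.fit(C_train[idx], R_train[idx])
            R_pred = net.predict(C_test)
            # In the study the predicted and measured spectra would be compared using
            # CMC(2:1) colour differences; a spectral RMS error is used here as a
            # simple placeholder summary.
            results[(n_hidden, n)] = np.sqrt(np.mean((R_pred - R_test) ** 2))
    return results
```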
Results
The experiments described above are currently underway. The errors of reconstruction for the test set will be quantified
by median and maximum ΔE values to allow a direct comparison between the K-M model and the ANNs. The effect of
training-set size and of the number of hidden units will be presented and the performance of the K-M model and the
ANNs compared. The performance of the partially connected ANNs will be contrasted with that of the fully connected
networks and some theoretical explanations for the differences presented.
Conclusions
This research will quantitatively compare and contrast the K-M model with various ANNs for the task of colour
prediction. The results obtained will be discussed in terms of practical CCF, and the advantages and disadvantages of the
neural-network approach will be presented. The benefits of a hand-crafted, partially connected system will be presented
and some ideas for a hybrid ANN-KM model will be proposed.
References
1. Kubelka P (1948), New Contributions to the Optics of Intensely Light-Scattering Materials. Part I, JOSA, 38 (5),
pp. 448–451.
2. Kubelka P (1954), New Contributions to the Optics of Intensely Light-Scattering Materials Part II:
Nonhomogeneous Layers, JOSA, 44 (4), pp. 330-335.
3. Allen E (1973), Prediction of Optical Properties of Paints from Theory, Journal of Paint Technology, 45 (584),
pp. 65-72.
4. Nobbs JH (1985), Kubelka-Munk theory and the prediction of reflectance, Review of Progress in Coloration
(SDC), 15, pp. 66-75.
5. Bishop JM, Bushnell MJ & Westland S (1991), Application of neural networks to computer recipe prediction,
CRA, 16 (1), 3-9.
6. Westland S, Bishop JM, Bushnell MJ & Usher AL (1991), An intelligent approach to colour recipe prediction,
JSDC, 107, pp. 235-237.
7. Tokunanga T & Honda Y (1991), CCM system utilising a neural network, Kako Gijutsu (Dyeing & Finishing
Technology), 26 (8), 553-557.
8. Kang HR & Anderson PG (1992), Neural network applications to the colour scanner and printer calibrations,
Journal of Electronic Imaging, 1 (1), 125-134.
9. Tominaga S (1993), Color notation conversion by neural networks, CRA, 18 (4), 253-259.
10. Van de Hulst HC (1980), Multiple Light Scattering: Tables, Formulas, and Applications, Academic Press (New
York).
11. Nobbs JH (1997), Colour-match prediction for pigmented materials, in Colour Physics for Industry, R McDonald
(ed.), Society of Dyers and Colourists.
12. Saunderson JL (1942), Calculation of the color of pigmented plastics, JOSA, 32, 727.
13. Funahashi K (1989), On the approximate realization of continuous mappings by neural networks, Neural Networks,
2, 182-192.
Correspondence: Dr Stephen Westland, s.westland@colour.derby.ac.uk
Colour Imaging Institute, Kingsway House, Derby, DE22 3HL, UK.