Various explanations of the size principle

W. Senn (1,4,5), K. Wyler (1,4), H.P. Clamann (2,6), J. Kleinle (1,4), M. Larkum (2),
H.-R. Luscher (2), L. Muller (1,3), J. Streit (2), K. Vogt (2), T. Wannier (2)

IAM-95-012
July 1995
The size principle is the activation of motoneurons of a muscle pool in ascending order of
their sizes when that pool of motoneurons receives a common, increasing input. This technical
report is a survey of possible explanations of the size principle for the recruitment of motor
units. As a pre-study for further work, we collected existing explanations and suggested some
new ones. Reflecting the composition of our group, the report is divided into two parts: a
collection from a physiological point of view and a collection from a more theoretical point
of view.
CR Categories and Subject Descriptors: G.1.6 [Numerical Analysis]: Optimization - Constrained
optimization; G.2.1 [Discrete Mathematics]: Combinatorics - Permutations and combinations;
I.2.6 [Artificial Intelligence]: Learning - Connectionism and neural nets.
General Terms: Neurophysiology
Additional Key Words: size principle, recruitment order, motor units, muscle activation.
1 Institut fur Informatik und angewandte Mathematik, Universitat Bern, Switzerland
2 Physiologisches Institut, Universitat Bern, Switzerland
3 Ascom Tech AG, Bern, Switzerland
4 Supported by the Swiss National Science Foundation (NFP/SPP grant no. 5005-03793)
5 email: wsenn@iam.unibe.ch
6 email: clamann@pyl.unibe.ch
Contents

1 Part I: The view of a physiologist
  1.1 Presynaptic mechanisms
  1.2 Postsynaptic mechanisms

2 Part II: The view of theoreticians
  2.1 Explanations emerging from an optimization task or what recruitment by size is good for
    2.1.1 Minimization of energy functions
    2.1.2 A combinatorial approach
    2.1.3 Some functional approaches
    2.1.4 Information theoretical approach
  2.2 Physiological explanations or how recruitment by size is achieved
    2.2.1 Electro-Geometrical level: due to scaling properties
    2.2.2 Molecular level: by adaptation of the cell characteristics
    2.2.3 Signal-processing level: due to non-linear EPSP summation
    2.2.4 Structural level: due to properties of the connectivity
  2.3 Appendices
Introduction
The present report collects possible mechanisms which could explain the size principle.
The size principle implies (among other things) the activation of motoneurons of a
muscle pool in ascending order of their sizes when that pool of motoneurons receives a
common, increasing input.
How may this work?
We put together two distinct contributions to this question. The first one is a synopsis
by a physiologist who has worked in this field for many years. The second one is a consideration
by mathematicians and computer scientists who were confronted with the fascinating
organization principles of the spinal cord. For an exposition and a history of the scientific
field we refer to (Henneman, 1990) and the other contributions in the book of Binder and
Mendell.
1 Part I: The view of a physiologist
1.1 Presynaptic mechanisms
a) Small motoneurons selectively receive a powerful input, while larger motoneurons receive
progressively weaker inputs. Suggested by Burke et al., especially in relation to inputs from
the red nucleus and cutaneous afferents. Inputs from the same source may even be inhibitory on
small and excitatory on large motoneurons. This results in a reversal of the normal recruitment
order.
Burke, R.E., Jankowska, E., ten Bruggencate, G. A comparison of peripheral and
rubrospinal synaptic input to slow and fast twitch motor units of triceps surae. J.
Physiol. 207: 709-732, 1970.
Burke, R.E., Fedina, L., Lundberg, A. Spatial synaptic distribution of recurrent and
group Ia inhibitory systems in cat spinal motoneurones. J. Physiol. 214: 305-326,
1971.
Powers, R.K., Robinson, F.R., Konodi, M.A., Binder, M.D. Distribution of rubrospinal
synaptic input to cat triceps surae motoneurons. J. Neurophysiol. 70: 1460-1468,
1993.
b) Inputs are distributed to all motoneurons of a pool randomly or roughly equally, or
perhaps even such that small motoneurons receive the fewest boutons and the number of boutons
per motoneuron increases with MN size. The larger and more complex the connection,
the more likely are some boutons not to be invaded by action potentials (branch point
failure) and to be ineffective. Thus small motoneurons end up receiving the strongest
inputs. Luscher and coworkers.
Luscher, H.-R., Ruenzel, P., Henneman, E. How the size of motoneurones determines
their susceptibility to discharge. Nature 282: 859-861, 1979.
Luscher, H.-R., Ruenzel, P., Henneman, E. Composite EPSP's in motoneurons of
different sizes before and during PTP: implications for transmission failure and its
relief at Ia projections. J. Neurophysiol. 49: 269-289, 1982.
Luscher, H.-R., Ruenzel, P., Henneman, E. Effects of impulse frequency, PTP, and
temperature on responses elicited in large populations of motoneurons by impulses
in single Ia-fibers. J. Neurophysiol. 50: 1045-1058, 1983.
Henneman, E., Luscher, H.-R., Mathis, J. Simultaneously active and inactive synapses
of single Ia fibres on cat spinal motoneurones. J. Physiol. 352: 147-161, 1984.
Henneman, E. The size-principle: a deterministic output emerges from a set of
probabilistic connections. J. exp. Biol. 115: 105-112, 1985.
1.2 Postsynaptic mechanisms
Inputs are in some sense equal on all motoneurons, which eliminates presynaptic influences.
Motoneurons respond to these inputs in a size-dependent manner. This allows several different
mechanisms.
a) Motoneurons have an input impedance inversely proportional to their size. EPSPs are then
weakly attenuated by the high input impedance of small cells and strongly attenuated
by the low input impedance of large cells. This is Henneman's originally proposed mechanism.
Henneman, E., Somjen, G., Carpenter, D.O. Functional significance of cell size in
spinal motoneurons. J. Neurophysiol. 28: 560-580, 1965.
Henneman, E., Somjen, G., Carpenter, D.O. Excitability and inhibitibility of
motoneurons of different sizes. J. Neurophysiol. 28: 599-620, 1965.
Lindsay, A.D., Binder, M.D. Distribution of effective synaptic currents underlying
recurrent inhibition in cat triceps surae motoneurons. J. Neurophysiol. 65: 168-177,
1991.
b) There are systematic differences in the membrane properties of motoneurons which correlate
with cell size. These determine the thresholds of the motoneurons. Munson et al.
suggested that there exist four types of motoneurons (S, FR, FI, FF) and that threshold, and
hence recruitment order, is random within each group but differs from group to group.
However, they also suggested that different MU types attract inputs of different strengths,
a presynaptic mechanism (see above). The suggestion was refuted by Bawa et al., who
showed that recruitment order was strictly obeyed in soleus muscles, whose motoneurons
comprise only type S.
Bawa, P., Binder, M.D., Ruenzel, P., Henneman, E. Recruitment order of motoneurons
in stretch reflexes is highly correlated with their axonal conduction velocity. J.
Neurophysiol. 52: 410-420, 1984.
Fleshman, J.W., Munson, J.B., Sypert, G.W., Friedman, W.A. Rheobase, input
resistance, and motor-unit type in medial gastrocnemius motoneurons in the cat. J.
Neurophysiol. 46: 1326-1338, 1981.
Fleshman, J.W., Munson, J.B., Sypert, G.W. Homonymous projection of individual
group Ia-fibers to physiologically characterized medial gastrocnemius motoneurons
in the cat. J. Neurophysiol. 46: 1339-1348, 1981.
Friedman, W.A., Sypert, G.W., Munson, J.B., Fleshman, J.W. Recurrent inhibition
in type-identified motoneurons. J. Neurophysiol. 46: 1349-1359, 1981.
c) Binder and coworkers have shown that motoneurons of different sizes or types receive
input currents in relation to size, which accounts for the recruitment order, BUT that
these input current differences are relatively greater than can be accounted for by input
impedance differences alone. This is particularly true for Ia excitatory inputs; rubrospinal
inputs are selective to motoneurons of different sizes (see above) and Renshaw inhibitory
current falls equally on all cells, yet EPSPs are correlated with cell size. They suggest that
there are systematic size-related membrane differences among motoneurons.
Heckman, C.J., Binder, M.D. Analysis of steady-state effective synaptic currents generated
by homonymous Ia afferent fibers in motoneurons of the cat. J. Neurophysiol.
60: 1946-1966, 1988.
Heckman, C.J., Binder, M.D. Neural mechanisms underlying the orderly recruitment
of motoneurons. Ch. 10, pp 182-204 in: The Segmental Motor System, M.D. Binder
and L.M. Mendell, eds. Oxford Univ. Press, N.Y., 1990.
Heckman, C.J., Binder, M.D. Computer simulation of the steady-state input-output
function of the cat medial gastrocnemius motoneuron pool. J. Neurophysiol. 65:
952-967, 1991.
Heckman, C.J., Binder, M.D. Computer simulations of the effects of different synaptic
input systems on motor unit recruitment. J. Neurophysiol. 70: 1827-1840, 1993.
Heckman, C.J., Binder, M.D. Analysis of Ia-inhibitory synaptic input to cat spinal
motoneurons evoked by vibration of antagonist muscles. J. Neurophysiol. 66: 1888-1893, 1991.
Powers, R.K., Robinson, F.R., Konodi, M.A., Binder, M.D. Effective synaptic current
can be estimated from measurements of neuronal discharge. J. Neurophysiol.
68: 964-968, 1992.
2 Part II: The view of theoreticians
We distinguish two classes of `explanations': optimization principles favouring recruitment by
size and physiological mechanisms realizing this same recruitment. The first are abstract
ideas which lie behind the physiological phenomenon. Such optimization principles could
be the basis for the physiological realization of recruitment by size, in the same way as,
e.g., the energy function of an artificial neural network is the basis for learning a series of
input-output patterns. One may imagine biological `learning algorithms' which optimize an
appropriate energy function and, in doing so, adapt the order of recruitment towards
recruitment by size.
To give an example, Hebbian learning, reinforcing correlated pre- and postsynaptic
activity, would maximize the information extraction capability of the MN-pool as defined
in (Senn et al., 1995a). This follows from the fact that Hebbian learning leads to an
equidistribution of the possible states of the MN-pool. In a next step, the theoretical
considerations show that a maximal information extraction capability is only possible if
recruitment by size holds.
Such considerations are not only of interest in their own right; they even open the door to new
physiological explanations of the size principle. Since Hebbian learning ultimately leads,
via maximization of information extraction, to recruitment by size, it is enough to ask how
such Hebbian learning could work. Indeed, there is a current hypothesis on activation-correlated
learning for synapses. A possible mechanism reinforcing active synapses could
be the action potentials propagating backwards through the dendritic arborization. Via
the concept of information extraction, this kind of learning, sometimes called `physiological
backpropagation', then turns out to be a possible reason for recruitment by size.
2.1 Explanations emerging from an optimization task or
what recruitment by size is good for
2.1.1 Minimization of energy functions
a) Minimum-energy principle This approach is due to Hatze (Hatze and Buys,
1977). He distinguishes between three types of fibers: slow-twitch, intermediate and
fast-twitch. The activated fibers of each of the three types contribute to the total energy E
of the actual muscular contraction. This energy E is of the form

E = effective work + activation heat + maintenance heat + shortening heat + dissipation heat.

A constant muscle force F may be reached in many different ways, e.g. by recruiting a few
slow-twitch but strong units or by recruiting many fast-twitch but weak units. However,
and this is Hatze's result, there is a unique partition of the activated muscle fibers into the
three types which yields a minimum of the energy E. Considering this partition as a function of
the muscle force, recruitment by size follows. The optimal relative amounts of muscle
fibers were calculated numerically.
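The following sketch gives the flavour of this computation. It is not Hatze's model: the
per-fiber force and energy parameters, the pool sizes and the helper cheapest_partition are
all invented for illustration. For a required force F it exhaustively searches the partition
of activated slow, intermediate and fast fibers with minimal total energy; with cost values
that make slow fibers the cheapest per unit of force, the minimizer fills the types in size
order as F grows.

from itertools import product

# Hypothetical per-fiber parameters (NOT Hatze's values): slow fibers are weak
# but energetically cheap per unit force, fast fibers are strong but expensive.
TYPES = ("slow", "intermediate", "fast")
FORCE = {"slow": 1.0, "intermediate": 2.5, "fast": 6.0}     # force per active fiber
ENERGY = {"slow": 1.0, "intermediate": 4.0, "fast": 15.0}   # energy per active fiber
POOL = {"slow": 30, "intermediate": 20, "fast": 10}         # fibers available per type

def cheapest_partition(required_force):
    """Exhaustive search for the partition (n_slow, n_int, n_fast) that reaches
    the required force with minimal total energy."""
    best = None
    for counts in product(*(range(POOL[t] + 1) for t in TYPES)):
        force = sum(n * FORCE[t] for n, t in zip(counts, TYPES))
        if force < required_force:
            continue
        energy = sum(n * ENERGY[t] for n, t in zip(counts, TYPES))
        if best is None or energy < best[0]:
            best = (energy, counts)
    return best

for F in (5, 20, 60, 120):
    energy, (ns, ni, nf) = cheapest_partition(F)
    print(f"F={F:3}: slow={ns:2} intermediate={ni:2} fast={nf:2}  energy={energy:6.1f}")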
b) Principle of least action Since Hatze's approach a) above is only tractable
by computer simulations and since there are too many parameters to adjust, we suggest a
simplified optimization principle which may be treated formally. We start from the hypothesis
that recruitment of motor units under isometric conditions and without heat production
minimizes the time integral of some Lagrangian. This approach is worked out in (Senn
et al., 1995b).
2.1.2 A combinatorial approach
a) Fine tuning In order to get fine tuning at a low level of muscle force, the small
motor units (MUs) have to be recruited at the beginning of muscle contraction. The
tuning is simpler with a large number of small MUs.
b) Optimal strategy Recruitment by size may be seen as a strategy to solve the
following combinatorial problem (Henneman, 1990, p. viii): `How can different tensions
that individual motor units develop be combined by activating appropriate motoneurons
to produce any total force that is required with the necessary precision and speed?' We
formalize this approach and show that, compared with other strategies of linear complexity,
recruitment by size is indeed optimal (cf. Appendix). For the connection with the
information theoretical approach and further theorems we refer to (Senn et al., 1995a).
2.1.3 Some functional approaches
a) Linear muscle force Muscle force should be linear in muscle activation. Since
there are only few of the late and seldom recruited motor units (for economical reasons),
such a unit has to generate a larger force to guarantee a linear increase of total muscle force.
b) Simultaneous twitches In order to synchronize the twitches of all muscle fibers
at the beginning of a rapid movement, the large (and therefore fast) MUs have to be
recruited later.
c) Bullock et al. The motoneuron pool is described by a single cell which has a certain
saturation level B. (To be precise, the output M(t) of the motoneuron pool is governed by
the equation dM/dt = -αM + (B - M)A, where A is the input to the pool and α > 0 is an
inverse decay time. For a constant input A, the motoneuron pool reaches a steady state
activity of M = BA/(A + α), which is bounded from above by B.) This saturation level may
be reached e.g. by stiffening the limb at a certain angle. To still guarantee sensitivity
with respect to input variations, the level B itself has to grow with A. The size principle
is identified with this parametric covariation of input A and saturation level B. (To speak
again in formulas, a fixed increase ΔM of the output as response to a fixed increase ΔA
of the input requires that B grows with A.) For consequences of this sort of size principle
for the connectivity between afferents, interneurons and motoneurons see (Bullock and
Contreras-Vidal, 1993).
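A minimal numerical sketch of this shunting equation (the values of α, B and the input
increment are assumed for illustration): with a fixed saturation level B the steady-state
response to a fixed input increment ΔA flattens out, and keeping the sensitivity constant
forces B to grow with A.

# Sketch of the shunting unit described above; alpha, B and dA are assumed values.
alpha = 1.0          # inverse decay time
dA = 0.5             # fixed input increment

def steady_state(A, B):
    """Steady state of dM/dt = -alpha*M + (B - M)*A, i.e. M = B*A/(A + alpha)."""
    return B * A / (A + alpha)

# With a fixed saturation level B the response dM to the fixed increment dA
# flattens out as the input A grows:
B = 10.0
for A in (0.5, 2.0, 8.0, 32.0):
    dM = steady_state(A + dA, B) - steady_state(A, B)
    print(f"A={A:5.1f}  dM={dM:.3f}")

# Keeping the same sensitivity dM at every operating point forces B to grow
# with A -- the parametric covariation identified with the size principle:
target_dM = steady_state(0.5 + dA, B) - steady_state(0.5, B)
for A in (0.5, 2.0, 8.0, 32.0):
    gain = (A + dA) / (A + dA + alpha) - A / (A + alpha)   # dM per unit of B
    print(f"A={A:5.1f}  required B={target_dM / gain:7.1f}")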
d) Principle of maximum grading sensitivity Hatze (Hatze, 1979) considers a
receptor organ in steady-state with input X and output Y. The input X is a Gaussian
distributed random variable with mean x ∈ R and standard deviation σ_x = s·x > 0
proportional to the mean. This says that the relative mean fluctuation is the same for
all input intensities x. The output Y is required to be a monotone increasing function of
the input X. It is again Gaussian distributed with mean y = y(x) and standard deviation
σ_y = σ_x · dy/dx > 0. Now, to guarantee optimal sensitivity, σ_y has to be independent of
the actual input intensity x. (Hatze derives this requirement by minimizing the `average
uncertainty' of the receptor.) Therefore, σ_y = σ_x · dy/dx = s·x·y' has to be constant. This leads
to the differential equation y' = const/x with solution y(x) = y_0 + const·ln x. Alternatively,
the differential equation may be written as Δx ≈ const·Δy·x, which represents Weber's
law (1834) for biological sensory systems: in order to achieve a constant effect Δy the
increment Δx of the stimulus has to be proportional to the actual stimulus x.
This theoretical result is applied in the following way to the recruitment of motor units.
The receptor input X is identified with the cross-sectional area u occupied by the activated
fibers. The receptor output Y consists in the number N of recruited motor units. Since
then Δu ≈ const·ΔN·u, a fixed number ΔN of additionally activated motor units corresponds
to an additional area Δu which is proportional to the actually activated area u.
One concludes that a single motor unit corresponds to a larger area Δu if it is recruited
at a higher level of activity.
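A small Monte Carlo check of the argument (the noise level s, gain c and offset y_0 are
arbitrary choices): if the input X is Gaussian with standard deviation s·x and the output is
y_0 + c·ln X, the empirical output standard deviation stays close to c·s at every operating
point x, as required by the principle of maximum grading sensitivity.

import math, random

random.seed(0)
s, c, y0 = 0.1, 2.0, 1.0     # assumed relative input noise, gain and offset

def empirical_sigma_y(x_mean, n=100_000):
    """Empirical std of Y = y0 + c*ln(X) for X ~ N(x_mean, (s*x_mean)^2)."""
    ys = []
    for _ in range(n):
        x = random.gauss(x_mean, s * x_mean)
        if x > 0:                          # ln needs a positive argument
            ys.append(y0 + c * math.log(x))
    mean = sum(ys) / len(ys)
    return math.sqrt(sum((y - mean) ** 2 for y in ys) / len(ys))

for x in (1.0, 10.0, 100.0):
    print(f"x={x:6.1f}  sigma_y={empirical_sigma_y(x):.4f}  (c*s = {c*s:.4f})")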
2.1.4 Information theoretical approach
a) Minimum description length The principle of minimum description length
(MDL) is borrowed from (Mumford, 1993) and may be applied in the following way to
the recruitment problem. The MN-pool has the task to put together any force F required
e.g. by the descending pathway. From a coding theoretical point of view it is inefficient
to pick out for every demanded force F some new subset M_F of all available MUs. More
reasonably, some force F + ΔF should be achieved by taking all MUs activated to generate
the force F (i.e. all MUs lying in the subset M_F) and then activating some supplementary
MUs to generate the small residual force ΔF. Thus, the principle of MDL requires that
the set M_{F+ΔF} of MUs needed to generate F + ΔF is a super-set of the set M_F of all MUs
needed to generate F (M_{F+ΔF} ⊇ M_F). This way, the sets M_F and M_{F+ΔF} do not have
to be encoded independently and therefore do not require unnecessary description length.
Rather, the two sets will be encoded by a code for M_F together with a code for the small
difference M_{F+ΔF} \ M_F. Iterating this reasoning, the MDL principle leads in a natural
way to a linear recruitment order which may be expressed by F_1 < F_2 ⟹ M_{F_1} ⊆ M_{F_2}.
For more details see the Appendix.
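A minimal sketch of the nesting property (the unit forces below are invented and
recruited_set is a hypothetical helper): because the MUs are recruited in a fixed linear
order, M_F is always a prefix of that order, M_{F+ΔF} is a super-set of M_F, and only the
small difference has to be described when the force is raised.

# The unit forces are invented; recruited_set is a hypothetical helper.
unit_forces = [1, 1, 2, 2, 3, 5, 8, 12, 20, 30]      # MU forces, ordered by size

def recruited_set(force):
    """Smallest prefix of the ordering whose summed force reaches `force`."""
    total, members = 0.0, set()
    for i, f in enumerate(unit_forces):
        if total >= force:
            break
        members.add(i)
        total += f
    return members

F, dF = 10, 9
M_F = recruited_set(F)
M_F_plus = recruited_set(F + dF)
assert M_F <= M_F_plus                     # the nesting property M_F subset of M_{F+dF}
print("M_F        =", sorted(M_F))
print("M_{F+dF}   =", sorted(M_F_plus))
print("difference =", sorted(M_F_plus - M_F))   # only this increment must be coded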
We compared the situation of encoding the states of the motor units with the source-channel
coding theorem for information transmission: first the source signal describing
the states of all efferents and afferents projecting to the pool is compressed into the source
code of the MN pool. The source code is then encoded into the channel code, optimizing
the signals with respect to the channel capacity of the axons. This approach of compressing
and optimizing the signal is worked out in (Senn et al., 1995a).
2.2 Physiological explanations or how recruitment by size
is achieved
2.2.1 Electro-Geometrical level: due to scaling properties
a) Sphere capacitor (Cf. (Luescher, 1994a, 3.i.)) Let us describe the MN as a
sphere capacitor with inner radius r and membrane thickness ε. The MN is spiking if
the potential difference U at the membrane is larger than some voltage threshold. Let
this threshold and ε be independent of the radius r and let us interpret r as the `size' of the MN.
To realize the size principle, U = U(r) should be monotonically decreasing in r. Now,
we have U(r) = Q(r)/C(r), where Q(r) denotes the total charge on the capacitor surface and
C(r) denotes the capacity. It is natural to assume that Q(r) is proportional to the
surface of the sphere, thus Q(r) = const·(r + ε)². The capacity C(r) of the sphere-shaped
membrane may be calculated as C(r) = const·r(r + ε)/ε. This leads to the membrane potential
U(r) = const·ε·(1 + ε/r), which indeed decreases monotonically in r.
b) Cylinder capacitor A better model of the MN would of course be a long cylinder,
say with inner radius r, membrane thickness ε and total length L. The capacity of such a
cylinder is calculated as

C(r, L) = const · L / ln(1 + ε/r) ≈ const · (rL/ε) · 1/(1 - ε/r) .

(If we do not neglect the boundary effect of the cylinder, the L-dependence of C acquires a
tanh(L) correction which, for large L, again gives C(L) = const·L, cf. (Rall, 1977) or
(Pinter, 1990, p. 172).)
Assuming again that the charge Q is proportional to the membrane surface we get
Q(r, L) = const·rL. According to the formula U = Q/C we get a membrane potential of the
form U(r, L) = const·ε·(1 - ε/r). Unfortunately, U(r, L) neither decreases with r nor
with L, as would be required by the size principle.
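As a quick numerical comparison of the two toy models (all constants set to 1, the membrane
thickness ε = 0.5 chosen arbitrarily): the sphere potential falls with the size r, whereas the
cylinder potential rises towards its asymptote, so only the sphere picture by itself favours
the small cells.

# Constants set to 1; the membrane thickness eps = 0.5 is an arbitrary choice.
eps = 0.5

def U_sphere(r):        # U(r) = eps * (1 + eps/r): decreases with the size r
    return eps * (1 + eps / r)

def U_cylinder(r):      # U(r) = eps * (1 - eps/r): increases with the size r
    return eps * (1 - eps / r)

for r in (1.0, 2.0, 4.0, 8.0):
    print(f"r={r:4.1f}  U_sphere={U_sphere(r):.3f}  U_cylinder={U_cylinder(r):.3f}")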
c) Disproportionality volume/surface Since this is a working sheet we allow
ourselves the following suggestion: let us assume that a single EPSP (excitatory postsynaptic
potential) were `distributed' over the whole volume of the dendritic tree (instead
of over its surface). A tree consisting of cylinders with average radius r and total length l
has a volume V = πr²l and a surface S = 2πrl. Assuming that the total
number of EPSPs is proportional to the surface S, the average tree potential U would be
U ∝ S/V = 2πrl/(πr²l) = 2/r. This indeed would be the desired inverse proportionality of
membrane potential U and `size' r.
2.2.2 Molecular level: by adaptation of the cell characteristics
To realize the size principle a formal scaling disproportionality is not needed a priori.
In case 2.2.1 b), for example, the effects of larger capacity and larger surface charge more or less
cancel since they are, in a first order approximation, inversely proportional. However, a local
disproportionality which does not necessarily extend to a disproportionality on the whole
parameter domain may already be enough to guarantee the size principle. Such a local
disproportionality may of course be influenced by a great variety of cell characteristics.
a) Locally disproportional input impedance Cf. (Luescher, 1994a, 2.) A locally
disproportional decrease of the input impedance with size could explain the size
principle. The input impedance has three contributions: an ohmic resistance, a capacitive
resistance and an inductive resistance. Since each of these ingredients could vary with
size, there is a great variety of possible explanations of the size principle.
b) Locally disproportional threshold Cf. (Luescher, 1994a, 2. and 3.i.) While no
preference for recruitment by size may be deduced from any theoretical argument, a tiny
increase of the hillock threshold may tip the scales in favor of the size principle.
c) Locally disproportional soma size Cf. (Luescher, 1994a, 3.iii.) If there are no
direct connections from the afferents onto the soma, any growth of the soma size would
reduce the membrane potential: while the charge Q would remain constant, the capacity
C would grow. By U = Q/C this leads to a smaller membrane potential U.
2.2.3 Signal-processing level: due to non-linear EPSP summation
a) Active synapses are `leaky' Large MNs have a greater chance of having two or
more synapses in close vicinity. (If you don't agree, see the Appendix.) Since close synapses
mutually reduce their EPSPs, these EPSPs should sum up to a lower soma potential in
larger MNs. However, the effect has been shown experimentally to be minute, cf. (Luescher,
1994b).
b) Different phase lags between the EPSPs The EPSPs in a large MN may
have a greater variety of phase lags since large MNs have a more complex branching structure.
Due to these phase lags the EPSPs would not add optimally in large MNs. This leads
to the possibility that small MNs could be better coincidence-detectors of their EPSPs
than large MNs. ((Softky, 1994, sect. 2.6) speaks of coincidence-detection of the second
type.) In addition, the temporal vicinity of the EPSPs in small MNs could cause so-called
facilitation in the sense of (Softky, 1994): the local and temporal vicinity of two or more
EPSPs may trigger active propagation within the dendrite. This would once more enlarge
the functional gap between small and large MNs.
c) `Branch-point factor' in MN-dendrites If the dendrites indeed propagate the EPSPs
actively, a sort of branch point failure may occur within the MN arborization.
The breakdown of active propagation at a branch point may reduce the EPSP by a small
amount. However, the effect may significantly distinguish between small and large
arborizations. For evidence of a branch point factor see the Appendix.
2.2.4 Structural level: due to properties of the connectivity
a) Branch-point failures Cf. (Luescher, 1994a, 4.) A further explanation of the
size principle could be the following: although the anatomical density (with respect to
the MN surface) of synapses is equal on small and large motoneurons, the relative amount
of active synapses is greater on small motoneurons. We do not, of course, exclude such a
possibility. However, the disproportionately large amount of silent terminals cannot be derived
from branch point failures if one only assumes identically distributed synapses on the
terminal arborization. This may be seen by the following reasoning.
Suppose that the synapses of a collateral projecting onto MN1 and MN2 are identically
distributed along the branches of the collateral. Let ρ_i denote the density of synapses on
the collateral projecting onto MN_i. Thus, ρ_i is the total number of synapses from the
collateral onto MN_i divided by the total length of the branches of the collateral. Denoting
the total length of the collateral's branches (or their total surface if a branch is considered
to be 3-dimensional) by S, the number of synapses onto MN_i is given by ρ_i·S. Let us
assume that ρ_1 < ρ_2. If now some branch point failures happen, a certain amount of the
terminal arborization functionally drops out, the rest being still active. Let us denote by
S_a ≤ S the total length (or surface) of all those branches of the collateral which are not
affected by the branch point failures. The numbers of synapses still active after the failures
are ρ_1·S_a for MN1 and ρ_2·S_a for MN2, respectively. This means that the proportion of
active synapses after the failures is ρ_1·S_a/(ρ_1·S) = S_a/S, and this indeed does not depend on the
MN. (The proportion of silent synapses correspondingly is 1 - S_a/S for both MNs.) For an
explicit calculation of the number of active synapses in the case of a complete binary tree see
the Appendix.
b) Non-homogeneous distribution of the synapses There is no hope for disproportionately
more silent synapses on large MNs if all that distinguishes the MNs is a
scaling proportional to their size. What would help, for instance, would be a different
distribution of the synapses for MNs of different size.
c) Distribution of spindles We make two hypotheses:
1. There is an order preserving mapping from the afferents back to the pool: afferents
measuring the state of certain muscle fibers project mainly to the MNs innervating
these same muscle fibers. (Cf. Appendix)
2. Looking down at the muscle, S-type units have many more spindles than FF-units
(as we learned from (Clamann, 1994)).
Suppose now that by γ-innervation or by an external force a certain portion of the spindles
is activated. If the spindles have approximately the same sensitivity, these activated
spindles are distributed according to their density within the muscle. In particular, type-S
units have more activated spindles (assumption 2). Due to the order preserving map
(assumption 1), small MNs receive (disproportionately!) more afferent input than large MNs.
d) Different sensitivities of spindles In contrast to the above, we now start from the
assumption that spindles on S-type muscle fibers are more sensitive than their colleagues
on FF-type fibers. If the spindles get excited with increasing intensity, spindles on S-type
fibers are recruited earlier. According to the order preserving mapping (assumption 1
above) the size principle would be induced.
e) Do afferents project in different ways on small and large MNs? The
terminal arborizations of skin receptors seem to distinguish between small and large MNs.
Or how else should the following citation from (Burke, 1981, p. 392) be understood? `Skin
stimulation raised thresholds for most low-threshold units and decreased them in high-threshold
units, ...' Isn't it possible that Ia-, II- and Ib-afferents could distinguish as well
between small and large MNs?
2.3 Appendices
Appendix to 2.1.2 b)
Suppose that the MN-pool has to put together some final force F lying between 0 and
Fmax > 0. Let p(F) > 0 be the probability that the pool has to provide the final force
F ∈ [0, Fmax]. In order to reach this final force in some finite time Δt as smoothly as
possible, any intermediate force G with 0 ≤ G < F has to be approximated by the pool in
ascending order. Such an intermediate force G will be reached whenever the final force F
is larger than G. Thus, the probability that an intermediate force G has to be furnished
by the pool is given by

P(G) = (1/k) ∫_G^{Fmax} p(F) dF ,

where k = ∫_0^{Fmax} p(F) dF is a normalization factor. Obviously, the probability P(G) is
strictly monotonically decreasing in G.
Let us define a class of high speed strategies to solve Henneman's combinatorial problem.
Let the i-th motor unit MU_i be able to furnish the partial force F_i.

Definition 1 A linear strategy S consists of an enumeration MU_1, MU_2, ..., MU_N of
the MUs which is run through in time in order to achieve any final force F.

According to a linear strategy, the pool will recruit MU_1, MU_2, ..., MU_j until the actual
muscle force F_1 + F_2 + ... + F_j `best' approximates the intermediate force G < F, say
until F_1 + F_2 + ... + F_j first exceeds G. Let us denote this excess by E_S(G), i.e.

E_S(G) := min_{1≤j≤N} { Σ_{i=1}^{j} F_i - G : Σ_{i=1}^{j} F_i - G ≥ 0 } .

Thus, E_S(G) describes the error between the actual muscle force F_1 + F_2 + ... + F_j
recruited by the pool and the intermediate force G which has to be put together.

Definition 2 A linear strategy S is optimal (in precision) if the expected error of E_S(G)
is minimal with respect to the probability distribution P on the set of all intermediate
forces G. In formulas: S is optimal iff ⟨E_S⟩ ≤ ⟨E_{S'}⟩ for any linear strategy S', where
⟨E_S⟩ := ∫_0^{Fmax} E_S(G) P(G) dG.

Theorem 1 Let p(F) be any probability distribution for the final muscle forces F and
let P(G) be the induced (monotone decreasing) probability distribution for the intermediate
muscle forces G. The linear strategy S is optimal if and only if the partial forces F_i of the
MU_i are ordered according to F_1 ≤ F_2 ≤ ... ≤ F_N.
Proof If for some linear strategy S one had F_i > F_{i+1}, we define the new strategy S' by
interchanging F_i and F_{i+1}. Iterating this procedure obviously ends up at the strategy
characterized by F_1 ≤ F_2 ≤ ... ≤ F_N. It remains to show that, with the permutation
F_i ↔ F_{i+1} leading from S to S', the expectation value of E_S is reduced, i.e. that
⟨E_S⟩ - ⟨E_{S'}⟩ > 0.

Let us fix the linear strategy S (i.e. the enumeration MU_1, ..., MU_N) with F_i > F_{i+1}
and set ΔF := ΔF_i = F_i - F_{i+1} (> 0). Defining F_i^S := F_1 + ... + F_i, the function
G → E_S(G) is seen to be a saw-tooth map with jumps at the arguments F_i^S from the
function value F_i down to 0. Using this fact we calculate

⟨E_S⟩ - ⟨E_{S'}⟩ = ∫_{F_i^S - ΔF}^{F_i^S} F_{i+1} P(G) dG - ∫_{F_i^S}^{F_{i+1}^S} ΔF P(G) dG
               > F_{i+1} P(F_i^S) ΔF - ΔF P(F_i^S) (F_{i+1}^S - F_i^S) = 0 .

Here, the strict inequality comes from the probability distribution P(G) being strictly
decreasing. To estimate the first integral we used the fact that P(F_i^S) < P(G) for all G ∈
[F_i^S - ΔF, F_i^S), while to estimate the second integral we used the fact that P(F_i^S) > P(G)
for all G ∈ (F_i^S, F_{i+1}^S]. □
Remark Consulting Theorem 1 one could argue that the precision of the MN-pool is high
if the forces F_i, i = 1, ..., N, of the MUs are all very small. However, the gain in precision
would be paid for with a lower total force F_1 + ... + F_N. The well known distribution of
the forces F_i (a lot of units with small forces, few units with large forces) may be seen as
a compromise between high precision and large total force. Indeed, different valuations
of the two contradictory goals of high precision and large force would lead to different
optimal (in a sense to be defined) distributions of the F_i's.
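A small numerical illustration of Theorem 1 (not part of the formal argument; the unit
forces and the uniform choice of p(F) are invented): for each permutation of a handful of
unit forces, the expected excess ⟨E_S⟩ is integrated numerically against the decreasing weight
P(G); in accordance with the theorem, the size-ordered enumeration comes out with the
smallest expected error.

from itertools import permutations

# Unit forces and p(F) are invented.  With p(F) uniform on [0, Fmax] the induced
# weight on intermediate forces is P(G) = 2*(Fmax - G)/Fmax**2, which is decreasing.
unit_forces = (1.0, 2.0, 4.0, 8.0)
Fmax = sum(unit_forces)

def excess(order, g):
    """E_S(G): overshoot of the first partial sum of `order` that reaches G."""
    total = 0.0
    for f in order:
        total += f
        if total >= g:
            return total - g
    return 0.0

def expected_excess(order, steps=20_000):
    """<E_S> = integral over [0, Fmax] of E_S(G) * P(G), by the midpoint rule."""
    h = Fmax / steps
    acc = 0.0
    for k in range(steps):
        g = (k + 0.5) * h
        acc += excess(order, g) * 2 * (Fmax - g) / Fmax**2 * h
    return acc

ranked = sorted(permutations(unit_forces), key=expected_excess)
for order in (ranked[0], ranked[-1]):          # best and worst enumeration
    print(order, f"<E_S> = {expected_excess(order):.4f}")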
Appendix to 2.1.4 a)
There is an information theoretical notion which measures the MDL of an encoding: the
entropy H. Let X_i denote the random variable which takes the value 1 if MU_i is active
and 0 otherwise. Depending on the recruitment ordering we have different probabilities that
MU_i gets activated. A recruitment ordering may be seen as a map F → M_F which
assigns to a force F the set M_F of MUs which are needed to generate F. Recruitment
by size is a linear recruitment ordering (i.e. one satisfying F_1 < F_2 ⟹ M_{F_1} ⊆ M_{F_2}) with
the property that, whenever M_{F_1} ⊆ M_{F_2}, the additional motor units MU_j ∈ M_{F_2} \ M_{F_1}
have forces F_j at least as large as the forces of the MUs already contained in M_{F_1}.

Let us start from the probability distribution P(F) describing the probability that
the pool has to generate the force F. The probability p_i = P(X_i = 1) that MU_i gets
activated is then given by p_i = ∫ P(F) dF, where we have to integrate over all forces F
which will recruit MU_i, i.e. over all F with MU_i ∈ M_F. These probabilities p_i induce
the joint probability distribution P_Ω on the space Ω = {(x_1, ..., x_N)} = {0, 1}^N of all
binary sequences encoding the states of MU_1, ..., MU_N. Without proof we claim

Lemma 1 If P(F) is monotonically decreasing, the entropy H(X_1, ..., X_N) of the joint
probability distribution P_Ω is maximal for recruitment by size.
The entropy h_pool = H(X_1, ..., X_N) is interpreted as the information extraction capability
of the MN pool.
The code words induced by a linear recruitment ordering are of the form 1110...00
with all `1's at the beginning. Obviously, such a code has little information per symbol.
It is the source code which is compressed from the states of all interneurons, afferents
and efferents. The source code consisting of the N + 1 words 00...0, 100...0, 110...0, ...,
111...1 is now optimal for information extraction if the first binary digits are assigned to the
small motoneurons. This is the statement of the lemma.
The maximal information contained in N + 1 words is log(N + 1). Compared with
the possible amount of information in a string of N binary digits (which by definition is N
bits), this information of log(N + 1) may seem poor. However, to be robust against coding
failures, a code necessarily needs redundancy.
Let us propose a formal definition of the `computational capability' of the MN pool.
The information rate of a code is defined as the number of bits per symbol. In the case of
the channel code, this rate is R_C = log(N + 1)/N. To determine the rate of the source code, let
us assume that the same amount of information is encoded in M binary digits. M is interpreted
as the number of interneurons, motoneurons etc. that describe the source. This leads to
an information rate of R_S = log(N + 1)/M < R_C. The computational capability Comp Cap
of the pool is now defined as the gain in information rate:

Comp Cap = R_C - R_S = log(N + 1) (1/N - 1/M) .
There is another advantage of the fact that the muscle can work with very little
information. All the activation patterns can be encoded by a code with minimal possible
length L (namely L = H(X_1, ..., X_N) ≤ log(N + 1)). A descending pathway only
needs to transmit an information of L bits down to the pool, and the pool itself may
operate with a minimum of information received from the CNS.
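The quantities just defined can be written down in a few lines (the pool size N, the source
size M and the decreasing level distribution are assumed values, not taken from data): under
a linear ordering the pool state is determined by the number k of recruited units, so h_pool
is the entropy of a distribution over the N + 1 recruitment levels and is bounded by log(N + 1).

import math

# N (pool size) and M (source size) are assumed values, as is the decreasing
# probability q_k of recruiting exactly k units under a linear ordering.
N, M = 10, 50
weights = [N + 1 - k for k in range(N + 1)]
total = sum(weights)
q = [w / total for w in weights]

h_pool = -sum(p * math.log2(p) for p in q if p > 0)      # entropy in bits
print(f"h_pool = {h_pool:.3f} bits, upper bound log2(N+1) = {math.log2(N + 1):.3f}")

R_C = math.log2(N + 1) / N       # rate of the N-symbol channel code
R_S = math.log2(N + 1) / M       # rate of the M-symbol source description
print(f"R_C = {R_C:.3f}, R_S = {R_S:.3f}, Comp Cap = {R_C - R_S:.3f} bits per symbol")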
Appendix to 2.2.3 a)
Although the density of synapses per unit length is constant, the probability that two
or more synapses are in close vicinity is larger for large MNs. To be precise, we assume
that the synapses are independent and uniformly distributed over the total length of the
dendrites. Suppose there are exactly L places on the dendritic arborization where synapses
may attach. Distributing N synapses on these L places, we ask for the probability that
at least two synapses are on the same place. The counter probability, i.e. the probability
that all N synapses occupy different places, is calculated as

p(L, N) = 1 · (L - 1)/L · (L - 2)/L · ... · (L - N + 1)/L .

(Note that for L = 365 we have the famous birthday problem: what is the probability
that in a group of N people, say N = 23, at least two have the same birthday? Answer:
1 - p(365, 23) = 0.51.)
We now assume that the number N of synapses is proportional to L, N = ρL with
0 < ρ < 1. Let two dendritic arborizations have total lengths L_1 and L_2 and assume
that N_1 = ρL_1 and N_2 = ρL_2 are natural numbers. Let q(L_i) := 1 - p(L_i, N_i) denote the
probability that at least two synapses of the dendritic arborization i are at the same place.
Using this notation the statement from above is expressed in the following more precise form:

Lemma 2 The ordering L_1 < L_2 implies q(L_1) < q(L_2).

To give an example, let the density per unit length be ρ = 1/10 and consider three dendritic
arborizations of total length L_1 = 100, L_2 = 200 and L_3 = 400, respectively. Then the
probability that at least two synapses are at the same place is given by q(100) = 0.37,
q(200) = 0.63, q(400) = 0.87, thus being larger for large MNs.
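The quoted values are reproduced by the following few lines; only the place count L, the
synapse number N = ρL and the formula for p(L, N) from above enter.

def p_all_distinct(L, N):
    """Probability that N synapses, placed independently and uniformly on L
    possible places, all occupy different places (the formula above)."""
    prob = 1.0
    for k in range(N):
        prob *= (L - k) / L
    return prob

print(f"birthday check: 1 - p(365, 23) = {1 - p_all_distinct(365, 23):.2f}")

rho = 0.1                                    # synapse density per unit length
for L in (100, 200, 400):
    print(f"L={L:3}  q(L) = {1 - p_all_distinct(L, int(rho * L)):.2f}")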
Appendix to 2.2.3 c)
Evidence that a branch point factor reduces the EPSP at a branch point of a dendrite may
be the following vague and somewhat constructed reasoning: assume that the
dendritic arborization tends to make accessible the largest possible drainage volume with
the dendritic material available. In order to get such an optimization between drainage
volume and total dendritic length, an optimal ratio between the number of branch points
and the total dendritic length emerges. It is just a feeling that the dendritic arborizations
of MNs have too few branch points compared with their dendritic length. (Moreover, the
density of branch points seems to be largest at a middle radius. Optimal, however, would
be a density which does not depend on the radius.) An explanation of this lack of branch
points may be their inefficiency in transporting EPSPs. Indeed, an inefficiency of dendritic
branch points would force the MN simply to have fewer branch points.
Appendix to 2.2.4 a)
Let the terminal arborization have the special form of a complete binary tree with N
generations and a constant length L of the branches. Let the boutons be uniformly distributed
on the tree with a density ρ of synapses per unit length. Since there are 2^{N+1} - 1 branches
on the whole tree, the expected number of boutons is (2^{N+1} - 1)·ρ·L.
Now, we assume that at any branch point a transmission failure may occur with
positive probability. If such a failure occurs, the active transmission of the EPSP breaks
down in both subsequent branches. Consequently, all boutons lying below the failing
branch point will be silent. We ask for the expected number of active boutons in the tree
if an afferent AP (action potential) is injected at the stem.
Let p ∈ [0, 1] be the probability that at a branch point the AP is transmitted into
the two subsequent daughter branches. With probability 1 - p both daughter branches
will be silent. Let Z_n be the random variable describing the number of active branches
at the n-th generation. By f_n(s) we denote the probability generating function of Z_n, i.e.
f_n(s) = Σ_{k=0}^{∞} P(Z_n = k) s^k, where P(Z_n = k) is the probability of having k active branches
in the n-th generation. Thus, f_0(s) = s and f_1(s) = (1 - p) + p·s². The generating function
for the active branches at higher generations is determined iteratively by the following

Lemma 3 The generating function of Z_n is given by the n-th iterate f_n(s) = f_1 ∘ ... ∘ f_1(s).

Proof Suppose that the generating function of Z_n is f_n. Under the condition that
Z_n = k, the distribution of Z_{n+1} has the generating function (f_1(s))^k, k = 0, 1, ... . In the
general case, the generating function of Z_{n+1} is

f_{n+1}(s) = Σ_{k=0}^{∞} P(Z_n = k) (f_1(s))^k = f_n(f_1(s)) ,   n = 0, 1, ... .
Iteration starting at n = 0 proves the lemma. □

Since the expectation value of Z_n is calculated as ⟨Z_n⟩ = Σ_{k=0}^{∞} k·P(Z_n = k) = f_n'(1),
the lemma yields

⟨Z_n⟩ = f_n'(1) = f_1'(f_1 ∘ ... ∘ f_1(1)) · ... · f_1'(f_1(1)) · f_1'(1) = (2p)^n .

Denoting the total number of active branches up to the n-th generation by Y_n := Z_0 + Z_1 +
... + Z_n, one calculates the expectation value of Y_n as

⟨Y_n⟩ = ⟨Z_0⟩ + ... + ⟨Z_n⟩ = 1 + 2p + ... + (2p)^n = (1 - (2p)^{n+1}) / (1 - 2p) ,   p ≠ 1/2 .   (1)
We conclude that the expected number of active boutons in the binary tree under
consideration is given by [(1 - (2p)^{N+1}) / (1 - 2p)] · ρ · L. In particular, the number of active
boutons is seen to be proportional to the density ρ of boutons per unit length (of the terminal
arborization). Large MNs with a higher density ρ therefore have to cope with a number of
active boutons proportional to ρ.
Our formulas for ⟨Y_n⟩ and ⟨Z_n⟩ are found in (Kliemann, 1987, formulas (5) and (10))
within the context of dendritic branching patterns. He considers the more general case
where the branching probability p depends on the actual generation of the branch. In our
case, such a generalization would only influence formula (1), while the proportionality
between the density ρ and the number of active boutons remains valid.
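A short sketch of this branching-process calculation (the number of generations N, the
transmission probability p, the bouton density ρ and the branch length L are illustrative
values): it evaluates formula (1), checks it against a direct simulation of the failure
process, and reports the resulting expected number of active boutons, which scales with ρ.

import random

# Illustrative values for the generations N, transmission probability p,
# bouton density rho and branch length L.
N, p, rho, L = 8, 0.9, 0.2, 10.0
random.seed(1)

def expected_active_branches(n, prob):
    """<Y_n> = 1 + 2p + ... + (2p)^n, i.e. formula (1)."""
    return n + 1 if prob == 0.5 else (1 - (2 * prob) ** (n + 1)) / (1 - 2 * prob)

def simulated_active_branches(n, prob):
    """One realization of the failure process: total number of actively
    invaded branches over the generations 0..n."""
    active, total = 1, 1                  # the stem (generation 0) is always active
    for _ in range(n):
        active = sum(2 for _ in range(active) if random.random() < prob)
        total += active
    return total

analytic = expected_active_branches(N, p)
mc = sum(simulated_active_branches(N, p) for _ in range(5_000)) / 5_000
print(f"expected active branches: analytic {analytic:.1f}, simulated {mc:.1f}")
print(f"expected active boutons : {analytic * rho * L:.1f} "
      f"out of {(2 ** (N + 1) - 1) * rho * L:.0f}")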
Appendix to 2.2.4 c)
An order preserving mapping from the afferents to the pool could be learned by the
Hebbian rule: strengthening afferent connections between activated spindles and activated
MNs leads to the development of closed circuits

MN_i → extrafusal muscle fiber → spindles → mainly the same MN_i .

At present we simulate such Hebbian learning and try to generate a 1-dimensional Kohonen
map from the muscle up to the pool.
From an information theoretical point of view the closed circuits correspond to the
principle of maximum autonomy.
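A minimal 1-dimensional Kohonen-map sketch of the kind of simulation alluded to above
(pool size, learning rate, neighbourhood width and annealing schedule are all invented):
activated spindle positions along the muscle are presented one at a time, the most responsive
MN and its neighbours are adapted towards the input, and after training the weights are
typically monotonically ordered, i.e. the learned mapping from the muscle back onto the pool
preserves the spatial order.

import math, random

# Pool size, learning rate, neighbourhood width and annealing are invented.
random.seed(0)
n_mn = 20
w = [random.random() for _ in range(n_mn)]       # MN `positions' on the muscle, unordered

eta, sigma = 0.3, 6.0                            # learning rate, neighbourhood width
for step in range(5000):
    x = random.random()                          # position of an activated spindle
    winner = min(range(n_mn), key=lambda i: abs(w[i] - x))
    for i in range(n_mn):                        # neighbourhood-weighted Hebbian update
        h = math.exp(-((i - winner) ** 2) / (2 * sigma ** 2))
        w[i] += eta * h * (x - w[i])
    eta *= 0.999                                 # slowly anneal learning rate and width
    sigma = max(0.5, sigma * 0.999)

inversions = sum(1 for i in range(n_mn - 1) if w[i] > w[i + 1])
print("learned map:", [round(v, 2) for v in w])
print("adjacent inversions:", inversions)        # 0 means fully order preserving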
References
Bullock, D. & Contreras-Vidal, J. L. (1993). How Spinal Neural Networks Reduce Discrepancies
Between Motor Intention and Motor Realization. In: Variability and Motor
Control, K. M. Newell & D. M. Corcos, ed., chapter 9, pages 183-221. Human Kinetics
Publishers.
Burke, R. (1981). Motor units: anatomy, physiology, and functional organization. In:
Handbook of Physiology, The Nervous System, V. B. Brook, ed., volume III, Part 1,
ch. 10, pages 345-422. American Physiology Society, Bethesda.
Clamann, P. (1994). Personal communication. On the occasion of the weekly tea meeting
at the Physiological Institute, Uni Bern.
Hatze, H. (1979). A teleological explanation of Weber's law and the motor unit size law.
Bull. Math. Biology, 41:407-425.
Hatze, H. & Buys, J. (1977). Energy-Optimal Controls in the Mammalian Neuromuscular
System. Bio. Cybernetics, 27:9-20.
Henneman, E. (1990). Comments on the Logical Basis of Muscle Control. In: The Segmental
Motor System, M. Binder & L. Mendell, ed., pages vii-x. Oxford University Press.
Kliemann, W. (1987). A Stochastic Dynamical Model for the Characterisation of the
Geometrical Structure of Dendritic Processes. Bull. Math. Biology, 49(2):135-152.
Luescher, H.-R. (1994a). Hypotheses on the size principle. BRAINTOOL-teaching 4,
Physiologisches Institut, Universitaet Bern.
Luescher, H.-R. (1994b). Private communication. On the occasion of the BRAINTOOL
seminar.
Mumford, D. (1993). Pattern Theory: A Unifying Perspective. Preprint.
Pinter, M. (1990). The Role of Motoneuron Membrane Properties in the Determination of
Recruitment Order. In: The Segmental Motor System, M. Binder & L. Mendell, ed.,
pages 165-181. Oxford University Press.
Rall, W. (1977). Core conductor theory and cable properties of neurons. In: Handbook of
Physiology, The Nervous System, Cellular Biology of Neurons, E. Kandel, ed., volume
1, Part 1, pages 39-97.
Senn, W., Wyler, K., Clamann, H., Kleinle, J., Larkum, M., Luscher, H.-R., Muller, L.,
Streit, J., Vogt, K., & Wannier, T. (1995b). Recruitment by size and principle of least
action. Technical report, Inst. of Comp. Sci. and Appl. Math., Bern.
Senn, W., Wyler, K., Clamann, H., Kleinle, J., Larkum, M., Luscher, H.-R., Muller, L.,
Vogt, K., & Wannier, T. (1995a). Size Principle and Optimization. J. theor. Biol.,
to be submitted.
Softky, W. (1994). Sub-Millisecond Coincidence Detection in Active Dendritic Trees.
Neurosci., 58(1):13-41.