Rate distortion function in betting game system
Masayuki Kumon
Association for Promoting Quality Assurance in Statistics
Abstract
Among the various aspects of game-theoretic probability, when one explores the mathematical structure of optimal strategies in betting games, the Kullback-Leibler divergence arises naturally as the optimal exponential growth rate of the betting capital process.
This structure was obtained by Prof. Takeuchi nearly fifty years ago. Inspired by Claude Shannon's information theory, an optimal betting strategy was also pioneered by John Larry Kelly Jr. in "A new interpretation of information rate", Bell System Technical Journal, Vol. 35, 917-926, 1956 [3].
The optimality properties of Kelly's strategy
• Minimal expected time property
• Asymptotically largest magnitude property
were investigated by Leo Breiman in "Optimal gambling systems for favorable games", Fourth Berkeley Symposium on Mathematical Statistics and Probability I, 65-78, 1961 [1].
Historical reviews and recent developments concerning Kelly's strategy, such as T. M. Cover's universal portfolio [2], are presented in L. C. MacLean, E. O. Thorp and W. T. Ziemba, eds., The Kelly Capital Growth Investment Criterion: Theory and Practice, Handbook in Financial Economics Series, Vol. 3, World Scientific, London, 2010 [4].
In this talk, the following topics are addressed.
• The game mutual information, which measures information transmission between betting games, is introduced.
• Two characteristics, the game channel capacity and the game rate distortion function, are defined from this mutual information, and their meanings are explained.
• The effect of the optimal strategy in a conditional betting game is verified using real stock price data.
• As an application of the game rate distortion function, an efficient lossy source coding scheme based on the optimal conditional betting strategy is proposed.
1. Mutual information in betting game system
1.1 Definition of mutual information
■ Mutual information in information theory
X ∼ P_X(x),  Y ∼ P_Y(y),  (X, Y) ∼ P_{XY}(x, y)
H(X) = −E_{P_X}[log P_X(X)], etc.
• Shannon's source coding theorem:
Entropy H(X) is the nearly achievable lower bound on the average length of the shortest description of the random variable X.

I(X; Y) = H(X) − H(X|Y) = H(Y) − H(Y|X)
        = H(X) + H(Y) − H(X, Y)
        = D(P_{XY} || P_X P_Y) ≥ 0
I(X; Y) = 0 ⇔ P_{XY}(x, y) = P_X(x) P_Y(y)
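As a quick numerical check (my own illustration, not part of the original slides), the identities above can be verified on a small joint distribution; the 2×2 probabilities below are arbitrary example values.

```python
import numpy as np

def H(p):
    """Shannon entropy (in bits) of a probability array."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Arbitrary example joint distribution P_XY on a 2x2 alphabet.
P_XY = np.array([[0.3, 0.2],
                 [0.1, 0.4]])
P_X = P_XY.sum(axis=1)
P_Y = P_XY.sum(axis=0)

# I(X;Y) = H(X) + H(Y) - H(X,Y)
I_entropies = H(P_X) + H(P_Y) - H(P_XY)

# I(X;Y) = D(P_XY || P_X P_Y)
I_kl = np.sum(P_XY * np.log2(P_XY / np.outer(P_X, P_Y)))

print(I_entropies, I_kl)  # both are about 0.1245 bits
```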
[Figure: diagram of the relations among H(X), H(Y), H(X, Y), H(X|Y), H(Y|X), and I(X; Y)]
■ Mutual information in betting games
A, B ∼ two betting games
C ∼ joint betting game of A and B
P_A, P_B, P_C : empirical prob. of Reality
Q_A, Q_B, Q_C : risk neutral prob. of Forecaster
μ_A := D(P_A || Q_A) : quantity of the game A
μ_B := D(P_B || Q_B) : quantity of the game B
μ_C := D(P_C || Q_C) : quantity of the game C
[Figure: diagram of the relations among μ_A, μ_B, μ_C, μ_{A|B}, μ_{B|A}, and I(A; B)]
I(A; B) := μ_{B|A} − μ_B = μ_{A|B} − μ_A = μ_C − (μ_A + μ_B)
∵ μ_C = μ_A + μ_{B|A} = μ_B + μ_{A|B} (additivity)
μ_{B|A} := μ_C − μ_A = D(P_{B|A} || Q_{B|A} | P_A)
μ_{B|A} : quantity of the conditional betting game B|A given A
D(P_{B|A} || Q_{B|A} | P_A) : conditional K-L divergence between P_{B|A} and Q_{B|A} given P_A
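A minimal numerical sketch (mine, not from the slides) of the quantities just defined: it checks the additivity μ_C = μ_A + μ_{B|A} and the two expressions for I(A; B) on a toy pair of binary games; the empirical and risk-neutral probabilities are made-up values.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats for arrays of the same shape."""
    p, q = np.asarray(p, float).ravel(), np.asarray(q, float).ravel()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Toy joint probabilities on {a1, a2} x {b1, b2}.
P_C = np.array([[0.35, 0.15],
                [0.10, 0.40]])   # Reality's empirical prob.
Q_C = np.array([[0.25, 0.25],
                [0.25, 0.25]])   # Forecaster's risk-neutral prob.

P_A, Q_A = P_C.sum(axis=1), Q_C.sum(axis=1)
P_B, Q_B = P_C.sum(axis=0), Q_C.sum(axis=0)
mu_A, mu_B, mu_C = kl(P_A, Q_A), kl(P_B, Q_B), kl(P_C, Q_C)

# mu_{B|A} = D(P_{B|A} || Q_{B|A} | P_A), averaged over P_A
mu_B_given_A = sum(P_A[x] * kl(P_C[x] / P_A[x], Q_C[x] / Q_A[x]) for x in range(2))

print(np.isclose(mu_C, mu_A + mu_B_given_A))       # additivity holds
print(mu_B_given_A - mu_B, mu_C - (mu_A + mu_B))   # two expressions for I(A; B) agree
```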
■ Decomposition of I(A; B)
I(A; B) = I_1(A; B) − I_2(A; B)
I_1(A; B) = D(P_C || P_A P_B) ≥ 0 : usual mutual information between P_A and P_B
I_2(A; B) = E_{P_C}[ log ( Q_C(X, Y) / (Q_A(X) Q_B(Y)) ) ]
Q_C(x, y) = Q_A(x) Q_B(y) ⇒ I_2(A; B) = 0
[Figure: binary symmetric erasure channel A ⇒ B for Reality's moves, with transition probabilities p, q, 1 − p − q]
[Figure: binary symmetric erasure channel A ⇒ B for Forecaster's moves, with transition probabilities r, s, 1 − r − s]
P_{B|A}(y|x) = δ_{xy},  Q_{B|A}(y|x) = P_B(y),  X = Y
⇒ D(P_{B|A} || Q_{B|A} | P_A)
  = Σ_{x∈X} P_A(x) Σ_{y∈Y} P_{B|A}(y|x) log [ P_{B|A}(y|x) / Q_{B|A}(y|x) ]
  = Σ_{x∈X} P_A(x) Σ_{y∈Y} δ_{xy} log [ δ_{xy} / P_B(y) ]
  = −Σ_{x∈X} P_A(x) log P_A(x) = H(X)
[Figure: entropy channel A ⇒ B for Reality's moves]
[Figure: entropy channel A ⇒ B for Forecaster's moves]
1.2 Game channel capacity
■ Channel capacity in information theory
C = sup_{P_X} I(X; Y) : capacity of the channel X ⇒ Y
• Shannon's channel coding theorem:
Capacity C is the supremum of rates R at which information can be sent with arbitrarily low probability of error.
■ Channel capacity in betting games
C_g := sup_{P_A, Q_A = P_A} I(A; B) : capacity of the betting game channel A ⇒ B
I(A; B) = μ_{B|A} − μ_B = μ_{A|B} − μ_A
C_g = sup_{P_A} μ_{A|B} = sup_{P_A} D(P_{A|B} || Q_{A|B} | P_B) ≥ 0
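As an informal illustration (my own sketch, not from the slides), in the special case Q_C = Q_A × Q_B with Q_A = P_A, the game capacity reduces to the classical channel capacity sup_{P_A} I(A; B), which can be approximated by a crude grid search; the binary symmetric channel below is a placeholder example.

```python
import numpy as np

def mutual_info(p_a, channel):
    """Classical I(A;B) in bits for input distribution p_a and a row-stochastic channel P_{B|A}."""
    p_ab = p_a[:, None] * channel             # joint distribution
    p_b = p_ab.sum(axis=0)
    mask = p_ab > 0
    return np.sum(p_ab[mask] * np.log2(p_ab[mask] / (p_a[:, None] * p_b[None, :])[mask]))

# Placeholder binary symmetric channel with crossover probability 0.1.
channel = np.array([[0.9, 0.1],
                    [0.1, 0.9]])

grid = np.linspace(0.001, 0.999, 999)
caps = [mutual_info(np.array([t, 1.0 - t]), channel) for t in grid]
print(max(caps))   # about 0.531 bits = 1 - H(0.1), attained near the uniform input
```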
[Figure: communication channel with feedback; the game capacity C_g = sup_{P_A} D(P_{A|B} || Q_{A|B} | P_B) corresponds to the classical capacity C = sup_{P_X} ( H(Y) − H(Y|X) )]
1.3 Game rate distortion function
■ Rate distortion function in information theory
R(D) = inf_{P_{X̂|X} : E_{P_{X X̂}}[d(X, X̂)] ≤ D} I(X; X̂) : rate distortion function of the transmission X ⇒ X̂
• Shannon's rate distortion theorem:
The rate distortion function R(D) is the infimum of rates R that asymptotically achieve the distortion D.
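For orientation (a standard textbook example, consistent with the binary symmetric transmission result derived later in these slides), the rate distortion function of a Bernoulli(p) source under Hamming distortion is:

```latex
% Binary source X ~ Bernoulli(p), Hamming distortion d(x, \hat{x}) = 1\{x \neq \hat{x}\}
R(D) =
\begin{cases}
  H(p) - H(D), & 0 \le D \le \min(p, 1-p),\\
  0,           & D > \min(p, 1-p),
\end{cases}
\qquad H(u) = -u \log u - (1-u)\log(1-u).
```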
■ Rate distortion function in betting games
R_g(D) := inf_{P_{Â|A} : E_{P_{A Â}}[d(A, Â)] ≤ D, Q_{Â|A} : Q_Â = P_Â} I(A; Â) : rate distortion function of the transmission A ⇒ Â
I(A; Â) = μ_{A|Â} − μ_A = μ_{Â|A} − μ_Â
R_g(D) = inf_{P_{Â|A}} μ_{Â|A} = inf_{P_{Â|A}} D(P_{Â|A} || Q_{Â|A} | P_A) ≥ 0
[Figure: rate distortion with feedforward; the game rate distortion function R_g(D) = inf_{P_{Â|A}} D(P_{Â|A} || Q_{Â|A} | P_A) corresponds to the classical R(D) = inf_{P_{X̂|X}} ( H(X) − H(X|X̂) )]
2. Optimal conditional betting strategy
2.1 Optimal limit order strategy (cf. [6])
■ Investor selects δ > 0 and sequentially decides the trading times 0 < t_1 < t_2 < · · · as follows.
S(t) > 0 : continuous asset price of Market
t_{i+1} : first time after t_i such that S(t_{i+1}) / S(t_i) = 1 + δ or 1/(1 + δ)
⇔ log S(t_{i+1}) − log S(t_i) = η or −η,  η = log(1 + δ)
[Figure: limit order times t_1, t_2, . . . , t_6 for d log S, plotted against time]
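The following sketch (my own illustration, run on a simulated price path rather than the real data used later) extracts the embedded binary sequence by recording, at each trading time, whether the log price has moved up or down by η since the previous trading time.

```python
import numpy as np

def embed_coin_tosses(prices, delta):
    """Return the binary sequence x_n: 1 if log S rose by eta = log(1 + delta)
    since the last trading time, 0 if it fell by eta (discrete-time approximation
    of the continuous crossing rule)."""
    eta = np.log(1.0 + delta)
    log_s = np.log(np.asarray(prices, dtype=float))
    tosses, ref = [], log_s[0]
    for v in log_s[1:]:
        if v - ref >= eta:
            tosses.append(1)
            ref = v
        elif v - ref <= -eta:
            tosses.append(0)
            ref = v
    return np.array(tosses)

# Simulated geometric-random-walk prices standing in for real data.
rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(0.01 * rng.standard_normal(10_000)))
x = embed_coin_tosses(prices, delta=0.02)
print(len(x), x[:20])
```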
Embedded Coin-Tossing Game
K_0 := 1
FOR n = 1, 2, . . . :
  Investor announces α_n ∈ R
  Market announces x_n ∈ {0, 1}
  K_n = K_{n−1}(1 + α_n(x_n − ρ))
END FOR
ρ = 1/(2 + δ) : risk neutral prob. set by Investor
• Notations
x^n = x_1 · · · x_n,  c_1, c_0 > 0
χ^n_1, χ^n_0 : number of x_i = 1, 0 (i = 1, . . . , n)
P(x^n) = B(χ^n_1 + c_1, χ^n_0 + c_0) / B(c_1, c_0),  B(c_1, c_0) = Γ(c_1)Γ(c_0) / Γ(c_1 + c_0) :
beta-binomial distribution modeled by Investor
Maximize E_P[log K_n] ⇒ {α_i}^n_{i=1}
α_i = (p_i − ρ) / (ρ(1 − ρ))
p_i = P(x_i = 1 | x^{i−1}) = (χ^{i−1}_1 + c_1) / (i − 1 + c_1 + c_0),  i = 1, . . . , n
The optimal capital process of Investor is expressed as the likelihood ratio
K_n = P(x^n) / Q(x^n) = [ B(χ^n_1 + c_1, χ^n_0 + c_0) / B(c_1, c_0) ] / [ ρ^{χ^n_1} (1 − ρ)^{χ^n_0} ]
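Here is a compact sketch of the embedded coin-tossing game with this Bayesian strategy (c_1 = c_0 = 1 chosen arbitrarily); it also checks numerically that the resulting capital equals the likelihood ratio P(x^n)/Q(x^n).

```python
import numpy as np
from math import lgamma, log, exp

def log_beta(a, b):
    """log B(a, b) via log-gamma functions."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def run_game(x, delta, c1=1.0, c0=1.0):
    """Play the embedded coin-tossing game on a binary sequence x with the
    Bayesian beta-binomial strategy alpha_i = (p_i - rho) / (rho (1 - rho))."""
    rho = 1.0 / (2.0 + delta)                    # Investor's risk-neutral probability
    log_k, n1, n0 = 0.0, 0, 0                    # log-capital and counts of 1s / 0s
    for xi in x:
        p = (n1 + c1) / (n1 + n0 + c1 + c0)      # predictive prob. of x_i = 1
        alpha = (p - rho) / (rho * (1.0 - rho))  # optimal betting ratio
        log_k += log(1.0 + alpha * (xi - rho))   # capital update
        n1, n0 = n1 + xi, n0 + (1 - xi)
    # Likelihood-ratio expression: K_n = [B(n1+c1, n0+c0)/B(c1,c0)] / (rho^n1 (1-rho)^n0)
    log_lr = log_beta(n1 + c1, n0 + c0) - log_beta(c1, c0) - n1 * log(rho) - n0 * log(1.0 - rho)
    return exp(log_k), exp(log_lr)

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=2000)                # placeholder coin-toss data
print(run_game(x, delta=0.02))                   # the two expressions agree
```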
From Stirling's formula,
log K_n = n D(p̂_n || q) − (1/2) log n + O(1)
p̂_n = (χ^n_1/n, χ^n_0/n) : empirical prob. by Market
q = (ρ, 1 − ρ) : risk neutral prob. by Investor
D(p̂_n || q) : empirical K-L divergence
2.2 Optimal conditional limit order strategy (cf. [7])
■ Investor determines the betting ratios α_n ∈ R of the conditional betting game B|A given A as follows.
α_1 = 0,  α_n = α_n^+ if x_n = 1,  α_n^− if x_n = 0,  n = 2, 3, . . .
• Notations
χ^n_{x1}, χ^n_{x0} : number of x_i = 1, 0 (i = 1, . . . , n)
χ^n_{11}, χ^n_{10}, χ^n_{01}, χ^n_{00} : number of (x_i, y_i) = (1, 1), (1, 0), (0, 1), (0, 0) (i = 1, . . . , n)
P^+(y^n|x^n) = B(χ^n_{11} + c_1, χ^n_{10} + c_0) / B(c_1, c_0)
P^−(y^n|x^n) = B(χ^n_{01} + c_1, χ^n_{00} + c_0) / B(c_1, c_0) :
beta-binomial distributions modeled by Investor
Maximize E_P[log K_n],  P = P^+ × P^− ⇒ {α_i^±}^n_{i=2}
α_i^+ = (p_i^+ − ρ) / (ρ(1 − ρ)),  α_i^− = (p_i^− − ρ) / (ρ(1 − ρ))
p_i^+ = P^+(y_i = 1 | x^{i−1}) = (χ^{i−1}_{11} + c_1) / (χ^{i−1}_{11} + χ^{i−1}_{10} + c_1 + c_0)
p_i^− = P^−(y_i = 1 | x^{i−1}) = (χ^{i−1}_{01} + c_1) / (χ^{i−1}_{01} + χ^{i−1}_{00} + c_1 + c_0)
The optimal capital process of Investor is expressed as the likelihood ratio
K_n = K^+_n × K^−_n,  ξ^n = (x_1, y_1) · · · (x_n, y_n)
K^+_n = P^+(ξ^n) / Q^+(ξ^n) = [ B(χ^n_{11} + c_1, χ^n_{10} + c_0) / B(c_1, c_0) ] / [ ρ^{χ^n_{11}} (1 − ρ)^{χ^n_{10}} ]
K^−_n = P^−(ξ^n) / Q^−(ξ^n) = [ B(χ^n_{01} + c_1, χ^n_{00} + c_0) / B(c_1, c_0) ] / [ ρ^{χ^n_{01}} (1 − ρ)^{χ^n_{00}} ]
log K_n = n D(p̂_{n,y|x} || q | p̂_{n,x}) − (1/2)( log χ^n_{x1} + log χ^n_{x0} ) + O(1)
p̂_{n,y|1} = (χ^n_{11}/χ^n_{x1}, χ^n_{10}/χ^n_{x1}),  p̂_{n,y|0} = (χ^n_{01}/χ^n_{x0}, χ^n_{00}/χ^n_{x0}) :
empirical conditional prob. by Market
q = (ρ, 1 − ρ) : risk neutral prob. by Investor
p̂_{n,x} = (χ^n_{x1}/n, χ^n_{x0}/n)
D(p̂_{n,y|x} || q | p̂_{n,x}) : empirical conditional K-L divergence
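Before turning to the real data, here is a minimal sketch (mine, with synthetic correlated binary sequences in place of the Nikkei and stock data) of the conditional strategy: the predictive probability for y_i is chosen according to whether x_i = 1 or x_i = 0.

```python
import numpy as np

def run_conditional_game(x, y, delta, c1=1.0, c0=1.0):
    """Conditional betting game B|A: bet on y_i with a beta-binomial predictive
    probability that depends on x_i (c1 = c0 = 1 is an arbitrary prior choice)."""
    rho = 1.0 / (2.0 + delta)
    counts = {(1, 1): 0, (1, 0): 0, (0, 1): 0, (0, 0): 0}
    log_k = 0.0
    for xi, yi in zip(x, y):
        n1, n0 = counts[(xi, 1)], counts[(xi, 0)]
        p = (n1 + c1) / (n1 + n0 + c1 + c0)        # predictive prob. of y_i = 1 given x_i
        alpha = (p - rho) / (rho * (1.0 - rho))    # optimal conditional betting ratio
        log_k += np.log(1.0 + alpha * (yi - rho))  # capital update
        counts[(xi, yi)] += 1
    return log_k

# Synthetic correlated data: y copies x about 80% of the time.
rng = np.random.default_rng(2)
x = rng.integers(0, 2, size=3000)
y = np.where(rng.random(3000) < 0.8, x, 1 - x)

log_k = run_conditional_game(x, y, delta=0.02)
print("empirical growth rate mLK_n:", log_k / len(x))
```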
■ In the following figures
S_A(t) : daily closing prices of the Nikkei 225
S_B(t) : daily opening prices of Toyota, Sony, and Nintendo
(2003/1/6 - 2007/9/28)
K_n : capital process of Investor
p_{n1} = p̂_{n,1|1},  p_{n0} = p̂_{n,0|0} : empirical conditional prob. by Market
MDIV = D(p̂_{n,y|x} || q | p̂_{n,x}) : empirical conditional K-L divergence
mLK_n = (log K_n)/n : empirical exponential growth rate of K_n
[Figure: Toyota opening and Nikkei closing prices, 2003/1/6 - 2007/9/28]
[Figure: Sony opening and Nikkei closing prices, 2003/1/6 - 2007/9/28]
[Figure: Nintendo opening and Nikkei closing prices, 2003/1/6 - 2007/9/28]
[Figure: capital process K_n for Toyota, 826 rounds, δ = 0.01·2^k with k = 6.7, p_0 = 0.498, final capital K(T) = 0.01]
[Figure: empirical conditional probabilities p_{n1} and p_{n0} for Toyota & Nikkei]
[Figure: empirical conditional probabilities p_{n1} and p_{n0} for Sony & Nikkei]
[Figure: empirical conditional probabilities p_{n1} and p_{n0} for Nintendo & Nikkei]
[Figures: Toyota & Nikkei, Sony & Nikkei, Nintendo & Nikkei]
[Figure: capital process K_n for Toyota & Nikkei, 500 rounds, δ = 0.01·2^k with k = 6.5, p_1 = 0.71, p_0 = 0.67, final capital K(T) = 5.64e+13]
[Figure: capital process K_n for Sony & Nikkei, 432 rounds, δ = 0.01·2^k with k = 6, p_1 = 0.64, p_0 = 0.62, final capital K(T) = 14449]
[Figure: capital process K_n for Nintendo & Nikkei, 597 rounds, δ = 0.01·2^k with k = 6.4, p_1 = 0.61, p_0 = 0.53, final capital K(T) = 20]
3. Source coding and betting strategy
3.1 Lossy source coding with feedforward
■ Source coding model
X^n = (X_1, . . . , X_n) ∈ X^n : source sequence
X̂^n = (X̂_1, . . . , X̂_n) ∈ X̂^n : estimated sequence
d_n : X^n × X̂^n → R_+ : distortion measure
[Figure: source coding system with feedforward; encoder f_n : X^n → {1, 2, . . . , 2^{nR}}, decoders g_i : {1, 2, . . . , 2^{nR}} × X^{i−k} → X̂ (i = 1, . . . , n), reproduction sequence X̂^n = (X̂_1, . . . , X̂_n)]
• (2^{nR}, n) code with k-delayed feedforward
f_n : X^n → {1, 2, . . . , 2^{nR}} : encoding function
g_i : {1, 2, . . . , 2^{nR}} × X^{i−k} → X̂,  i = 1, . . . , n : sequence of decoding functions
f_n(X^n) = w ∈ {1, 2, . . . , 2^{nR}},  g_i(w, X^{i−k}) = X̂_i,  i = 1, . . . , n
D_n = E_{X^n}[ d_n(X^n, X̂^n) ] : distortion associated with the (2^{nR}, n) code
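To make the feedforward structure concrete, here is a minimal type-level sketch (my own, not from the slides): the encoder sees the whole source block, while each decoding function reproduces X̂_i from the message w and only the delayed source symbols X^{i−k}.

```python
from typing import Callable, List, Sequence

# (2^{nR}, n) code with k-delayed feedforward, written as plain function signatures.
Encoder = Callable[[Sequence[int]], int]          # f_n : X^n -> {1, ..., 2^{nR}}
Decoder = Callable[[int, Sequence[int]], int]     # g_i : (w, X^{i-k}) -> X̂_i

def reproduce(x: Sequence[int], f_n: Encoder, g: List[Decoder], k: int) -> List[int]:
    """Encode the whole block, then decode symbol by symbol, feeding each g_i
    only the source symbols observed up to time i - k."""
    w = f_n(x)
    return [g[i](w, x[: max(i + 1 - k, 0)]) for i in range(len(x))]

def hamming_distortion(x: Sequence[int], x_hat: Sequence[int]) -> float:
    """Average Hamming distortion d_n(x^n, x̂^n)."""
    return sum(a != b for a, b in zip(x, x_hat)) / len(x)
```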
(R, D) : achievable ⇔ ∃ (2^{nR}, n) code with lim sup_{n→∞} D_n ≤ D
• Rate distortion theorem
R ≥ R^k_{ff}(D) ⇒ (R, D) is achievable
R^k_{ff}(D) = lim inf_{P_{X̂^n|X^n} : D_n ≤ D} (1/n) I_k(X̂^n → X^n) :
rate distortion function with k-delayed feedforward
I_k(X̂^n → X^n) = Σ^n_{i=1} I(X̂^{i+k−1}; X_i | X^{i−1})
= I(X̂^n; X^n) − Σ^n_{i=k+1} I(X^{i−k}; X̂_i | X̂^{i−1}) :
directed information from X̂^n to X^n with k-delayed feedforward
Σ^n_{i=k+1} I(X^{i−k}; X̂_i | X̂^{i−1}) : information quantity obtained for free
[Figure: binary symmetric transmission A ⇒ Â for Reality's moves, with Hamming distortions 0 and 1]
[Figure: binary symmetric transmission A ⇒ Â for Forecaster's moves, with transition probabilities σ and 1 − σ]
■ Binary symmetric transmission
R_g(D) = I(A; Â)
⇔ ρ = P_{A|Â}(x_2|x̂_1) = P_{A|Â}(x_1|x̂_2) = D
σ = Q_{A|Â}(x_2|x̂_1) = Q_{A|Â}(x_1|x̂_2) = ( α_1 − a_1 + D(α_2 − α_1) ) / ( a_2 − a_1 )
R_g(D) = D(P_{Â|A} || Q_{Â|A} | P_A) = D(ρ||σ) − D(a_1||α_1)
Q_C = Q_A × Q_B ⇔ σ = α_1 = 0.5,  Q_{Â|A} = P_Â
R_g(D) = D(P_{Â|A} || P_Â | P_A) = D(ρ||0.5) − D(a_1||0.5) = H(a_1) − H(D) = R(D)
R_g(0) = D(δ_{Â|A} || P_Â | P_A) = D(0||0.5) − D(a_1||0.5) = H(a_1) = R(0)
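A small numerical sketch (mine, not from the slides) of this special case σ = α_1 = 0.5, where R_g(D) = H(a_1) − H(D) coincides with the classical binary rate distortion function; the a_1 values below are illustrative.

```python
import numpy as np

def H2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def Rg(D, a1):
    """R_g(D) = H(a1) - H(D) for 0 <= D <= min(a1, 1 - a1), and 0 beyond
    (the case sigma = alpha_1 = 0.5, where R_g coincides with R(D))."""
    D = np.asarray(D, dtype=float)
    return np.where(D <= min(a1, 1.0 - a1), H2(a1) - H2(D), 0.0)

D = np.linspace(0.0, 0.4, 9)
for a1 in (0.3, 0.4, 0.5):
    print(a1, np.round(Rg(D, a1), 3))
```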
[Figure: rate distortion function R_g(D) in the binary symmetric transmission (a_1 = 0.3), plotted over 0 ≤ D ≤ 0.4]
■ Conditional betting game Â|A given A
• Notations
χ^n_{x1}, χ^n_{x0} : number of x_i = 1, 0 (i = 1, . . . , n)
χ^n_{x̂1}, χ^n_{x̂0} : number of x̂_i = 1, 0 (i = 1, . . . , n)
χ^n_{11}, χ^n_{10}, χ^n_{01}, χ^n_{00} : number of (x_{i−k}, x̂_i) = (1, 1), (1, 0), (0, 1), (0, 0) (i = k + 1, . . . , n)
P_±(x̂^n|x^n) = P^+_Â(x̂^n|x^n) × P^−_Â(x̂^n|x^n)
P^+_Â(x̂^n|x^n) = B(χ^n_{11} + c_1, χ^n_{10} + c_0) / B(c_1, c_0)
P^−_Â(x̂^n|x^n) = B(χ^n_{01} + c_1, χ^n_{00} + c_0) / B(c_1, c_0) :
conditional beta-binomial distributions modeled by Skeptic
Q_Â(x̂^n|x^n) = P_Â(x̂^n) = B(χ^n_{x̂1} + c_1, χ^n_{x̂0} + c_0) / B(c_1, c_0) :
beta-binomial distribution modeled by Forecaster
Maximize E_{P_±}[log K_n] ⇒ {α_i^±}^n_{i=1} :
α_i^± = 0,  1 ≤ i ≤ k
α_i^± = α_i^{Â+} if x_{i−k} = 1,  α_i^{Â−} if x_{i−k} = 0,  k + 1 ≤ i ≤ n
α_i^{Â+} = (p_i^+ − q_i) / (q_i(1 − q_i)),  α_i^{Â−} = (p_i^− − q_i) / (q_i(1 − q_i))
p_i^+ = P^+_Â(x̂_i = 1 | x^{i−k}) = (χ^{i−1}_{11} + c_1) / (χ^{i−1}_{11} + χ^{i−1}_{10} + c_1 + c_0)
p_i^− = P^−_Â(x̂_i = 1 | x^{i−k}) = (χ^{i−1}_{01} + c_1) / (χ^{i−1}_{01} + χ^{i−1}_{00} + c_1 + c_0)
q_i = Q_Â(x̂_i = 1 | x̂^{i−1}) = (χ^{i−1}_{x̂1} + c_1) / (i − 1 + c_1 + c_0)
The optimal capital process of Skeptic is expressed as the likelihood ratio
K^{P_Â^±}(x̂^n|x^n) = Π^n_{i=1} ( 1 + α_i^± (x̂_i − q_i) ) = Π^n_{i=1} P_±(x̂_i | x^{i−k}) / Q_Â(x̂_i | x^{i−k})
= [ P^+_Â(x̂^n|x^n) × P^−_Â(x̂^n|x^n) ] / Q_Â(x̂^n|x^n)
log K^{P_Â^±}(x̂^n|x^n) = n D(p̂_{n,x̂|x} || q̂_{n,x̂} | p̂_{n,x}) − (1/2)( log χ^n_{x1} + log χ^n_{x0} ) + O(1)
p̂_{n,x̂|1} = (χ^n_{11}/χ^n_{x1}, χ^n_{10}/χ^n_{x1}),  p̂_{n,x̂|0} = (χ^n_{01}/χ^n_{x0}, χ^n_{00}/χ^n_{x0}) :
empirical conditional prob. of Reality
q̂_{n,x̂} = (χ^n_{x̂1}/n, χ^n_{x̂0}/n) : empirical risk neutral prob. of Forecaster
D(p̂_{n,x̂|x} || q̂_{n,x̂} | p̂_{n,x}) : empirical conditional K-L divergence
3.2 Efficient source coding scheme
■ Betting strategy and data compression (a variant of arithmetic coding)
• Encoding
x^n = x_1 . . . x_n ∈ {0, 1}^n ⇒ x̂^n = x̂_1 . . . x̂_n ∈ {0, 1}^n such that P_{X̂^n|X^n} achieves R^k_{ff}(D)
x̂|x(n) = (x̂_1|x_1, . . . , x̂_n|x_n) : sequence observed by the encoder
2^n sequences {x̂^n} : in lexicographic order
The encoder calculates the cumulative sum
G^±_Â(x̂|x(n)) = Σ_{x̂'|x(n) ≤ x̂|x(n) : typical} R^±_Â(x̂'|x(n))
R^±_Â(x̂'|x(n)) = 1 / K^{P_Â^±}(x̂'|x(n)) = Q_Â(x̂'|x(n)) / P_±(x̂'|x(n))
x̂'|x(n) : typical ⇔ | (1/n) log K^{P_Â^±}(x̂'|x(n)) − R^k_{ff}(D) | < ε
G^±_Â(x̂|x(n)) ∈ [0, 1] as n → ∞
ℓ = ⌈ log K^{P_Â^±}(x̂|x(n)) ⌉ + 1,  m = ⌈ log n ⌉ : specified numbers of bits
⌊ G^±_Â(x̂|x(n)) ⌋ = .c_1 c_2 . . . c_ℓ,  c_i ∈ {0, 1} : binary decimal to ℓ-place accuracy
χ^n_{x1} = d_1 d_2 . . . d_m,  d_i ∈ {0, 1} : binary number with m digits
c(ℓ) = (c_1, c_2, . . . , c_ℓ),  d(m) = (d_1, d_2, . . . , d_m) : code sequences sent to the decoder
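The cumulative-sum construction is in the spirit of Shannon-Fano-Elias / arithmetic coding. As a self-contained toy version (the standard textbook construction, not the authors' exact scheme with typical sequences and feedforward), the sketch below turns a cumulative distribution value into a prefix codeword of about log(1/P(x)) bits.

```python
from math import ceil, log2
from itertools import product

def sfe_code(p, x):
    """Shannon-Fano-Elias codeword for outcome x under distribution p (a dict):
    take Fbar(x) = sum_{a < x} p(a) + p(x)/2 and keep ceil(log2(1/p(x))) + 1 bits."""
    cum = 0.0
    for a in sorted(p):                    # lexicographic order, as in the slides
        if a == x:
            fbar = cum + p[a] / 2.0
            length = ceil(log2(1.0 / p[a])) + 1
            bits = []
            for _ in range(length):        # binary expansion of fbar, truncated
                fbar *= 2.0
                bit = int(fbar)
                bits.append(str(bit))
                fbar -= bit
            return "".join(bits)
        cum += p[a]
    raise KeyError(x)

# Toy uniform distribution over length-3 binary strings (placeholder for P_Â(x̂|x)).
p = {"".join(b): 1.0 / 8 for b in product("01", repeat=3)}
for x in ("000", "101", "111"):
    print(x, sfe_code(p, x))   # 4-bit prefix codewords
```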
• Decoding
When there exists a feedforward X → X̂ and χ^n_{x1} is known, the decoder can also sequentially calculate the cumulative sum
G^±_Â(x̂|x(n)) = Σ_{x̂'|x(n) ≤ x̂|x(n) : typical} R^±_Â(x̂'|x(n))
until G^±_Â(x̂|x(n)) ≥ .c(ℓ)
⇒ x̂|x(n) : the encoded sequence
From the expression
log K^{P_Â^±}(x̂^n|x^n) = n D(p̂_{n,x̂|x} || q̂_{n,x̂} | p̂_{n,x}) − (1/2)( log χ^n_{x1} + log χ^n_{x0} ) + O(1),
the required number of bits is
ℓ + m = ⌈ log K^{P_Â^±}(x̂|x(n)) ⌉ + 1 + ⌈ log n ⌉ = ⌈ n D(p̂_{n,x̂|x} || q̂_{n,x̂} | p̂_{n,x}) ⌉ + O(log n)
The empirical codeword length per source symbol is
L*_n = (ℓ + m)/n ≤ D(p̂_{n,x̂|x} || q̂_{n,x̂} | p̂_{n,x}) + O( (log n)/n ) → R^k_{ff}(D) as n → ∞
References
[1] Leo Breiman. Optimal gambling systems for favorable games. Fourth Berkeley Symposium on Mathematical Statistics and Probability I, 65-78, 1961.
[2] Thomas M. Cover. Universal portfolios. Mathematical Finance, 1(1), 1-29, 1991.
[3] John Larry Kelly Jr. A new interpretation of information rate. Bell System Technical Journal, Vol. 35, 917-926, 1956.
[4] L. C. MacLean, E. O. Thorp and W. T. Ziemba, eds. The Kelly Capital Growth Investment Criterion: Theory and Practice. Handbook in Financial Economics Series, Vol. 3, World Scientific, London, 2010.
[5] M. Kumon, A. Takemura and K. Takeuchi. Capital process and optimality properties of a Bayesian Skeptic in coin-tossing games. Stochastic Analysis and Applications, 26, 1161-1180, 2008.
[6] K. Takeuchi, M. Kumon and A. Takemura. A new formulation of asset trading games in continuous time with essential forcing of variation exponent. Bernoulli, 15, 1243-1258, 2009.
[7] K. Takeuchi, M. Kumon and A. Takemura. Multistep Bayesian strategy in coin-tossing games and its application to asset trading games in continuous time. Stochastic Analysis and Applications, 28, 842-861, 2010.
[8] K. Takeuchi, A. Takemura and M. Kumon. New procedures for testing whether stock price processes are martingales. Computational Economics, 37, 67-88, 2011.
[9] M. Kumon, A. Takemura and K. Takeuchi. Sequential optimizing strategy in multi-dimensional bounded forecasting games. Stochastic Processes and their Applications, 121, 155-183, 2011.
[10] M. Kumon. Studies of information quantities and information geometry of higher order cumulant spaces. Statistical Methodology, 7, 152-172, 2010.
[11] M. Kumon, A. Takemura and K. Takeuchi. Conformal geometry of statistical manifold with application to sequential estimation. Sequential Analysis, 30, 308-337, 2011.