Hindawi Publishing Corporation
Journal of Probability and Statistics
Volume 2013, Article ID 623183, 5 pages
http://dx.doi.org/10.1155/2013/623183
Research Article
On-Line Selection of c-Alternating Subsequences from a Random Sample
Robert W. Chen,1 Larry A. Shepp,2 and Justin Ju-Chen Yang3
1 Department of Mathematics, University of Miami, Coral Gables, FL 33124, USA
2 Department of Statistics, Wharton School, University of Pennsylvania, Philadelphia, PA 19104, USA
3 Department of Statistics, Harvard University, 1 Oxford Street, Cambridge, MA 02138-2901, USA
Correspondence should be addressed to Robert W. Chen; chen@math.miami.edu
Received 24 August 2012; Accepted 4 January 2013
Academic Editor: Shein-Chung Chow
Copyright © 2013 Robert W. Chen et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A sequence $X_1, X_2, \ldots, X_n$ is a $c$-alternating sequence if every odd-indexed term is less than or equal to the next even-indexed term minus $c$ and every even-indexed term is greater than or equal to the next odd-indexed term plus $c$, where $c$ is a nonnegative constant. In this paper, we present an optimal on-line procedure to select a $c$-alternating subsequence from a symmetrically distributed random sample. We also give the optimal selection rate as the sample size goes to infinity.
1. Introduction
Given a finite (or infinite) sequence $x = \{x_1, x_2, \ldots, x_n, \ldots\}$ of real numbers, we say that a subsequence $x_{i_1}, x_{i_2}, \ldots, x_{i_k}, \ldots$ with $1 \le i_1 < i_2 < \cdots < i_k < \cdots$ is $c$-alternating if we have
$$x_{i_1} + c < x_{i_2} > x_{i_3} + c < x_{i_4} \cdots,$$
where $c$ is a nonnegative real number. When $c = 0$, the subsequence $x_{i_1}, x_{i_2}, \ldots, x_{i_k}, \ldots$ is alternating. We are mainly concerned with the length $\alpha(x)$ of the longest $c$-alternating subsequence of $x$. Here we study the problem of making an on-line selection of a $c$-alternating subsequence. That is, we now regard the sequence $x_1, x_2, \ldots$ as being made available to us sequentially, and, at time $i$, when $x_i$ is available, we must choose either to include $x_i$ as a term of our subsequence or to reject it permanently.
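This chain condition is easy to check mechanically. A minimal Python sketch (the function and the example are ours, not from the paper), using the strict inequalities of the chain above:

    def is_c_alternating(xs, c=0.0):
        # Checks x_1 + c < x_2 > x_3 + c < x_4 > ... along a candidate subsequence.
        for k in range(len(xs) - 1):
            if k % 2 == 0:
                # up-step: an odd-indexed term plus c must lie strictly below the next term
                if not xs[k] + c < xs[k + 1]:
                    return False
            else:
                # down-step: an even-indexed term must strictly exceed the next term plus c
                if not xs[k] > xs[k + 1] + c:
                    return False
        return True

For instance, is_c_alternating([0.1, 0.8, 0.2, 0.9], 0.5) is True, while is_c_alternating([0.1, 0.8, 0.2, 0.9], 0.7) is False.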
We will consider the sequence to be given by independent, identically distributed, symmetric random variables over the interval $[0,1]$. In [1], Arlotto et al. studied the case in which the sequence is given by independent, identically distributed, uniform random variables over the interval $[0,1]$ and $c = 0$, so this paper can be considered an extension of their paper.
We now need to be more explicit about the set $\Pi$ of feasible strategies for on-line selection. At time $i$, when $X_i$ is presented to us, we must decide whether to select $X_i$ based on its value, the values of the earlier members of the sequence, and the actions we have taken in the past. All of this information is captured by requiring that $i_k$, the index of the $k$th selection, be a stopping time with respect to the increasing sequence of $\sigma$-fields $\mathcal{F}_i = \sigma\{X_1, X_2, \ldots, X_i\}$, $i = 1, 2, \ldots$. Given any feasible policy $\pi$ in $\Pi$, the random variable of most interest here is $A_n^{o}(\pi)$, the number of selections made by the policy $\pi$ up to and including time $n$. In other words, $A_n^{o}(\pi)$ equals the largest $k$ for which there are stopping times $1 \le i_1 < i_2 < \cdots < i_k \le n$ such that $\{X_{i_1}, X_{i_2}, \ldots, X_{i_k}\}$ is a $c$-alternating subsequence of the sequence $\{X_1, X_2, \ldots, X_n\}$. In this paper, we are interested in the optimal selection and the asymptotic rate of the optimal selection. That is, we seek a selection policy $\pi_n^*$ in $\Pi$ such that
$$\lim_{n\to\infty} E[A_n^{o}(\pi_n^*)] = \lim_{n\to\infty} \sup_{\pi\in\Pi} E[A_n^{o}(\pi)]. \tag{1}$$
2. Main Results
For each $0 \le c < 1$ and each $0 \le \lambda \le (1-c)/2$, we define a threshold function $f^*$ as follows: $f^*(y) = \max\{c+\lambda,\, c+y\}$ for all $0 \le y \le 1-c$. We now recursively define random variables $\{Y_i : i = 1, 2, \ldots\}$ by setting $Y_0 = y$ and taking $Y_i = Y_{i-1}$ if $X_i < f^*(Y_{i-1})$ and $Y_i = 1-X_i$ if $X_i \ge f^*(Y_{i-1})$. Introduce a value function
$$V(\lambda, y, \rho) = E\Big\{\sum_{i=1}^{\infty} \rho^{i-1}\, I[X_i \ge f^*(Y_{i-1})] \;\Big|\; Y_0 = y\Big\},$$
where $0 < \rho < 1$ is a constant and $I[X_i \ge f^*(Y_{i-1})]$ is the indicator function of the event $[X_i \ge f^*(Y_{i-1})]$.
Let $G$ be the distribution function and $g$ the probability density function of $X_i$. If $X_i$ is not a uniform random variable over the interval $[0,1]$, then we will assume that $g'$ exists and is nonzero. Since $X_i$ is symmetric over the interval $[0,1]$, $G(1-x) = 1-G(x)$ and $g(x) = g(1-x)$ for all $0 \le x \le 1$.
It is easy to see that
$$\begin{aligned} V(\lambda, y, \rho) &= \rho G[\max(\lambda,y)+c]\,V(\lambda,y,\rho) + \int_{\max(\lambda,y)+c}^{1} [1+\rho V(\lambda, 1-x, \rho)]\,g(x)\,dx \\ &= \rho G[\max(\lambda,y)+c]\,V(\lambda,y,\rho) + \int_{0}^{1-\max(\lambda,y)-c} [1+\rho V(\lambda, x, \rho)]\,g(x)\,dx, \end{aligned} \tag{2}$$
since $g(x) = g(1-x)$ for all $0 \le x \le 1$. For simplicity, we will let $V(y)$ denote $V(\lambda, y, \rho)$ for fixed $\lambda$ and $\rho$. It is easy to check that for all $0 \le y \le \lambda$, $V(y) = \rho G(\lambda+c)V(y) + \int_0^{1-\lambda-c}[1+\rho V(x)]\,g(x)\,dx$; for all $\lambda < y \le 1-c$, $V(y) = \rho G(y+c)V(y) + \int_0^{1-y-c}[1+\rho V(x)]\,g(x)\,dx$; and for all $1-c < y \le 1$, $V(y) = 0$. Since $V(y) = V(\lambda)$ for all $0 \le y \le \lambda$, we have $V(y) = \rho G(y+c)V(y) + [1+\rho V(\lambda)]\,G(1-y-c)$ for all $1-\lambda-c \le y \le 1-c$. Since $G(1-y-c) = 1-G(y+c)$, this becomes $V(y) = \rho G(y+c)V(y) + [1+\rho V(\lambda)][1-G(y+c)]$ for all $1-\lambda-c \le y \le 1-c$. From now on, we will let $V'(\lambda)$ denote the right derivative of the function $V$ at $\lambda$ and $V'(1-c)$ denote the left derivative of the function $V$ at $1-c$.

For $\lambda < y < 1-c$, when we differentiate $V(y)$ we have the following differential equation:
$$V'(y)\{1-\rho G(y+c)\} = \rho g(y+c)\,V(y) - [1+\rho V(1-y-c)]\,g(1-y-c). \tag{3}$$
Since $g(1-y-c) = g(y+c)$ and $g(1-y) = g(y)$, we now have the following differential equations:
$$V'(y)\{1-\rho G(y+c)\} = \rho g(y+c)V(y) - [1+\rho V(1-y-c)]\,g(y+c) = g(y+c)\{\rho[V(y)-V(1-y-c)]-1\}, \tag{4}$$
$$V'(1-y-c)\{1-\rho G(1-y)\} = \rho g(y)V(1-y-c) - [1+\rho V(y)]\,g(y) = g(y)\{\rho[V(1-y-c)-V(y)]-1\}. \tag{5}$$
Dividing (4) by $g(y+c)$, dividing (5) by $g(y)$, and adding, we have
$$V'(1-y-c)\,\frac{1-\rho G(1-y)}{g(y)} = -V'(y)\,\frac{1-\rho G(y+c)}{g(y+c)} - 2. \tag{6}$$

In summary (setting $y = \lambda$ in (4) and (5), setting $y = 1-\lambda-c$ in the identity above, and using $1-\rho G(1-\lambda) = 1-\rho+\rho G(\lambda)$ and $1-G(1-\lambda) = G(\lambda)$), we have the following equations:
$$V(1-\lambda-c)[1-\rho+\rho G(\lambda)] = [1+\rho V(\lambda)]\,G(\lambda),$$
$$V'(\lambda)[1-\rho G(\lambda+c)] = g(\lambda+c)\{\rho[V(\lambda)-V(1-\lambda-c)]-1\},$$
$$V'(1-\lambda-c)[1-\rho+\rho G(\lambda)] = g(\lambda)\{\rho[V(1-\lambda-c)-V(\lambda)]-1\}. \tag{7}$$

For all $\lambda \le y \le 1-c$, let us define
$$h(y) = \frac{[1-\rho G(y+c)]^2\,[1-\rho+\rho G(y)]}{g(y+c)}. \tag{8}$$
Then we have the following equation:
$$V'(1-\lambda-c) = V'(\lambda)\,\frac{h(\lambda)}{h(1-\lambda-c)} - \frac{2\rho}{h(1-\lambda-c)}\int_{\lambda+c}^{1-\lambda}[1-\rho+\rho G(x-c)]\,g(x)\,dx. \tag{9}$$

Proof of (9). Differentiating (4) again, we have
$$V''(y)[1-\rho G(y+c)] = \rho g(y+c)\,[2V'(y)+V'(1-y-c)] + g'(y+c)\{\rho[V(y)-V(1-y-c)]-1\}. \tag{10}$$
Replace $\rho[V(y)-V(1-y-c)]-1$ by $V'(y)\,\frac{1-\rho G(y+c)}{g(y+c)}$ (from (4)) and replace $V'(1-y-c)$ by $\big\{-2-V'(y)\,\frac{1-\rho G(y+c)}{g(y+c)}\big\}\,\frac{g(y)}{1-\rho+\rho G(y)}$ (from (6), since $1-\rho G(1-y) = 1-\rho+\rho G(y)$). After the simplification, we have the following equation:
$$V''(y)[1-\rho G(y+c)] = \rho g(y+c)\,\frac{-2g(y)}{1-\rho+\rho G(y)} + V'(y)\left\{g'(y+c)\,\frac{1-\rho G(y+c)}{g(y+c)} + \rho g(y+c)\left[2 - \frac{g(y)[1-\rho G(y+c)]}{g(y+c)[1-\rho+\rho G(y)]}\right]\right\}. \tag{11}$$
Multiplying both sides of (11) by $[1-\rho+\rho G(y)][1-\rho G(y+c)]/g(y+c)$, we obtain the following equation:
$$\begin{aligned} V''(y)\,\frac{[1-\rho G(y+c)]^2\,[1-\rho+\rho G(y)]}{g(y+c)} ={}& -2\rho g(y)[1-\rho G(y+c)] \\ &+ V'(y)\Bigg(g'(y+c)\Big[\frac{1-\rho G(y+c)}{g(y+c)}\Big]^2[1-\rho+\rho G(y)] \\ &\qquad\quad + \rho g(y+c)\Big\{\frac{2[1-\rho+\rho G(y)][1-\rho G(y+c)]}{g(y+c)} - g(y)\Big[\frac{1-\rho G(y+c)}{g(y+c)}\Big]^2\Big\}\Bigg). \end{aligned} \tag{12}$$
Notice that
$$-h'(y) = g'(y+c)\Big[\frac{1-\rho G(y+c)}{g(y+c)}\Big]^2[1-\rho+\rho G(y)] + \rho g(y+c)\Big\{\frac{2[1-\rho+\rho G(y)][1-\rho G(y+c)]}{g(y+c)} - g(y)\Big[\frac{1-\rho G(y+c)}{g(y+c)}\Big]^2\Big\}. \tag{13}$$
Hence equation (12) can be rewritten as
$$V''(y)\,h(y) + V'(y)\,h'(y) = -2\rho[1-\rho G(y+c)]\,g(y). \tag{14}$$
By integrating both sides of (14) from $\lambda$ to $y$ (the left-hand side is the derivative of $V'(y)h(y)$), we have
$$V'(y) = V'(\lambda)\,\frac{h(\lambda)}{h(y)} - \frac{2\rho}{h(y)}\int_{\lambda}^{y}[1-\rho G(z+c)]\,g(z)\,dz. \tag{15}$$
Setting $y = 1-\lambda-c$ in (15) and substituting $x = 1-z$ in the integral (using $G(1-u) = 1-G(u)$ and $g(1-u) = g(u)$) gives (9). Therefore, we have the following theorem.

Theorem 1. Consider the following:
$$\text{(i)}\quad V(1-\lambda-c)[1-\rho+\rho G(\lambda)] = [1+\rho V(\lambda)]\,G(\lambda),$$
$$\text{(ii)}\quad V'(\lambda)[1-\rho G(\lambda+c)] = g(\lambda+c)\{\rho[V(\lambda)-V(1-\lambda-c)]-1\},$$
$$\text{(iii)}\quad V'(1-\lambda-c)[1-\rho+\rho G(\lambda)] = g(\lambda)\{\rho[V(1-\lambda-c)-V(\lambda)]-1\},$$
$$\text{(iv)}\quad V'(1-\lambda-c) = V'(\lambda)\,\frac{h(\lambda)}{h(1-\lambda-c)} - \frac{2\rho}{h(1-\lambda-c)}\int_{\lambda+c}^{1-\lambda}[1-\rho+\rho G(x-c)]\,g(x)\,dx. \tag{16}$$

Now we have four unknown variables $V(\lambda)$, $V(1-\lambda-c)$, $V'(\lambda)$, and $V'(1-\lambda-c)$, and we also have four linear equations involving these four unknown variables. We solve these four linear equations and obtain the following solutions.

Theorem 2. Consider the following:
$$\text{(i)}\quad V(\lambda) = \frac{\rho G(\lambda)[1-\rho G(\lambda+c)] + \rho\displaystyle\int_{\lambda+c}^{1-\lambda}[1-\rho+\rho G(x-c)]\,g(x)\,dx}{\rho(1-\rho)[1-\rho G(\lambda+c)]},$$
$$\text{(ii)}\quad V'(\lambda) = \frac{g(\lambda+c)\Big(\rho\displaystyle\int_{\lambda+c}^{1-\lambda}[1-\rho+\rho G(x-c)]\,g(x)\,dx - [1-\rho G(\lambda+c)][1-\rho+\rho G(\lambda)]\Big)}{[1-\rho G(\lambda+c)]^2\,[1-\rho+\rho G(\lambda)]},$$
$$\text{(iii)}\quad V(1-\lambda-c) = G(\lambda)\,\frac{[1-\rho+\rho G(\lambda)][1-\rho G(\lambda+c)] + \rho\displaystyle\int_{\lambda+c}^{1-\lambda}[1-\rho+\rho G(x-c)]\,g(x)\,dx}{(1-\rho)[1-\rho G(\lambda+c)][1-\rho+\rho G(\lambda)]}. \tag{17}$$

By Theorem 2, $V'(\lambda) = 0$ if and only if
$$\rho\int_{\lambda+c}^{1-\lambda}[1-\rho+\rho G(x-c)]\,g(x)\,dx = [1-\rho G(\lambda+c)][1-\rho+\rho G(\lambda)], \tag{18}$$
since $g(\lambda+c)$ and $[1-\rho G(\lambda+c)]^2[1-\rho+\rho G(\lambda)]$ are positive. For each $0 < \rho < 1$, let $\lambda(\rho)$ denote a solution of $V'(\lambda) = 0$.
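Equation (18) determines $\lambda(\rho)$ numerically. A bisection sketch (ours; Theorem 3 below supplies the monotonicity and sign change that bisection relies on when $\rho$ is close to 1), with a simple midpoint-rule quadrature:

    def lam_rho(G, g, rho, c, tol=1e-10, n=2000):
        # Root of K(lam) = rho * Int_{lam+c}^{1-lam} [1-rho+rho*G(x-c)] g(x) dx
        #                  - [1 - rho*G(lam+c)] * [1 - rho + rho*G(lam)]   (cf. (18), (19))
        def K(lam):
            lo, hi = lam + c, 1.0 - lam
            h = (hi - lo) / n
            integral = sum((1 - rho + rho * G(lo + (k + 0.5) * h - c))
                           * g(lo + (k + 0.5) * h) for k in range(n)) * h
            return rho * integral - (1 - rho * G(lam + c)) * (1 - rho + rho * G(lam))
        a, b = 0.0, (1.0 - c) / 2
        while b - a > tol:
            m = 0.5 * (a + b)
            if K(m) > 0:     # K is strictly decreasing, so the root lies to the right
                a = m
            else:
                b = m
        return 0.5 * (a + b)

For the uniform case $G(x) = x$, $g(x) = 1$, the root approaches $(1-c)(1-\sqrt{2}/2)$ as $\rho \uparrow 1$ (cf. Example 5 below).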
The next theorem indicates that when $\rho < 1$ but close enough to 1, $\lambda(\rho)$ is unique and $0 \le \lambda(\rho) < (1-c)/2$.

Theorem 3. When $\rho < 1$ but close enough to 1, $\lambda(\rho)$ is unique and $0 \le \lambda(\rho) < (1-c)/2$.

Proof. For all $0 \le \lambda < (1-c)/2$, let
$$K(\lambda) = \rho\int_{\lambda+c}^{1-\lambda}[1-\rho+\rho G(x-c)]\,g(x)\,dx - [1-\rho G(\lambda+c)][1-\rho+\rho G(\lambda)]. \tag{19}$$
Then
$$K'(\lambda) = -\rho\{g(1-\lambda)[1-\rho+\rho G(1-\lambda-c)] + g(\lambda)[1-\rho G(\lambda+c)]\} < 0,$$
$$K\Big(\frac{1-c}{2}\Big) = -\Big[1-\rho+\rho G\Big(\frac{1-c}{2}\Big)\Big]\Big[1-\rho G\Big(\frac{1+c}{2}\Big)\Big] < 0,$$
$$K(0) = \rho^2\int_{c}^{1} G(x-c)\,g(x)\,dx - (1-\rho)^2 > 0 \tag{20}$$
if $\rho$ is close enough to 1. Since $K$ is strictly decreasing with $K(0) > 0$ and $K((1-c)/2) < 0$, the root exists, is unique, and lies in $[0, (1-c)/2)$. Therefore, $0 \le \lambda(\rho) < (1-c)/2$ and $\lambda(\rho)$ is unique if $\rho$ is close enough to 1. This completes the proof of Theorem 3.

A routine calculation (using Theorem 2(ii) and (18)) gives
$$V''(\lambda(\rho)) = \frac{-2\rho\,g(\lambda(\rho))\,g(\lambda(\rho)+c)}{[1-\rho+\rho G(\lambda(\rho))][1-\rho G(\lambda(\rho)+c)]} < 0. \tag{21}$$
So we have found the maximum of the function $V$. After simplification,
$$V(\lambda(\rho), \lambda(\rho), \rho) = \frac{1-\rho+2\rho G(\lambda(\rho))}{\rho(1-\rho)} \tag{22}$$
if $\rho$ is close enough to 1. Therefore, $(1-\rho)\,V(\lambda(\rho), \lambda(\rho), \rho) \to 2G(\lambda(1))$ as $\rho \uparrow 1$, where $\lambda(1)$ satisfies
$$\int_{\lambda(1)+c}^{1-\lambda(1)} G(x-c)\,g(x)\,dx = G(\lambda(1))\,[1-G(\lambda(1)+c)]. \tag{23}$$

Example 4. When $c = 0$, $2G(\lambda(1)) = 2-\sqrt{2}$.

Example 5. When $G(x) = x$ for all $0 \le x \le 1$, $2G(\lambda(1)) = (1-c)(2-\sqrt{2})$.

In [1], the following two strategies are mentioned:

(I) the maximally timid strategy, which can be described as follows: at the start, accept the first observation that is less than $1/2$, then accept the next one that is greater than $1/2$, then the next one that is less than $1/2$, and so on; continue this way until we have observed $n$ observations, then we stop;

(II) the purely greedy strategy, which can be described as follows: at the start, accept the first observation, then accept the next one that is greater than the first selected one, then the next one that is less than the second selected one, then the next one that is greater than the third selected one, and so on; continue this way until we have observed $n$ observations, then we stop.

Now we define the corresponding two strategies for the $c$-alternating subsequence as follows:

(I') the maximally $c$-timid strategy, which can be described as follows: at the start, accept the first observation that is less than $(1-c)/2$, accept the next one that is greater than $(1+c)/2$, then the next one that is less than $(1-c)/2$, then the next one that is greater than $(1+c)/2$, and so on; continue this way until we have observed $n$ observations, then we stop;

(II') the purely $c$-greedy strategy, which can be described as follows: at the start, accept the first observation that is less than $1-c$, accept the next one that is greater than the first selected one plus $c$, then the next one that is less than the second selected one minus $c$, then the next one that is greater than the third selected one plus $c$, and so on; continue this way until we have observed $n$ observations, then we stop.

When $c = 0$, the maximally $c$-timid strategy is the maximally timid strategy and the purely $c$-greedy strategy is the purely greedy strategy. In fact, the maximally $c$-timid strategy is the case $\lambda = (1-c)/2$ and the purely $c$-greedy strategy is the case $\lambda = 0$.

The asymptotic selection rate for the maximally $c$-timid strategy is $G((1-c)/2)$, and the asymptotic selection rate for the purely $c$-greedy strategy is $\int_0^{1-c} G(x)\,g(x+c)\,dx / (1-G(c))$; a simulation sketch of both strategies is given after Example 7.

Example 6. When $c = 0$, the asymptotic selection rate for both the maximally timid strategy and the purely greedy strategy is $1/2$. These results are the same as those in [1].

Example 7. When $G(x) = x$ for all $0 \le x \le 1$, the asymptotic selection rate for both the maximally $c$-timid strategy and the purely $c$-greedy strategy is $(1/2)(1-c)$. It is easy to see that these results are consistent with the result of Example 5.
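The two strategies (I') and (II') are easy to simulate; a minimal sketch (ours), whose empirical rates converge to the two expressions above:

    import random

    def timid_rate(c, n=10**6, sample=random.random):
        # Maximally c-timid strategy (I'): alternate between the low region
        # (below (1-c)/2) and the high region (above (1+c)/2).
        count, want_low = 0, True
        for _ in range(n):
            x = sample()
            if (want_low and x < (1 - c) / 2) or (not want_low and x > (1 + c) / 2):
                count += 1
                want_low = not want_low
        return count / n

    def greedy_rate(c, n=10**6, sample=random.random):
        # Purely c-greedy strategy (II'): accept whenever the next observation
        # clears the last selected value by more than c in the required direction.
        count, last, want_up = 0, None, True
        for _ in range(n):
            x = sample()
            if last is None:
                if x < 1 - c:
                    count, last = 1, x
            elif (want_up and x > last + c) or (not want_up and x < last - c):
                count, last, want_up = count + 1, x, not want_up
        return count / n

For uniform observations and, say, $c = 0.4$, both functions return values near $(1-c)/2 = 0.3$, in line with Example 7.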
If the random variables $X_1, X_2, \ldots, X_n$ are independent, identically distributed, symmetric random variables over the interval $[a,b]$, where $a < b$ and $a, b$ are finite, then we can change $X_i$ into $Z_i$ by $Z_i = (X_i-a)/(b-a)$. Then $Z_1, Z_2, \ldots, Z_n$ are independent, identically distributed, symmetric random variables over the interval $[0,1]$. Let $c' = c/(b-a)$. Then selecting a $c$-alternating subsequence from the random sample $X_1, X_2, \ldots, X_n$ is exactly the same as selecting a $c'$-alternating subsequence from the random sample $Z_1, Z_2, \ldots, Z_n$, so the asymptotic selection rate is still the same.
Alternatively, we can find $\lambda(1)$ directly by solving the following equation:
$$\int_{a+\lambda(1)+c}^{b-\lambda(1)} G(x-c)\,g(x)\,dx = G(a+\lambda(1))\,[1-G(a+\lambda(1)+c)]. \tag{24}$$
For $0 \le c \le b-a$, $a \le y \le b-c$, and $a \le \lambda < a+(b-a-c)/2$, the threshold function $f^*$ is defined by $f^*(y) = \max(\lambda, y)+c$ for all $a \le y \le b-c$. This time, we recursively define random variables $\{Y_i : i = 1, 2, \ldots\}$ by setting $Y_0 = y$, where $a \le y \le b-c$, and taking $Y_i = Y_{i-1}$ if $X_i < f^*(Y_{i-1})$ and $Y_i = a+b-X_i$ if $X_i \ge f^*(Y_{i-1})$.

Table 1 reports the asymptotic optimal selection rates $2G(\lambda(1))$, computed from (23), for various values of $c$ and the following five symmetric distributions over $[0,1]$:
$$G_1(x) = \begin{cases} 2x(1-x) & \text{if } 0 \le x \le \tfrac12, \\ 1-2x+2x^2 & \text{if } \tfrac12 < x \le 1, \end{cases}$$
$$G_2(x) = \frac{2}{\pi}\sin^{-1}(\sqrt{x}), \qquad G_3(x) = x, \qquad G_4(x) = 3x^2-2x^3,$$
$$G_5(x) = \begin{cases} 2x^2 & \text{if } 0 \le x \le \tfrac12, \\ -2x^2+4x-1 & \text{if } \tfrac12 < x \le 1. \end{cases} \tag{25}$$

Table 1: The asymptotic optimal selection rates for various distributions and $c$.

     c      G_1(x)      G_2(x)      G_3(x)      G_4(x)      G_5(x)
    0.0     2 - √2      2 - √2      2 - √2      2 - √2      2 - √2
    0.1     0.540492    0.545962    0.527208    0.499942    0.481120
    0.2     0.509190    0.506516    0.468629    0.414948    0.380891
    0.3     0.489881    0.466834    0.410051    0.332852    0.291619
    0.4     0.466034    0.426295    0.351472    0.255622    0.214252
    0.5     0.424335    0.384147    0.292893    0.185176    0.148786
    0.6     0.366680    0.339473    0.234315    0.123399    0.095223
    0.7     0.294696    0.290641    0.175736    0.072153    0.053563
    0.8     0.209273    0.234738    0.117157    0.033284    0.023806
    0.9     0.110936    0.164273    0.058579    0.008624    0.005952
    1.0     0           0           0           0           0
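The entries of Table 1 can be reproduced by solving (23) by bisection, since the difference of the two sides of (23) is strictly decreasing in $\lambda(1)$ (the $\rho = 1$ case of the proof of Theorem 3). A sketch (ours):

    def optimal_rate(G, g, c, tol=1e-12, n=4000):
        # Returns 2*G(lambda(1)), where lambda(1) solves equation (23).
        def D(lam):
            lo, hi = lam + c, 1.0 - lam
            h = (hi - lo) / n
            lhs = sum(G(lo + (k + 0.5) * h - c) * g(lo + (k + 0.5) * h)
                      for k in range(n)) * h
            return lhs - G(lam) * (1.0 - G(lam + c))
        a, b = 0.0, (1.0 - c) / 2
        while b - a > tol:
            m = 0.5 * (a + b)
            if D(m) > 0:     # D is decreasing in lam
                a = m
            else:
                b = m
        return 2.0 * G(0.5 * (a + b))

    print(optimal_rate(lambda x: x, lambda x: 1.0, 0.5))   # ~0.292893, the G_3 entry

(For $G_2$ the density is unbounded at the endpoints, so a finer quadrature is advisable there.)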
From Table 1, it appears that when the distribution places more mass in the tails, the asymptotic optimal selection rate is higher; on the other hand, when the distribution places more mass near the center, the asymptotic optimal selection rate is lower. However, we do not have a proof of this statement.
We are now considering the case when $X_1, X_2, \ldots, X_n$ have an arbitrary distribution. We have made some progress, but it is still at a premature stage. We hope to be able to find the asymptotic optimal selection rate in a forthcoming paper.
𝐺4 (π‘₯)
2 − √2
0.499942
0.414948
0.332852
0.255622
0.185176
0.123399
0.072153
0.033284
0.008624
0
𝐺5 (π‘₯)
2 − √2
0.481120
0.380891
0.291619
0.214252
0.148786
0.095223
0.053563
0.023806
0.005952
0
References
[1] A. Arlotto, R. W. Chen, L. A. Shepp, and J. M. Steele, “Online
selection of alternating subsequences from a random sample,”
Journal of Applied Probability, vol. 48, no. 4, pp. 1114–1132, 2011.