Hindawi Publishing Corporation
Advances in Operations Research
Volume 2013, Article ID 513918, 7 pages
http://dx.doi.org/10.1155/2013/513918

Research Article
A Newton-Type Algorithm for Solving Problems of Search Theory

Liping Zhang
Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China
Correspondence should be addressed to Liping Zhang; lzhang@math.tsinghua.edu.cn

Received 9 March 2012; Accepted 15 December 2012
Academic Editor: Chandal Nahak

Copyright © 2013 Liping Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In his survey of the continuous nonlinear resource allocation problem, Patriksson pointed out that Newton-type algorithms had not been proposed for the problem of search theory from a theoretical perspective. In this paper, we propose a Newton-type algorithm to solve this problem. We prove that the proposed algorithm has global and superlinear convergence. Numerical results indicate that the proposed algorithm is promising.

1. Introduction

We consider the problem

    max_{x ∈ S}  Σ_{i=1}^{n} a_i (1 − exp(−b_i x_i)),                                   (1)

where S = {x ∈ R^n_+ : e^T x = T}, a, b ∈ R^n_{++}, T > 0, and e ∈ R^n is the vector of ones. The problem described by (1) is called the theory of search by Koopman [1] and Patriksson [2]. It has the following interpretation: an object is inside box i with probability a_i, and 1/b_i is proportional to the difficulty of searching inside the box. If the searcher spends x_i time units looking inside box i, then he or she finds the object with probability 1 − exp(−b_i x_i). The problem described by (1) thus determines the optimal search strategy when the available time is limited to T time units. Problems of the form (1) arise, for example, in searching for a lost object, in the distribution of destructive effort such as weapons allocation [3], in drilling for oil, and so forth [2].

Patriksson [2] surveyed the history, applications, and algorithms of Problem (1); see [2, Sections 2.1.4, 2.1.5, and 3.1.2]. He pointed out that Newton-type algorithms had not been theoretically analyzed for the problem described by (1) in the references listed in [2]. Recently, related problems and methods have been considered in many articles, for example, [4–6]; in particular, a projected pegging algorithm was proposed in [5] for convex quadratic minimization. However, the question raised by Patriksson [2] has not been answered in the literature.

In this paper, we design a Newton-type algorithm to solve the problem described by (1) and show that it has global and superlinear convergence. Using the Fischer-Burmeister function [7], the problem described by (1) can be transformed into a semismooth equation. Based on the framework of the algorithms in [8, 9], a smoothing Newton-type algorithm is proposed to solve this semismooth equation. It is shown that the proposed algorithm generates a bounded iteration sequence and that the iteration sequence converges superlinearly to an accumulation point which yields a solution of the problem described by (1). Numerical results indicate that the proposed algorithm performs well even for n = 10000.

The rest of this paper is organized as follows. The Newton-type algorithm is proposed in Section 2. The global and superlinear convergence is established in Section 3. Section 4 reports some numerical results. Finally, Section 5 gives some concluding remarks.
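As a concrete illustration of the model in (1), the following Python sketch evaluates the objective (the expected probability of finding the object) and tests membership in the feasible set S. The function names are illustrative only and are not part of the paper.

    import numpy as np

    def detection_objective(x, a, b):
        """Objective of problem (1): expected probability of detection when
        x_i time units are spent in box i (a_i: prior probability that the
        object is in box i, b_i: effectiveness of searching box i)."""
        x, a, b = (np.asarray(v, dtype=float) for v in (x, a, b))
        return float(np.sum(a * (1.0 - np.exp(-b * x))))

    def in_feasible_set(x, T, tol=1e-10):
        """Membership in S = {x in R^n_+ : e^T x = T}: nonnegative search
        times that use up exactly the available T time units."""
        x = np.asarray(x, dtype=float)
        return bool(np.all(x >= -tol) and abs(x.sum() - T) <= tol)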
The following notation is used throughout the paper. All vectors are column vectors, and the superscript T denotes transposition. R^n (resp., R) denotes the space of n-dimensional real column vectors (resp., real numbers), and R^n_+ and R_{++} denote the nonnegative orthant of R^n and the positive orthant of R, respectively. Φ′ denotes the derivative of a function Φ. We define N := {1, 2, ..., n}. For any vector x ∈ R^n, diag{x_i : i ∈ N} denotes the diagonal matrix whose ith diagonal element is x_i, and vec{x_i : i ∈ N} denotes the vector x. The symbol ‖·‖ stands for the 2-norm. We denote by S* the solution set of Problem (1). For any α, β > 0, α = O(β) (resp., α = o(β)) means that α/β is uniformly bounded (resp., tends to zero) as β → 0.

2. Algorithm Description

In this section, we formulate the problem described by (1) as a semismooth equation and develop a smoothing Newton-type algorithm to solve it. We first briefly recall the concepts of NCP functions, semismooth functions, and smoothing functions [10–12].

Definition 1. A function φ : R² → R is called an NCP function if

    φ(u, v) = 0  ⟺  u ≥ 0,  v ≥ 0,  uv = 0.                                             (2)

Definition 2. A locally Lipschitz function F : R^m → R^m is called semismooth at x ∈ R^m if F is directionally differentiable at x and, for all V ∈ ∂F(x + d) and d → 0,

    F(x + d) − F(x) − Vd = o(‖d‖),                                                       (3)

where ∂F is the generalized Jacobian of F in the sense of Clarke [13]. F is called strongly semismooth at x ∈ R^m if F is semismooth at x and, for all V ∈ ∂F(x + d) and d → 0,

    F(x + d) − F(x) − Vd = O(‖d‖²).                                                      (4)

Definition 3. Let μ ≠ 0 be a parameter. A function F_μ(x) is called a smoothing function of a semismooth function F(x) if F_μ is continuously differentiable everywhere and there is a constant ĉ > 0, independent of μ, such that

    ‖F_μ(x) − F(x)‖ ≤ ĉ|μ|,   ∀ x.                                                       (5)

The Fischer-Burmeister function [7] is one of the well-known NCP functions:

    φ(u, v) = u + v − √(u² + v²).                                                        (6)

Clearly, the Fischer-Burmeister function defined by (6) is not smooth, but it is strongly semismooth [14]. Let φ : R³ → R be the perturbed Fischer-Burmeister function defined by

    φ(μ, u, v) = u + v − √(u² + v² + μ²).                                                (7)

Obviously, φ(μ, ·, ·) is differentiable everywhere for any μ > 0, and for each μ ≥ 0 we have

    |φ(μ, u, v) − φ(u, v)| ≤ μ,   ∀ (u, v) ∈ R².                                          (8)

In particular, φ(0, u, v) = φ(u, v) for all (u, v) ∈ R². Namely, φ defined by (7) is a smoothing function of φ defined by (6).

By the Karush-Kuhn-Tucker theorem, the problem described by (1) can be transformed into

    e^T x = T,   x_i ≥ 0,   s − a_i b_i exp(−b_i x_i) ≥ 0,
    x_i (s − a_i b_i exp(−b_i x_i)) = 0,   ∀ i ∈ N,                                       (9)

where s ∈ R is the multiplier associated with the constraint e^T x = T. Define

    Ψ(s, x) := vec{ φ(x_i, s − a_i b_i exp(−b_i x_i)) : i ∈ N },
    H(s, x) := ( e^T x − T ; Ψ(s, x) ).                                                  (10)

According to the Fischer-Burmeister function defined by (6), we can reformulate (9) as the semismooth equation

    H(s, x) = 0.                                                                         (11)

Based on the perturbed Fischer-Burmeister function defined by (7), we obtain the smooth equation

    G(y) := G(μ, s, x) := ( μ ; e^T x − T ; Φ(μ, s, x) ) = 0,                             (12)

where

    Φ(μ, s, x) := vec{ φ(μ, x_i, s − a_i b_i exp(−b_i x_i)) : i ∈ N }.                    (13)

Clearly, if y* = (0, s*, x*) is a solution of (12), then x* is an optimal solution of the problem described by (1).
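The scalar functions in (6) and (7) are the componentwise building blocks used in (10)–(13). The following short Python sketch (the function names are illustrative) implements both and numerically illustrates the bound (8).

    import numpy as np

    def fb(u, v):
        """Fischer-Burmeister NCP function (6): zero iff u >= 0, v >= 0, u*v = 0."""
        return u + v - np.sqrt(u**2 + v**2)

    def fb_smoothed(mu, u, v):
        """Perturbed Fischer-Burmeister function (7); smooth whenever mu != 0."""
        return u + v - np.sqrt(u**2 + v**2 + mu**2)

    # Numerical illustration of (8): |phi(mu, u, v) - phi(u, v)| <= mu for mu >= 0.
    rng = np.random.default_rng(0)
    u, v = rng.normal(size=1000), rng.normal(size=1000)
    for mu in (1e-1, 1e-3, 1e-6):
        gap = float(np.max(np.abs(fb_smoothed(mu, u, v) - fb(u, v))))
        print(mu, gap)                         # the printed gap never exceeds mu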
We collect some properties of the function G in the following lemma; they will be used in the sequel.

Lemma 4. Let G be defined by (12). Then G is semismooth on R^{n+2} and continuously differentiable at any y = (μ, s, x) ∈ R_{++} × R^{n+1}, with Jacobian

    G′(y) = [ 1      0      0
              0      0      e^T
              Φ′_μ   Φ′_s   Φ′_x ],                                                      (14)

where

    Φ′_μ := vec{ −μ/w_i : i ∈ N },
    Φ′_s := vec{ 1 − (s − a_i b_i exp(−b_i x_i))/w_i : i ∈ N },
    Φ′_x := diag{ (1 − x_i/w_i) + a_i b_i² exp(−b_i x_i) (Φ′_s)_i : i ∈ N },              (15)

with w_i := √(x_i² + (s − a_i b_i exp(−b_i x_i))² + μ²), i ∈ N. Moreover, the matrix G′(y) is nonsingular on R_{++} × R^{n+1}.

Proof. G is semismooth on R^{n+2} due to the strong semismoothness of φ(u, v), and G is continuously differentiable on R_{++} × R^{n+1}. For any μ > 0, (14) and (15) follow from (12) by a straightforward calculation. Clearly, for any μ > 0,

    0 < 1 − x_i/w_i < 2,   0 < (Φ′_s)_i < 2,   a_i b_i² exp(−b_i x_i) > 0,   ∀ i ∈ N,     (16)

which implies that (Φ′_x)_{ii} > 0 for all i ∈ N. Let

    G′(y) (u, v, z)^T = 0,   u, v ∈ R,  z ∈ R^n.                                          (17)

Then u = 0 and

    Σ_{i=1}^{n} z_i = 0,   (Φ′_s)_i v + (Φ′_x)_{ii} z_i = 0,   ∀ i ∈ N.                   (18)

The second equality in (18) implies

    z_i = − (Φ′_s)_i v / (Φ′_x)_{ii},   i ∈ N.                                            (19)

Since (Φ′_s)_i > 0 and (Φ′_x)_{ii} > 0 for all i ∈ N, the first equality in (18) then yields v = 0 and hence z = 0. Therefore, the matrix G′(y) defined by (14) is nonsingular for μ > 0.

We now propose a smoothing Newton-type algorithm for solving the smooth equation (12). It is a modified version of the smoothing Newton method proposed in [8]; the main difference is that we add a different perturbation term to the Newton equation, which allows the algorithm to generate a bounded iteration sequence. Let y = (μ, s, x) ∈ R^{n+2} and γ ∈ (0, 1). Define the function β : R^{n+2} → R_+ by

    β(y) := γ ‖G(y)‖ min{1, ‖G(y)‖}.                                                     (20)

Algorithm 5.

Step 0. Choose δ, σ ∈ (0, 1) and μ^0 > 0. Let u := (μ^0, 0, 0) ∈ R_{++} × R × R^n. Let s^0 ∈ R and x^0 ∈ R^n be arbitrary, and set y^0 := (μ^0, s^0, x^0). Choose γ ∈ (0, 1) such that γ‖G(y^0)‖ < 1 and γμ^0 < 1. Set k := 0.

Step 1. If G(y^k) = 0, stop. Otherwise, let β_k := β(y^k).

Step 2. Compute Δy^k := (Δμ^k, Δs^k, Δx^k) ∈ R^{n+2} from the Newton equation

    G(y^k) + G′(y^k) Δy^k = β_k u.                                                       (21)

Step 3. Let ν_k be the smallest nonnegative integer ν such that

    ‖G(y^k + δ^ν Δy^k)‖ ≤ [1 − σ(1 − γμ^0) δ^ν] ‖G(y^k)‖.                                 (22)

Set λ_k := δ^{ν_k}.

Step 4. Set y^{k+1} := y^k + λ_k Δy^k and k := k + 1. Go to Step 1.
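The following Python sketch is one possible implementation of Algorithm 5 for the smooth system (12), with the Jacobian G′(y) assembled from (14)–(15). It is a minimal illustration under stated assumptions rather than the author's code: the function name and defaults are ours, the choice γ = min{1/‖G(y^0)‖, 0.99} and the parameter values anticipate the settings reported in Section 4, and s^0 = 0 is an arbitrary starting multiplier.

    import numpy as np

    def smoothing_newton(a, b, T, x0=None, s0=0.0, mu0=1e-3,
                         delta=0.75, sigma=0.25, tol=1e-8, max_iter=100):
        """Sketch of Algorithm 5 applied to G(y) = 0 in (12), with y = (mu, s, x)."""
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        n = a.size
        x = np.ones(n) if x0 is None else np.asarray(x0, dtype=float)
        mu, s = mu0, s0

        def G(mu, s, x):
            v = s - a * b * np.exp(-b * x)
            phi = x + v - np.sqrt(x**2 + v**2 + mu**2)     # perturbed FB function (7)
            return np.concatenate(([mu, x.sum() - T], phi))

        def G_prime(mu, s, x):
            v = s - a * b * np.exp(-b * x)
            w = np.sqrt(x**2 + v**2 + mu**2)
            J = np.zeros((n + 2, n + 2))
            J[0, 0] = 1.0                                  # first row of (14)
            J[1, 2:] = 1.0                                 # gradient of e^T x - T
            J[2:, 0] = -mu / w                             # Phi'_mu in (15)
            J[2:, 1] = 1.0 - v / w                         # Phi'_s in (15)
            J[2:, 2:] = np.diag((1.0 - x / w) + a * b**2 * np.exp(-b * x) * (1.0 - v / w))
            return J

        g = G(mu, s, x)
        gamma = min(0.99, 1.0 / max(np.linalg.norm(g), 1e-16))   # Step 0 requirement on gamma
        u = np.zeros(n + 2)
        u[0] = mu0                                         # perturbation vector u = (mu0, 0, 0)

        for _ in range(max_iter):
            norm_g = np.linalg.norm(g)
            if norm_g <= tol:                              # Step 1 (tolerance as in Section 4)
                break
            beta = gamma * norm_g * min(1.0, norm_g)       # beta(y) in (20)
            dy = np.linalg.solve(G_prime(mu, s, x), -g + beta * u)   # Newton equation (21)
            lam = 1.0
            for _ in range(50):                            # line search (22); Theorem 5 ensures acceptance
                mu_t, s_t = mu + lam * dy[0], s + lam * dy[1]
                x_t = x + lam * dy[2:]
                g_t = G(mu_t, s_t, x_t)
                if np.linalg.norm(g_t) <= (1.0 - sigma * (1.0 - gamma * mu0) * lam) * norm_g:
                    break
                lam *= delta
            mu, s, x, g = mu_t, s_t, x_t, g_t              # Step 4
        return x, s, mu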
The following theorem shows that Algorithm 5 is well defined.

Theorem 5. Algorithm 5 is well defined. If it terminates finitely at the kth iteration, then x^k is an optimal solution of the problem described by (1). Otherwise, it generates an infinite sequence {y^k = (μ^k, s^k, x^k)} with μ^k > 0 and μ^k ≥ β_k μ^0.

Proof. If μ^k > 0, then Lemma 4 shows that the matrix G′(y^k) is nonsingular; hence Step 2 is well defined at the kth iteration. For any 0 < α ≤ 1, define

    r(α) := G(y^k + α Δy^k) − G(y^k) − α G′(y^k) Δy^k.                                    (23)

It follows from (21) that

    Δμ^k = −μ^k + β_k μ^0.                                                               (24)

Hence, for any 0 < α ≤ 1,

    μ^k + α Δμ^k = (1 − α) μ^k + α β_k μ^0 > 0.                                          (25)

From Lemma 4, G is continuously differentiable around y^k. Thus, (23) implies that

    ‖r(α)‖ = o(α).                                                                       (26)

On the other hand, (20) yields

    β_k ≤ γ ‖G(y^k)‖,   β_k ≤ γ ‖G(y^k)‖².                                               (27)

Therefore, for any sufficiently small α ∈ (0, 1],

    ‖G(y^k + α Δy^k)‖ ≤ ‖r(α)‖ + (1 − α) ‖G(y^k)‖ + α β_k μ^0
                      ≤ (1 − α) ‖G(y^k)‖ + α γ μ^0 ‖G(y^k)‖ + o(α)
                      = [1 − (1 − γμ^0) α] ‖G(y^k)‖ + o(α),                               (28)

where the first inequality follows from (21) and (23) and the second from (26) and (27). Inequality (28) implies that there exists a constant 0 < ᾱ ≤ 1 such that

    ‖G(y^k + α Δy^k)‖ ≤ [1 − σ(1 − γμ^0) α] ‖G(y^k)‖,   ∀ α ∈ (0, ᾱ],                     (29)

which shows that Step 3 is well defined at the kth iteration. In addition, by (24) and Steps 3 and 4 of Algorithm 5,

    μ^{k+1} = (1 − λ_k) μ^k + λ_k β_k μ^0 > 0                                             (30)

holds, since 0 < λ_k ≤ 1 and μ^k > 0. Consequently, since μ^0 > 0, Algorithm 5 is well defined.

If Algorithm 5 terminates finitely at the kth iteration, then G(y^k) = 0, which implies that μ^k = 0 and that (s^k, x^k) satisfies (9); hence x^k is an optimal solution of the problem described by (1).

Now assume that Algorithm 5 does not terminate finitely, and let {y^k = (μ^k, s^k, x^k)} be the sequence it generates; then μ^k > 0 for all k. We prove μ^k ≥ β_k μ^0 by induction. Clearly, β_0 = β(y^0) ≤ γ‖G(y^0)‖ < 1, which yields μ^0 ≥ β_0 μ^0. Assume that μ^k ≥ β_k μ^0; then (24) yields

    μ^{k+1} − β_{k+1} μ^0 = μ^k + λ_k Δμ^k − β_{k+1} μ^0
                          = (1 − λ_k) μ^k + λ_k β_k μ^0 − β_{k+1} μ^0
                          ≥ (1 − λ_k) β_k μ^0 + λ_k β_k μ^0 − β_{k+1} μ^0
                          = μ^0 (β_k − β_{k+1}).                                          (31)

Clearly,

    β_k = { γ ‖G(y^k)‖²,  if ‖G(y^k)‖ < 1,
          { γ ‖G(y^k)‖,   otherwise.                                                      (32)

It follows from (20) and (22) that

    ‖G(y^{k+1})‖ ≤ ‖G(y^k)‖,   β_{k+1} ≤ γ ‖G(y^{k+1})‖,   β_{k+1} ≤ γ ‖G(y^{k+1})‖².      (33)

In particular, (32) and (33) imply β_{k+1} ≤ β_k; combining this with (31), we obtain μ^{k+1} ≥ β_{k+1} μ^0, which gives the desired result.

3. Convergence Analysis

In this section we establish the convergence properties of Algorithm 5. We show that the sequence {y^k = (μ^k, s^k, x^k)} generated by Algorithm 5 is bounded and that any of its accumulation points yields an optimal solution of the problem described by (1). Furthermore, we show that the sequence {y^k} converges superlinearly.

Theorem 6. The sequence {y^k = (μ^k, s^k, x^k)} generated by Algorithm 5 is bounded. Let y* = (μ*, s*, x*) denote an accumulation point of {y^k}. Then μ* = 0, and x* is an optimal solution of the problem described by (1).
Proof. By Theorem 5,

    μ^{k+1} = (1 − λ_k) μ^k + λ_k β_k μ^0 ≤ (1 − λ_k) μ^k + λ_k μ^k = μ^k,                 (34)

which implies that {μ^k} is monotonically decreasing and hence converges. It follows from (22) that

    ‖G(y^{k+1})‖ ≤ ‖G(y^k)‖,   ∀ k,                                                       (35)

so {‖G(y^k)‖} also converges. Consequently, there exists a constant C > 0 such that ‖G(y^k)‖ ≤ C for all k. This implies that {μ^k} and {x^k} are bounded and that, for every k and i ∈ N,

    | x_i^k + (s^k − a_i b_i exp(−b_i x_i^k))
      − √( (x_i^k)² + (s^k − a_i b_i exp(−b_i x_i^k))² + (μ^k)² ) | ≤ C.                  (36)

This shows that {s^k} is also bounded. Consequently, {y^k} is bounded.

Let y* = (μ*, s*, x*) be an accumulation point of {y^k}; without loss of generality, assume that {y^k} converges to y*. Then

    lim_{k→∞} ‖G(y^k)‖ = ‖G(y*)‖,   lim_{k→∞} μ^k = μ*.                                   (37)

By (20),

    lim_{k→∞} β_k = β* := γ ‖G(y*)‖ min{1, ‖G(y*)‖}.                                       (38)

Suppose, for contradiction, that ‖G(y*)‖ > 0. Then β* > 0. From Theorem 5 and (22), we have

    μ* ≥ β* μ^0 > 0,   lim_{k→∞} λ_k = 0;                                                 (39)

the second relation holds because, by (22), ‖G(y^{k+1})‖ ≤ [1 − σ(1 − γμ^0) λ_k] ‖G(y^k)‖ and ‖G(y^k)‖ → ‖G(y*)‖ > 0. Hence λ_k < 1 for all sufficiently large k, so the trial step size λ_k/δ was rejected in Step 3 of Algorithm 5; that is,

    ‖G(y^k + (λ_k/δ) Δy^k)‖ > [1 − σ(1 − γμ^0)(λ_k/δ)] ‖G(y^k)‖.                           (40)

Taking k → ∞ in (40) and combining with (21), we obtain

    −‖G(y*)‖² + β* G(y*)^T u ≥ −σ(1 − γμ^0) ‖G(y*)‖²,                                      (41)

which yields

    β* μ^0 ≥ [1 − σ(1 − γμ^0)] ‖G(y*)‖.                                                    (42)

Since β* ≤ γ ‖G(y*)‖, (42) implies

    (1 − σ)(1 − γμ^0) ≤ 0,                                                                (43)

which contradicts σ < 1 and γμ^0 < 1. Therefore, ‖G(y*)‖ = 0, and hence μ* = 0. Consequently, x* is an optimal solution of the problem described by (1).

We next analyze the rate of convergence of Algorithm 5. By Theorem 6, Algorithm 5 generates a bounded iteration sequence {y^k}, which therefore has at least one accumulation point. The following lemma will be used in the sequel.

Lemma 7. Suppose that y* = (μ*, s*, x*) is an accumulation point of the iteration sequence {y^k} generated by Algorithm 5, and let V be a matrix in ∂H(s*, x*). Then V is nonsingular.
Proof. Define two index sets:

    N_1 := { i ∈ N : (x_i*)² + (s* − a_i b_i exp(−b_i x_i*))² ≠ 0 },
    N_2 := { i ∈ N : x_i* = 0, s* = a_i b_i },                                            (44)

and note that N_1 ∪ N_2 = N. By a direct computation, we get

    V = [ 0   e^T
          α   A
          β   B ],                                                                        (45)

where α = vec{ 1 − (s* − a_i b_i exp(−b_i x_i*)) / √((x_i*)² + (s* − a_i b_i exp(−b_i x_i*))²) : i ∈ N_1 }, A is an |N_1| × n matrix whose entries are all zero except the (i, i)th entry A_i for i ∈ N_1, B is an |N_2| × n matrix whose entries are all zero except the (i, i)th entry B_i for i ∈ N_2, and

    A_i = 1 − x_i* / √((x_i*)² + (s* − a_i b_i exp(−b_i x_i*))²)
            + a_i b_i² exp(−b_i x_i*) α_i,   i ∈ N_1,
    (β_i, B_i) ∈ { (u, v + a_i b_i² u) : (u − 1)² + (v − 1)² ≤ 1 },   i ∈ N_2.            (46)

Obviously,

    0 < α_i < 2,   0 < A_i < 2(1 + a_i b_i²),   i ∈ N_1;   β_i ≥ 0,   B_i > 0,   i ∈ N_2.  (47)

Let V(w, z)^T = 0 with w ∈ R and z ∈ R^n. Then

    Σ_{i∈N_1} z_i + Σ_{i∈N_2} z_i = 0;   α_i w + A_i z_i = 0, i ∈ N_1;
    β_i w + B_i z_i = 0, i ∈ N_2,                                                         (48)

which implies

    z_i = −(α_i / A_i) w,  i ∈ N_1;   z_i = −(β_i / B_i) w,  i ∈ N_2.                      (49)

Therefore,

    ( Σ_{i∈N_1} α_i/A_i + Σ_{i∈N_2} β_i/B_i ) w = 0.                                       (50)

Since G(y*) = 0 by Theorem 6, we have e^T x* = T > 0, so N_1 ≠ ∅; together with (47), this yields w = 0. Thus, (49) implies z = 0, and hence the matrix V is nonsingular.

Theorem 8. Let {y^k} be the iteration sequence generated by Algorithm 5. Then {y^k} converges superlinearly to y*, that is, ‖y^{k+1} − y*‖ = o(‖y^k − y*‖).

Proof. By Theorem 6, {y^k} is bounded; let y* = (μ*, s*, x*) be any of its accumulation points. Then G(y*) = 0, and all matrices V ∈ ∂G(y*) are nonsingular by Lemma 7. By Proposition 3.1 in [12],

    ‖G′(y^k)^{−1}‖ = O(1)                                                                 (51)

for all y^k sufficiently close to y*. From Lemma 4, we know that G is semismooth at y*. Hence, for all y^k sufficiently close to y*,

    ‖G(y^k) − G′(y^k)(y^k − y*)‖ = o(‖y^k − y*‖).                                          (52)

Since G is locally Lipschitz continuous near y*, for all y^k sufficiently close to y*,

    ‖G(y^k)‖² = O(‖y^k − y*‖²),                                                            (53)

which, together with (20), yields

    β_k μ^0 ≤ γ μ^0 ‖G(y^k)‖² = O(‖G(y^k)‖²) = O(‖y^k − y*‖²).                             (54)

This inequality, together with (51), (52), and (21), yields

    ‖y^k + Δy^k − y*‖ = ‖y^k + G′(y^k)^{−1}(−G(y^k) + β_k u) − y*‖
                      ≤ ‖G′(y^k)^{−1}‖ ( ‖G(y^k) − G′(y^k)(y^k − y*)‖ + β_k μ^0 )
                      = o(‖y^k − y*‖).                                                    (55)

Again by the local Lipschitz continuity of G near y*, ‖G(y^k + Δy^k)‖ = O(‖y^k + Δy^k − y*‖) for all y^k sufficiently close to y*. Following the proof of Theorem 3.1 in [15], we obtain ‖y^k − y*‖ = O(‖G(y^k)‖) for all y^k sufficiently close to y*. This implies, from (55), that for all y^k sufficiently close to y*,

    ‖G(y^k + Δy^k)‖ = O(‖y^k + Δy^k − y*‖) = o(‖y^k − y*‖) = o(‖G(y^k)‖).                  (56)

Since lim_{k→∞} ‖G(y^k)‖ = 0, it follows from (56) that the full step λ_k = 1 satisfies (22) whenever y^k is sufficiently close to y*. Therefore, for all y^k sufficiently close to y*,

    y^{k+1} = y^k + Δy^k,                                                                 (57)

which, together with (55), proves ‖y^{k+1} − y*‖ = o(‖y^k − y*‖). Namely, {y^k} converges superlinearly to y*.

Table 1: Problem (1) with a, b, T randomly generated in different intervals.

    Dim     Inter   Gval         CPU (sec.)   AT [5]
    100     7       4.0397e-11   0.0016       0.0008
    500     13.6    5.7544e-10   0.0078       0.0021
    1000    16.1    9.3739e-10   0.0167       0.0290
    5000    22.6    9.2203e-10   0.0844       0.0980
    10000   26      9.6499e-10   0.2182       0.3946

Table 2: Problem (1) with a, b, T randomly generated in [0, 1].

    Dim     Inter   Gval         CPU (sec.)   AT [5]
    100     12.3    2.0675e-09   0.006        0.0017
    500     15.7    1.3929e-09   0.015        0.0021
    1000    16.8    1.2247e-09   0.037        0.0524
    5000    19.2    1.0316e-09   0.119        0.1082
    10000   21.6    1.9056e-09   0.246        0.4693

Table 3: Data of the four activities in the first real-world example.

    Activity   a         b
    A          4 × 10^6  2 × 10^-6
    B          3 × 10^6  3 × 10^-6
    C          2 × 10^6  10^-6
    D          10^6      10^-6

Table 4: Data of the five regions in the second real-world example.

    Region   a        b
    I        0.1013   0.01
    II       0.3205   0.02
    III      0.1323   0.01
    IV       0.2730   0.02
    V        0.1730   0.01

4. Computational Experiments

In this section, we report some numerical results to show the viability of Algorithm 5. First, we compare the numerical performance of Algorithm 5 and the algorithm in [5] on two randomly generated problems. Second, we apply Algorithm 5 to two real-world examples. Throughout the computational experiments, the parameters used in Algorithm 5 were δ = 0.75, σ = 0.25, μ^0 = 0.001, and γ = min{1/‖G(y^0)‖, 0.99}. In Step 1, we used ‖G(y^k)‖ ≤ 10^-8 as the stopping rule. The vector of ones was used as the starting point in all runs.

First, problems of the form (1) with 100, 500, 1000, 5000, and 10000 variables were solved. In the first randomly generated example, a_i was generated in the interval [10, 20] and b_i in the interval [1, 2] for each i ∈ N, and T was generated in the interval [50, 51]. In the second randomly generated example, a_i was generated so that Σ_{i=1}^{n} a_i = 1, b_i was generated in the interval [0, 1] for each i ∈ N, and T was also generated in [0, 1]. Each problem was run 30 times. The numerical results are summarized in Tables 1 and 2, respectively. Here, Dim denotes the number of variables, and AT [5] denotes the average run time in seconds used by the algorithm in [5]. For Algorithm 5 we list more items: Inter denotes the average number of iterations, CPU (sec.) is the average run time, and Gval denotes the average value of ‖G(y^k)‖ at the final iteration.
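For concreteness, the following driver mirrors the first random test setup behind Table 1 (a_i in [10, 20], b_i in [1, 2], T in [50, 51]) with the parameter values stated above. It assumes that the smoothing_newton sketch shown after Algorithm 5 is available in the same session; the instance size and random seed are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    a = rng.uniform(10.0, 20.0, size=n)        # a_i in [10, 20]
    b = rng.uniform(1.0, 2.0, size=n)          # b_i in [1, 2]
    T = rng.uniform(50.0, 51.0)                # T in [50, 51]

    # Parameters of Section 4: delta = 0.75, sigma = 0.25, mu0 = 0.001,
    # stopping rule ||G(y^k)|| <= 1e-8, starting point x^0 = vector of ones.
    x, s, mu = smoothing_newton(a, b, T, x0=np.ones(n),
                                mu0=1e-3, delta=0.75, sigma=0.25, tol=1e-8)
    print(abs(x.sum() - T), x.min())           # feasibility check: e^T x = T and x >= 0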
The numerical results reported in Tables 1 and 2 show that the proposed algorithm solves the test problems much faster than the algorithm in [5] when the problem size is large.

Second, we apply Algorithm 5 to two real-world problems. The first example is described in [16]. The problem is to allocate a total effort T among n independent activities, where a_i(1 − exp(−b_i x_i)) is the return from the ith activity under effort x_i, so as to maximize the total return. Here a_i is the attainable potential of activity i, and b_i is the rate at which that potential is attained from the effort x_i; when no effort is devoted to the ith activity, x_i is zero. This type of example usually arises in the marketing field: the activities may correspond to different products, or to the same product in different marketing areas, in different advertising media, and so forth. In this example we wish to allocate one million dollars among four activities with the values of a_i and b_i given in Table 3. For this problem Algorithm 5 obtained the maximum total return 4.8952 × 10^6 after 25 iterations and 0.0625 seconds of CPU time. The total effort of one million dollars was allocated as 0.5764 million and 0.4236 million dollars to activities A and B, respectively.

The second example is to search for an object in 5 regions, where a_i is the prior probability, with Σ_{i=1}^{5} a_i = 1, that the object is in the ith region, and 1 − exp(−b_i x_i) is the probability of finding an object known to be in the ith region when x_i time units are spent searching there. The data of this example, listed in Table 4, come from Professor R. Jiang of the Jiaozhou Bureau of Water Conservancy. The available time is T = 30 time units; after 16 iterations and 0.0781 seconds of CPU time, Algorithm 5 computed the result x = (0, 16.6037, 0, 13.3963, 0). This means that about 17 time units should be spent in Region II and about 13 time units in Region IV to find water.
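For small instances such as those in Tables 3 and 4, the optimality conditions (9) can also be solved directly by bisection on the multiplier s, which gives an independent cross-check of the allocations reported above. The sketch below is such a check rather than the paper's Algorithm 5; the function name and tolerances are illustrative. Run on the Table 3 data, it recovers an allocation close to the reported 0.5764 and 0.4236 million dollars for activities A and B.

    import numpy as np

    def allocate_by_kkt(a, b, T, tol=1e-12, max_iter=200):
        """Solve (9) for problem (1) by bisection on s:
        x_i(s) = max(0, ln(a_i b_i / s) / b_i) and sum_i x_i(s) = T."""
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)

        def total_time(s):
            return float(np.sum(np.maximum(0.0, np.log(a * b / s) / b)))

        s_hi = float(np.max(a * b))            # x(s_hi) = 0, so total_time(s_hi) = 0 <= T
        s_lo = s_hi
        while total_time(s_lo) < T:            # shrink s_lo until the budget T is exceeded
            s_lo *= 0.5
        for _ in range(max_iter):              # bisection: total_time is decreasing in s
            s = 0.5 * (s_lo + s_hi)
            if total_time(s) > T:
                s_lo = s
            else:
                s_hi = s
            if s_hi - s_lo <= tol * s_hi:
                break
        x = np.maximum(0.0, np.log(a * b / s) / b)
        return x, float(np.sum(a * (1.0 - np.exp(-b * x))))

    # Table 3 data: allocate one million dollars among activities A, B, C, D.
    x, total_return = allocate_by_kkt([4e6, 3e6, 2e6, 1e6],
                                      [2e-6, 3e-6, 1e-6, 1e-6], T=1e6)
    print(x / 1e6, total_return)               # approx. (0.58, 0.42, 0, 0) million; return approx. 4.9e6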
5. Conclusions

In this paper we have proposed a Newton-type algorithm to solve the problem of search theory. We have shown that the proposed algorithm has global and superlinear convergence. Some randomly generated problems and two real-world problems have been solved by the algorithm. The numerical results indicate that the proposed algorithm is promising.

Acknowledgment

This work was supported by NSFC Grant 11271221.

References

[1] B. O. Koopman, Search and Screening: General Principles with Historical Applications, Military Operations Research Society, Alexandria, Va, USA, 1999.
[2] M. Patriksson, "A survey on the continuous nonlinear resource allocation problem," European Journal of Operational Research, vol. 185, no. 1, pp. 1–46, 2008.
[3] V. V. Popovich, Y. A. Ivakin, and S. S. Shaida, "Theory of search for moving objects," in Proceedings of the IEEE Ocean Conference and Exhibition, pp. 1319–1329, Biloxi, Miss, USA, October 2002.
[4] W. Chen, Y. J. Shi, H. F. Teng, X. P. Lan, and L. C. Hu, "An efficient hybrid algorithm for resource-constrained project scheduling," Information Sciences, vol. 180, no. 6, pp. 1031–1039, 2010.
[5] S. M. Stefanov, "Convex quadratic minimization subject to a linear constraint and box constraints," Applied Mathematics Research Express, vol. 2004, no. 1, pp. 17–42, 2004.
[6] C. L. Sun, J. C. Zeng, and J. S. Pan, "An improved vector particle swarm optimization for constrained optimization problems," Information Sciences, vol. 181, no. 6, pp. 1153–1163, 2011.
[7] A. Fischer, "A special Newton-type optimization method," Optimization, vol. 24, no. 3-4, pp. 269–284, 1992.
[8] L. Zhang, S.-Y. Wu, and T. Gao, "Improved smoothing Newton methods for P_0 nonlinear complementarity problems," Applied Mathematics and Computation, vol. 215, no. 1, pp. 324–332, 2009.
[9] L. P. Zhang and Y. Zhou, "A new approach to supply chain network equilibrium models," Computers & Industrial Engineering, vol. 63, pp. 82–88, 2012.
[10] R. Mifflin, "Semismooth and semiconvex functions in constrained optimization," SIAM Journal on Control and Optimization, vol. 15, no. 6, pp. 959–972, 1977.
[11] L. Qi, "Regular pseudo-smooth NCP and BVIP functions and globally and quadratically convergent generalized Newton methods for complementarity and variational inequality problems," Mathematics of Operations Research, vol. 24, no. 2, pp. 440–471, 1999.
[12] L. Q. Qi and J. Sun, "A nonsmooth version of Newton's method," Mathematical Programming, vol. 58, no. 3, pp. 353–367, 1993.
[13] F. H. Clarke, Optimization and Nonsmooth Analysis, John Wiley & Sons, New York, NY, USA, 1983.
[14] X. Chen, L. Qi, and D. Sun, "Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities," Mathematics of Computation, vol. 67, no. 222, pp. 519–540, 1998.
[15] L. Q. Qi, "Convergence analysis of some algorithms for solving nonsmooth equations," Mathematics of Operations Research, vol. 18, no. 1, pp. 227–244, 1993.
[16] C. Wilkinson and S. K. Gupta, "Allocating promotional effort to competing activities: a dynamic programming approach," in Proceedings of the International Federation of Operational Research Societies Conference (IFORS '69), pp. 419–432, Venice, Italy, 1969.