Heuristic and Special Case Algorithms for Dispersion Problems
S. S. Ravi, D. J. Rosenkrantz and G. K. Tayi
University at Albany-SUNY, Albany, New York
Source: Operations Research, Vol. 42, No. 2 (Mar.-Apr., 1994), pp. 299-310. Published by INFORMS. Stable URL: http://www.jstor.org/stable/171673
(Received August 1991; revision received June 1992; accepted June 1992)

The dispersion problem arises in selecting facilities to maximize some function of the distances between the facilities. The problem also arises in selecting nondominated solutions for multiobjective decision making. It is known to be NP-hard under two objectives: maximizing the minimum distance (MAX-MIN) between any pair of facilities and maximizing the average distance (MAX-AVG). We consider the question of obtaining near-optimal solutions. For MAX-MIN, we show that if the distances do not satisfy the triangle inequality, there is no polynomial-time relative approximation algorithm unless P = NP. When the distances satisfy the triangle inequality, we analyze an efficient heuristic and show that it provides a performance guarantee of two.
We also prove that obtaining a performance guarantee of less than two is NP-hard. For MAX-AVG, we analyze an efficient heuristic and show that it provides a performance guarantee of four when the distances satisfy the triangle inequality. We also present a polynomial-time algorithm for the 1-dimensional MAX-AVG dispersion problem. Using that algorithm, we obtain a heuristic which provides an asymptotic performance guarantee of π/2 for the 2-dimensional MAX-AVG dispersion problem.

Many problems in location theory deal with the placement of facilities on a network to minimize some function of the distances between facilities or between facilities and the nodes of the network (Handler and Mirchandani 1979). Such problems model the placement of "desirable" facilities such as warehouses, hospitals, and fire stations. However, there are situations in which facilities are to be located to maximize some function of the distances between pairs of nodes. Such location problems are referred to as dispersion problems (Chandrasekharan and Daughety 1981, Kuby 1987, Erkut and Neuman 1989, 1990, and Erkut 1990) because they model situations in which proximity of facilities is undesirable. One example of such a situation is the distribution of business franchises in a city (Erkut 1990). Other examples of dispersion problems arise in the context of placing "undesirable" (also called obnoxious) facilities, such as nuclear power plants, oil storage tanks, and ammunition dumps (Kuby 1987, Erkut and Neuman 1989, 1990, and Erkut 1990). Such facilities need to be spread out to the greatest possible extent so that an accident at one of the facilities will not damage any of the others. The concept of dispersion is also useful in the context of multiobjective decision making (Steuer 1986). When the number of nondominated solutions is large, a decision maker may be interested in selecting a manageable collection of solutions which are dispersed as far as possible with respect to the objective function values. Other applications of facility dispersion are discussed in Erkut (1990) and Erkut and Neuman (1989, 1990).
Analytical models for the dispersion problem assume that the given network is represented by a set V = {v1, v2, ..., vn} of n nodes with a nonnegative distance (also called edge weight) between every pair of nodes. The distances are assumed to be symmetric, and so the network can be thought of as an undirected complete graph on n nodes with a nonnegative weight on each edge. The weight of the edge {vi, vj} (i ≠ j) is denoted by w(vi, vj). We assume that w(vi, vi) = 0 for 1 ≤ i ≤ n. The objective of the dispersion problem is to locate p facilities (p ≤ n) among the n nodes of the network, with at most one facility per node, such that some function of the distances between facilities is maximized. Two of the optimality criteria considered in the literature (Kuby 1987, Erkut 1990, and Erkut and Neuman 1989) are MAX-MIN (i.e., maximize the minimum distance between a pair of facilities) and MAX-AVG (i.e., maximize the average distance between a pair of facilities). Under either criterion, the problem is known to be NP-hard, even when the distances satisfy the triangle inequality (Wang and Kuo 1988, Hansen and Moon 1988, and Erkut 1990).

Subject classifications: Analysis of algorithms: computational complexity. Facilities/equipment planning: discrete location. Programming: heuristic.
Area of review: OPTIMIZATION.

Although many researchers have studied the dispersion problem (see Erkut and Neuman 1989 for a survey and an extensive bibliography), except for White (1991), the question of whether there are efficient heuristics with provably good performance has not been addressed. This question forms the main focus of this paper. We show that if the distances do not satisfy the triangle inequality, then for any constant K ≥ 1, no polynomial-time algorithm can provide a performance guarantee of K for the MAX-MIN dispersion problem unless P = NP.
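Concretely, such an instance can be stored as a symmetric n × n matrix of nonnegative edge weights with a zero diagonal. The following sketch (our own illustration; the function names are not from the paper) validates such a matrix and, where needed, checks the triangle inequality:

```python
from itertools import permutations

def is_valid_instance(w):
    """True if w is a symmetric, nonnegative distance matrix with zero diagonal."""
    n = len(w)
    return all(
        w[i][j] == w[j][i] and w[i][j] >= 0 and (i != j or w[i][j] == 0)
        for i in range(n) for j in range(n)
    )

def satisfies_triangle_inequality(w):
    """True if w(vi, vj) + w(vj, vk) >= w(vi, vk) for all distinct triples."""
    n = len(w)
    return all(w[i][j] + w[j][k] >= w[i][k]
               for i, j, k in permutations(range(n), 3))
```

Both checks run in polynomial time, so instances of either class (with or without the triangle inequality) are easy to recognize; the hardness results below concern optimization, not recognition.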
When the distances satisfy the triangle inequality, we analyze a known heuristic and prove that it provides a performance guarantee of 2 for the MAX-MIN dispersion problem. This is an improvement over the performance guarantee of 3 proven in White (1991) using a different heuristic. (This improvement was obtained independently by White 1992.) We also show that no polynomial-time algorithm can provide a performance guarantee of less than 2 unless P = NP. We also analyze an efficient heuristic for the MAX-AVG dispersion problem with triangle inequality, and prove that it provides a performance guarantee of 4. An efficient algorithm for the 1-dimensional MAX-MIN dispersion problem is presented in Wang and Kuo. We provide an efficient algorithm for the 1-dimensional MAX-AVG dispersion problem. We also show how this algorithm can be used to obtain a heuristic with an asymptotic performance guarantee of 1.571 for the 2-dimensional MAX-AVG dispersion problem.

The remainder of this paper is organized as follows. Section 1 contains the formal definitions and a discussion of the previous work on the dispersion problem. Sections 2 and 3 address the dispersion problem under the MAX-MIN and MAX-AVG criteria, respectively. Section 4 presents our results for the 1- and 2-dimensional dispersion problems. Section 5 contains tables which summarize prior results, our contributions, and open problems.

1. DEFINITIONS AND PREVIOUS WORK

We begin with the specifications of the MAX-MIN and MAX-AVG dispersion problems in the format of Garey and Johnson (1979).

MAX-MIN Facility Dispersion (MMFD)
Instance: A set V = {v1, v2, ..., vn} of n nodes, a nonnegative distance w(vi, vj) for each pair vi, vj of nodes, and an integer p such that 2 ≤ p ≤ n.
Requirement: Find a subset P = {vi1, vi2, ..., vip} of V with |P| = p, such that the objective function f(P) = min_{x,y∈P} {w(x, y)} is maximized.

MAX-AVG Facility Dispersion (MAFD)
Instance: As in MMFD.
Requirement: Find a subset P = {vi1, vi2, ..., vip} of V with |P| = p, such that the objective function g(P) = [2/(p(p − 1))] Σ_{x,y∈P} w(x, y) is maximized.
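Assuming distances are stored in a symmetric matrix w indexed by node (an encoding of our own choosing, not one prescribed by the paper), the two objective functions translate directly from their definitions:

```python
from itertools import combinations

def f(P, w):
    """MMFD objective: the minimum distance over all pairs of facilities in P."""
    return min(w[x][y] for x, y in combinations(P, 2))

def g(P, w):
    """MAFD objective: the average pairwise distance, 2/(p(p-1)) times the sum."""
    p = len(P)
    return 2 * sum(w[x][y] for x, y in combinations(P, 2)) / (p * (p - 1))
```

Each of the p(p − 1)/2 unordered pairs in P is visited exactly once by combinations, matching the normalization 2/(p(p − 1)) in the definition of g.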
The objective function for MAFD has the above form because the number of edges among the nodes in P is p(p − 1)/2. Note that maximizing the average distance is equivalent to maximizing the sum of the distances. We point out that maximizing the average would sometimes produce solutions which are far from the optimum with respect to the MAX-MIN criterion, and vice versa.

The distances specified in an instance of MMFD or MAFD satisfy the triangle inequality if for any three distinct nodes vi, vj, and vk, w(vi, vj) + w(vj, vk) ≥ w(vi, vk). The set P of nodes at which an algorithm places the p facilities is called a placement. Given a placement P for an MMFD instance, the quantity f(P) defined by

f(P) = min_{x,y∈P} {w(x, y)}   (1)

is called the solution value corresponding to P. Similarly, given an MAFD instance and a placement P, the solution value g(P) corresponding to P is defined by

g(P) = [2/(p(p − 1))] Σ_{x,y∈P} w(x, y).   (2)

Both MMFD and MAFD are known to be NP-hard, even when the edge weights satisfy the triangle inequality (Wang and Kuo 1988, Hansen and Moon 1988, and Erkut 1990). Much of the work on the dispersion problem reported in the literature (see the bibliography in Erkut and Neuman 1989) falls into two categories. Papers in the first category deal with branch-and-bound algorithms and heuristics (see Kuby 1987, Erkut, Baptie and von Hohenbalken 1990, Erkut 1990, Erkut and Neuman 1989, 1990, and the references cited therein). However, except for White (1991), only experimental studies of the performance of the heuristics have been reported. White (1991) presents a heuristic for MMFD when the nodes are points in d-dimensional Euclidean space and the distance between a pair of points is their Euclidean distance. He shows that the heuristic always produces a placement whose solution value is within a factor of 3 of the optimum solution value.
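The remark above that the two criteria can pull in different directions is easy to confirm exhaustively on small instances. The sketch below is our own illustration (ties are broken by enumeration order, which Python's max makes deterministic); it uses a 5-node metric instance, the same one used later to illustrate the GMM heuristic in Section 2:

```python
from itertools import combinations

def best_placement(w, p, objective):
    """Exhaustive search over all p-node placements (fine only for tiny instances)."""
    return max(combinations(range(len(w)), p), key=objective)

# 5-node metric instance: w(v0,v1) = 3, w(v1,.) = 1 for the rest, all others 2.
w = [[0, 3, 2, 2, 2],
     [3, 0, 1, 1, 1],
     [2, 1, 0, 2, 2],
     [2, 1, 2, 0, 2],
     [2, 1, 2, 2, 0]]

min_dist = lambda P: min(w[x][y] for x, y in combinations(P, 2))
sum_dist = lambda P: sum(w[x][y] for x, y in combinations(P, 2))

P_min = best_placement(w, 3, min_dist)  # MAX-MIN optimal placement
P_avg = best_placement(w, 3, sum_dist)  # MAX-AVG optimal (sum is equivalent to average)
# Here min_dist(P_min) == 2, while the MAX-AVG-optimal placement (0, 1, 2)
# returned first by the search has min_dist == 1: optimal on average, a
# factor of 2 off under MAX-MIN.
```

So on this instance a MAX-AVG-optimal placement attains only half the MAX-MIN optimum, illustrating that the two criteria must be analyzed separately.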
In the next section, we improve that result by considering a different heuristic which guarantees a placement whose solution value is within a factor of 2 of the optimal solution value for any instance of MMFD in which the distances satisfy the triangle inequality. We also show that unless P = NP, no polynomial-time algorithm can provide a better performance guarantee.

Papers in the second category deal with restricted versions and variants of MMFD and MAFD. For example, 1- and 2-dimensional versions of MMFD were studied by Wang and Kuo. They present a polynomial algorithm for the 1-dimensional MMFD and prove that the 2-dimensional MMFD is NP-hard. We note that problems MMFD and MAFD defined above are discrete in nature because each facility must be placed at one of the given nodes. Researchers have also considered continuous versions of the dispersion problems (Church and Garfinkel 1978, Chandrasekharan and Daughety 1981, and Tamir 1991) where facilities may be placed at any point on the edges of a given network. In Chandrasekharan and Daughety, a polynomial algorithm is presented for the continuous version of MMFD on tree networks. Church and Garfinkel present a polynomial algorithm for locating one facility on an edge of a connected (but not necessarily complete) network to maximize a weighted sum of the distances from the facility to the nodes of the network. Tamir presents an improved algorithm for the same problem. In addition, he establishes the NP-hardness of the continuous versions of MMFD and MAFD and presents results concerning performance guarantees for the continuous version of MMFD. Dasarathy and White (1980) assume the nodes of the network to be points in k-dimensional space and consider the problem of finding a point within a given convex polyhedron to maximize the minimum Euclidean distance between the point and the nodes. They present polynomial algorithms for k = 2 and k = 3.
For k = 2, this problem is referred to as the largest empty circle (LEC) problem in the computational geometry literature (see subsection 6.4 of Preparata and Shamos 1985). The LEC problem can be solved in O(n log n) time, which is known to be optimal (Preparata and Shamos). A problem similar to LEC but with a different distance function is studied in Melachrinoudis and Cullinane (1986). A weighted version of the problem for k = 2 is studied in Erkut and Oncu (1991). For a discussion of other variants, we refer the reader to Erkut and Neuman (1989, 1990).

In this paper, we consider only the discrete versions of the dispersion problems. Our focus is on the analysis of heuristics for MMFD and MAFD. By a heuristic we mean a polynomial-time approximation algorithm which produces feasible, but not necessarily optimal, solutions. Heuristics are commonly classified as absolute or relative depending on the types of performance guarantees that can be established for them (Horowitz and Sahni 1984). An absolute approximation algorithm guarantees a solution that is within an additive constant of the optimal value for every instance of the problem. A relative approximation algorithm guarantees a solution that is within a multiplicative constant of the optimal value for every instance of the problem. It is easy to show, using the technique presented in Garey and Johnson (pp. 138-139), that there are no absolute approximation algorithms for MAFD or for MMFD, unless P = NP. So we restrict our attention to the study of relative approximation algorithms.

2. NEAR-OPTIMAL SOLUTIONS TO MAX-MIN FACILITY DISPERSION

We first consider MMFD without requiring the distances to satisfy the triangle inequality and prove a negative result concerning relative approximation algorithms.

Theorem 1. If the distances are not required to satisfy the triangle inequality, then there is no polynomial-time relative approximation algorithm for MMFD unless P = NP.

Proof.
Suppose that A is a polynomial-time approximation algorithm which provides a performance guarantee of K ≥ 1 for MMFD. We show that A can be used to devise a polynomial-time algorithm for a problem which is known to be NP-complete. This contradicts the assumption that P ≠ NP and, hence, will establish the theorem. The known NP-complete problem used here is CLIQUE, whose definition is as follows (Garey and Johnson).

Problem CLIQUE
Instance: An undirected graph G(N, E) and a positive integer J ≤ |N|.
Question: Does G contain a clique of size ≥ J (i.e., is there a subset N' ⊆ N such that |N'| ≥ J and every pair of vertices in N' is joined by an edge in E)?

Consider an arbitrary instance of CLIQUE defined by a graph G(N, E) and integer J. Let n = |N| and N = {x1, x2, ..., xn}. We construct an instance of MMFD (without triangle inequality) as follows. The node set V = {v1, v2, ..., vn} of the MMFD instance is in one-to-one correspondence with N. The number of facilities p is set equal to J, and the distances are defined as follows. Let w(vi, vj) = K + 1 if {xi, xj} is in E; otherwise, let w(vi, vj) = 1. Clearly, this construction can be carried out in polynomial time. We will show that for the resulting MMFD instance, the solution value of a placement produced by A is greater than 1 iff G has a clique of size J.

Suppose that G has a clique of size J. Let {xi1, xi2, ..., xiJ} denote the vertices of the clique. Consider the placement P = {vi1, vi2, ..., viJ}. By our definition of the distances, the weight of every edge in P is equal to K + 1. Therefore, the solution value of the placement P is also K + 1. Since A provides a performance guarantee of K, the solution value of the placement returned by A is at least (K + 1)/K, which is greater than 1. Now suppose that G does not have a clique of size J.
In this case, notice that no matter which subset of p = J nodes is chosen as the placement, there will always be at least one pair of nodes vi and vj with w(vi, vj) = 1. Therefore, the solution value corresponding to any placement is at most 1. In particular, the solution value of a placement produced by A is also at most 1. Thus, by merely comparing the solution value of the placement produced by A with 1, we can solve an arbitrary instance of the CLIQUE problem. This completes the proof.

Even though Theorem 1 provides a strong negative result, it is not applicable in many practical situations because distances often satisfy the triangle inequality. Therefore, it is of interest whether there is an efficient relative approximation algorithm for MMFD when the distances satisfy the triangle inequality (MMFD-TI). The remaining theorems in this section precisely characterize the performance guarantees obtainable for MMFD-TI.

A greedy heuristic (which we call GMM) for MMFD-TI is shown in Figure 1. This heuristic is essentially the same as the "furthest point outside the neighborhood" heuristic, described in Steuer (Chapter 11), using a different format. An experimental study of the performance of this heuristic is carried out in Erkut and Neuman (1990). In describing this heuristic, we use P to denote the set of nodes at which GMM places the p facilities. The heuristic begins by initializing P to contain a pair of nodes in V which are joined by an edge of maximum weight. Subsequently, each iteration of GMM chooses a node v from V − P such that the minimum distance from v to a node in P is the largest among all the nodes in V − P. In each step, ties are broken arbitrarily.

Step 1. Let vi and vj be the endpoints of an edge of maximum weight.
Step 2. P ← {vi, vj}.
Step 3. while (|P| < p) do
        begin
          a. Find a node v ∈ V − P such that min_{v'∈P} {w(v, v')} is maximum among the nodes in V − P.
          b. P ← P ∪ {v}.
        end
Step 4. Output P.

Figure 1. Details of heuristic GMM.
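The pseudocode of Figure 1 translates almost line for line into code. The sketch below is our own transcription: the distance-matrix encoding and the deterministic first-maximum tie-breaking are our choices, whereas the heuristic as stated allows arbitrary tie-breaking.

```python
from itertools import combinations

def gmm(w, p):
    """Greedy MAX-MIN heuristic (Figure 1).

    Steps 1-2: initialize P with the endpoints of a maximum-weight edge.
    Step 3: repeatedly add the node of V - P whose minimum distance to
    the current placement P is largest, until |P| = p.
    """
    vi, vj = max(combinations(range(len(w)), 2), key=lambda e: w[e[0]][e[1]])
    P = [vi, vj]
    while len(P) < p:
        candidates = [u for u in range(len(w)) if u not in P]
        P.append(max(candidates, key=lambda u: min(w[u][x] for x in P)))
    return P
```

On the five-node worked example presented next, this transcription starts from the unique maximum-weight edge and ends with a placement of solution value 1 against an optimum of 2, matching the factor-2 behavior discussed below.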
Heuristic GMM terminates when |P| = p. The solution value of the placement P produced by GMM is equal to min_{x,y∈P} {w(x, y)}.

We now present an example to illustrate the GMM heuristic. Consider an MMFD-TI instance with five nodes (denoted by v1, v2, v3, v4, and v5) and let p = 3. The edge weights, which satisfy the triangle inequality, are: w(v1, v2) = 3, w(v2, v3) = w(v2, v4) = w(v2, v5) = 1, and all other edges are of weight 2. To begin, GMM will place two of the facilities at v1 and v2 because w(v1, v2) = 3 is the maximum edge weight. Now, no matter where the third facility is placed, the solution value of the placement is 1 because each of the remaining nodes (v3, v4, and v5) has an edge of weight 1 to v2. However, an optimal placement consists of the three nodes v3, v4, and v5 and has a solution value of 2. Thus, for this example, the solution value produced by GMM differs from the optimal value by a factor of 2. Our next theorem shows that the performance of GMM is never worse. (This result was obtained independently by White 1992.) Moreover, we will also show (Theorem 3) that unless P = NP, no polynomial-time heuristic can provide a better performance guarantee.

Theorem 2. Let I be an instance of MMFD-TI. Let OPT(I) and GMM(I) denote, respectively, the solution values of an optimal placement and that produced by GMM for the instance I. Then OPT(I)/GMM(I) ≤ 2.

Proof. Consider the set-valued variable P in the description of GMM. Let f(P) = min_{x,y∈P} {w(x, y)}. We will show by induction that the condition

f(P) ≥ OPT(I)/2   (3)

holds after each addition to P. Since GMM(I) = f(P) after the last addition to P, the theorem then follows.

Since the first addition inserts two nodes joined by an edge of the largest weight into P, (3) clearly holds after the first addition. So, assume that the condition holds after k additions to P for some k ≥ 1. We will prove that the condition holds after the (k + 1)st addition to P as well. To that end, let P* = {v1*, v2*, ..., vp*} denote an optimal placement. For convenience, we use l* for OPT(I). The following observation is an immediate consequence of the definition of the solution value corresponding to a placement for an MMFD instance.

Observation 1. For every pair vi*, vj* of distinct nodes in P*, w(vi*, vj*) ≥ l*.

Let Pk = {x1, x2, ..., xk+1} denote the set P after k additions. (Note that |Pk| = k + 1, because the first addition inserts two nodes into P.) Since GMM adds at least one more node to P, the following is a trivial observation.

Observation 2. |Pk| = k + 1 < p.

For each vi* ∈ P* (1 ≤ i ≤ p), define Si* = {u ∈ V | w(vi*, u) < l*/2}. That is, Si* is the set of all nodes whose distances from vi* are less than l*/2. The following claim provides two useful properties of these sets.

Claim 1
a. For 1 ≤ i ≤ p, Si* is nonempty.
b. For i ≠ j, Si* and Sj* are disjoint.

Proof of Claim 1
Part a: This is obvious, because vi* ∈ Si* for 1 ≤ i ≤ p.
Part b: Suppose that Si* ∩ Sj* ≠ ∅ for some i ≠ j. Let u ∈ Si* ∩ Sj*. Thus, w(vi*, u) < l*/2 and w(vj*, u) < l*/2. By Observation 1, w(vi*, vj*) ≥ l*. These three inequalities together imply that the triangle inequality does not hold for the three nodes u, vi*, and vj*. Claim 1b follows.

We now continue with the main proof. Since Pk has fewer than p nodes (Observation 2) and there are p disjoint sets S1*, S2*, ..., Sp*, there must be at least one set, say Sr* (for some r, 1 ≤ r ≤ p), such that Pk ∩ Sr* = ∅. Therefore, by the definition of Sr*, we must have w(vr*, u) ≥ l*/2 for each u ∈ Pk. Since vr* is available for selection by GMM, and GMM selects a node v ∈ V − Pk for which min_{v'∈Pk} w(v, v') is a maximum among the nodes in V − Pk, it follows that (3) holds even after the (k + 1)st addition to P. This completes the proof of Theorem 2.

Our next theorem shows that if P ≠ NP, GMM provides the best possible performance guarantee obtainable in polynomial time for MMFD-TI.

Theorem 3. If P ≠ NP, no polynomial-time relative approximation algorithm can provide a performance guarantee of (2 − ε) for any ε > 0 for MMFD-TI.

Proof.
We use a construction similar to that presented in the proof of Theorem 1, except that the edge weights are chosen as follows. Let w(vi, vj) = 2 − ε/2 if {xi, xj} is in E; otherwise, let w(vi, vj) = 1. Using the fact that 0 < ε < 1, it is easy to verify that the resulting distances satisfy the triangle inequality. The proof that the solution value of a placement produced by A for MMFD-TI is greater than 1 iff G has a clique of size J is virtually the same as that of Theorem 1.

3. NEAR-OPTIMAL SOLUTIONS TO MAX-AVG FACILITY DISPERSION

In this section, we discuss a relative approximation algorithm for MAFD under the triangle inequality assumption (MAFD-TI). This heuristic, which we call GMA, is shown in Figure 2. It is identical to the GMM heuristic of Figure 1, except that in Step 3a, we choose a node v ∈ V − P for which Σ_{v'∈P} w(v, v') is maximum among all the nodes in V − P. Note that the solution value of the placement P produced by GMA is equal to [2/(p(p − 1))] Σ_{x,y∈P} w(x, y). An experimental study of this heuristic is carried out in Erkut and Neuman (1990). We focus on determining the performance guarantee provided by this heuristic. Our next theorem shows that GMA is indeed a relative approximation algorithm for MAFD-TI. Before presenting that theorem, we introduce some notation which will also be used in Section 4.

Let A and B be disjoint subsets of V. Define W(A) = Σ_{x,y∈A} w(x, y) and W(A, B) = Σ_{x∈A,y∈B} w(x, y). (Note that W(A, B) = W(B, A).) Also, for x ∈ A, let W(x, B) = Σ_{y∈B} w(x, y).

Step 1. Let vi and vj be the endpoints of an edge of maximum weight.
Step 2. P ← {vi, vj}.
Step 3. while (|P| < p) do
        begin
          a. Find a node v ∈ V − P such that Σ_{v'∈P} w(v, v') is maximum among the nodes in V − P.
          b. P ← P ∪ {v}.
        end
Step 4. Output P.

Figure 2. Details of heuristic GMA.

We now give two lemmas which are used several times in the proof of the performance bound for the GMA heuristic. The first is an immediate consequence of the pigeonhole principle (Roberts 1984).

Lemma 1. Let A and B be nonempty and disjoint subsets of V.
Then there is a node x ∈ A such that W(x, B) ≥ W(A, B)/|A|.

Lemma 2. Given an instance of MAFD-TI, let A and B be nonempty and disjoint subsets of V with |B| ≥ 2. Then W(A, B) ≥ |A| W(B)/(|B| − 1).

Proof. Let v be an arbitrary node in A. Let |B| = t and B = {b1, b2, ..., bt}. Since t = |B| ≥ 2, there is at least one edge in B. For each edge {bi, bj} in B (i ≠ j), we have by the triangle inequality,

w(v, bi) + w(v, bj) ≥ w(bi, bj),  1 ≤ i < j ≤ t.   (4)

If we sum the inequalities shown in (4), the left-hand side of the sum is (t − 1) W(v, B) because each edge weight w(v, bi) (1 ≤ i ≤ t) appears exactly (t − 1) times in the sum. The right-hand side of the sum is simply W(B). Therefore, we get (t − 1) W(v, B) ≥ W(B), or

W(v, B) ≥ W(B)/(t − 1) = W(B)/(|B| − 1).   (5)

Since inequality (5) holds for each v ∈ A, we get W(A, B) ≥ |A| W(B)/(|B| − 1).

Theorem 4. Let I be an instance of MAFD-TI. Let OPT(I) and GMA(I) denote, respectively, the solution values of an optimal placement and that produced by GMA for the instance I. Then OPT(I)/GMA(I) ≤ 4.

Proof. We show by induction that after each addition, the average weight of an edge in P is at least OPT(I)/4. The statement is clearly true after the first addition (which brings two nodes into P) because an edge of maximum weight is added to P. So, assume that p ≥ 3 and the statement holds after k additions for some k ≥ 1. We will prove that the statement holds after the (k + 1)st addition as well. For convenience, we use l* for OPT(I). Let Pk denote the set P after k additions. We have the following two-part observation. The first part is due to the fact that GMA adds at least one more node to P. The second is an immediate consequence of the inductive hypothesis.

Observation 3
a. |Pk| = k + 1 < p.
b. W(Pk) ≥ [k(k + 1)/2] l*/4.

To prove the inductive hypothesis for Pk+1, it suffices to show that there is a node x ∈ V − Pk such that W(x, Pk) ≥ (k + 1) l*/4, because this condition in conjunction with Observation 3b implies that the average edge weight is at least l*/4 after the (k + 1)st addition to P as well.
We now state this condition formally as a claim and present its proof.

Claim 2. There is a node x ∈ V − Pk such that W(x, Pk) ≥ (k + 1) l*/4.

Proof of Claim 2. Let P* denote the set of p nodes in an optimal placement. By the definition of l*, we have

W(P*) = [p(p − 1)/2] l*.   (6)

We have two cases to consider, depending upon whether or not Pk and P* are disjoint.

Case 1. (Pk and P* are disjoint.) We apply Lemma 2 with Pk as the set A and P* as the set B. (We can do so because Pk and P* are disjoint and |P*| = p ≥ 3.) We get

W(Pk, P*) ≥ |Pk| W(P*)/(p − 1) = (k + 1) p l*/2 (using 6).   (7)

Since W(Pk, P*) = W(P*, Pk), we have W(P*, Pk) ≥ (k + 1) p l*/2. Now, by Lemma 1, there must be a node x ∈ P* such that W(x, Pk) ≥ W(P*, Pk)/p ≥ (k + 1) l*/2 (using 7). Thus, the node x satisfies a condition which is even stronger than that required by Claim 2.

Case 2. (Pk and P* are not disjoint.) Let Y = P* ∩ Pk and let X = P* − Y. Since Y is nonempty, we must have

|X| ≤ p − 1.   (8)

Since X and Y are disjoint and P* = X ∪ Y, we have

W(P*) = [p(p − 1)/2] l* = W(X) + W(Y) + W(X, Y).   (9)

We have two subcases depending on |X|.

Case 2a. (|X| ≤ 1.) Note that X ≠ ∅; otherwise, Y = P* = Pk and so |Pk| = p, contradicting Observation 3a. Therefore, in this subcase, we need to consider only the possibility that |X| = 1. Then Pk = Y, and so Pk consists of p − 1 nodes from P*. Let x be the node in X. Since GMA selects a node v ∈ V − Pk for which Σ_{v'∈Pk} w(v, v') is a maximum among the nodes in V − Pk, and x is available for selection, it follows that GMA will produce an optimal solution in this subcase.

Case 2b. (|X| > 1 (i.e., |X| ≥ 2).) Here, from (9), we can conclude that either W(X) ≥ W(P*)/2 or W(Y) + W(X, Y) ≥ W(P*)/2. We consider these two possibilities separately.

Case 2b.i. (W(X) ≥ W(P*)/2.) We use Lemma 2 with Pk as set A and X as set B (we can do so because |X| ≥ 2) to get

W(X, Pk) = W(Pk, X) ≥ (k + 1) W(X)/(|X| − 1) ≥ [(k + 1)/(|X| − 1)] [p(p − 1)/4] l* (using 6).   (10)

From Lemma 1, there is a node x ∈ X such that W(x, Pk) ≥ W(X, Pk)/|X|. Combining this observation with (10), we get

W(x, Pk) ≥ [p(p − 1)/(|X|(|X| − 1))] (k + 1) l*/4.

Since p − 1 ≥ |X| (from 8), the quantity p(p − 1)/(|X|(|X| − 1)) is greater than 1. Therefore, we get W(x, Pk) ≥ (k + 1) l*/4, as required.

Case 2b.ii. (W(Y) + W(X, Y) ≥ W(P*)/2.) First note that if |Y| = 1, then W(Y) = 0 and so W(X, Y) ≥ W(P*)/2. Since Y ⊆ Pk and the edge weights are nonnegative, we must have W(X, Pk) ≥ W(X, Y). That is, W(X, Pk) ≥ W(P*)/2 = p(p − 1) l*/4, and by Lemma 1, there must be a node x ∈ X such that

W(x, Pk) ≥ W(X, Pk)/|X| ≥ [p(p − 1)/|X|] l*/4 ≥ p l*/4 (from 8) ≥ (k + 1) l*/4 (from Observation 3a),

and so Claim 2 holds when |Y| = 1. Therefore, for the remainder of this proof, we assume that |Y| ≥ 2.

Since in this subcase we are assuming that W(Y) + W(X, Y) ≥ W(P*)/2, and as observed above, W(X, Y) ≤ W(X, Pk), we have

W(Y) ≥ W(P*)/2 − W(X, Pk).   (11)

Let Q = Pk − Y. Note that Q and Y are disjoint, and so |Q| = |Pk| − |Y| = (k + 1) − |Y|. We apply Lemma 2 with Q as the set A and Y as the set B (recall that |Y| ≥ 2). We get

W(Q, Y) ≥ (k + 1 − |Y|) W(Y)/(|Y| − 1).   (12)

Since Pk = Q ∪ Y and the sets Q and Y are disjoint, we also have

W(Pk) ≥ W(Y) + W(Q, Y) ≥ W(Y) + (k + 1 − |Y|) W(Y)/(|Y| − 1) = k W(Y)/(|Y| − 1) (from 12).   (13)

We now apply Lemma 2 with X as the set A and Pk as the set B. We get

W(X, Pk) ≥ |X| W(Pk)/(|Pk| − 1) = |X| W(Pk)/k (from Observation 3a) ≥ |X| W(Y)/(|Y| − 1) (from 13) ≥ |X| [W(P*)/2 − W(X, Pk)]/(|Y| − 1) (from 11).   (14)

Rearranging (14), we get

W(X, Pk) [1 + |X|/(|Y| − 1)] ≥ [|X|/(|Y| − 1)] W(P*)/2.

Noting that |X| + |Y| = |P*| = p, substituting for W(P*) from (6), and eliminating the common denominator |Y| − 1, we get W(X, Pk)(p − 1) ≥ |X| p(p − 1) l*/4. Therefore, W(X, Pk) ≥ |X| p l*/4. From this inequality and Lemma 1, we conclude that there must be a node x ∈ X such that W(x, Pk) ≥ p l*/4 ≥ (k + 1) l*/4 (from Observation 3a). This completes the proof of Claim 2 and also that of Theorem 4.

Theorem 4 shows that the performance guarantee provided by GMA is no worse than 4. However, the guarantee may well be less. The following result shows that the guarantee cannot be less than 2.

Theorem 5. For any instance I of MAFD-TI, let OPT(I) and GMA(I) denote, respectively, the solution values of an optimal placement and that produced by GMA for the instance I. For any ε > 0, there is an instance Iε for which OPT(Iε)/GMA(Iε) > 2 − ε.

Proof. Consider the MAFD instance I described below. This instance has a total of 2p nodes, and p facilities are to be located. For convenience in description, we partition the set of 2p nodes into three sets called X, Y, and Z. The set X has p nodes (denoted by x1, x2, ..., xp), the set Y has p − 2 nodes (denoted by y1, y2, ..., yp−2), and Z has two nodes (denoted by z1 and z2). The edge weights are chosen as follows. For any distinct nodes xi and xj ∈ X, w(xi, xj) = 2. Also w(z1, z2) = 2. All other edge weights are 1. It is straightforward to verify that the distances satisfy the triangle inequality. The set X is an optimal placement, and its solution value (OPT(I)) is 2 (because every edge in X has a weight of 2). We can force GMA to place the first two facilities at z1 and z2 because w(z1, z2) = 2 is a maximum edge weight. It is easy to verify that in the subsequent (p − 2) steps, GMA can be forced to choose all the (p − 2) nodes from the set Y. Thus, we force GMA to return Z ∪ Y as the placement. The solution value (GMA(I)) corresponding to this placement is given by

GMA(I) = [2/(p(p − 1))] [(p(p − 1)/2 − 1) + 2],

because in the placement, one edge (namely, that between z1 and z2) is of weight 2 and each remaining edge is of weight 1. A bit of simplification shows that GMA(I) = 1 + 2/(p(p − 1)). The ratio OPT(I)/GMA(I) is given by

OPT(I)/GMA(I) = 2/(1 + 2/(p(p − 1))).

Clearly, the ratio can be made arbitrarily close to 2 by choosing p to be sufficiently large.

4. DISPERSION PROBLEMS IN ONE AND TWO DIMENSIONS

The 1-dimensional dispersion problems are restricted versions of MMFD and MAFD, where the node set V consists of a set of n points (denoted by x1, x2, ..., xn) on a line. Thus w(xi, xj) = |xi − xj|. We denote these problems by 1D-MMFD and 1D-MAFD, respectively. Similarly, in the case of the 2-dimensional dispersion problems (denoted by 2D-MMFD and 2D-MAFD, respectively), the node set V is a set of n points in ℝ², and the distance between a pair of points is the Euclidean distance. It is known that 1D-MMFD can be solved in polynomial time using a dynamic programming approach and that 2D-MMFD is NP-hard (Wang and Kuo). Accordingly, we consider 1D-MAFD and 2D-MAFD in this section.

4.1. A Polynomial-Time Algorithm for 1D-MAFD

Our polynomial-time algorithm for 1D-MAFD is also based on a dynamic programming approach. It runs in O(max{n log n, pn}) time. In the development of the dynamic programming formulation for this problem, we use the notation introduced in Section 3. In studying the formulation, the reader should bear in mind that maximizing the average distance is equivalent to maximizing the sum of the distances.

We begin by sorting the points into increasing order. Let V = {x1, x2, ..., xn} denote the points in sorted order. Consider a point xj and an integer k ≤ min(j, p). For each such combination of xj and k, the dynamic programming algorithm considers choosing k points from {x1, x2, ..., xj} to maximize a certain quantity described below. Let C = A ∪ B be a set of p points consisting of a subset A ⊆ {x1, ..., xj} such that |A| = k, and a subset B ⊆ {xj+1, ..., xn} such that |B| = p − k. For each pair of points xu in A and xv in B, we have

w(xu, xv) = w(xj, xu) + w(xj, xv).   (15)

If we sum (15) over the points in B, we get

W(xu, B) = (p − k) w(xj, xu) + W(xj, B).   (16)

If we sum (16) over the points in A, we get

W(A, B) = (p − k) W(xj, A) + k W(xj, B).   (17)

Hence,

W(C) = W(A ∪ B) = W(A) + W(B) + W(A, B) = W(A) + (p − k) W(xj, A) + W(B) + k W(xj, B).
The optimization goal of 1D-MAFD is to maximize W(C). Suppose that for a given k, we want to choose a set C maximizing W(C), subject to the constraint that A contains k points and B contains p − k points. Equation (18) shows that under this constraint, the choice of B has no effect on the choice of A. For a given subset A ⊆ {x1, ..., xj} such that |A| = k, define

    f_{k,j}(A) = W(A) + (p − k)W(xj, A).  (19)

Let OPT_{k,j} be a k-element subset of {x1, ..., xj} that maximizes f_{k,j}. Note that OPT_{p,n} is the solution to the 1D-MAFD problem instance. The dynamic programming algorithm computes all the values f_{k,j}(OPT_{k,j}) and then uses these values to find an optimal solution to the 1D-MAFD instance. The algorithm finds the values f_{k,j}(OPT_{k,j}) by considering increasing values of j. For a given value of j, the algorithm considers increasing values of k.

Now consider how to compute f_{k,j}(OPT_{k,j}). Note that OPT_{k,j} either includes the point xj or excludes it. If xj ∉ OPT_{k,j}, then OPT_{k,j} must be OPT_{k,j−1}, for which we have, using (19),

    f_{k,j}(OPT_{k,j−1}) = W(OPT_{k,j−1}) + (p − k)W(xj, OPT_{k,j−1})
                         = W(OPT_{k,j−1}) + (p − k)W(x_{j−1}, OPT_{k,j−1}) + k(p − k)(xj − x_{j−1})
                         = f_{k,j−1}(OPT_{k,j−1}) + k(p − k)(xj − x_{j−1}).  (20)

If xj ∈ OPT_{k,j}, then OPT_{k,j} must be OPT_{k−1,j−1} ∪ {xj}, and again using (19), we have

    f_{k,j}(OPT_{k−1,j−1} ∪ {xj})
        = W(OPT_{k−1,j−1}) + (p − k)[W(x_{j−1}, OPT_{k−1,j−1}) + (k − 1)(xj − x_{j−1})]
          + W(x_{j−1}, OPT_{k−1,j−1}) + (k − 1)(xj − x_{j−1})
        = W(OPT_{k−1,j−1}) + (p − k + 1)W(x_{j−1}, OPT_{k−1,j−1}) + (k − 1)(p − k + 1)(xj − x_{j−1})
        = f_{k−1,j−1}(OPT_{k−1,j−1}) + (k − 1)(p − k + 1)(xj − x_{j−1}).  (21)

Equations (20) and (21) show that, given the sets OPT_{k,j−1} and OPT_{k−1,j−1} and the values f_{k,j−1}(OPT_{k,j−1}) and f_{k−1,j−1}(OPT_{k−1,j−1}), we can compute OPT_{k,j} and f_{k,j}(OPT_{k,j}). To complete the dynamic programming formulation, we note that the boundary conditions are OPT_{0,j} = ∅ and f_{0,j}(OPT_{0,j}) = 0 (1 ≤ j ≤ n), OPT_{1,1} = {x1}, and f_{1,1}(OPT_{1,1}) = 0.

The details of the algorithm are shown in Figure 3. The algorithm first computes (Step 4) the entries of the array F (which corresponds to the function f in the above dynamic programming formulation) and then uses these entries to construct an optimal placement P (Step 5). This implementation obviates the need for the sets OPT_{k,j} used in the formulation. The running time of the algorithm is O(n log n) for sorting plus O(pn) to carry out the dynamic programming (there are O(pn) entries to compute, and each entry can be computed in constant time). Thus, the overall running time is O(max{n log n, pn}). The above discussion is summarized in the following theorem, which will be used in the next subsection.

Theorem 6. An optimal solution to any instance of 1D-MAFD given by a set V of n points and an integer p < n can be obtained in O(max{n log n, pn}) time.

4.2. A Heuristic for 2D-MAFD

It is open whether 2D-MAFD is NP-hard. Note that GMA (Section 3) provides a performance guarantee of 4 for 2D-MAFD. Here, we first present a heuristic for 2D-MAFD which provides a performance guarantee of 4(√2 − 1) ≈ 1.657 and then show how this heuristic can be modified to obtain another heuristic which provides an asymptotic performance guarantee of π/2 ≈ 1.571. These heuristics use our polynomial-time algorithm for 1D-MAFD.

We assume that an instance of 2D-MAFD is given by a set V = {v1, v2, ..., vn} of n points (where each point vi is specified by a pair of coordinates (xi, yi)) and an integer p < n. The steps of this heuristic (called PROJECT_4) are shown in Figure 4. The performance guarantee provided by PROJECT_4 is indicated in the following theorem.

Theorem 7. Let I be an instance of 2D-MAFD. Let OPT(I) and PROJECT_4(I) denote, respectively, the solution values of an optimal placement and that produced by PROJECT_4 for the instance I. Then OPT(I)/PROJECT_4(I) ≤ 4(√2 − 1).

Proof. Recall that maximizing the average distance is equivalent to maximizing the sum of the distances, so we will present the proof in terms of the sum of the distances between pairs of points. Let P* be an optimal placement. For convenience, we use the term edge to refer to the line segment between a pair of (distinct) points in P*.
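The dynamic program of Section 4.1 and the heuristic PROJECT_4 can be sketched in code as follows. This is our own illustrative implementation, not the authors'; for brevity it returns objective values (sums of pairwise distances) rather than the placements themselves, omitting the backtracking of Step 5 of Figure 3:

```python
import math

def max_avg_1d(xs, p):
    """1D-MAFD via recurrences (20) and (21): choose p of the points xs
    so as to maximize the sum of pairwise distances; returns that sum."""
    xs = sorted(xs)
    n = len(xs)
    # F[k][j] = f_{k,j}(OPT_{k,j}); -inf marks infeasible entries (k > j).
    F = [[float("-inf")] * (n + 1) for _ in range(p + 1)]
    for j in range(n + 1):
        F[0][j] = 0.0
    F[1][1] = 0.0
    for j in range(2, n + 1):
        gap = xs[j - 1] - xs[j - 2]                           # x_j - x_{j-1}
        for k in range(1, min(p, j) + 1):
            t1 = F[k][j - 1] + k * (p - k) * gap                # exclude x_j, eq. (20)
            t2 = F[k - 1][j - 1] + (k - 1) * (p - k + 1) * gap  # include x_j, eq. (21)
            F[k][j] = max(t1, t2)
    return F[p][n]

def project_4(points, p):
    """PROJECT_4 (Figure 4): project the points on the X, V, Y, and W axes,
    solve 1D-MAFD on each axis, and keep the best of the four values.
    (The paper's heuristic returns the corresponding placement, whose true
    2D objective value is at least the projected value returned here.)"""
    axes = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (-1.0, 1.0)]  # directions of X, V, Y, W
    best = float("-inf")
    for ax, ay in axes:
        norm = math.hypot(ax, ay)
        proj = [(x * ax + y * ay) / norm for x, y in points]
        best = max(best, max_avg_1d(proj, p))
    return best
```

For example, for the four corners of the unit square with p = 2, each diagonal axis keeps a pair at projected distance √2, matching the optimal 2D value.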
For an edge e ∈ P*, let l(e) denote its length (i.e., the Euclidean distance between the end points of e). Note that OPT(I) = Σ_{e∈P*} l(e). Given an edge e, let lx(e), ly(e), lv(e), and lw(e) denote the magnitudes of the projections of e on the X, Y, V, and W axes, respectively. We have the following claim.

Claim 3. lx(e) + ly(e) + lv(e) + lw(e) ≥ (1 + √2) l(e).

Proof of Claim 3. Since we are considering the sum of the projections, assume without loss of generality that the angle θ of e with respect to the X axis is between 0° and 45°, as shown in Figure 5. From that figure, we have lx(e) = l(e) cos θ, ly(e) = l(e) sin θ, lv(e) = l(e) cos(45° − θ), and lw(e) = l(e) sin(45° − θ).

(*-- In the following, array F represents the function f in the formulation. --*)
Step 1. Sort the given points, and let {x1, x2, ..., xn} denote the points in increasing order.
Step 2. for j := 1 to n do F[0, j] := 0;
Step 3. F[1, 1] := 0.
Step 4. (*-- Compute the value of an optimal placement. --*)
  for j := 2 to n do
    for k := 1 to min(p, j) do
      begin
        t1 := F[k, j−1] + k(p − k)(xj − x_{j−1});
        t2 := F[k−1, j−1] + (k − 1)(p − k + 1)(xj − x_{j−1});
        (*-- F[k, j−1] is taken to be −∞ when k > j − 1. --*)
        if t1 > t2
          then (*-- do not include xj --*) F[k, j] := t1
          else (*-- include xj --*) F[k, j] := t2;
      end;
Step 5. (*-- Construct an optimal placement P. --*)
  P := {x1}; k := p; j := n;
  while k > 1 do
    begin
      if F[k, j] = F[k−1, j−1] + (k − 1)(p − k + 1)(xj − x_{j−1})
        then (*-- xj is to be included in the optimal placement --*)
          begin P := P ∪ {xj}; k := k − 1; end;
      j := j − 1;
    end;
Step 6. Output P.

Figure 3. Details of the algorithm for 1D-MAFD.

Step 1. Obtain the projections of the given set V of points on each of the four axes defined by the equations y = 0 (X axis), y = x (V axis), x = 0 (Y axis), and y = −x (W axis).
Step 2. Find optimal solutions to each of the four resulting instances of 1D-MAFD.
Step 3. Return the placement corresponding to the best of the four solutions found in Step 2.

Figure 4. Details of heuristic PROJECT_4.

[Figure 5, showing an edge e at angle θ to the X axis together with the axes X, Y, V, and W, appears here.]

Figure 5. Proof of Claim 3.

Let s(e) = lx(e) + ly(e) + lv(e) + lw(e).
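Before carrying out the trigonometric simplification, the bound of Claim 3 can be previewed numerically by sweeping the angle for a unit-length edge. This is our own sketch (with l(e) = 1 and θ in radians), not part of the paper:

```python
import math

def s(theta):
    """Sum of the projection magnitudes lx + ly + lv + lw for l(e) = 1."""
    return (math.cos(theta) + math.sin(theta)
            + math.cos(math.pi / 4 - theta) + math.sin(math.pi / 4 - theta))

# Sweep theta over [0 deg, 45 deg]; the minimum is attained at the endpoints.
values = [s(math.radians(d / 10.0)) for d in range(451)]
assert min(values) >= 1 + math.sqrt(2) - 1e-9
```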
Using well-known trigonometric identities and the fact that sin 45° = cos 45° = 1/√2, it is not difficult to verify that

    s(e) = l(e)[sin θ + (1 + √2) cos θ].  (22)

From (22), it is easy to verify that the minimum value of s(e) occurs when θ = 0° or θ = 45°, and that this minimum value is (1 + √2) l(e). This completes the proof of Claim 3.

We now continue with the main proof. Note that Claim 3 holds for each edge in P*. So if we sum the result of Claim 3 over all the edges in P*, we get

    Σ_{e∈P*} [lx(e) + ly(e) + lv(e) + lw(e)] ≥ (1 + √2) OPT(I).  (23)

If we let Sx = Σ_{e∈P*} lx(e), and use analogous definitions for Sy, Sv, and Sw, we get from (23)

    Sx + Sy + Sv + Sw ≥ (1 + √2) OPT(I).  (24)

From the last inequality, it follows that

    max{Sx, Sy, Sv, Sw} ≥ [(1 + √2)/4] OPT(I).  (25)

Since PROJECT_4 chooses the best placement from optimal solutions for the four 1D-MAFD instances, it follows that PROJECT_4(I) ≥ max{Sx, Sy, Sv, Sw}. We thus have from inequality (25)

    OPT(I)/PROJECT_4(I) ≤ 4/(1 + √2) = 4(√2 − 1).

The performance guarantee provided by PROJECT_4 is approximately 1.657. By projecting the given set of points on more axes, it is possible to achieve an asymptotic performance guarantee of π/2 ≈ 1.571, as shown below.

Theorem 8. Let I be an instance of 2D-MAFD, and let OPT(I) denote the solution value of an optimal placement for I. Then, for any fixed ε > 0, there is a polynomial-time approximation algorithm P_ε which produces a placement with solution value P_ε(I) such that OPT(I)/P_ε(I) ≤ π/2 + ε.

Proof. Given ε > 0, let k be the smallest even positive integer satisfying the condition

    k tan(π/2k) ≤ π/2 + ε.  (26)

Such a k exists because lim_{k→∞} k tan(π/2k) = π/2, and this limit is approached from above. Algorithm P_ε first projects the given set of points on k axes (denoted by X1, X2, ..., Xk) such that the angle between any pair of successive axes is π/k. It then solves the 1D-MAFD problem on each of the k axes and outputs the best of the placements found. In view of (26), Theorem 8 would follow by proving that OPT(I)/P_ε(I) is bounded by k tan(π/2k).

The proof of this bound is very similar to that of Theorem 7. Let P* denote an optimal placement, and consider an edge e of length l(e) in P*. Without loss of generality, let θ (0 ≤ θ ≤ π/k) be the angle of e with respect to the X1 axis. Let li(e) denote the magnitude of the projection of e on the Xi axis (1 ≤ i ≤ k), and let s(e) = Σ_{i=1}^{k} li(e). It is easy to verify that

    s(e) = l(e)[Σ_{i=0}^{k/2} cos(iπ/k − θ) + Σ_{i=1}^{k/2−1} cos(iπ/k + θ)].  (27)

Tedious, but straightforward, calculations show that

    s(e) = l(e)[sin θ + sin(π/k − θ)] / [2 sin²(π/2k)].  (28)

From (28), it is easy to verify that the minimum value of s(e) occurs when θ = 0 or θ = π/k, and that this minimum value is l(e)/tan(π/2k). Thus, s(e) ≥ l(e)/tan(π/2k). The remainder of the proof, showing that OPT(I)/P_ε(I) ≤ k tan(π/2k), is virtually the same as that of Theorem 7 (keeping in mind that the number of axes is k instead of 4).

We note that Theorem 8 generalizes the bound of Theorem 7 in that choosing ε = 4(√2 − 1) − π/2 (≈ 0.0861) leads to projection on k = 4 axes. Also, to achieve a performance guarantee of 1.571 (an approximate value of π/2), 80 projections are needed; in general, the number of projections goes to ∞ as ε approaches 0.

5. CONCLUSIONS

The results of this paper, prior results, and open problems are summarized in two tables. Table I shows the complexity results for solving these problems optimally, while Table II shows performance guarantee results for heuristics. In these tables, prior results are indicated through appropriate citations; our results are indicated by specifying the corresponding theorems.

ACKNOWLEDGMENT

An extended abstract containing some of the results in this paper appears in the Proceedings of the 1991 Workshop on Algorithms and Data Structures (WADS-91), Ottawa, Canada, August 1991.
The research of the first author was supported by NSF grant CCR-89-05296, and the research of the second author was supported by NSF grants CCR-88-03278 and CCR-90-06396. The research of the third author was supported by the Faculty Research Award Program of the University at Albany.

We thank Professor Binay K. Bhattacharya (Simon Fraser University, Burnaby, Canada) for bringing the paper by Wang and Kuo (1988) to our attention. We are indebted to the referees for their careful reading of our manuscript and their valuable comments. In particular, one of the referees suggested presenting the conclusions in Section 5 in the form of tables.

Table I
Complexity Results for Dispersion Problems

Problem               MAX-MIN                     MAX-AVG
General               NP-hard                     NP-hard
                      (Erkut 1990)                (Hansen and Moon 1988)
Triangle Inequality   NP-hard                     NP-hard
                      (Erkut 1990)                (Hansen and Moon 1988)
1D Version            O(max{pn, n log n})         O(max{pn, n log n})
                      (Wang and Kuo 1988)         (Theorem 6)
2D Version            NP-hard                     Open
                      (Wang and Kuo 1988)

Table II
Performance Guarantee Results for NP-hard Dispersion Problems

                      MAX-MIN                              MAX-AVG
Problem               Upper Bound        Lower Bound       Upper Bound       Lower Bound
                      (Best Known        (Intrinsic Limit, (Best Known       (Intrinsic Limit,
                      Guarantee)         Assuming P ≠ NP)  Guarantee)        Assuming P ≠ NP)
General               No guaranteed      --                Open              Open
                      ratio (Theorem 1)
Triangle Inequality   2 (Theorem 2)      2 (Theorem 3)     4 (Theorem 4)     Open
2D Version            2 (Theorem 2)      Open              π/2 asymp.        Open
                                                           (Theorem 8)

REFERENCES

CHURCH, R. L., AND R. S. GARFINKEL. 1978. Locating an Obnoxious Facility on a Network. Trans. Sci. 12, 107-118.
CHANDRASEKHARAN, R., AND A. DAUGHETY. 1981. Location on Tree Networks: p-Centre and n-Dispersion Problems. Math. Opns. Res. 6, 50-57.
DASARATHY, B., AND L. J. WHITE. 1980. A Maxmin Location Problem. Opns. Res. 28, 1385-1401.
ERKUT, E. 1990. The Discrete p-Dispersion Problem. Eur. J. Opnl. Res. 46, 48-60.
ERKUT, E., AND S. NEUMAN. 1989. Analytical Models for Locating Undesirable Facilities. Eur. J. Opnl. Res. 40, 275-291.
ERKUT, E., AND S. NEUMAN. 1990. Comparison of Four Models for Dispersing Facilities. INFOR 29, 68-85.
ERKUT, E., AND T. S. ONCU. 1991. A Parametric 1-Maximin Location Problem. J. Opnl. Res. Soc. 42, 49-55.
ERKUT, E., T. BAPTIE, AND B. VON HOHENBALKEN. 1990. The Discrete p-Maxian Location Problem. Comput. and Opns. Res. 17, 51-61.
GAREY, M. R., AND D. S. JOHNSON. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, San Francisco.
HANDLER, G. Y., AND P. B. MIRCHANDANI. 1979. Location on Networks: Theory and Algorithms. MIT Press, Cambridge, Mass.
HANSEN, P., AND I. D. MOON. 1988. Dispersing Facilities on a Network. Presentation at the TIMS/ORSA Joint National Meeting, Washington, D.C.
HOROWITZ, E., AND S. SAHNI. 1984. Fundamentals of Computer Algorithms. Computer Science Press, Rockville, Maryland.
KUBY, M. J. 1987. Programming Models for Facility Dispersion: The p-Dispersion and Maxisum Dispersion Problems. Geog. Anal. 19, 315-329.
MELACHRINOUDIS, E., AND T. P. CULLINANE. 1986. Locating an Undesirable Facility With a Minimax Criterion. Eur. J. Opnl. Res. 24, 239-246.
PREPARATA, F. P., AND M. I. SHAMOS. 1985. Computational Geometry: An Introduction. Springer-Verlag, New York.
ROBERTS, F. S. 1984. Applied Combinatorics. Prentice-Hall, Englewood Cliffs, New Jersey.
STEUER, R. E. 1986. Multiple Criteria Optimization: Theory and Application. John Wiley, New York.
TAMIR, A. 1991. Obnoxious Facility Location on Graphs. SIAM J. Disc. Math. 4, 550-567.
WHITE, D. J. 1991. The Maximal Dispersion Problem and the 'First Point Outside the Neighborhood' Heuristic. Comput. and Opns. Res. 18, 43-50.
WHITE, D. J. 1992. The Maximal Dispersion Problem. J. Applic. Math. in Bus. and Ind. (to appear).
WANG, D. W., AND Y. S. KUO. 1988. A Study of Two Geometric Location Problems. Infor. Proc. Letts. 28, 281-286.