
Parallel Techniques for Linear Programming Problems Using Multiprogramming and RSM

S. Sathya¹, R. Hema², M. Amala³

¹Asst. Professor, Department of IT, Sri Manakula Vinayagar Engineering College, Puducherry, India.
²Asst. Professor, Department of ECE, Sri Manakula Vinayagar Engineering College, Puducherry, India.
³Asst. Professor, Department of ECE, Sri Manakula Vinayagar Engineering College, Puducherry, India.

Abstract - The simplex method is perhaps the most widely used method for solving linear programming (LP) problems, and parallelizing simplex-type algorithms is one of the most challenging problems. The aim of this paper is to apply parallel techniques to matrix multiplication, that is, to speed up the execution of a program by dividing it into multiple fragments, generally called single and multiple threads, that can execute simultaneously, each on its own processor. A program executed across n processors might run n times faster than it would on a single processor. The performance of matrix multiplication is studied through an optimization-oriented comparison of three approaches: matrix multiplication using a single thread, using multiple threads, and using the Revised Simplex Method (RSM) for a Linear Programming Problem (LPP). The time required for the matrix-multiplication computation under each approach is a noticeable share of the overall execution time; eliminating this overhead further reduces execution time and greatly increases the throughput of the processor.

Keywords: Parallel Processing, Single Thread, Multithread, Revised Simplex Method, Linear Programming Problem, Optimization Techniques.

I. INTRODUCTION

Parallel computing is a form of computing in which many instructions are carried out at the same time. It works on the principle that large problems can almost always be divided into smaller ones, which may be solved simultaneously ("in parallel"). Parallel processing [1], [7], [11] has been used for many years, mainly in high-performance computing, but interest in it has grown recently because of the physical constraints that prevent further frequency scaling. Parallel computing has lately become the dominant paradigm in computer architecture, chiefly in the form of multi-core processors.

A high-performance computing environment is vital for numerical workloads such as physics and earth-environment simulations, which demand enormous computational power. Matrix multiplication is a critical operation in numerical computation: accelerating it brings a corresponding speedup to many other numerical computations. Matrix multiplication is commonly used in the areas of graph theory, numerical algorithms, digital control, and signal processing.

Parallel programming [8] is good practice for solving computationally intensive problems in various fields. In operations research [5], for instance, solving maximization problems with the simplex method is an area where parallel algorithms [14], [15], [16] are being developed. Parallelizing simplex-type algorithms is one of the most challenging problems. Because the matrices are very dense and the communication very heavy, the ratio of computation to communication is extremely low, and parallel techniques, partitioning patterns, and communication optimizations must be selected carefully in order to achieve any speedup. A popular approach for implementing parallel algorithms [9] is to configure a cluster or a network of personal computers; with the advances made in computer hardware and software, configuring such a network is now quite simple.

The primary reasons for using parallel computing are:
(i) decreasing execution time;
(ii) reducing memory utilization;
(iii) solving computations over large volumes of data;
(iv) affording concurrency;
(v) taking advantage of non-local resources – using available computer resources on a wide area network.

One of the earliest parallel tableau simplex methods on small-scale distributed-memory Multiple-Instruction Multiple-Data (MIMD) machines is by Finkel [10]. His study showed that the overhead of distributing matrix elements for pivot operations, and of inter-process synchronization, did not allow any significant speedup. Wu and Lewis [17] presented two parallelizations of the revised simplex algorithm [4] with an explicit form of the basis inverse on a shared-memory MIMD machine.


II. PARALLEL MATRIX MULTIPLICATION

Multiplication of large matrices requires a great deal of computation time, on the order of $n^3$ operations, where $n$ is the dimension of the matrix. Since most present applications demand higher computational throughput, many researchers have attempted to improve the performance of matrix multiplication [12]. In a parallel setting, the matrix-matrix multiplication $A \cdot B = C$ is carried out with the matrices A, B, and C decomposed in two dimensions, and the segments of A, B, and C are distributed among tasks.

The product is defined between two matrices only if the number of columns of the first matrix equals the number of rows of the second matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their product is an m-by-p matrix denoted by AB (or sometimes A · B). The product is given by

$(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$

for each pair i and j with 1 ≤ i ≤ m and 1 ≤ j ≤ p.

The depth of an algorithm is the number of operations in the longest chain of operations from any input to any output. If we are given an unlimited number of processors to run our algorithms, then to minimize processing time we would look for an algorithm which minimizes the depth. On the other hand, if we have a fixed number of processors, then we need to keep the size small, and we would use an algorithm which minimizes the size. The evaluation of these algorithms is based on speedup and efficiency. Speedup measures the increase or decrease in computational time of the parallel algorithm compared with a known sequential algorithm. Efficiency measures the increase or decrease in computational time per processor.
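These two measures have standard formal definitions, which the text leaves implicit; as a sketch, with $T_1$ the running time of the sequential algorithm and $T_p$ the running time on $p$ processors:

$$S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p}$$

The ideal case quoted in the abstract, a program running n times faster on n processors, corresponds to $S(n) = n$ and $E(n) = 1$.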

III. PARALLELIZATION USING LINEAR PROGRAMMING PROBLEM

In mathematics, Linear Programming Problems [6] involve the optimization of a linear objective function subject to linear equality and inequality constraints. Put informally, LPP is about trying to obtain the best outcome (e.g. maximum profit, minimum effort) given some set of constraints (e.g. working only 30 hours a week, not doing anything unlawful), using a linear mathematical model [2].

Linear programs are problems that can be expressed in canonical form:

Maximize    $c^T x$
Subject to  $Ax \le b$
Where       $x \ge 0$.

Here $x$ is the vector of decision variables, $c$ and $b$ are vectors of known coefficients, and $A$ is a known matrix of constraint coefficients.
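As a small illustrative instance (a textbook-style example, not one taken from this paper):

Maximize    $3x_1 + 5x_2$
Subject to  $x_1 \le 4$, $2x_2 \le 12$, $3x_1 + 2x_2 \le 18$
Where       $x_1, x_2 \ge 0$.

Here $c = (3, 5)^T$, $b = (4, 12, 18)^T$, and $A$ is the 3×2 matrix of constraint coefficients; the optimum is $x = (2, 6)$ with objective value 36.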

The inherent issue for parallel-system designers is how to parallelize these methods for a given number of processors while minimizing the running time, or the depth, of the computation. The time taken by techniques such as matrix multiplication using a single thread, multiple threads, or a Linear Programming Problem formulation is tied to the number of operations which must be performed to obtain the desired result.

IV. ANALYSIS AND DISCUSSION

Matrix multiplication basically involves taking the rows of one matrix and multiplying them against, and adding them across, the corresponding columns of a second matrix. Each element of the resultant matrix can be computed independently, that is, by a different thread [13], [3].

This can be implemented in any programming language; for working with threads, Java is a very compatible and flexible choice. Java is used here to measure the computation time of matrix multiplication with a single thread and with multiple threads; both are then compared with the throughput of matrix multiplication using RSM in MATLAB, as sketched below.
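As a minimal sketch of this measurement, the following Java program times a single-threaded multiply against a row-block multithreaded one. The class and method names, the row-block partitioning, and the matrix size are our illustrative choices, not the authors' code:

import java.util.Random;

// Illustrative sketch, not the authors' implementation: compares
// single-threaded and row-block multithreaded dense matrix multiplication.
public class MatMulTiming {

    // Classic triple loop; the k-before-j order is cache-friendlier.
    static double[][] multiplySingle(double[][] a, double[][] b) {
        int m = a.length, n = b.length, p = b[0].length;
        double[][] c = new double[m][p];
        for (int i = 0; i < m; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < p; j++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // Each thread computes a disjoint block of rows of C, so no locking is needed.
    static double[][] multiplyThreaded(double[][] a, double[][] b, int nThreads)
            throws InterruptedException {
        int m = a.length, n = b.length, p = b[0].length;
        double[][] c = new double[m][p];
        Thread[] workers = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int lo = t * m / nThreads, hi = (t + 1) * m / nThreads;
            workers[t] = new Thread(() -> {
                for (int i = lo; i < hi; i++)
                    for (int k = 0; k < n; k++)
                        for (int j = 0; j < p; j++)
                            c[i][j] += a[i][k] * b[k][j];
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();   // wait for every row block
        return c;
    }

    public static void main(String[] args) throws InterruptedException {
        int n = 400;                          // one of the sizes used in Section VI
        Random rnd = new Random(42);
        double[][] a = new double[n][n], b = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                a[i][j] = rnd.nextDouble();
                b[i][j] = rnd.nextDouble();
            }
        long t0 = System.currentTimeMillis();
        multiplySingle(a, b);
        System.out.println("single thread: " + (System.currentTimeMillis() - t0) + " ms");
        t0 = System.currentTimeMillis();
        multiplyThreaded(a, b, Runtime.getRuntime().availableProcessors());
        System.out.println("multithread:   " + (System.currentTimeMillis() - t0) + " ms");
    }
}

Wall-clock timing of this kind, reading System.currentTimeMillis() before and after each multiply, yields millisecond figures like those reported in Section VI.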

Fig. 1: Calculation of Matrix Multiplication

Fig. 1 demonstrates how to compute the (1,2) element and the (3,3) element of AB when A is a 4×2 matrix and B is a 2×3 matrix. Elements from each matrix are paired off in the direction of the arrows; each pair is multiplied and the products are added. The position of the resulting number in AB corresponds to the row and column that were considered.

A. Implementation of RSM using MATLAB

MATLAB (Matrix Laboratory) is a tool for performing numerical computations, displaying data graphically in 2D and 3D, and tackling many other problems in engineering and science. MATLAB combines computation, graphics, and programming in a flexible, easy-to-use environment. Known for its highly optimized matrix and vector calculations, MATLAB offers a natural language for expressing problems and their solutions both mathematically and visually, and it is especially convenient for matrix-oriented operations and calculations.

MATLAB has evolved over a period of years with input from many users. In university settings it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science; in industry it is the tool of choice for high-productivity research, development, and analysis.

B. Revised Simplex Method (LPP)

The Revised Simplex Method describes linear programs as matrix entities and presents the Simplex Method as a sequence of linear-algebra computations. Instead of spending time updating dictionaries at the end of each iteration, the Revised Simplex Method [5] does its heavy calculation at the beginning of each iteration, leaving much less to do at the end.
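Concretely, each iteration then reduces to a few standard linear-algebra steps (a textbook outline of the method; the paper does not spell them out). With basis matrix $B$, basic costs $c_B$, and non-basic columns $N$:

1. Compute the simplex multipliers $y^T = c_B^T B^{-1}$.
2. Price the non-basic columns: $\bar{c}_N^T = c_N^T - y^T N$. If no reduced cost is positive (for a maximization problem), the current basis is optimal.
3. Pick an entering column $a_q$ with $\bar{c}_q > 0$ and form $d = B^{-1} a_q$.
4. Ratio test: the leaving variable is the basic $x_{B_i}$ minimizing $x_{B_i}/d_i$ over $d_i > 0$.
5. Swap the entering and leaving columns in $B$, update the representation of $B^{-1}$, and repeat.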

To set these steps up, a linear program first needs to be expressed in terms of matrices. For instance, the linear program stated previously,

Maximize $c^T x$ subject to $Ax \le b$, $x \ge 0$,

can instead be recast in the form

Maximize    $c_B^T x_B + c_N^T x_N$
Subject to  $B x_B + N x_N = b$
Where       $x_B, x_N \ge 0$,

where slack variables have been added to turn the inequalities into equations and the columns of the constraint matrix have been partitioned into a basis matrix $B$ and a non-basic matrix $N$.

V. OUTPUT SCREEN

VI. OUTPUT TABLE ASSESSMENT

Matrix Dimension   Single Thread   Multi Thread-A   Multi Thread-B    RSM
100 x 100                     15               31               16     12
200 x 200                    312              235              171     89
300 x 300                   1652             2562             2000    986
400 x 400                   6578             6625             6609   4192
500 x 500                  17172            17203            17093   8016

Note: units are milliseconds (ms).
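Reading the table as speedups (our arithmetic on the reported figures): at 100 x 100 the multithreaded runs (31 ms and 16 ms) are no faster than the single thread (15 ms), and at 300 x 300 they are slower (2562 ms and 2000 ms against 1652 ms), presumably because thread-creation and synchronization overhead dominates at these sizes. RSM is the fastest in every row, finishing 500 x 500 in 8016 ms against 17172 ms single-threaded, a speedup of roughly 2.1x.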


VII. CONCLUSION

In the Revised Simplex Method, the total amount of computation is reduced for problems whose matrix of coefficients contains a large number of zero elements. The revised simplex procedure always works with the original coefficients, and because the computer code can be written to multiply only the non-zero elements, the total processing time is greatly reduced compared with the single-thread and multithread approaches. The original non-zero elements can also be stored compactly in computer memory, whereas the original simplex procedure turns zero elements into non-zeros as the computation progresses. The total number of computations in the revised simplex method is therefore, in general, less than in the original method.
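The "multiply only non-zero elements" idea can be made concrete with a compressed sparse row (CSR) layout. The following Java sketch is illustrative only; the class and field names are ours, not the paper's:

import java.util.Arrays;

// Illustrative sketch: CSR storage keeps only non-zero entries, so a
// matrix-vector product never touches (or transforms) the zeros.
final class CsrMatrix {
    final int rows;
    final double[] val;   // non-zero values, row by row
    final int[] colIdx;   // column index of each stored value
    final int[] rowPtr;   // start of each row in val/colIdx (length rows + 1)

    CsrMatrix(int rows, double[] val, int[] colIdx, int[] rowPtr) {
        this.rows = rows; this.val = val; this.colIdx = colIdx; this.rowPtr = rowPtr;
    }

    double[] times(double[] x) {
        double[] y = new double[rows];
        for (int i = 0; i < rows; i++)
            for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++)
                y[i] += val[k] * x[colIdx[k]];   // zeros are never visited
        return y;
    }

    public static void main(String[] args) {
        // The 3x3 matrix [[2,0,0],[0,0,3],[0,1,0]] stores only 3 of 9 entries.
        CsrMatrix m = new CsrMatrix(3,
                new double[]{2, 3, 1},
                new int[]{0, 2, 1},
                new int[]{0, 1, 2, 3});
        System.out.println(Arrays.toString(m.times(new double[]{1, 1, 1})));
        // prints [2.0, 3.0, 1.0]
    }
}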

REFERENCES

[1] M. Baker, R. Buyya and D. Laforenza, "Grids and Grid technologies for wide-area distributed computing", Software-Practice and Experience, Vol. 32(15), pp. 1437-1466, 2002.
[2] Alexander Schrijver, Theory of Linear and Integer Programming, John Wiley & Sons, 1998.
[3] Jim Beveridge and Robert Wiener, Multithreading Applications in Win32, Addison-Wesley.
[4] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, Introduction to Algorithms, Second Edition, MIT Press and McGraw-Hill, 2001, Chapter 29: Linear Programming, pp. 770-821.
[5] Hamdy A. Taha, Operations Research: An Introduction, Prentice Hall, Eighth Edition, 2006.
[6] Michael J. Todd, "The many facets of linear programming", Mathematical Programming, February 2002.
[7] J. C. Cunha, O. F. Rana and P. D. Medeiros, "Future trends in distributed applications and problem-solving environments", Future Generation Computer Systems, Vol. 21(6), pp. 843-855, 2005.
[8] V. Kumar, A. Grama, A. Gupta and G. Karypis, Introduction to Parallel Computing: Design and Analysis of Algorithms, 1994.
[9] Y. Censor, Parallel Optimization: Theory, Algorithms, and Applications, 1997.
[10] R. A. Finkel, "Large-Grain Parallelism: Three Case Studies", in The Characteristics of Parallel Algorithms, Ed. L. H. Jamieson, The MIT Press, 1987.
[11] I. Foster, C. Kesselman and S. Tuecke, "The Anatomy of the Grid: Enabling Scalable Virtual Organizations", The International Journal of High Performance Computing Applications, Vol. 15(3), pp. 200-222, 2001.
[12] G. Fox, G. Johnson, S. Otto, J. Salmon and D. Walker, Solving Problems on Concurrent Processors, Volume 1, Prentice Hall, Englewood Cliffs, NJ, 1988.
[13] Paul Hyde, Java Thread Programming, Sams.
[14] G. Karypis and V. Kumar, Performance and Scalability of the Parallel Simplex Method for Dense Linear Programming Problems: An Extended Abstract, Technical Report, Computer Science Department, University of Minnesota, 1994.
[15] A. Kilgore, Very Large-Scale Linear Programming: A Case Study Exploiting Both Parallelism and Distributed Memory, MSc Thesis, Center for Research on Parallel Computation, Rice University, 1993.
[16] I. Maros and G. Mitra, "Investigating the sparse simplex algorithm on a distributed memory multiprocessor", Parallel Computing, Vol. 26(1), pp. 151-170, 2000.
[17] Y. Wu and T. G. Lewis, Performance of Parallel Simplex Algorithms, Technical Report, Department of Computer Science, Oregon State University, 1988.
