This chapter covers the principles of the simplex method for linear programming. After completing this chapter, students should be able to solve linear programming maximization problems using the simplex method and solve minimization problems using the simplex method.

Chapter overview

In this chapter, you will learn to:

  1. Solve linear programming maximization problems using the simplex method.
  2. Solve minimization problems using the simplex method.

Maximization by the simplex method

In the last chapter, we used the geometrical method to solve linear programming problems, but the geometrical approach does not work for problems that have more than two variables. In real-life situations, linear programming problems can involve thousands of variables and are solved by computers. We could solve these problems algebraically, but that would not be very efficient. Suppose we were given a problem with, say, 5 variables and 10 constraints. By choosing all combinations of five equations with five unknowns, we could find all the corner points, test them for feasibility, and come up with the solution, if it exists. The trouble is that even for a problem with so few variables, there are more than 250 corner points to consider, and testing each one would be very tedious. So we need a systematic algorithm that can be programmed for a computer and is efficient enough that we do not have to evaluate the objective function at every corner point. We have just such a method, and it is called the simplex method.
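
To see where that count comes from, here is a short back-of-the-envelope sketch (the variable names are chosen only for this illustration). Under the usual convention that each candidate corner point is determined by choosing 5 of the 15 boundary equations (the 10 constraints plus the 5 non-negativity conditions), the number of systems to solve and test is a binomial coefficient:

```python
from math import comb

decision_vars = 5    # number of unknowns in the problem
constraints = 10     # number of functional constraints

# Each candidate corner point corresponds to choosing 5 boundary equations
# out of the 15 available (10 constraints plus 5 non-negativity conditions),
# then solving the resulting 5-by-5 system and testing it for feasibility.
candidates = comb(constraints + decision_vars, decision_vars)
print(candidates)    # 3003 candidate systems, far more than we want to check by hand
```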

The simplex method was developed during the Second World War by Dr. George Dantzig. His linear programming models helped the Allied forces with transportation and scheduling problems. In 1979, a Soviet scientist named Leonid Khachian developed a method called the ellipsoid algorithm, which was supposed to be revolutionary, but, as it turned out, it is not any better than the simplex method. In 1984, Narendra Karmarkar, a research scientist at AT&T Bell Laboratories, developed Karmarkar's algorithm, which has been proven to be four times faster than the simplex method for certain problems. But the simplex method still works best for most problems.

The simplex method uses a very efficient approach. It does not compute the value of the objective function at every corner point; instead, it begins at a corner point of the feasibility region where all the main variables are zero and then systematically moves from corner point to corner point, improving the value of the objective function at each stage. The process continues until the optimal solution is found.
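
To make that description concrete, below is a minimal sketch of a tableau-based simplex routine for the standard maximization problem (maximize c·x subject to Ax ≤ b with x ≥ 0 and b ≥ 0, so the origin is a feasible starting corner). The function name simplex_max, the NumPy tableau representation, and the small tolerances are assumptions made for this illustration, not part of the text, and the sketch omits the anti-cycling and degeneracy safeguards a real solver would include.

```python
import numpy as np

def simplex_max(c, A, b):
    """Maximize c @ x subject to A @ x <= b and x >= 0, assuming b >= 0
    so that the origin (all decision variables zero) is a feasible corner."""
    m, n = A.shape
    # Initial tableau: [A | I | b] with the objective row [-c | 0 | 0] at the bottom.
    tableau = np.zeros((m + 1, n + m + 1))
    tableau[:m, :n] = A
    tableau[:m, n:n + m] = np.eye(m)          # slack variables
    tableau[:m, -1] = b
    tableau[-1, :n] = -c
    basis = list(range(n, n + m))             # the slacks are basic at the origin corner

    # A negative entry in the objective row means the objective can still improve.
    while tableau[-1, :-1].min() < -1e-9:
        col = int(np.argmin(tableau[-1, :-1]))            # entering variable
        ratios = [tableau[i, -1] / tableau[i, col] if tableau[i, col] > 1e-9 else np.inf
                  for i in range(m)]
        row = int(np.argmin(ratios))                      # leaving variable (smallest ratio)
        if ratios[row] == np.inf:
            raise ValueError("the problem is unbounded")
        tableau[row] /= tableau[row, col]                 # pivot: move to an adjacent corner
        for i in range(m + 1):
            if i != row:
                tableau[i] -= tableau[i, col] * tableau[row]
        basis[row] = col

    x = np.zeros(n)
    for i, var in enumerate(basis):
        if var < n:
            x[var] = tableau[i, -1]
    return x, tableau[-1, -1]                 # optimal corner and optimal objective value
```

For instance, on a hypothetical problem such as maximize 40x1 + 30x2 subject to x1 + x2 ≤ 12 and 2x1 + x2 ≤ 16, calling simplex_max(np.array([40.0, 30.0]), np.array([[1.0, 1.0], [2.0, 1.0]]), np.array([12.0, 16.0])) returns the corner (4, 8) with objective value 400, which is exactly what checking the corner points graphically would give.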

To learn the simplex method, we take a rather unconventional approach: we first list the algorithm and then work a problem, justifying the reasoning behind each step as we go. A thorough justification is beyond the scope of this course.

We start out with an example we solved in the last chapter by the graphical method. This will provide us with some insight into the simplex method and at the same time give us the chance to compare a few of the feasible solutions we obtained previously by the graphical method.

Source:  OpenStax, Applied finite mathematics. OpenStax CNX. Jul 16, 2011 Download for free at http://cnx.org/content/col10613/1.5
