The Revised Simplex Method

Advanced Linear Programming Optimization

Introduction and Motivation

Why the Revised Simplex Method?

  • Standard Simplex Limitations:

    • Stores and updates entire simplex tableau

    • Memory intensive for large problems

    • Many zero entries in tableau (sparse matrices)

    • Computational inefficiency

  • Revised Simplex Advantages:

    • Works with basis inverse \(B^{-1}\) only

    • Exploits sparsity of constraint matrix

    • Reduced storage requirements

    • Better numerical stability

    • More efficient for large-scale problems

Key Concept

Central Idea

Instead of maintaining the full simplex tableau, the revised simplex method:

  1. Maintains only the basis inverse matrix \(B^{-1}\)

  2. Computes required tableau elements on demand

  3. Uses matrix operations efficiently

Memory Comparison

  • Standard: Store \((m \times (n+m))\) tableau matrix

  • Revised: Store \((m \times m)\) basis inverse matrix

Mathematical Foundation

Standard Form LP Problem

Linear Programming Problem

\[\begin{aligned} \text{Minimize } & c^T x \\ \text{Subject to } & Ax = b \\ & x \geq 0 \end{aligned}\]

Where:

  • \(A\) is \(m \times n\) constraint matrix

  • \(x\) is \(n \times 1\) decision variable vector

  • \(c\) is \(n \times 1\) cost coefficient vector

  • \(b\) is \(m \times 1\) right-hand side vector

Basis Matrix Decomposition

Matrix Partitioning

Partition constraint matrix \(A\) as:

\[A = [B \quad N]\]
where:

  • \(B\): \(m \times m\) basis matrix (columns corresponding to basic variables)

  • \(N\): \(m \times (n-m)\) non-basis matrix

Variable Partitioning

Similarly partition variables and costs:

\[x = \begin{bmatrix} x_B \\ x_N \end{bmatrix}, \quad c = \begin{bmatrix} c_B \\ c_N \end{bmatrix}\]

Basic Feasible Solution

BFS Representation

The constraint \(Ax = b\) becomes:

\[Bx_B + Nx_N = b\]

For basic feasible solution with \(x_N = 0\):

\[x_B = B^{-1}b\]
\[x_N = 0\]

Objective Function Value

\[z = c_B^T x_B = c_B^T B^{-1} b\]
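The basic feasible solution and its objective value can be sketched in a few lines of numpy (the variable names are ours; the data is the small LP used in the worked example later in these notes):

```python
import numpy as np

# Sketch: given A, b, c and the indices of the basic columns,
# recover the basic feasible solution x_B = B^{-1} b and z = c_B^T x_B.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([8.0, 10.0])
c = np.array([2.0, 3.0, 0.0, 0.0])

basis = [2, 3]                  # 0-based indices of the basic variables (x3, x4)
B = A[:, basis]                 # m x m basis matrix
x_B = np.linalg.solve(B, b)     # x_B = B^{-1} b (solve, don't invert explicitly)
z = c[basis] @ x_B              # z = c_B^T B^{-1} b

print(x_B)   # [ 8. 10.]
print(z)     # 0.0
```

Solving `B x_B = b` instead of forming `B^{-1}` explicitly is the numerically preferred way to evaluate these expressions.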

Revised Simplex Algorithm

Algorithm Overview

Main Steps

  1. Initialization: Find initial basic feasible solution

  2. Optimality Test: Check if current solution is optimal

  3. Entering Variable: Select variable to enter basis

  4. Leaving Variable: Determine variable to leave basis

  5. Pivot Operation: Update basis inverse \(B^{-1}\)

  6. Iteration: Repeat until optimal solution found

Step 1: Optimality Test

Reduced Costs Calculation

For non-basic variable \(x_j\), reduced cost is:

\[\bar{c}_j = c_j - c_B^T B^{-1} A_j\]

where \(A_j\) is the \(j\)-th column of matrix \(A\).

Optimality Condition

Current solution is optimal if:

\[\bar{c}_j \geq 0 \quad \forall j \in \text{non-basic variables}\]
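The pricing step and optimality test can be sketched as follows (a minimal dense implementation; `reduced_costs` is our own helper, not a library routine):

```python
import numpy as np

# Compute reduced costs c_bar_j = c_j - c_B^T B^{-1} A_j for all
# non-basic columns, then apply the minimization optimality test.
def reduced_costs(A, c, basis, B_inv):
    y = c[basis] @ B_inv                      # simplex multipliers y^T = c_B^T B^{-1}
    nonbasic = [j for j in range(A.shape[1]) if j not in basis]
    return {j: c[j] - y @ A[:, j] for j in nonbasic}

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c = np.array([2.0, 3.0, 0.0, 0.0])

cbar = reduced_costs(A, c, basis=[2, 3], B_inv=np.eye(2))
optimal = all(v >= 0 for v in cbar.values())  # optimal iff all c_bar_j >= 0
print(cbar, optimal)   # {0: 2.0, 1: 3.0} True
```

Note that only the vector \(y^T = c_B^T B^{-1}\) is formed once; each column is then priced with a single dot product.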

Step 2: Entering Variable Selection

Entering Variable Rule

If not optimal, select entering variable \(x_s\) such that:

\[\bar{c}_s = \min\{\bar{c}_j : \bar{c}_j < 0\}\]

Alternative Rules

  • Most negative: Choose most negative reduced cost

  • First negative: Choose first negative reduced cost found

  • Steepest edge: Consider improvement per unit change

Step 3: Direction Vector

Simplex Direction

Calculate the simplex direction vector:

\[d = B^{-1} A_s\]

This measures how the basic variables respond as \(x_s\) increases: each basic variable \(x_{B_i}\) changes at rate \(-d_i\) per unit increase in \(x_s\) (consistent with the update formula in Step 5).

Interpretation

  • \(d_i > 0\): Basic variable \(x_{B_i}\) decreases

  • \(d_i < 0\): Basic variable \(x_{B_i}\) increases

  • \(d_i = 0\): Basic variable \(x_{B_i}\) unchanged

Step 4: Leaving Variable Selection

Minimum Ratio Test

Calculate step size \(\theta\):

\[\theta = \min\left\{\frac{x_{B_i}}{d_i} : d_i > 0, i = 1,2,\ldots,m\right\}\]

The leaving variable \(x_r\) corresponds to the index achieving this minimum.

Unbounded Solution

If \(d_i \leq 0\) for all \(i\), then the LP problem is unbounded.
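The minimum ratio test, including the unboundedness check, can be sketched as a small helper (our own function, with an illustrative tolerance to guard against dividing by tiny pivots):

```python
import numpy as np

# Minimum ratio test: returns (theta, r) where theta is the step length
# and r the position of the leaving variable, or None if the LP is
# unbounded (no d_i > 0).
def ratio_test(x_B, d, tol=1e-10):
    ratios = [(x_B[i] / d[i], i) for i in range(len(d)) if d[i] > tol]
    if not ratios:
        return None            # x_s can increase forever: unbounded
    theta, r = min(ratios)
    return theta, r

x_B = np.array([8.0, 10.0])
d = np.array([1.0, 2.0])       # d = B^{-1} A_s for some entering column A_s
print(ratio_test(x_B, d))      # (5.0, 1): position 1 leaves, step length 5
print(ratio_test(x_B, np.array([-1.0, 0.0])))   # None: unbounded direction
```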

Step 5: Basis Update

New Basic Solution

Update the basic solution:

\[\begin{aligned} x_s^{\text{new}} &= \theta \\ x_{B_i}^{\text{new}} &= x_{B_i}^{\text{old}} - \theta \cdot d_i \quad \forall i \neq r \\ x_r^{\text{new}} &= 0 \end{aligned}\]

Basis Matrix Update

Replace column \(r\) of \(B\) with column \(A_s\):

\[B^{\text{new}} = B^{\text{old}} \text{ with column } r \text{ replaced by } A_s\]

Basis Inverse Updates

Efficient Basis Inverse Update

Challenge

Computing \(B^{-1}\) from scratch each iteration is expensive. Solution: Use rank-one update formulas.

Elementary Matrix Approach

Define elementary matrix \(E\):

\[E = I - \frac{1}{d_r}(d - e_r)e_r^T\]

where \(e_r\) is the \(r\)-th unit vector.

Update Formula

\[(B^{\text{new}})^{-1} = E \cdot B^{-1}\]

Elementary Matrix Construction

Matrix \(E\) Structure

\[E = \begin{bmatrix} 1 & \cdots & -\frac{d_1}{d_r} & \cdots & 0 \\ \vdots & \ddots & \vdots & & \vdots \\ 0 & \cdots & \frac{1}{d_r} & \cdots & 0 \\ \vdots & & \vdots & \ddots & \vdots \\ 0 & \cdots & -\frac{d_m}{d_r} & \cdots & 1 \end{bmatrix}\]

Only column \(r\) (shown in the middle) differs from the identity matrix: it contains \(-d_i/d_r\) in row \(i \neq r\) and \(1/d_r\) in row \(r\).
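The product-form update can be sketched directly from the formula for \(E\) (the helper name is ours; the example verifies the update against a freshly computed inverse):

```python
import numpy as np

# Product-form update: replacing column r of B by the entering column A_s
# updates the inverse as E @ B_inv, where E is identity except column r.
def update_inverse(B_inv, d, r):
    m = len(d)
    E = np.eye(m)
    E[:, r] = -d / d[r]        # column r holds -d_i/d_r ...
    E[r, r] = 1.0 / d[r]       # ... with 1/d_r on the diagonal
    return E @ B_inv

B_inv = np.eye(2)                          # current basis inverse (identity here)
A_s = np.array([2.0, 1.0])                 # entering column
d = B_inv @ A_s                            # simplex direction
r = 0                                      # position of the leaving variable
B_new = np.eye(2); B_new[:, r] = A_s       # basis with column r replaced by A_s
B_inv_new = update_inverse(B_inv, d, r)
assert np.allclose(B_inv_new, np.linalg.inv(B_new))   # O(m^2) update agrees
```

The rank-one update costs \(O(m^2)\), versus \(O(m^3)\) for recomputing the inverse from scratch.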

Computational Complexity

Operation Counts per Iteration

  • Reduced cost calculation: \(O(mn)\) operations

  • Direction vector: \(O(m^2)\) operations

  • Minimum ratio test: \(O(m)\) operations

  • Basis inverse update: \(O(m^2)\) operations

Comparison with Standard Simplex

  • Standard: \(O(mn)\) per iteration (the entire tableau is updated)

  • Revised: \(O(m^2)\) for the direction and basis update, plus the cost of pricing

  • Advantage: With partial pricing and sparse data, the revised method avoids touching all \(n\) columns every iteration — a large saving when \(m \ll n\) (tall, thin matrices)

Numerical Example

Example Problem

LP Problem

\[\begin{aligned} \text{Minimize } & 2x_1 + 3x_2 \\ \text{Subject to } & x_1 + 2x_2 + x_3 = 8 \\ & 2x_1 + x_2 + x_4 = 10 \\ & x_1, x_2, x_3, x_4 \geq 0 \end{aligned}\]

Matrix Form

\[A = \begin{bmatrix} 1 & 2 & 1 & 0 \\ 2 & 1 & 0 & 1 \end{bmatrix}, \quad b = \begin{bmatrix} 8 \\ 10 \end{bmatrix}, \quad c = \begin{bmatrix} 2 \\ 3 \\ 0 \\ 0 \end{bmatrix}\]

Initial Solution

Initial Basis

Choose the basis \(\{x_3, x_4\}\) (slack variables):

\[B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad B^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\]

\[x_B = B^{-1}b = \begin{bmatrix} 8 \\ 10 \end{bmatrix}\]

Initial BFS

\[x = (0, 0, 8, 10)^T, \quad z = 0\]

Iteration 1: Optimality Test

Reduced Costs

For \(x_1\): \(\bar{c}_1 = 2 - [0, 0] \begin{bmatrix} 1 \\ 2 \end{bmatrix} = 2\)

For \(x_2\): \(\bar{c}_2 = 3 - [0, 0] \begin{bmatrix} 2 \\ 1 \end{bmatrix} = 3\)

Decision

Since we are minimizing and all reduced costs are positive (\(\bar{c}_1 = 2 > 0\), \(\bar{c}_2 = 3 > 0\)), the optimality condition is satisfied immediately: the initial solution \((0, 0, 8, 10)\) with \(z = 0\) is optimal. This is expected — with strictly positive costs and \(x \geq 0\), the origin minimizes the objective, so no pivots are needed.
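The full iteration loop, applied to this example, can be sketched as a compact dense implementation (our own code, not a library routine; it recomputes \(B^{-1}\) by factorization each iteration purely to keep the sketch short, where a real solver would use the product-form update):

```python
import numpy as np

# Minimal dense revised simplex sketch (minimization, standard form).
# Assumes a starting basis of columns forming a feasible basis is given.
def revised_simplex(A, b, c, basis, max_iter=100):
    basis = list(basis)
    for _ in range(max_iter):
        B_inv = np.linalg.inv(A[:, basis])
        x_B = B_inv @ b
        y = c[basis] @ B_inv                       # simplex multipliers
        nonbasic = [j for j in range(A.shape[1]) if j not in basis]
        cbar = {j: c[j] - y @ A[:, j] for j in nonbasic}   # pricing
        s = min(cbar, key=cbar.get)                # Dantzig rule
        if cbar[s] >= -1e-10:                      # optimality test
            x = np.zeros(A.shape[1]); x[basis] = x_B
            return x, c @ x
        d = B_inv @ A[:, s]                        # simplex direction
        ratios = [(x_B[i] / d[i], i) for i in range(len(d)) if d[i] > 1e-10]
        if not ratios:
            raise ValueError("unbounded")
        _, r = min(ratios)                         # minimum ratio test
        basis[r] = s                               # pivot
    raise RuntimeError("iteration limit reached")

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([8.0, 10.0])
c = np.array([2.0, 3.0, 0.0, 0.0])
x, z = revised_simplex(A, b, c, basis=[2, 3])
print(x, z)   # [ 0.  0.  8. 10.] 0.0 -- optimal at the initial basis
```

Negating the cost vector (i.e. maximizing \(2x_1 + 3x_2\) over the same constraints) makes the sketch perform actual pivots before terminating.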

Advanced Topics

Degeneracy Handling

Degeneracy Issues

  • Multiple variables can leave basis simultaneously

  • Can lead to cycling in simplex method

  • Basis inverse may become singular

Resolution Strategies

  • Lexicographic pivoting rule

  • Bland’s rule: Choose smallest index variable

  • Perturbation methods: Add small random values to RHS

Numerical Stability

Sources of Instability

  • Round-off errors accumulate in \(B^{-1}\)

  • Ill-conditioned basis matrices

  • Large pivot elements

Stability Measures

  • Periodic refactorization: Recompute \(B^{-1}\) from scratch

  • Condition number monitoring

  • Pivot element thresholds

  • LU decomposition updates
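One common safeguard combines the first two measures above: refactorize whenever the basis conditioning degrades or after a fixed number of rank-one updates. A sketch (the threshold values here are illustrative choices, not standard constants):

```python
import numpy as np

# Decide whether to recompute B^{-1} from scratch: either too many
# product-form updates have accumulated, or the basis is ill-conditioned.
def needs_refactorization(B, updates_since_refactor,
                          max_updates=50, cond_limit=1e8):
    return (updates_since_refactor >= max_updates
            or np.linalg.cond(B) > cond_limit)

B_bad = np.array([[1.0, 0.0], [0.0, 1e-12]])   # nearly singular basis
print(needs_refactorization(B_bad, updates_since_refactor=3))    # True
print(needs_refactorization(np.eye(2), updates_since_refactor=3))  # False
```

In practice solvers estimate conditioning cheaply from the LU factors rather than computing the exact condition number, which is itself an \(O(m^3)\) operation.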

Sparse Matrix Techniques

Sparsity Exploitation

  • Real-world constraint matrices are often sparse

  • Standard dense matrix operations are inefficient

  • Specialized sparse matrix data structures needed

Sparse Techniques

  • Compressed storage formats

  • Sparse LU factorization

  • Fill-in minimization strategies

  • Iterative refinement
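A sketch of the sparse approach using SciPy (assuming SciPy is available): store \(A\) in compressed-column form and keep a sparse LU factorization of \(B\) instead of a dense inverse.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Compressed-column storage of A; the basis is factored, never inverted.
A = csc_matrix(np.array([[1.0, 2.0, 1.0, 0.0],
                         [2.0, 1.0, 0.0, 1.0]]))
basis = [2, 3]
B = A[:, basis].tocsc()
lu = splu(B)                            # sparse LU factors of the basis
x_B = lu.solve(np.array([8.0, 10.0]))   # x_B = B^{-1} b without forming B^{-1}
print(x_B)   # [ 8. 10.]
```

`splu` applies a fill-reducing column permutation by default, which is exactly the fill-in minimization strategy mentioned above.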

Implementation Considerations

Data Structures

Essential Data Structures

  • Basis inverse \(B^{-1}\): Dense or LU factored form

  • Basis index set: Current basic variable indices

  • Non-basis index set: Non-basic variable indices

  • Constraint matrix \(A\): Sparse format preferred

Memory Management

  • Dynamic memory allocation for variable-size problems

  • Efficient sparse matrix representations

  • Memory pool management for frequent allocations

Computational Efficiency

Performance Optimizations

  • BLAS/LAPACK: Use optimized linear algebra libraries

  • Cache-friendly algorithms: Consider memory hierarchy

  • Parallel processing: Matrix operations can be parallelized

  • Preconditioning: Improve numerical properties

Benchmark Results

  • Revised simplex typically 2-5x faster than standard

  • Memory usage reduced by 50-80% for sparse problems

  • Better scaling with problem size

Comparison and Applications

Method Comparison

  Aspect                        Standard        Revised
  ---------------------------   -------------   -----------------
  Memory usage                  \(O(mn)\)       \(O(m^2)\)
  Time per iteration            \(O(mn)\)       \(O(m^2 + mn)\)
  Numerical stability           Moderate        Better
  Implementation complexity     Simple          Moderate
  Large-scale suitability       Poor            Good
  Sparse matrix handling        Poor            Excellent

When to Use Revised Simplex

  • Large-scale problems (\(m \ll n\))

  • Sparse constraint matrices

  • Memory-constrained environments

Real-World Applications

Industries Using Revised Simplex

  • Transportation: Route optimization, fleet management

  • Manufacturing: Production planning, resource allocation

  • Finance: Portfolio optimization, risk management

  • Energy: Power generation scheduling, grid optimization

  • Telecommunications: Network flow, capacity planning

Problem Characteristics

  • Thousands to millions of variables

  • Hundreds to thousands of constraints

  • High sparsity (1-5% non-zero elements)

  • Real-time or near-real-time requirements

Conclusion

Key Takeaways

Advantages of Revised Simplex

  1. Memory efficient: Only stores \(m \times m\) basis inverse

  2. Computationally efficient: Better for sparse, large problems

  3. Numerically stable: Less accumulation of round-off errors

  4. Scalable: Handles industrial-size problems effectively

Mathematical Foundation

  • Based on matrix factorization \(A = [B \quad N]\)

  • Uses rank-one updates for basis inverse

  • Exploits sparsity through specialized data structures

Further Study

Advanced Topics to Explore

  • Dual revised simplex method

  • Primal-dual interior point methods

  • Decomposition methods (Dantzig-Wolfe, Benders)

  • Network simplex algorithms

  • Parallel simplex implementations

Software Tools

  • Open source: GLPK, CLP, SoPlex

  • Commercial: CPLEX, Gurobi, XPRESS

  • Research: Implementation in MATLAB, Python (SciPy)

The revised simplex method is the foundation of modern large-scale linear programming solvers.