Introduction and Motivation
Why Revised Simplex Method?
Standard Simplex Limitations:

- Stores and updates the entire simplex tableau
- Memory intensive for large problems
- Many zero entries in the tableau (sparse matrices)
- Computational inefficiency

Revised Simplex Advantages:

- Works with the basis inverse \(B^{-1}\) only
- Exploits sparsity of the constraint matrix
- Reduced storage requirements
- Better numerical stability
- More efficient for large-scale problems
Key Concept
Central Idea
Instead of maintaining the full simplex tableau, the revised simplex method:
- Maintains only the basis inverse matrix \(B^{-1}\)
- Computes required tableau elements on demand
- Uses matrix operations efficiently
Memory Comparison
- Standard: Store \(m \times (n+m)\) tableau matrix
- Revised: Store \(m \times m\) basis inverse matrix
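As a rough illustration (numbers assumed for scale, not taken from a benchmark): with \(m = 1{,}000\) constraints and \(n = 100{,}000\) variables, the full tableau has \(1{,}000 \times 101{,}000 \approx 10^8\) entries (about 0.8 GB in double precision), while the \(m \times m\) basis inverse has only \(10^6\) entries (about 8 MB).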
Mathematical Foundation
Standard Form LP Problem
Linear Programming Problem

\[
\min_{x} \; c^T x \quad \text{subject to} \quad Ax = b, \quad x \geq 0
\]

Where:
- \(A\) is the \(m \times n\) constraint matrix
- \(x\) is the \(n \times 1\) decision variable vector
- \(c\) is the \(n \times 1\) cost coefficient vector
- \(b\) is the \(m \times 1\) right-hand side vector
Basis Matrix Decomposition
Matrix Partitioning
Partition the constraint matrix \(A\) as:

\[
A = [B \quad N]
\]
- \(B\): \(m \times m\) basis matrix (columns corresponding to basic variables)
- \(N\): \(m \times (n-m)\) non-basis matrix
Variable Partitioning
Similarly partition the variables and costs:

\[
x = \begin{bmatrix} x_B \\ x_N \end{bmatrix}, \qquad
c = \begin{bmatrix} c_B \\ c_N \end{bmatrix}
\]
Basic Feasible Solution
BFS Representation
The constraint \(Ax = b\) becomes:

\[
B x_B + N x_N = b
\]

For a basic feasible solution with \(x_N = 0\):

\[
x_B = B^{-1} b \geq 0
\]

Objective Function Value

\[
z = c_B^T x_B = c_B^T B^{-1} b
\]
Revised Simplex Algorithm
Algorithm Overview
Main Steps
1. Initialization: Find an initial basic feasible solution
2. Optimality Test: Check if the current solution is optimal
3. Entering Variable: Select a variable to enter the basis
4. Leaving Variable: Determine the variable to leave the basis
5. Pivot Operation: Update the basis inverse \(B^{-1}\)
6. Iteration: Repeat until an optimal solution is found
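The whole loop fits in a short, self-contained sketch. This is illustrative rather than production code: it assumes the LP is already in standard form \(\min c^T x,\ Ax = b,\ x \geq 0\) with a known feasible starting basis, and it solves linear systems with the current basis \(B\) rather than maintaining an explicit \(B^{-1}\):

```python
import numpy as np

def revised_simplex(c, A, b, basis, tol=1e-9, max_iter=1000):
    """Minimal revised simplex sketch: min c@x  s.t.  A@x = b, x >= 0.

    `basis` is a list of m column indices forming a feasible starting basis.
    """
    m, n = A.shape
    for _ in range(max_iter):
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)          # step 1: current basic solution
        y = np.linalg.solve(B.T, c[basis])   # simplex multipliers, y = (c_B^T B^-1)^T
        reduced = c - A.T @ y                # step 2: reduced costs for all columns
        reduced[basis] = 0.0
        s = int(np.argmin(reduced))          # step 3: Dantzig rule (most negative)
        if reduced[s] >= -tol:               # all reduced costs >= 0: optimal
            x = np.zeros(n)
            x[basis] = x_B
            return x, float(c @ x)
        d = np.linalg.solve(B, A[:, s])      # simplex direction d = B^-1 A_s
        blocking = d > tol
        if not blocking.any():
            raise ValueError("LP is unbounded")
        ratios = np.where(blocking, x_B / np.where(blocking, d, 1.0), np.inf)
        r = int(np.argmin(ratios))           # steps 4-5: ratio test, then pivot
        basis[r] = s
    raise RuntimeError("iteration limit reached")
```

With the data from the Numerical Example section below, `revised_simplex(c, A, b, [2, 3])` returns the all-slack optimum immediately.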
Step 1: Optimality Test
Reduced Costs Calculation
For a non-basic variable \(x_j\), the reduced cost is:

\[
\bar{c}_j = c_j - c_B^T B^{-1} A_j
\]
where \(A_j\) is the \(j\)-th column of matrix \(A\).
Optimality Condition
The current solution is optimal (for a minimization problem) if:

\[
\bar{c}_j \geq 0 \quad \text{for all non-basic } j
\]
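In code, the whole pricing step is usually vectorized: compute the multipliers \(y^T = c_B^T B^{-1}\) once, then price every column at once. A minimal sketch with an explicit \(B^{-1}\) (names illustrative):

```python
import numpy as np

def reduced_costs(c, A, B_inv, basis):
    """All reduced costs at once: cbar = c - (c_B^T B^{-1}) A."""
    y = c[basis] @ B_inv   # simplex multipliers y^T = c_B^T B^{-1}
    return c - y @ A       # cbar_j = c_j - y^T A_j; exactly zero on basic columns
```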
Step 2: Entering Variable Selection
Entering Variable Rule
If not optimal, select the entering variable \(x_s\) with the most negative reduced cost:

\[
\bar{c}_s = \min_{j} \{\, \bar{c}_j : \bar{c}_j < 0 \,\}
\]
Alternative Rules
- Most negative: Choose the most negative reduced cost
- First negative: Choose the first negative reduced cost found
- Steepest edge: Consider improvement per unit change
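The first two rules are one-liners; steepest edge needs extra reference weights, so it is omitted from this sketch (function names assumed):

```python
import numpy as np

def entering_most_negative(reduced, tol=1e-9):
    """Dantzig rule: index of the most negative reduced cost, or None if optimal."""
    s = int(np.argmin(reduced))
    return s if reduced[s] < -tol else None

def entering_first_negative(reduced, tol=1e-9):
    """Bland-style rule: smallest index with a negative reduced cost, or None."""
    negative = np.flatnonzero(reduced < -tol)
    return int(negative[0]) if negative.size else None
```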
Step 3: Direction Vector
Simplex Direction
Calculate the simplex direction vector:

\[
d = B^{-1} A_s
\]
This represents how basic variables change when \(x_s\) increases by one unit.
Interpretation

As \(x_s\) increases by \(\theta\), the basic variables move as \(x_B(\theta) = B^{-1}b - \theta d\):

- \(d_i > 0\): Basic variable \(x_{B_i}\) decreases
- \(d_i < 0\): Basic variable \(x_{B_i}\) increases
- \(d_i = 0\): Basic variable \(x_{B_i}\) unchanged
Step 4: Leaving Variable Selection
Minimum Ratio Test
Calculate the step size \(\theta\):

\[
\theta = \min_{i \,:\, d_i > 0} \frac{(B^{-1}b)_i}{d_i}
\]
The leaving variable \(x_r\) corresponds to the index achieving this minimum.
Unbounded Solution
If \(d_i \leq 0\) for all \(i\), then the LP problem is unbounded.
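A sketch of the ratio test, including the unboundedness check (with \(d = B^{-1}A_s\) as defined above):

```python
import numpy as np

def ratio_test(x_B, d, tol=1e-9):
    """Minimum ratio test. Returns (theta, r): step size and leaving row index."""
    blocking = np.flatnonzero(d > tol)   # only rows with d_i > 0 limit the step
    if blocking.size == 0:
        raise ValueError("LP is unbounded: the entering variable is never blocked")
    ratios = x_B[blocking] / d[blocking]
    k = int(np.argmin(ratios))
    return float(ratios[k]), int(blocking[k])
```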
Step 5: Basis Update
New Basic Solution
Update the basic solution:

\[
x_s = \theta, \qquad x_B \leftarrow B^{-1}b - \theta d
\]
Basis Matrix Update
Replace column \(r\) of \(B\) with column \(A_s\):

\[
B_{\text{new}} = \begin{bmatrix} A_{B_1} & \cdots & A_{B_{r-1}} & A_s & A_{B_{r+1}} & \cdots & A_{B_m} \end{bmatrix}
\]
Basis Inverse Updates
Efficient Basis Inverse Update
Challenge
Computing \(B^{-1}\) from scratch each iteration costs \(O(m^3)\) operations. Solution: use rank-one (product-form) update formulas, which cost only \(O(m^2)\).
Elementary Matrix Approach
Define the elementary (eta) matrix \(E\):

\[
E = I - \frac{(d - e_r)\, e_r^T}{d_r}
\]

where \(e_r\) is the \(r\)-th unit vector and \(d = B^{-1} A_s\).
Update Formula

\[
B_{\text{new}}^{-1} = E \, B^{-1}
\]
Elementary Matrix Construction
Matrix \(E\) Structure

\[
E = \begin{bmatrix}
1 & \cdots & -d_1/d_r & \cdots & 0 \\
\vdots & \ddots & \vdots & & \vdots \\
0 & \cdots & 1/d_r & \cdots & 0 \\
\vdots & & \vdots & \ddots & \vdots \\
0 & \cdots & -d_m/d_r & \cdots & 1
\end{bmatrix}
\]

Only column \(r\) (the eta vector) differs from the identity matrix.
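A sketch of this product-form update; it forms \(E\) explicitly for clarity, although real solvers store only the eta vector (column \(r\) of \(E\)):

```python
import numpy as np

def update_basis_inverse(B_inv, d, r):
    """Product-form update B_new^{-1} = E B^{-1}, E = I - (d - e_r) e_r^T / d_r."""
    m = B_inv.shape[0]
    E = np.eye(m)
    E[:, r] = -d / d[r]    # column r is the eta vector...
    E[r, r] = 1.0 / d[r]   # ...with 1/d_r in the pivot position
    return E @ B_inv
```

In practice the matrix product is replaced by a rank-one correction of the rows of \(B^{-1}\), which is where the \(O(m^2)\) update cost quoted below comes from.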
Computational Complexity
Operation Counts per Iteration
- Reduced cost calculation: \(O(mn)\) operations
- Direction vector: \(O(m^2)\) operations
- Minimum ratio test: \(O(m)\) operations
- Basis inverse update: \(O(m^2)\) operations
Comparison with Standard Simplex
- Standard: \(O(mn)\) per iteration (updates the full tableau)
- Revised: \(O(m^2 + mn)\) worst case per iteration, but the \(O(mn)\) pricing term can be reduced sharply by sparsity and partial pricing
- Advantage: Better when \(m \ll n\) (far more variables than constraints)
Numerical Example
Example Problem
LP Problem

\[
\begin{aligned}
\min \quad & 2x_1 + 3x_2 \\
\text{s.t.} \quad & x_1 + 2x_2 \le 8 \\
& 2x_1 + x_2 \le 10 \\
& x_1, x_2 \ge 0
\end{aligned}
\]

Matrix Form

Adding slack variables \(x_3, x_4\):

\[
A = \begin{bmatrix} 1 & 2 & 1 & 0 \\ 2 & 1 & 0 & 1 \end{bmatrix}, \qquad
b = \begin{bmatrix} 8 \\ 10 \end{bmatrix}, \qquad
c = \begin{bmatrix} 2 & 3 & 0 & 0 \end{bmatrix}^T
\]
Initial Solution
Initial Basis
Choose \(B = \{3, 4\}\) (slack variables):

\[
B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I, \qquad B^{-1} = I
\]

Initial BFS

\[
x_B = B^{-1} b = \begin{bmatrix} 8 \\ 10 \end{bmatrix}, \qquad
x = (0,\, 0,\, 8,\, 10), \qquad z = 0
\]
Iteration 1: Optimality Test

Reduced Costs

\(c_B = [0, 0]\) for basis \(\{x_3, x_4\}\), so \(c_B^T B^{-1} = [0, 0]\).

For \(x_1\): \(\bar{c}_1 = 2 - [0, 0] \begin{bmatrix} 1 \\ 2 \end{bmatrix} = 2 > 0\)

For \(x_2\): \(\bar{c}_2 = 3 - [0, 0] \begin{bmatrix} 2 \\ 1 \end{bmatrix} = 3 > 0\)

Conclusion

Since we are minimizing and all reduced costs are non-negative, the current solution \((0, 0, 8, 10)\) with \(z = 0\) is already optimal.
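A quick NumPy check of this iteration, using the data reconstructed above:

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([8.0, 10.0])
c = np.array([2.0, 3.0, 0.0, 0.0])

basis = [2, 3]                      # x_3, x_4 as 0-indexed columns
B_inv = np.linalg.inv(A[:, basis])  # identity here, since B = I

x_B = B_inv @ b                     # [8., 10.]
y = c[basis] @ B_inv                # [0., 0.]
print(c - y @ A)                    # [2. 3. 0. 0.]: all >= 0, so optimal
```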
Advanced Topics
Degeneracy Handling
Degeneracy Issues
- Ties in the ratio test: multiple variables are eligible to leave the basis simultaneously
- Can lead to cycling in the simplex method
- Near-singular, ill-conditioned basis matrices can arise
Resolution Strategies
- Lexicographic pivoting rule
- Bland’s rule: Choose the smallest-index variable
- Perturbation methods: Add small random values to the RHS
Numerical Stability
Sources of Instability
- Round-off errors accumulate in \(B^{-1}\) over many updates
- Ill-conditioned basis matrices
- Very small pivot elements (the update divides by \(d_r\))
Stability Measures
- Periodic refactorization: Recompute \(B^{-1}\) from scratch
- Condition number monitoring
- Pivot element thresholds
- LU decomposition updates
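A simple guard that combines two of these measures (thresholds here are assumptions, not recommended values): refactorize after a fixed number of eta updates, or sooner if the factored solution has drifted:

```python
import numpy as np

def maybe_refactorize(B, B_inv, b, x_B, n_updates, max_updates=50, drift_tol=1e-8):
    """Recompute B^{-1} from scratch when updates accumulate or accuracy degrades."""
    drift = np.linalg.norm(B @ x_B - b)   # residual of the factored solve
    if n_updates >= max_updates or drift > drift_tol:
        return np.linalg.inv(B), 0        # fresh inverse, reset the update counter
    return B_inv, n_updates
```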
Sparse Matrix Techniques
Sparsity Exploitation
- Real-world constraint matrices are often sparse
- Standard dense matrix operations are inefficient
- Specialized sparse matrix data structures are needed
Sparse Techniques
- Compressed storage formats
- Sparse LU factorization
- Fill-in minimization strategies
- Iterative refinement
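With SciPy, the basis can be held as a sparse LU factorization instead of an explicit inverse; `scipy.sparse.linalg.splu` computes a fill-reducing LU, and the two solves needed per iteration become cheap triangular solves (a sketch; the surrounding names are assumptions):

```python
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

def factor_basis(A_sparse, basis):
    """Sparse LU factorization of the basis matrix B = A[:, basis]."""
    B = csc_matrix(A_sparse[:, basis])
    return splu(B)   # SuperLU object with a fill-reducing column ordering

# Per-iteration solves then replace any use of an explicit B^{-1}:
#   x_B = lu.solve(b)                # B x_B = b
#   y   = lu.solve(c_B, trans="T")   # B^T y = c_B, for pricing
```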
Implementation Considerations
Data Structures
Essential Data Structures
- Basis inverse \(B^{-1}\): Dense or LU-factored form
- Basis index set: Current basic variable indices
- Non-basis index set: Non-basic variable indices
- Constraint matrix \(A\): Sparse format preferred
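A minimal container for this state might look as follows (a sketch; field names are assumptions, and a production solver would keep LU factors and sparse structures instead of dense arrays):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SimplexState:
    """Working state of a toy revised simplex solver."""
    A: np.ndarray       # constraint matrix (sparse format preferred in practice)
    b: np.ndarray       # right-hand side vector
    c: np.ndarray       # cost coefficient vector
    basis: list         # indices of the current basic variables
    B_inv: np.ndarray   # basis inverse (LU factors in a production solver)

    def nonbasic(self):
        """Non-basic indices, derived from the basis rather than stored separately."""
        in_basis = set(self.basis)
        return [j for j in range(self.A.shape[1]) if j not in in_basis]
```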
Memory Management
- Dynamic memory allocation for variable-size problems
- Efficient sparse matrix representations
- Memory pool management for frequent allocations
Computational Efficiency
Performance Optimizations
- BLAS/LAPACK: Use optimized linear algebra libraries
- Cache-friendly algorithms: Consider the memory hierarchy
- Parallel processing: Matrix operations can be parallelized
- Preconditioning: Improve numerical properties
Benchmark Results
- Revised simplex is typically 2-5x faster than standard
- Memory usage reduced by 50-80% for sparse problems
- Better scaling with problem size
Comparison and Applications
Method Comparison
| Aspect | Standard | Revised |
|---|---|---|
| Memory usage | \(O(mn)\) | \(O(m^2)\) |
| Time per iteration | \(O(mn)\) | \(O(m^2 + mn)\) |
| Numerical stability | Moderate | Better |
| Implementation complexity | Simple | Moderate |
| Large-scale suitability | Poor | Good |
| Sparse matrix handling | Poor | Excellent |
When to Use Revised Simplex
- Large-scale problems (\(m \ll n\))
- Sparse constraint matrices
- Memory-constrained environments
Real-World Applications
Industries Using Revised Simplex
- Transportation: Route optimization, fleet management
- Manufacturing: Production planning, resource allocation
- Finance: Portfolio optimization, risk management
- Energy: Power generation scheduling, grid optimization
- Telecommunications: Network flow, capacity planning
Problem Characteristics
- Thousands to millions of variables
- Hundreds to thousands of constraints
- High sparsity (1-5% non-zero elements)
- Real-time or near-real-time requirements
Conclusion
Key Takeaways
Advantages of Revised Simplex
- Memory efficient: Only stores the \(m \times m\) basis inverse
- Computationally efficient: Better for sparse, large problems
- Numerically stable: Less accumulation of round-off errors
- Scalable: Handles industrial-size problems effectively
Mathematical Foundation
- Based on the matrix partition \(A = [B \quad N]\)
- Uses rank-one updates for the basis inverse
- Exploits sparsity through specialized data structures
Further Study
Advanced Topics to Explore
- Dual revised simplex method
- Primal-dual interior point methods
- Decomposition methods (Dantzig-Wolfe, Benders)
- Network simplex algorithms
- Parallel simplex implementations
Software Tools
- Open source: GLPK, CLP, SoPlex
- Commercial: CPLEX, Gurobi, XPRESS
- Research: Implementations in MATLAB, Python (SciPy)
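For a quick start, SciPy’s `scipy.optimize.linprog` solves the worked example from these notes in a few lines (the HiGHS backend includes a dual revised simplex, among other methods):

```python
from scipy.optimize import linprog

# min 2*x1 + 3*x2   s.t.   x1 + 2*x2 <= 8,   2*x1 + x2 <= 10,   x1, x2 >= 0
result = linprog(c=[2, 3],
                 A_ub=[[1, 2], [2, 1]],
                 b_ub=[8, 10],
                 bounds=[(0, None), (0, None)],
                 method="highs")
print(result.x, result.fun)   # [0. 0.] 0.0 -- matches the worked example
```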
The revised simplex method is the foundation of modern large-scale linear programming solvers.