A set \( C \subseteq \mathbb{R}^n \) is convex if for any \( \mathbf{x}, \mathbf{y} \in C \), the line segment joining them lies in \( C \):
\[ \theta \mathbf{x} + (1-\theta) \mathbf{y} \in C \quad \forall \theta \in [0,1]. \]
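The definition can be checked numerically. A minimal sketch (not from the source) using a hypothetical example, the closed unit disk in \( \mathbb{R}^2 \), which is a convex set:

```python
# Check the convex-set definition for the closed unit disk
# C = { x in R^2 : ||x|| <= 1 }.
import math

def in_disk(p):
    # small tolerance for floating-point roundoff
    return math.hypot(p[0], p[1]) <= 1.0 + 1e-12

x = (0.6, -0.8)   # a point of C (on the boundary)
y = (-1.0, 0.0)   # another point of C
# every convex combination theta*x + (1-theta)*y must stay in C
for k in range(11):
    theta = k / 10
    z = (theta * x[0] + (1 - theta) * y[0],
         theta * x[1] + (1 - theta) * y[1])
    assert in_disk(z)
print("all convex combinations lie in the disk")
```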
A function \( f: C \to \mathbb{R} \) is convex if for all \( \mathbf{x}, \mathbf{y} \in C \) and \( \theta \in [0,1] \):
\[ f(\theta \mathbf{x} + (1 - \theta)\mathbf{y}) \leq \theta f(\mathbf{x}) + (1 - \theta)f(\mathbf{y}). \]
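The inequality can be verified on samples. A sketch for the known convex function \( f(x) = x^2 \) (my choice of example, not the source's):

```python
# Verify the convexity inequality
#   f(theta*x + (1-theta)*y) <= theta*f(x) + (1-theta)*f(y)
# for f(x) = x**2 on a grid of points and mixing weights.
def f(x):
    return x * x

for x in [-2.0, -0.5, 1.0, 3.0]:
    for y in [-1.0, 0.0, 2.5]:
        for k in range(11):
            theta = k / 10
            lhs = f(theta * x + (1 - theta) * y)
            rhs = theta * f(x) + (1 - theta) * f(y)
            assert lhs <= rhs + 1e-12   # tolerance for roundoff
print("f(x) = x^2 satisfies the convexity inequality on all samples")
```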
Key Insight: For a convex function, every local minimum is a global minimum, which greatly simplifies optimization.
A differentiable \( f \) is convex if and only if:
\[ f(\mathbf{y}) \geq f(\mathbf{x}) + \nabla f(\mathbf{x})^T (\mathbf{y} - \mathbf{x}) \quad \forall \mathbf{x}, \mathbf{y} \in C. \]
(Function lies above its tangent hyperplane.)
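A numerical sketch of the first-order condition for \( f(x) = x^2 \), whose derivative is \( 2x \) (a hypothetical example, not from the source):

```python
# Check the first-order condition f(y) >= f(x) + f'(x)*(y - x)
# for the convex function f(x) = x**2 with derivative f'(x) = 2x.
def f(x):
    return x * x

def df(x):
    return 2 * x

pts = [-3.0, -1.0, 0.0, 0.5, 2.0]
for x in pts:
    for y in pts:
        # the graph of f lies above the tangent line at every x
        assert f(y) >= f(x) + df(x) * (y - x) - 1e-12
print("x^2 lies above all of its tangent lines")
```

Here the gap \( f(y) - f(x) - f'(x)(y - x) = (y - x)^2 \geq 0 \), so the check always passes.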
A twice-differentiable \( f \) is convex if and only if its Hessian is positive semidefinite:
\[ \nabla^2 f(\mathbf{x}) \succeq 0 \quad \forall \mathbf{x} \in C. \]
For scalar functions: \( f''(x) \geq 0 \).
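A sketch of the second-order test for a hypothetical example, \( f(x, y) = x^2 + xy + y^2 \), whose (constant) Hessian is \( \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \):

```python
# For f(x, y) = x**2 + x*y + y**2 the Hessian is [[2, 1], [1, 2]].
# A symmetric 2x2 matrix is positive semidefinite iff both diagonal
# entries are >= 0 and its determinant is >= 0.
H = [[2.0, 1.0], [1.0, 2.0]]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
is_psd = H[0][0] >= 0 and H[1][1] >= 0 and det >= 0
assert is_psd
print("Hessian of x^2 + xy + y^2 is positive semidefinite -> f is convex")
```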
\[ \begin{aligned} \text{Minimize} \quad & f(\mathbf{x}) \\ \text{subject to} \quad & g_i(\mathbf{x}) \leq 0 \quad \text{(convex constraints)} \\ & h_j(\mathbf{x}) = 0 \quad \text{(affine constraints)}, \end{aligned} \]
where \( f \) and the \( g_i \) are convex and the \( h_j \) are affine.
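A minimal sketch (not the source's method) that solves a one-dimensional instance of this standard form by projected gradient descent:

```python
# Solve  minimize (x - 2)**2  subject to  x - 1 <= 0.
# The unconstrained minimum x = 2 is infeasible, so the solution
# sits on the constraint boundary: x* = 1.
def grad(x):
    return 2 * (x - 2)

def project(x):
    # projection onto the feasible set {x : x <= 1}
    return min(x, 1.0)

x = 0.0
for _ in range(200):
    x = project(x - 0.1 * grad(x))
print(round(x, 6))  # -> 1.0
```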
\( f(x) = x^3 \) is not convex: its second derivative \( f''(x) = 6x \) is negative for \( x < 0 \).
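A concrete violation of the convexity inequality for \( x^3 \) (my choice of witness points):

```python
# Exhibit a violation of f(theta*x + (1-theta)*y) <= theta*f(x) + (1-theta)*f(y)
# for f(t) = t**3, taking x = -2, y = 0, theta = 0.5.
f = lambda t: t ** 3
x, y, theta = -2.0, 0.0, 0.5
lhs = f(theta * x + (1 - theta) * y)     # f(-1) = -1
rhs = theta * f(x) + (1 - theta) * f(y)  # 0.5 * (-8) + 0.5 * 0 = -4
print(lhs <= rhs)  # -> False, so x^3 is not convex
```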
For a differentiable convex \( f \), any \( \mathbf{x}^* \) with \( \nabla f(\mathbf{x}^*) = \mathbf{0} \) is a global minimum.
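This can be illustrated with gradient descent on a convex function: every starting point leads to the same stationary point, which is the global minimum. A sketch with the hypothetical example \( f(x, y) = (x - 1)^2 + (y + 2)^2 \):

```python
# Gradient descent on the convex f(x, y) = (x - 1)**2 + (y + 2)**2.
# Its unique stationary point (1, -2) is the global minimum, so every
# starting point converges there.
def grad(p):
    x, y = p
    return (2 * (x - 1), 2 * (y + 2))

for start in [(5.0, 5.0), (-4.0, 0.0), (0.0, -9.0)]:
    x, y = start
    for _ in range(300):
        gx, gy = grad((x, y))
        x, y = x - 0.1 * gx, y - 0.1 * gy
    assert abs(x - 1) < 1e-6 and abs(y + 2) < 1e-6
print("all starts converge to the global minimum (1, -2)")
```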
There exists a strictly feasible point \( \mathbf{x} \), i.e. \( g_i(\mathbf{x}) < 0 \) for all inequality constraints (Slater's condition).
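A hypothetical one-variable instance where this strict-feasibility requirement holds: the problem
\[ \text{minimize } x^2 \quad \text{subject to } x - 1 \leq 0 \]
has the strictly feasible point \( x = 0 \), since \( g_1(0) = -1 < 0 \).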