Consider the set \(S\) of points \((x,y)\in\mathbb{R}^2\) which minimize the real-valued function \[f(x,y) = (x+y-1)^2 + (x+y)^2.\] Which of the following statements is true about the set \(S\)?
The number of elements in the set \(S\) is finite and more than one.
The number of elements in the set \(S\) is infinite.
The set \(S\) is empty.
The number of elements in the set \(S\) is exactly one.
We find the stationary points by setting \(\nabla f = \mathbf{0}\). Let \(u=x+y\). Then \[f(x,y)= (u-1)^2 + u^2 = 2u^2 -2u +1.\] Differentiate with respect to \(x\) and \(y\) (noting \(u_x = u_y = 1\)): \[\frac{\partial f}{\partial x} = \frac{df}{du}\cdot\frac{\partial u}{\partial x} = 4u-2, \qquad \frac{\partial f}{\partial y} = \frac{df}{du}\cdot\frac{\partial u}{\partial y} = 4u-2.\] Setting the gradient equal to zero gives \[4u-2 = 0 \quad\Rightarrow\quad u = \frac{1}{2}.\] Thus all points \((x,y)\) satisfying \[x+y = \frac{1}{2}\] are stationary points. Since \(f\) is a convex quadratic in \(u\), every stationary point is a global minimizer, each attaining the same minimum value \(f = 2\left(\tfrac{1}{2}\right)^2 - 2\left(\tfrac{1}{2}\right) + 1 = \tfrac{1}{2}\). These points form a straight line in the \((x,y)\)-plane, so there are infinitely many points that minimize \(f\). Therefore the set \(S\) has infinitely many elements. Answer: (b).
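As a quick numerical sanity check (a sketch, not part of the original solution), every point on the line \(x+y=\tfrac{1}{2}\) should give the same value \(f=\tfrac{1}{2}\), while points off the line give larger values:

```python
def f(x, y):
    return (x + y - 1) ** 2 + (x + y) ** 2

# Sample several points on the line x + y = 1/2.
on_line = [(t, 0.5 - t) for t in (-3.0, -1.0, 0.0, 0.25, 2.0)]
values = [f(x, y) for x, y in on_line]
print(values)       # all equal 0.5

# A point off the line gives a strictly larger value.
print(f(0.0, 0.0))  # 1.0 > 0.5
```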
Let \(v_{1}\) and \(v_{2}\) be the two eigenvectors corresponding to distinct eigenvalues of a \(3\times3\) real symmetric matrix. Which one of the following statements is true?
\(v_{1}^{T}v_{2}\ne0\)
\(v_{1}^{T}v_{2}=0\)
\(v_{1}+v_{2}=0\)
\(v_{1}-v_{2}=0\)
Eigenvectors of a real symmetric matrix corresponding to distinct eigenvalues are always orthogonal. The orthogonality of \(v_{1}\) and \(v_{2}\) is expressed as their dot product (for column vectors, the matrix product \(v_{1}^{T}v_{2}\)) being zero: \[v_{1}^{T}v_{2}=0\] Therefore, the statement \(v_{1}^{T}v_{2}=0\) is true. Answer: (b).
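A concrete illustration in Python (the matrix below is a hypothetical example, not taken from the question): for a symmetric matrix, \((1,1,0)\) and \((1,-1,0)\) are eigenvectors for the distinct eigenvalues 3 and 1, and their dot product is zero.

```python
A = [[2, 1, 0],
     [1, 2, 0],
     [0, 0, 3]]  # real symmetric example matrix (assumed for illustration)

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

v1, lam1 = [1, 1, 0], 3   # A v1 = 3 v1
v2, lam2 = [1, -1, 0], 1  # A v2 = 1 v2

assert matvec(A, v1) == [lam1 * c for c in v1]
assert matvec(A, v2) == [lam2 * c for c in v2]
print(dot(v1, v2))  # 0 -- eigenvectors for distinct eigenvalues are orthogonal
```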
Let \[A=\begin{bmatrix}1&1&1\\ -1&-1&-1\\ 0&1&-1\end{bmatrix}, \quad\text{and}\quad b=\begin{bmatrix}1/3\\ -1/3\\ 0\end{bmatrix}.\] Then, the system of linear equations \(A\mathbf{x}=\mathbf{b}\) has
a unique solution.
infinitely many solutions.
a finite number of solutions.
no solution.
We consider the augmented matrix \((A|\mathbf{b})\) and reduce it to row echelon form. \[(A|\mathbf{b}) = \begin{bmatrix} 1 & 1 & 1 & | & 1/3 \\ -1 & -1 & -1 & | & -1/3 \\ 0 & 1 & -1 & | & 0 \end{bmatrix}\] Apply the row operation \(R_{2} \to R_{2} + R_{1}\): \[(A|\mathbf{b}) \sim \begin{bmatrix} 1 & 1 & 1 & | & 1/3 \\ -1+1 & -1+1 & -1+1 & | & -1/3+1/3 \\ 0 & 1 & -1 & | & 0 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & | & 1/3 \\ 0 & 0 & 0 & | & 0 \\ 0 & 1 & -1 & | & 0 \end{bmatrix}\] Interchange rows \(R_{2}\) and \(R_{3}\) (\(R_{2} \leftrightarrow R_{3}\)): \[(A|\mathbf{b}) \sim \begin{bmatrix} 1 & 1 & 1 & | & 1/3 \\ 0 & 1 & -1 & | & 0 \\ 0 & 0 & 0 & | & 0 \end{bmatrix}\] The rank of the coefficient matrix \(A\), denoted by \(\rho(A)\), is the number of non-zero rows in the row echelon form of \(A\), which is 2. The rank of the augmented matrix \((A|\mathbf{b})\), denoted by \(\rho(A|\mathbf{b})\), is also 2. The number of variables \(n\) is 3. Since \(\rho(A) = \rho(A|\mathbf{b}) = 2\) and this rank is less than the number of variables \(n=3\), the system \(A\mathbf{x}=\mathbf{b}\) has infinitely many solutions. Answer: (b).
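The row reduction above can be replayed with exact fractions (a verification sketch, mirroring the same two row operations):

```python
from fractions import Fraction as F

# Augmented matrix (A|b).
aug = [[F(1),  F(1),  F(1),  F(1, 3)],
       [F(-1), F(-1), F(-1), F(-1, 3)],
       [F(0),  F(1),  F(-1), F(0)]]

# R2 -> R2 + R1
aug[1] = [a + b for a, b in zip(aug[1], aug[0])]
# R2 <-> R3
aug[1], aug[2] = aug[2], aug[1]

rank_augmented = len([row for row in aug if any(row)])      # rank of (A|b)
rank_A = len([row for row in aug if any(row[:3])])          # rank of A
print(rank_A, rank_augmented)  # 2 2 -> consistent, and rank < 3 variables
```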
Let \[P=\begin{bmatrix}2&1&0\\ -1&0&0\\ 0&0&1\end{bmatrix},\] and let \(I\) be the identity matrix. Then \(P^{2}\) is equal to
\(2P-I\)
\(P\)
\(I\)
\(P+I\)
First, calculate \(P^2\): \[P^{2}=P\cdot P = \begin{bmatrix}2&1&0\\ -1&0&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}2&1&0\\ -1&0&0\\ 0&0&1\end{bmatrix} = \begin{bmatrix} (2)(2)+(1)(-1)+(0)(0) & (2)(1)+(1)(0)+(0)(0) & (2)(0)+(1)(0)+(0)(1) \\ (-1)(2)+(0)(-1)+(0)(0) & (-1)(1)+(0)(0)+(0)(0) & (-1)(0)+(0)(0)+(0)(1) \\ (0)(2)+(0)(-1)+(1)(0) & (0)(1)+(0)(0)+(1)(0) & (0)(0)+(0)(0)+(1)(1) \end{bmatrix}\] \[P^{2} = \begin{bmatrix}3&2&0\\ -2&-1&0\\ 0&0&1\end{bmatrix}\] Next, check the options against the calculated \(P^2\). The identity matrix is \(I=\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix}\). Check option (a): \(2P-I\) \[2P-I = 2\begin{bmatrix}2&1&0\\ -1&0&0\\ 0&0&1\end{bmatrix} - \begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix} = \begin{bmatrix}4&2&0\\ -2&0&0\\ 0&0&2\end{bmatrix} - \begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix}\] \[2P-I = \begin{bmatrix}4-1&2-0&0-0\\ -2-0&0-1&0-0\\ 0-0&0-0&2-1\end{bmatrix} = \begin{bmatrix}3&2&0\\ -2&-1&0\\ 0&0&1\end{bmatrix}\] Since \(P^2 = 2P-I\), option (a) is the correct answer. (For completeness, check option (d): \(P+I = \begin{bmatrix}2+1&1&0\\ -1&0+1&0\\ 0&0&1+1\end{bmatrix} = \begin{bmatrix}3&1&0\\ -1&1&0\\ 0&0&2\end{bmatrix}\), which is not \(P^2\).) Answer: (a).
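The identity \(P^2 = 2P - I\) can also be checked mechanically (a short verification sketch):

```python
P = [[2, 1, 0],
     [-1, 0, 0],
     [0, 0, 1]]
I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

P2 = matmul(P, P)
two_P_minus_I = [[2 * P[i][j] - I[i][j] for j in range(3)] for i in range(3)]

print(P2)                    # [[3, 2, 0], [-2, -1, 0], [0, 0, 1]]
print(P2 == two_P_minus_I)   # True
```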
Consider discrete random variables \(X\) and \(Y\) with probabilities as follows: \[\begin{aligned} P(X=0 \text{ and } Y=0)&=\frac{1}{4} \\ P(X=1 \text{ and } Y=0)&=\frac{1}{8} \\ P(X=0 \text{ and } Y=1)&=\frac{1}{2} \\ P(X=1 \text{ and } Y=1)&=\frac{1}{8} \end{aligned}\] Given \(X=1\), the expected value of \(Y\) is
\(\frac{1}{4}\)
\(\frac{1}{2}\)
\(\frac{1}{8}\)
\(\frac{1}{3}\)
We are asked to find the conditional expected value \(E(Y|X=1)\). The formula is: \[E(Y|X=1) = \sum_{y} y \cdot P(Y=y | X=1)\] Since \(Y\) can only take values \(0\) and \(1\): \[E(Y|X=1) = 0 \cdot P(Y=0 | X=1) + 1 \cdot P(Y=1 | X=1) = P(Y=1 | X=1)\] First, find the marginal probability \(P(X=1)\): \[\begin{aligned} P(X=1) &= P(X=1, Y=0) + P(X=1, Y=1) \\ &= \frac{1}{8} + \frac{1}{8} = \frac{2}{8} = \frac{1}{4} \end{aligned}\] Next, use the definition of conditional probability: \[P(Y=1 | X=1) = \frac{P(X=1 \cap Y=1)}{P(X=1)}\] Substituting the known joint and marginal probabilities: \[P(Y=1 | X=1) = \frac{P(X=1, Y=1)}{P(X=1)} = \frac{\frac{1}{8}}{\frac{1}{4}} = \frac{1}{8} \cdot 4 = \frac{4}{8} = \frac{1}{2}\] Therefore, the expected value is: \[E(Y|X=1) = P(Y=1 | X=1) = \frac{1}{2}\] Answer: (b).
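The same conditional expectation can be recomputed directly from the joint table (a sketch using exact fractions):

```python
from fractions import Fraction as F

# Joint distribution P(X=x, Y=y) from the question.
joint = {(0, 0): F(1, 4), (1, 0): F(1, 8),
         (0, 1): F(1, 2), (1, 1): F(1, 8)}

p_x1 = sum(p for (x, y), p in joint.items() if x == 1)              # P(X=1) = 1/4
e_y_given_x1 = sum(y * p for (x, y), p in joint.items() if x == 1) / p_x1
print(e_y_given_x1)  # 1/2
```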
Let \(X\) and \(Y\) be continuous random variables with probability density functions \(P_{X}(x)\) and \(P_{Y}(y)\), respectively. Further, let \(Y=X^{2}\) and \[P_{X}(x)=\begin{cases}1,&x \in (0,1]\\ 0,&\text{otherwise}\end{cases}\] Which one of the following options is correct?
\(P_{Y}(y)=\begin{cases}\frac{1}{2\sqrt{y}},&y\in(0,1]\\ 0,&\text{otherwise}\end{cases}\)
\(P_{Y}(y)=\begin{cases}1,&y\in(0,1]\\ 0,&\text{otherwise}\end{cases}\)
\(P_{Y}(y)=\begin{cases}1.5\sqrt{y},&y\in(0,1]\\ 0,&\text{otherwise}\end{cases}\)
\(P_{Y}(y)=\begin{cases}2y,&y\in(0,1]\\ 0,&\text{otherwise}\end{cases}\)
Given \(Y = g(X) = X^2\). The range of \(X\) is \((0, 1]\), so the range of \(Y\) is \((0, 1]\) as well. On \((0, 1]\) the transformation \(Y=X^2\) is one-to-one, with inverse \(X = h(Y) = \sqrt{Y}\). The probability density function (p.d.f.) of \(Y\) is given by the change-of-variables formula: \[P_{Y}(y) = P_{X}(h(y)) \cdot \left|\frac{dh}{dy}\right|\] First, compute the derivative of the inverse: \[\frac{dh}{dy} = \frac{d}{dy}(y^{1/2}) = \frac{1}{2}y^{-1/2} = \frac{1}{2\sqrt{y}}\] Next, evaluate \(P_{X}(h(y))\): for \(y \in (0, 1]\) we have \(x = \sqrt{y} \in (0, 1]\), where \(P_{X}(x) = 1\), so \(P_{X}(h(y)) = 1\). Applying the formula, for \(y \in (0, 1]\): \[P_{Y}(y) = 1 \cdot \left|\frac{1}{2\sqrt{y}}\right| = \frac{1}{2\sqrt{y}}\] and \(P_{Y}(y) = 0\) for \(y\) outside \((0, 1]\). Therefore, the p.d.f. of \(Y\) is: \[P_{Y}(y)=\begin{cases}\frac{1}{2\sqrt{y}},&y\in(0,1]\\ 0,&\text{otherwise}\end{cases}\] Answer: (a).
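A Monte Carlo sanity check of the derived density (a sketch with a fixed seed assumed): integrating \(\tfrac{1}{2\sqrt{y}}\) gives the CDF \(P(Y \le y) = \sqrt{y}\), so the empirical CDF of simulated \(Y = X^2\) samples should match \(\sqrt{y}\).

```python
import random, math

random.seed(0)
N = 200_000
# X ~ Uniform(0,1), Y = X^2.
ys = [random.random() ** 2 for _ in range(N)]

for y in (0.04, 0.25, 0.81):
    empirical = sum(v <= y for v in ys) / N
    print(y, round(empirical, 3), round(math.sqrt(y), 3))  # empirical vs sqrt(y)
```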
Consider the ordinary differential equations \[\dot{x}_{1}(t)=2x_{2}(t) \quad \text{and} \quad \dot{x}_{2}(t)=r(t)\] with initial conditions \(x_{1}(0)=1\) and \(x_{2}(0)=0\). If \(r(t)=\begin{cases}1,&t\ge0\\ 0,&t<0\end{cases}\) then at \(t=1\), \(x_{1}(t)=\) (round off to the nearest integer).
The system of differential equations is: \[\begin{aligned} \frac{dx_{1}}{dt} &= 2x_{2} \\ \frac{dx_{2}}{dt} &= r(t) \end{aligned}\] Since we are interested in \(t=1\) and \(r(t)=1\) for \(t \ge 0\), we take \(r(t)=1\) throughout. The initial conditions are \(x_{1}(0)=1\) and \(x_{2}(0)=0\).
Step 1: Solve for \(x_{2}(t)\). From the second equation, for \(t \ge 0\): \[\frac{dx_{2}}{dt} = 1\] Integrating with respect to \(t\): \[x_{2}(t) = t + c\] Using the initial condition \(x_{2}(0)=0\): \[0 = 0 + c \quad\Rightarrow\quad c = 0\] So, \(x_{2}(t) = t\) for \(t \ge 0\).
Step 2: Solve for \(x_{1}(t)\). Substitute \(x_{2}(t) = t\) into the first equation: \[\frac{dx_{1}}{dt} = 2x_{2} = 2t\] Integrating with respect to \(t\): \[x_{1}(t) = \int 2t \, dt = t^{2} + K\] Using the initial condition \(x_{1}(0)=1\): \[1 = (0)^{2} + K \quad\Rightarrow\quad K = 1\] So, \(x_{1}(t) = t^{2} + 1\).
Step 3: Evaluate \(x_{1}(t)\) at \(t=1\) \[x_{1}(1) = (1)^{2} + 1 = 1 + 1 = 2\] Rounding off to the nearest integer gives 2. Answer: 2.
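The closed-form answer can be cross-checked with a simple forward-Euler integration of the system (a numerical sketch; the step size is an assumption):

```python
dt = 1e-4
x1, x2 = 1.0, 0.0   # initial conditions x1(0)=1, x2(0)=0
t = 0.0
while t < 1.0 - 1e-12:
    x1 += dt * 2 * x2   # dx1/dt = 2*x2
    x2 += dt * 1.0      # dx2/dt = r(t) = 1 for t >= 0
    t += dt

print(round(x1, 3))  # approximately 2.0, matching x1(1) = 1^2 + 1
```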