2.7 Linear Mappings

Linear mappings are functions between vector spaces that preserve vector addition and scalar multiplication. That is, for vector spaces \(V\) and \(W\), a mapping \(\Phi : V \to W\) is linear if: \[ \Phi(x + y) = \Phi(x) + \Phi(y), \quad \Phi(\lambda x) = \lambda \Phi(x) \] for all \(x, y \in V\) and scalars \(\lambda \in \mathbb{R}\).

Definition 2.31 A linear mapping (or linear transformation) is a function \(\Phi : V \to W\) satisfying: \[ \Phi(\lambda x + \psi y) = \lambda \Phi(x) + \psi \Phi(y) \] for all \(x, y \in V\) and scalars \(\lambda, \psi \in \mathbb{R}\).

Example 2.52 A linear mapping \(T: \mathbb{R}^2 \to \mathbb{R}^2\) can be represented by a matrix. Consider the matrix \[ \mathbf{A} = \begin{pmatrix} 2 & -1 \\ 3 & 4 \end{pmatrix}. \] Define the linear map \(\Phi(\mathbf{x}) = \mathbf{A}\mathbf{x}\).

Let
\[ \mathbf{x} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}. \] Then \[ \Phi(\mathbf{x}) = \mathbf{A}\mathbf{x} = \begin{pmatrix} 2 & -1 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 2(1) - 1(2) \\ 3(1) + 4(2) \end{pmatrix} = \begin{pmatrix} 0 \\ 11 \end{pmatrix}. \]

The map is linear because \[\Phi(\mathbf{v} + \mathbf{w}) = \mathbf{A}(\mathbf{v} + \mathbf{w}) = \mathbf{A}\mathbf{v} + \mathbf{A}\mathbf{w} = \Phi(\mathbf{v}) + \Phi(\mathbf{w}) \quad \text{and} \quad \Phi(\lambda \mathbf{v}) = \mathbf{A}(\lambda \mathbf{v}) = \lambda \mathbf{A}\mathbf{v} = \lambda \Phi(\mathbf{v}).\]
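
The linearity properties can also be checked numerically. Below is a minimal NumPy sketch of Example 2.52; the test vectors `v`, `w` and the scalar `lam` are arbitrary values chosen for illustration.

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [3.0, 4.0]])

def Phi(x):
    """The linear map Phi(x) = A x."""
    return A @ x

v = np.array([1.0, 2.0])
w = np.array([-3.0, 0.5])
lam = 2.5

print(Phi(v))                                     # [ 0. 11.], matching the worked example
print(np.allclose(Phi(v + w), Phi(v) + Phi(w)))   # additivity: True
print(np.allclose(Phi(lam * v), lam * Phi(v)))    # homogeneity: True
```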

Injective, surjective and bijective mappings are important in machine learning. Bijective maps are particularly useful because they are invertible.

Definition 2.32 Let \(\Phi : V \to W\) be a mapping. Then

  • \(\Phi\) is injective if \(\Phi(x) = \Phi(y) \Rightarrow x = y\).
  • \(\Phi\) is surjective if \(\Phi(V) = W\).
  • \(\Phi\) is bijective if it is both injective and surjective.

Example 2.53 The linear map \(\Phi(\mathbf{x}) = \mathbf{A}\mathbf{x}\), with \[ \mathbf{A} = \begin{pmatrix} 2 & -1 \\ 3 & 4 \end{pmatrix}, \] is bijective. One way to show this is to show that \(\mathbf{A}\) is invertible, since then \(\Phi^{-1}(\mathbf{y}) = \mathbf{A}^{-1}\mathbf{y}\) is an inverse function. Here \(\det(\mathbf{A}) = 2(4) - (-1)(3) = 11 \neq 0\), so \(\mathbf{A}\) is invertible and therefore \(\Phi\) is bijective.
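
A short NumPy sketch of this invertibility check (the test vector `y` reuses the output of Example 2.52 purely for illustration):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [3.0, 4.0]])

print(np.linalg.det(A))    # 11.0 (up to floating point), nonzero, so A is invertible

A_inv = np.linalg.inv(A)   # the inverse map is Phi^{-1}(y) = A^{-1} y
y = np.array([0.0, 11.0])  # Phi((1, 2)) from Example 2.52
print(A_inv @ y)           # recovers [1. 2.]
```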

Some other special cases of linear mappings include:

  • Isomorphism: Linear and bijective (\(\Phi : V \to W\))
  • Endomorphism: Linear map from \(V\) to itself (\(\Phi : V \to V\))
  • Automorphism: Linear and bijective endomorphism

Example 2.54 The function \[ f: \mathbb{R} \to \mathbb{R}, \qquad f(x) = 3x - 2, \] is a bijection from \(\mathbb{R}\) to \(\mathbb{R}\); its inverse is \(f^{-1}(y) = (y+2)/3\). Note, however, that \(f\) is affine rather than linear, since \(f(0) = -2 \neq 0\). The related map \(x \mapsto 3x\) is both linear and bijective, and hence an automorphism of \(\mathbb{R}\).

The function \[ g: \mathbb{R} \to \mathbb{R}, \qquad g(x) = x^2, \] is not a bijection; in fact, it is neither injective nor surjective. It is not injective because \[ 2 \neq -2 \quad \text{but} \quad g(2) = g(-2) = 4. \] It is not surjective because \(g(x) = x^2\) only produces nonnegative outputs, so negative numbers in the codomain (e.g. \(-5\)) cannot be written as \(x^2\) for any real \(x\). Thus \(g\) does not hit every value in the codomain.

The identity mapping \(\text{id}_V : V \to V\) is defined by \(\text{id}_V(x) = x\).

Theorem 2.4 Finite-dimensional vector spaces \(V\) and \(W\) are isomorphic if and only if: \[ \dim(V) = \dim(W) \]

This means that spaces of equal dimension are structurally the same, as they can be related through a linear bijective map.

2.7.1 Matrix Representation of Linear Mappings

Every linear mapping between finite-dimensional vector spaces can be represented by a matrix.

Lemma 2.8 Given bases \(\mathbf{B} = (\mathbf{b}_1, \dots, \mathbf{b}_n)\) for \(V\) and \(\mathbf{C} = (\mathbf{c}_1, \dots, \mathbf{c}_m)\) for \(W\), the transformation matrix \(\mathbf{A}_\Phi\) of \(\Phi : V \to W\) is defined by: \[ \Phi(\mathbf{b}_j) = \sum_{i=1}^m \alpha_{ij} \mathbf{c}_i \] where \(\mathbf{A}_\Phi(i, j) = \alpha_{ij}\).

If \(\hat{\mathbf{x}}\) is the coordinate vector of \(x \in V\) with respect to \(\mathbf{B}\) and \(\hat{\mathbf{y}}\) is the coordinate vector of \(\Phi(x) \in W\) with respect to \(\mathbf{C}\), then \[ \hat{\mathbf{y}} = \mathbf{A}_\Phi \hat{\mathbf{x}}. \]

Example 2.55 Let

  • \(V = \mathbb{R}^2\) with standard basis
    \[ \mathbf{B} = (\mathbf{b}_1, \mathbf{b}_2) = \left( \begin{bmatrix}1 \\ 0\end{bmatrix}, \begin{bmatrix}0 \\ 1\end{bmatrix} \right) \]

  • \(W = \mathbb{R}^2\) with a nonstandard basis
    \[ \mathbf{C} = (\mathbf{c}_1, \mathbf{c}_2) = \left( \begin{bmatrix}2 \\ 1\end{bmatrix}, \begin{bmatrix}1 \\ 3\end{bmatrix} \right). \]

Suppose the linear transformation \(\Phi : \mathbb{R}^2 \to \mathbb{R}^2\) is \[ \Phi(x,y) = (3x + y, \; x - 2y). \]

Since \(\mathbf{B}\) is the standard basis, the inputs are easy: \[\begin{align*} \mathbf{b}_1 = (1,0) \qquad & \Longrightarrow \qquad \Phi(\mathbf{b}_1) = \Phi(1,0) = (3, \; 1)\\ \mathbf{b}_2 = (0,1) \qquad & \Longrightarrow \qquad \Phi(\mathbf{b}_2) = \Phi(0,1) = (1, \; -2) \end{align*}\]

These must now be written in basis C.

We want to find scalars \(\alpha_{ij}\) such that \[ \Phi(\mathbf{b}_j) = \alpha_{1j}\mathbf{c}_1 + \alpha_{2j}\mathbf{c}_2. \]

For \(\Phi(\mathbf{b}_1) = (3,1)\), solve: \[ \alpha_{11}\begin{bmatrix}2\\1\end{bmatrix} + \alpha_{21}\begin{bmatrix}1\\3\end{bmatrix} = \begin{bmatrix}3\\1\end{bmatrix} \qquad \Longrightarrow \qquad \alpha_{11} = \frac{8}{5}, \qquad \alpha_{21} = -\frac{1}{5}. \]

For \(\Phi(\mathbf{b}_2) = (1,-2)\), we solve: \[ \alpha_{12}\begin{bmatrix}2\\1\end{bmatrix} + \alpha_{22}\begin{bmatrix}1\\3\end{bmatrix} = \begin{bmatrix}1\\ -2\end{bmatrix} \qquad \Longrightarrow \qquad \alpha_{12} = 1, \qquad \alpha_{22} = -1. \]

Therefore, the transformation matrix is (by the lemma): \[ A_\Phi(i,j) = \alpha_{ij} \qquad \Longrightarrow \qquad A_\Phi^{\mathbf{C}\leftarrow\mathbf{B}} = \begin{bmatrix} \frac{8}{5} & 1 \\ -\frac{1}{5} & -1 \end{bmatrix}. \]

This matrix converts B-coordinates of \(x\) into C-coordinates of \(\Phi(x)\):

\[ [\Phi(x)]_{\mathbf{C}} = A_\Phi^{\mathbf{C}\leftarrow\mathbf{B}} \,[x]_{\mathbf{B}}. \]

For an example with a specific vector, let \[ [x]_{\mathbf{B}} = \begin{bmatrix}2 \\ -1\end{bmatrix}. \]

Compute: \[ [\Phi(x)]_{\mathbf{C}} = \begin{bmatrix} \frac{8}{5} & 1 \\ -\frac{1}{5} & -1 \end{bmatrix} \begin{bmatrix}2\\ -1\end{bmatrix} = \begin{bmatrix} \frac{16}{5} - 1 \\[4pt] -\frac{2}{5} + 1 \end{bmatrix} = \begin{bmatrix} \frac{11}{5} \\[4pt] \frac{3}{5} \end{bmatrix}. \]

So the coordinate vector of \(\Phi(x)\) in basis C is: \[ [\Phi(x)]_{\mathbf{C}} = \begin{bmatrix}11/5 \\ 3/5\end{bmatrix}. \]

As a check: \([x]_{\mathbf{B}} = (2,-1)\) means \(x = (2,-1)\), so \(\Phi(x) = (3(2) - 1, \; 2 - 2(-1)) = (5,4)\), and indeed \(\tfrac{11}{5}\mathbf{c}_1 + \tfrac{3}{5}\mathbf{c}_2 = (5,4)\).

If we simply want to convert a vector from \(\mathbf{B}\) coordinates to \(\mathbf{C}\) coordinates, we use the same procedure with \(\Phi = \text{id}\): express each basis vector \(\mathbf{b}_j\) directly in terms of \(\mathbf{C}\) and collect the resulting coefficients as the columns of the change-of-basis matrix.
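
The construction in Example 2.55 can be reproduced numerically. The NumPy sketch below (names such as `A_Phi` and `x_B` are illustrative choices) solves for the \(\alpha_{ij}\) column by column and then applies the resulting matrix to \([x]_{\mathbf{B}} = (2,-1)\).

```python
import numpy as np

def Phi(v):
    x, y = v
    return np.array([3*x + y, x - 2*y])

B = np.array([[1.0, 0.0],   # columns are b_1, b_2 (standard basis)
              [0.0, 1.0]])
C = np.array([[2.0, 1.0],   # columns are c_1 = (2,1), c_2 = (1,3)
              [1.0, 3.0]])

# Column j of A_Phi holds the C-coordinates of Phi(b_j),
# obtained by solving C @ alpha = Phi(b_j).
A_Phi = np.column_stack([np.linalg.solve(C, Phi(B[:, j])) for j in range(2)])
print(A_Phi)            # [[ 1.6  1. ]
                        #  [-0.2 -1. ]]   i.e. [[8/5, 1], [-1/5, -1]]

x_B = np.array([2.0, -1.0])   # B-coordinates of x
print(A_Phi @ x_B)            # [2.2 0.6]  i.e. [11/5, 3/5]
```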

2.7.2 Coordinate Systems and Bases

A basis defines a coordinate system for a vector space.

Definition 2.33 Given an ordered basis \(\mathbf{B} = ( \mathbf{b}_1, \dots, \mathbf{b}_n)\), any vector \(\mathbf{x} \in V\) can be uniquely represented as: \[ \mathbf{x} = \alpha_1 \mathbf{b}_1 + \cdots + \alpha_n \mathbf{b}_n. \] The vector \(\boldsymbol{\alpha} = [\alpha_1, \dots, \alpha_n]^T\) is the coordinate vector of \(\mathbf{x}\) with respect to \(\mathbf{B}\).

Example 2.56 Let
\[ \mathbf{B} = (\mathbf{b}_1, \mathbf{b}_2) = \left( \begin{bmatrix}1 \\ 2\end{bmatrix}, \begin{bmatrix}3 \\ 1\end{bmatrix} \right). \]

Let \[ \mathbf{x} = \begin{bmatrix}7 \\ 5\end{bmatrix}. \] We want to express \(\mathbf{x}\) as a unique linear combination of the basis vectors: \[\begin{align*} \mathbf{x} &= \alpha_1 \mathbf{b}_1 + \alpha_2 \mathbf{b}_2\\ \begin{bmatrix}7 \\ 5\end{bmatrix} &= \alpha_1 \begin{bmatrix}1 \\ 2\end{bmatrix} + \alpha_2 \begin{bmatrix}3 \\ 1\end{bmatrix}. \end{align*}\]

This gives the system:

\[ \begin{cases} \alpha_1 + 3\alpha_2 = 7 \\ 2\alpha_1 + \alpha_2 = 5 \end{cases} \]

Solving: \[ \alpha_1 = \frac{8}{5}, \qquad \alpha_2 = \frac{9}{5} \qquad \Longrightarrow \qquad [\mathbf{x}]_{\mathbf{B}} = \begin{bmatrix} 8/5 \\[4pt] 9/5 \end{bmatrix}. \]

So, the coordinate vector of \(\mathbf{x} = \begin{bmatrix}7 \\ 5\end{bmatrix}\) with respect to the basis \(\mathbf{B} = \left(\begin{bmatrix}1 \\ 2\end{bmatrix}, \begin{bmatrix}3 \\ 1\end{bmatrix}\right)\) is \[[\mathbf{x}]_{\mathbf{B}} = \begin{bmatrix} 8/5 \\ 9/5 \end{bmatrix}.\]
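
Numerically, finding \([\mathbf{x}]_{\mathbf{B}}\) amounts to solving a single linear system whose coefficient matrix has the basis vectors as columns. A minimal NumPy sketch (the name `B_mat` is an illustrative choice):

```python
import numpy as np

B_mat = np.array([[1.0, 3.0],   # columns are b_1 = (1,2), b_2 = (3,1)
                  [2.0, 1.0]])
x = np.array([7.0, 5.0])

alpha = np.linalg.solve(B_mat, x)
print(alpha)          # [1.6 1.8]  i.e. [8/5, 9/5]
print(B_mat @ alpha)  # reconstructs [7. 5.]
```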


2.7.3 Basis Change and Equivalence

When the bases of \(V\) and \(W\) are changed, the transformation matrix changes accordingly.

If \(S\) is the change-of-basis matrix for \(V\) (its columns are the new basis vectors of \(V\) written in the old basis) and \(T\) is the analogous matrix for \(W\), then: \[ \tilde{\mathbf{A}}_\Phi = T^{-1} \mathbf{A}_\Phi S \]

Example 2.57 Let \(V=W=\mathbb{R}^2\) and let \(\Phi:V\to W\) be the linear map whose matrix in the standard basis is \[ \mathbf{A} = \begin{bmatrix} 2 & 1\\[4pt] 0 & 3 \end{bmatrix}, \qquad \text{i.e. }\; \Phi(x)=\mathbf{A}x. \]

Now pick new ordered bases for \(V\) and \(W\):

  • New basis of \(V\) (columns of \(S\)): \[ \mathbf{B} = (b_1,b_2),\qquad S = [\,b_1\; b_2\,] = \begin{bmatrix} 1 & 1\\ 1 & -1 \end{bmatrix}. \] (So \(b_1=(1,1)^\top,\; b_2=(1,-1)^\top\).)

  • New basis of \(W\) (columns of \(T\)): \[ \mathbf{C} = (c_1,c_2),\qquad T = [\,c_1\; c_2\,] = \begin{bmatrix} 2 & 0\\ 0 & 3 \end{bmatrix}. \] (So \(c_1=(2,0)^\top,\; c_2=(0,3)^\top\).)

Recall the change-of-basis formula: \[ \widetilde{\mathbf{A}}_\Phi \;=\; T^{-1}\,\mathbf{A}_\Phi\,S, \] where \(\widetilde{\mathbf{A}}_\Phi\) is the matrix of \(\Phi\) in the new bases \(\mathbf{B},\mathbf{C}\).

Compute \(\mathbf{A}S\): \[ \mathbf{A}S = \begin{bmatrix}2&1\\[4pt]0&3\end{bmatrix} \begin{bmatrix}1&1\\[4pt]1&-1\end{bmatrix} = \begin{bmatrix}3 & 1\\[4pt]3 & -3\end{bmatrix}. \]

Compute \(T^{-1}\) and then \(\widetilde{\mathbf{A}}_\Phi\): \[ T^{-1} = \begin{bmatrix}1/2 & 0\\[4pt]0 & 1/3\end{bmatrix},\qquad \widetilde{\mathbf{A}}_\Phi = T^{-1}(\mathbf{A}S) = \begin{bmatrix}1/2 & 0\\[4pt]0 & 1/3\end{bmatrix} \begin{bmatrix}3 & 1\\[4pt]3 & -3\end{bmatrix} = \begin{bmatrix}3/2 & 1/2\\[4pt]1 & -1\end{bmatrix}. \]

So in the new bases \(\mathbf{B},\mathbf{C}\) the transformation matrix is \[ \widetilde{\mathbf{A}}_\Phi = \begin{bmatrix}3/2 & 1/2\\[4pt]1 & -1\end{bmatrix}. \]

We can check the same result by mapping the new domain basis vectors and expressing the results in the new codomain basis:

  • \(\Phi(b_1) = \mathbf{A} b_1 = \mathbf{A}\begin{bmatrix}1\\[2pt]1\end{bmatrix} = \begin{bmatrix}3\\[2pt]3\end{bmatrix}.\)
    Solve \(\begin{bmatrix}3\\3\end{bmatrix} = \alpha_1 c_1 + \alpha_2 c_2 = \alpha_1\begin{bmatrix}2\\0\end{bmatrix}+\alpha_2\begin{bmatrix}0\\3\end{bmatrix}\).
    This gives \(\alpha_1=3/2,\; \alpha_2=1\). So the first column of \(\widetilde{\mathbf{A}}_\Phi\) is \(\begin{bmatrix}3/2\\[2pt]1\end{bmatrix}\).

  • \(\Phi(b_2) = \mathbf{A} b_2 = \mathbf{A}\begin{bmatrix}1\\[2pt]-1\end{bmatrix} = \begin{bmatrix}1\\[2pt]-3\end{bmatrix}.\)
    Solve \(\begin{bmatrix}1\\-3\end{bmatrix} = \beta_1 c_1 + \beta_2 c_2\).
    This gives \(\beta_1=1/2,\; \beta_2=-1\). So the second column is \(\begin{bmatrix}1/2\\[2pt]-1\end{bmatrix}\).

These columns match \(\widetilde{\mathbf{A}}_\Phi\) above, confirming \[ \widetilde{\mathbf{A}}_\Phi = T^{-1}\mathbf{A}S. \]
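
The same computation in NumPy, as a quick sanity check (the matrices follow Example 2.57):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 1.0],        # columns are b_1, b_2
              [1.0, -1.0]])
T = np.array([[2.0, 0.0],        # columns are c_1, c_2
              [0.0, 3.0]])

A_tilde = np.linalg.inv(T) @ A @ S
print(A_tilde)                   # [[ 1.5  0.5]
                                 #  [ 1.  -1. ]]

# Cross-check column by column: C-coordinates of Phi(b_j) = A b_j.
for j in range(2):
    print(np.linalg.solve(T, A @ S[:, j]))   # [1.5 1. ], then [ 0.5 -1. ]
```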

2.7.4 Image and Kernel of a Linear Mapping

The image and kernel are important subspaces associated with a linear mapping.

Definition 2.34 For a linear mapping \(\Phi : V \to W\), the kernel / null space of the mapping is the set of vectors in \(V\) that map to \(\mathbf{0}_W \in W\): \[ \ker(\Phi) := \{ \mathbf{v} \in V : \Phi(\mathbf{v}) = \mathbf{0}_W \}. \] The image / range is the set of vectors in \(W\) that get mapped to:
\[ \mathrm{Im}(\Phi) := \{ \mathbf{w} \in W : \exists \mathbf{v} \in V, \; \Phi(\mathbf{v}) = \mathbf{w} \}. \]

  • \(\mathbf{0}_V \in \ker(\Phi)\) (since \(\Phi(\mathbf{0}_V) = \mathbf{0}_W\)), so the null space is never empty.
  • \(\ker(\Phi) \subseteq V\) and \(\mathrm{Im}(\Phi) \subseteq W\) are subspaces.
  • \(\Phi\) is injective if and only if \(\ker(\Phi) = \{\mathbf{0}\}\).

Definition 2.35 For a matrix \(\mathbf{A} \in \mathbb{R}^{m \times n}\) representing \(\Phi : \mathbb{R}^n \to \mathbb{R}^m\), \(x \mapsto \mathbf{A}x\):

  • Image / Column space \[ \mathrm{Im}(\Phi) = \mathrm{span}\{\mathbf{a}_1, \dots, \mathbf{a}_n\} \subseteq \mathbb{R}^m \] where \(\mathbf{a}_i\) are the columns of \(\mathbf{A}\).

  • Kernel / Null space \[ \ker(\Phi) = \{ \mathbf{x} \in \mathbb{R}^n : \mathbf{A}\mathbf{x} = \mathbf{0} \} \subseteq \mathbb{R}^n \] consists of all coefficient vectors \(\mathbf{x}\) whose linear combination of the columns, \(x_1\mathbf{a}_1 + \cdots + x_n\mathbf{a}_n\), equals \(\mathbf{0}\).

Example 2.58 Consider \(\Phi : \mathbb{R}^4 \to \mathbb{R}^2\) with

\[ \mathbf{A} = \begin{bmatrix} 1 & 2 & -1 & 0 \\ 1 & 0 & 0 & 1 \end{bmatrix}, \quad \Phi(\mathbf{x}) = \mathbf{A}\mathbf{x} \]

  • Image: \[ \mathrm{Im}(\Phi) = \mathrm{span}\left\{ \begin{bmatrix}1\\1\end{bmatrix}, \begin{bmatrix}2\\0\end{bmatrix}, \begin{bmatrix}-1\\0\end{bmatrix}, \begin{bmatrix}0\\1\end{bmatrix} \right\} \]

  • Kernel: \[ \ker(\Phi) = \mathrm{span}\left\{ \begin{bmatrix}0\\1/2\\1\\0\end{bmatrix}, \begin{bmatrix}-1\\1/2\\0\\1\end{bmatrix} \right\} \]
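
These bases can be checked symbolically, for instance with SymPy (assuming it is available): `columnspace()` returns the pivot columns of \(\mathbf{A}\) and `nullspace()` a basis of the kernel.

```python
import sympy as sp

A = sp.Matrix([[1, 2, -1, 0],
               [1, 0,  0, 1]])

print(A.columnspace())  # pivot columns (1,1) and (2,0), which span Im(Phi) = R^2
print(A.nullspace())    # the two spanning vectors (0, 1/2, 1, 0) and (-1, 1/2, 0, 1)
```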

The theorem below is known as the Rank-Nullity Theorem.

Theorem 2.5 For \(\Phi : V \to W\): \[ \dim(\ker(\Phi)) + \dim(\mathrm{Im}(\Phi)) = \dim(V) \]
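
For the matrix of Example 2.58 the theorem is easy to verify directly; a small SymPy sketch:

```python
import sympy as sp

A = sp.Matrix([[1, 2, -1, 0],
               [1, 0,  0, 1]])

rank = A.rank()                  # dim(Im(Phi)) = 2
nullity = len(A.nullspace())     # dim(ker(Phi)) = 2
print(rank + nullity == A.cols)  # True: 2 + 2 = 4 = dim(R^4)
```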

Corollary 2.1 For \(\Phi : V \to W\), the following two facts are true:

  1. If \(\dim(\mathrm{Im}(\Phi)) < \dim(V)\), then \(\ker(\Phi)\) is non-trivial (\(\dim(\ker(\Phi)) \ge 1\)).

  2. If \(\dim(V) = \dim(W)\), then \[ \Phi \text{ injective } \iff \Phi \text{ surjective } \iff \Phi \text{ bijective.} \]

Exercises

Exercise 2.74 Consider \(\Phi : \mathbb{R}^4 \to \mathbb{R}^2\) with

\[ \mathbf{A} = \begin{bmatrix} 1 & 2 & -1 & 0 \\ 1 & 0 & 0 & 1 \end{bmatrix}, \quad \Phi(\mathbf{x}) = \mathbf{A}\mathbf{x} \] Verify that

  • Image: \[ \mathrm{Im}(\Phi) = \mathrm{span}\left\{ \begin{bmatrix}1\\1\end{bmatrix}, \begin{bmatrix}2\\0\end{bmatrix}, \begin{bmatrix}-1\\0\end{bmatrix}, \begin{bmatrix}0\\1\end{bmatrix} \right\} \]

  • Kernel: \[ \ker(\Phi) = \mathrm{span}\left\{ \begin{bmatrix}0\\1/2\\1\\0\end{bmatrix}, \begin{bmatrix}-1\\1/2\\0\\1\end{bmatrix} \right\} \]

Exercise 2.75 Prove that for \(\Phi : V \to W\), the following two facts are true:

  1. If \(\dim(\mathrm{Im}(\Phi)) < \dim(V)\), then \(\ker(\Phi)\) is non-trivial (\(\dim(\ker(\Phi)) \ge 1\)).

  2. If \(\dim(V) = \dim(W)\), then \[ \Phi \text{ injective } \iff \Phi \text{ surjective } \iff \Phi \text{ bijective.} \]

Exercise 2.76 Prove the Rank-Nullity Theorem.

Exercise 2.77

Let \(f: \bbR \rightarrow \bbR\) be defined as \(f(x) = x^3\). Show that \(f\) is bijective.

Exercise 2.78

Let \(f: \bbR \rightarrow \bbR\) be defined as \(f(x) = x^2\). Show that \(f\) is not bijective.

Exercise 2.79

Let \(f: \bbR^+ \rightarrow \bbR^+\) be defined as \(f(x) = \sqrt{x}\). Show that \(f\) is bijective.

Exercise 2.80

Show that there is a bijection between \(S = \set{1,2,3,4,5}\) and \(\bbZ_5 = \set{0,1,2,3,4}\).

Exercise 2.81

Write the vector \(\begin{bmatrix}2\\3\end{bmatrix}\) in terms of the standard basis vectors of \(\bbR^2\). Then write it in terms of the basis \(\bb_1=\begin{bmatrix}1\\-1\end{bmatrix}\) and \(\bb_2=\begin{bmatrix}1\\1\end{bmatrix}\).

Exercise 2.82

Write the vector \(\begin{bmatrix}4\\-1\end{bmatrix}\) in terms of the standard basis vectors of \(\bbR^2\). Then write it in terms of the basis \(\bb_1=\begin{bmatrix}0\\-1\end{bmatrix}\) and \(\bb_2=\begin{bmatrix}-1\\0\end{bmatrix}\).

Exercise 2.83

On graph paper, create a grid with a standard basis. Then, in a different colour, create a grid using the basis \(\bb_1 = \begin{bmatrix}2\\1\end{bmatrix}\) and \(\bb_2 = \begin{bmatrix}-1\\1\end{bmatrix}\). If \(\ba = \begin{bmatrix}1\\3\end{bmatrix}\) is a vector expressed in the second basis, what are its coordinates in the standard basis? Draw the vector on the grid.

Exercise 2.84

Prove that if \(V\) is a vector space with basis \(\{\bb_1, \ldots, \bb_n\}\), then every vector \(\mathbf{v} \in V\) can be written uniquely as a linear combination of \(\bb_1, \ldots , \bb_n\).

Exercise 2.85

Find the change of basis matrix from \(B\) to \(C\) and from \(C\) to \(B\). Show that they are inverses of each other. Here, \(V = \bbR^2\); \(B = \set{\begin{bmatrix}9\\2\end{bmatrix}, \begin{bmatrix}4\\-3\end{bmatrix}}\); \(C = \set{\begin{bmatrix}2\\1\end{bmatrix}, \begin{bmatrix}-3\\1\end{bmatrix}}\).

Exercise 2.86

Find the change of basis matrix from \(B\) to \(C\) and from \(C\) to \(B\). Show that they are inverses of each other. Here, \(V = \bbR^3\); \(B=\set{\begin{bmatrix}2\\-5\\0\end{bmatrix},\begin{bmatrix}3\\0\\5\end{bmatrix}, \begin{bmatrix}8\\-2\\-9\end{bmatrix}}\); \(C=\set{\begin{bmatrix}1\\-1\\1\end{bmatrix}, \begin{bmatrix}2\\0\\1\end{bmatrix},\begin{bmatrix}0\\1\\3\end{bmatrix}}\).

Exercise 2.87

Define \(\Phi(\bA) = \bA + \bA^T\) for \(\bA \in \bbR^{n \times n}\). What is \(\ker(\Phi)\)?

Exercise 2.88

Define \(\Phi(\bA) = \begin{bmatrix}0&1\\0&0 \end{bmatrix}\bA\). What is \(\ker(\Phi)\)?

Exercise 2.89

Define \(\Phi: \bbR^3 \rightarrow \bbR^4\) given by \(\Phi\left(\begin{bmatrix}x\\y\\z\end{bmatrix} \right) = \begin{bmatrix}x\\x\\y\\y\end{bmatrix}\). What is \(\ker(\Phi)\)?

Exercise 2.90

Define \(\Phi: \bbR^{2 \times 2} \rightarrow \bbR^{2 \times 2}\) given by \(\Phi\left(\begin{bmatrix}a & b \\ c & d\end{bmatrix}\right) = \begin{bmatrix}a+b & b+c \\ c+d & d + a \end{bmatrix}\). What is \(\ker(\Phi)\)?

Exercise 2.91

The matrix \(\bM = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\) rotates a vector clockwise 90 degrees. Determine the matrix that rotates a vector 90 degrees clockwise in the basis \(\bB = \set{\begin{bmatrix}1\\3 \end{bmatrix}, \begin{bmatrix}2\\-1 \end{bmatrix}}\).

Exercise 2.92

Suppose Alice has standard basis vectors \(e_1\) and \(e_2\). Let Bob have basis vectors given by \(\bb_1 = \begin{bmatrix}1\\2 \end{bmatrix}\) and \(\bb_2 = \begin{bmatrix} -1\\1 \end{bmatrix}\).

Exercise 2.93

Let \(X\) be a random variable with PDF given by \[f_X(x)= \begin{cases} cx^2 & |x| \leq 1\\ 0 & \text{otherwise} \end{cases}.\]

Exercise 2.94

Let \(X\) be a positive continuous random variable. Prove that \(E[X] = \int_{0}^{\infty} P(X \geq x) dx\).

Exercise 2.95

Show that \(\mathrm{cov}[X,Y] = E[XY] - E[X]E[Y]\).