This page covers section 4.1 (“Elementary Basic Concepts” [of vector spaces and modules]).
Definition: Let $V$ be a non-empty set, let $F$ be a field, and let $+:V\times V\to V$ and $\cdot :F\times V\to V$ be binary operations such that

1. $(V,+)$ is an abelian group,
2. $\alpha\cdot(v+w)=\alpha\cdot v+\alpha\cdot w$ for all $\alpha\in F$ and $v,w\in V$,
3. $(\alpha+\beta)\cdot v=\alpha\cdot v+\beta\cdot v$ for all $\alpha,\beta\in F$ and $v\in V$,
4. $\alpha\cdot(\beta\cdot v)=(\alpha\beta)\cdot v$ for all $\alpha,\beta\in F$ and $v\in V$,
5. $1\cdot v=v$ for all $v\in V$, where $1$ is the unit of $F$.

Then $V$ is said to be a vector space over $F$. The dot for multiplication will generally be omitted in what follows.
Example: If $F\subset K$ are both fields, then $K$ may be viewed as a vector space over $F$.
Example: If $F$ is a field, then ${F}^{n}=\{({\alpha}_{1},\dots ,{\alpha}_{n})\mid {\alpha}_{i}\in F\}$, with the obvious operations, is a vector space over $F$.
Example: If $F$ is a field, then $F[x]$ is a vector space over $F$.
Example: If $F$ is a field, then ${P}_{n}(F)\subset F[x]$ is a vector space over $F$, where ${P}_{n}(F)$ is the set of polynomials over $F$ with degree less than $n$.
Definition: If $V$ is a vector space over $F$ and $W\subset V$ forms a vector space using the same operations of $V$, then $W$ is a subspace of $V$. This is equivalent to the condition that $\alpha w+{\alpha}^{\mathrm{\prime}}{w}^{\mathrm{\prime}}\in W$ for all $w,{w}^{\mathrm{\prime}}\in W$ and $\alpha ,{\alpha}^{\mathrm{\prime}}\in F$.
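The closure criterion lends itself to a mechanical check. A minimal sketch, assuming we work over the finite field $F=\mathbb{F}_2$ so that everything can be enumerated (the field and the example sets are illustrative choices, not from the text):

```python
from itertools import product

P = 2  # illustrative choice: work over GF(2)

def is_subspace(W):
    """Brute-force the criterion: a*w + a2*w2 stays in W for all scalars and all w, w2 in W."""
    for a, a2 in product(range(P), repeat=2):
        for w, w2 in product(W, repeat=2):
            if tuple((a * x + a2 * y) % P for x, y in zip(w, w2)) not in W:
                return False
    return True

V = set(product(range(P), repeat=3))                  # V = F^3
W = {v for v in V if v[2] == 0}                       # last coordinate zero: a subspace
U = {(0, 0, 0)} | {v for v in V if sum(v) % P == 1}   # not closed under addition

print(is_subspace(W), is_subspace(U))
```

The set $U$ fails because, e.g., $(1,0,0)+(0,1,0)=(1,1,0)$ falls outside it.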
Definition: Let $V,W$ be vector spaces over $F$. A homomorphism of vector spaces is a map $\varphi :W\to V$ such that $\varphi (w+{w}^{\prime})=\varphi (w)+\varphi ({w}^{\prime})$ and $\varphi (\alpha w)=\alpha \varphi (w)$ for all $w,{w}^{\prime}\in W$ and $\alpha \in F$.
The set of all homomorphisms from $V$ to $W$ will be denoted $\mathrm{Hom}(V,W)$.
Lemma 4.1.1: Let $V$ be a vector space over $F$. Then, for all $\alpha\in F$ and $v\in V$: $\alpha 0=0$; $0v=0$; $(-\alpha)v=-(\alpha v)$; and, if $v\ne 0$, then $\alpha v=0$ implies $\alpha=0$.
Lemma 4.1.2: Let $V$ be a vector space over $F$ and let $W\subset V$ be a subspace. Then $V\mathrm{/}W=\{v+W\mid v\in V\}$ is a vector space over $F$, called the quotient space of $V$ by $W$.
Theorem 4.1.1: Let $V,W$ be vector spaces and let $\varphi :V\to W$ be a surjective homomorphism with kernel $K$. Then $W\cong V/K$. Conversely, if $V$ is a vector space and $W\subset V$ a subspace, then there exists a surjective homomorphism $\psi :V\to V/W$ whose kernel is exactly $W$.
Definition: Let $V$ be a vector space over $F$ and let ${W}_{1},\dots ,{W}_{n}\subset V$ be subspaces. If any $v\in V$ admits a unique representation $v={w}_{1}+\cdots +{w}_{n}$ with ${w}_{i}\in {W}_{i}$ for each $i$, then $V$ is the internal direct sum of the $\{{W}_{i}\}$.
Definition: Let ${V}_{1},\dots ,{V}_{n}$ be vector spaces over $F$. The external direct sum of the $\{{V}_{i}\}$ is the set $\{({v}_{1},\dots ,{v}_{n})\mid {v}_{i}\in {V}_{i}\}$, with addition and scalar multiplication defined componentwise.
Theorem 4.1.2: The internal and external direct sums of $\{{V}_{1},\dots ,{V}_{n}\}$ are isomorphic. Hence we may speak simply of a direct sum, having both of the above properties.
The problems below are paraphrased from/inspired by those given in Topics in Algebra by Herstein. The solutions are my own unless otherwise noted. I will generally try, in my solutions, to stick to the development in the text. This means that problems will not be solved using ideas and theorems presented further on in the book.
We have $$\alpha (v-w)=\alpha (v+(-w))=\alpha v+\alpha (-w)=\alpha v-\alpha w$$
by Lemma 4.1.1.
The map $\varphi :V\to W$ given by $\varphi (({\alpha}_{0},\dots ,{\alpha}_{n-1}))={\alpha}_{n-1}{x}^{n-1}+\dots +{\alpha}_{0}$ is an isomorphism.
For $\varphi :V\to W$ a homomorphism between vector spaces $V,W$, $\ker\varphi =\{v\in V\mid \varphi (v)=0\}$. Let $v,{v}^{\prime}\in \ker\varphi$ and $\alpha \in F$, with $F$ the base field. We have $\varphi (v+\alpha {v}^{\prime})=\varphi (v)+\alpha \varphi ({v}^{\prime})=0$ because $\varphi$ is a homomorphism, and so $v+\alpha {v}^{\prime}\in \ker\varphi$. Thus $\ker\varphi$ is a subspace of $V$.
(a) Let $f,g\in V$ and $\alpha \in \mathbb{R}$. We have $f+\alpha g\in V$ because sums and scalar products of continuous functions are again continuous. The function $0:x\mapsto 0$ is the additive identity in this vector space. Other details can be taken for granted.
(b) The set of $n$-times differentiable functions is a subset of the continuous functions, so it is only necessary to check that the set is closed under linear combinations. Indeed, sums and scalar multiples of $n$-times differentiable functions are again $n$-times differentiable, so the set in question is a subspace of $V$.
(a) Because $\mathbb{R}$ is closed under addition and multiplication, and operations on $V$ are defined componentwise, all vector space axioms hold for $V$.
(b) If $({a}_{i})$ and $({b}_{i})$ are two elements of $W$ and $\alpha \in \mathbb{R}$, then $({a}_{i}+\alpha {b}_{i})\in W$ because $$\underset{i\to \mathrm{\infty}}{\mathrm{lim}}({a}_{i}+\alpha {b}_{i})=\underset{i\to \mathrm{\infty}}{\mathrm{lim}}{a}_{i}+\alpha \underset{i\to \mathrm{\infty}}{\mathrm{lim}}{b}_{i}=0.$$
(c) Let $({a}_{i}),({b}_{i})\in U$ and let $\alpha \in \mathbb{R}$. We have that $$\sum _{i}({a}_{i}+\alpha {b}_{i}{)}^{2}=\sum _{i}{a}_{i}^{2}+{\alpha}^{2}\sum _{i}{b}_{i}^{2}+2\alpha \sum _{i}{a}_{i}{b}_{i}\mathrm{.}$$
The first two terms are finite by assumption. The third term can be bounded: for real numbers $x,y$, rearrange $(x-y)^2\ge 0$ and $(x+y)^2\ge 0$ to give $$|xy|\le \frac{1}{2}x^2+\frac{1}{2}y^2,$$
so that $$\Big|\sum_i a_i b_i\Big|\le\sum_i |a_i b_i|\le \frac{1}{2}\sum_i a_i^2+\frac{1}{2}\sum_i b_i^2<\infty.$$
Hence $({a}_{i}+\alpha {b}_{i})\in U$ and $U$ is a subspace of $V$.
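The truncated sums below give a quick numerical sanity check of the bound $\sum_i |a_i b_i|\le \frac{1}{2}\sum_i a_i^2+\frac{1}{2}\sum_i b_i^2$; the particular sequences are arbitrary square-summable choices:

```python
# Check the bound on finitely many terms of two square-summable sequences
# (the sequences themselves are illustrative assumptions).
N = 1000
a = [1 / (i + 1) for i in range(N)]               # a_i = 1/(i+1)
b = [(-1) ** i / (i + 1) ** 2 for i in range(N)]  # alternating, rapidly decaying

lhs = sum(abs(x * y) for x, y in zip(a, b))
rhs = 0.5 * sum(x * x for x in a) + 0.5 * sum(y * y for y in b)
print(lhs <= rhs)
```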
To show that $U$ is contained in $W$, we must show that $\sum_{i=1}^{\infty}a_i^2<\infty$ implies $\lim_{i\to\infty}a_i=0$. Define the partial sums $s_n=\sum_{i=1}^{n}a_i^2$; the hypothesis says precisely that $\lim_{n\to\infty}s_n=L$ for some $L<\infty$, and then $\lim_{n\to\infty}s_{n-1}=L$ as well. Therefore, $$0=\lim_{n\to\infty}(s_n-s_{n-1})=\lim_{n\to\infty}a_n^2.$$
Let $\epsilon>0$ be given. Because $a_n^2\to 0$, there exists $N\in\mathbb{N}$ such that $n>N$ implies $a_n^2<\epsilon^2$, and thus $|a_n|<\epsilon$. This proves that $\lim_{n\to\infty}a_n=0$, i.e. that $(a_i)\in W$.
$\mathrm{H}\mathrm{o}\mathrm{m}(U,V)$ is the set of homomorphisms $U\to V$. Given $\varphi ,\psi \in \mathrm{H}\mathrm{o}\mathrm{m}(U,V)$ and $\alpha \in F$, we can define a third homomorphism pointwise, i.e. by $$(\varphi +\alpha \psi )(u)=\varphi (u)+\alpha \psi (u)\mathrm{.}$$
It is straightforward to see that $\varphi +\alpha \psi $ is again a homomorphism. $\mathrm{H}\mathrm{o}\mathrm{m}(U,V)$ is a vector space under this pointwise addition and scalar multiplication.
Let ${e}_{i}^{(n)}\in {F}^{n}$ be the vector with a $1$ in the $i$-th index and zeroes elsewhere. Similarly, let ${e}_{i}^{(m)}\in {F}^{m}$ be the analogous thing. Given $\varphi \in \mathrm{H}\mathrm{o}\mathrm{m}({F}^{n},{F}^{m})$, we have $\varphi ({e}_{i}^{(n)})={\sum}_{j}{\alpha}_{ij}^{(\varphi )}{e}_{j}^{(m)}$, defining a matrix of coefficients ${\alpha}_{ij}^{(\varphi )}\in F$ for each $\varphi $. Now define $f:\mathrm{H}\mathrm{o}\mathrm{m}({F}^{n},{F}^{m})\to {F}^{mn}$ by $$f(\varphi )=({\alpha}_{11}^{(\varphi )},\dots ,{\alpha}_{1m}^{(\varphi )},{\alpha}_{21}^{(\varphi )},\dots ,{\alpha}_{nm}^{(\varphi )})\mathrm{.}$$
That $f$ respects linear combinations is a rote computation. The kernel of $f$ is trivial, and $f$ is surjective because any choice of coefficients $\{\alpha_{ij}\}$ defines a homomorphism; hence $f$ is an isomorphism.
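Concretely, the correspondence stores a homomorphism as its matrix of coefficients and flattens it. A sketch (the specific matrices are random illustrative data, not from the text):

```python
import random

# A homomorphism F^n -> F^m is determined by its coefficient matrix alpha_ij;
# here we store homomorphisms as m x n integer matrices (illustrative data only).
random.seed(0)
n, m = 3, 2
phi = [[random.randint(-5, 5) for _ in range(n)] for _ in range(m)]
psi = [[random.randint(-5, 5) for _ in range(n)] for _ in range(m)]
alpha = 3

def f(mat):
    """Flatten the coefficient matrix into a single vector in F^{mn}."""
    return [x for row in mat for x in row]

# The pointwise linear combination phi + alpha*psi of homomorphisms...
lin_comb = [[phi[i][j] + alpha * psi[i][j] for j in range(n)] for i in range(m)]
# ...is sent by f to the same linear combination of the images:
print(f(lin_comb) == [x + alpha * y for x, y in zip(f(phi), f(psi))])
```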
Define $\varphi :{F}^{n}\to {F}^{m}$ by $({a}_{1},\dots ,{a}_{m},{a}_{m+1},\dots ,{a}_{n})\mapsto ({a}_{1},\dots ,{a}_{m})$. This is a surjective homomorphism. The kernel of $\varphi $ is the set $\{(0,\dots ,0,{a}_{m+1},\dots ,{a}_{n})\in {F}^{n}\}$; a similar projection mapping establishes the isomorphism $\mathrm{ker}\varphi \cong {F}^{n-m}$.
Let $m$ be the index of the first non-zero entry in $v$, and let $\varphi $ be the projection of ${F}^{n}$ onto its $m$-th entry. Then $\varphi \in \mathrm{H}\mathrm{o}\mathrm{m}({F}^{n},F)$ and $\varphi (v)\mathrm{\ne}0$.
This is (a special case of) the result that a vector space is isomorphic to its double-dual.
With the result of exercise 4.1.7, we have $$\mathrm{Hom}(\mathrm{Hom}(F^n,F),F)\cong \mathrm{Hom}(F^n,F)\cong F^n.$$
Given $u,{u}^{\mathrm{\prime}}\in U$, $w,{w}^{\mathrm{\prime}}\in W$ and $\alpha \in F$, we have that $$(u+w)+\alpha ({u}^{\mathrm{\prime}}+{w}^{\mathrm{\prime}})=(u+\alpha {u}^{\mathrm{\prime}})+(w+\alpha {w}^{\mathrm{\prime}})={u}^{\mathrm{\prime}\mathrm{\prime}}+{w}^{\mathrm{\prime}\mathrm{\prime}}\in U+W,$$
where the last step is justified because $U$ and $W$ are each subspaces. Therefore $U+W$ is a subspace of $V$.
Let $U,W$ be subspaces of $V$ over the field $F$. If $v,{v}^{\mathrm{\prime}}\in U\cap W$ and $\alpha \in F$, then $v+\alpha {v}^{\mathrm{\prime}}\in U$ because $U$ is a subspace and $v+\alpha {v}^{\mathrm{\prime}}\in W$ because $W$ is a subspace. Hence $v+\alpha {v}^{\mathrm{\prime}}\in U\cap W$, and $U\cap W$ is a subspace.
This is the second isomorphism theorem.
The elements of $(U+W)\mathrm{/}W$ look like $u+w+W=u+W$ where $u\in U$ and $w\in W$. The elements of $U\mathrm{/}(U\cap W)$ look like $u+U\cap W$ with $u\in U$. In both cases, elements of $U\cap W$ get turned into the zero coset.
Define the map $\varphi :(U+W)\mathrm{/}W\to U\mathrm{/}(U\cap W)$ by $\varphi (u+W)=u+U\cap W$. To see that this is well-defined, consider $u+w,{u}^{\mathrm{\prime}}+{w}^{\mathrm{\prime}}\in U+W$ that belong to the same coset: $u+w+W={u}^{\mathrm{\prime}}+{w}^{\mathrm{\prime}}+W$, so that their difference is $u-{u}^{\mathrm{\prime}}\in W$ which then implies that $u-{u}^{\mathrm{\prime}}\in U\cap W$. We have $\varphi (u+w+W)-\varphi ({u}^{\mathrm{\prime}}+{w}^{\mathrm{\prime}}+W)=(u-{u}^{\mathrm{\prime}})+U\cap W=U\cap W$; thus any representative of a coset in the domain gets mapped to the same coset in the codomain.
$\varphi $ is a homomorphism: for $u,{u}^{\mathrm{\prime}}\in U$, $w,{w}^{\mathrm{\prime}}\in W$, we have $\varphi (u+w+\alpha ({u}^{\mathrm{\prime}}+{w}^{\mathrm{\prime}})+W)=u+\alpha {u}^{\mathrm{\prime}}+U\cap W$ while $\varphi (u+w+W)+\alpha \varphi ({u}^{\mathrm{\prime}}+{w}^{\mathrm{\prime}}+W)=(u+U\cap W)+\alpha ({u}^{\mathrm{\prime}}+U\cap W)=u+\alpha {u}^{\mathrm{\prime}}+U\cap W$.
The kernel of $\varphi$ contains those elements which map to the zero coset of the codomain, i.e. those cosets $u+w+W=u+W$ with $u\in U\cap W$. But if $u\in U\cap W\subset W$, then $u+W=W$, the zero element of $(U+W)/W$. Hence the kernel of $\varphi$ is trivial, so $\varphi$ is injective.
$\varphi $ is surjective because, given $u+U\cap W\in U\mathrm{/}(U\cap W)$, we have $\varphi (u+W)=u+U\cap W$.
Therefore $\varphi :(U+W)\mathrm{/}W\to U\mathrm{/}(U\cap W)$ is an isomorphism.
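A finite sanity check of the isomorphism, counting cosets over $\mathbb{F}_2$ (the ambient space and the subspaces are illustrative choices):

```python
from itertools import product

P = 2  # illustrative choice: everything lives in GF(2)^3

def add(u, v):
    return tuple((x + y) % P for x, y in zip(u, v))

U = {(a, b, 0) for a, b in product(range(P), repeat=2)}   # U = span{e1, e2}
W = {(0, b, c) for b, c in product(range(P), repeat=2)}   # W = span{e2, e3}
U_plus_W = {add(u, w) for u in U for w in W}
U_cap_W = U & W

def cosets(space, sub):
    """The set of cosets v + sub for v ranging over space."""
    return {frozenset(add(v, s) for s in sub) for v in space}

# (U+W)/W and U/(U cap W) have the same number of elements:
print(len(cosets(U_plus_W, W)), len(cosets(U, U_cap_W)))
```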
This is the fourth (“lattice”) isomorphism theorem.
There are a couple of natural-looking ways to map the objects in question (I tried $W\mapsto \varphi (W)$, $W\mapsto U/W$, etc.). However, the first isomorphism theorem (theorem 4.1.1) states that $V\cong U/\ker\varphi$, so the subspaces of $V$ should probably look like $W/\ker\varphi$ where $W$ is a subspace of $U$. Naturally, $W/\ker\varphi$ only makes sense if $W$ contains $\ker\varphi$. Therefore, the map we define is $f:\mathcal{B}\to\mathcal{A}$ given by $f(W)=W/\ker\varphi$, and it makes sense because of the way we have chosen $\mathcal{B}$ (i.e. only considering subspaces that contain $\ker\varphi$).
The map $f$ is injective: let $W,{W}^{\mathrm{\prime}}\in \mathcal{B}$ be mapped the same by $f$, i.e. $W\mathrm{/}\mathrm{ker}\varphi ={W}^{\mathrm{\prime}}\mathrm{/}\mathrm{ker}\varphi $. We would like to show that this implies $W={W}^{\mathrm{\prime}}$. If $w\in W$, then $w+\mathrm{ker}\varphi \in W\mathrm{/}\mathrm{ker}\varphi ={W}^{\mathrm{\prime}}\mathrm{/}\mathrm{ker}\varphi $ so there exists ${w}^{\mathrm{\prime}}\in {W}^{\mathrm{\prime}}$ with $w+\mathrm{ker}\varphi ={w}^{\mathrm{\prime}}+\mathrm{ker}\varphi $. This implies that $w-{w}^{\mathrm{\prime}}\in \mathrm{ker}\varphi \subset {W}^{\mathrm{\prime}}$, so that $$w=(w-{w}^{\mathrm{\prime}})+{w}^{\mathrm{\prime}}\in {W}^{\mathrm{\prime}}\mathrm{.}$$
This proves that $W\subset {W}^{\mathrm{\prime}}$. The argument, made in reverse, gives also that ${W}^{\mathrm{\prime}}\subset W$, so we have proven that $f$ is injective.
$f$ is also surjective: if $X$ is a subspace of $V$, then we can realize it as the image of a subspace of $U$. Consider $Y={\varphi}^{-1}(X)=\{u\in U\mid \varphi (u)\in X\}$. It remains to show that $\ker\varphi\subset Y$, that $Y$ is a subspace of $U$, and that $f(Y)=X$. Because $X$ is a subspace, $0\in X$ and $\varphi(\ker\varphi)=\{0\}\subset X$, so that $\ker\varphi\subset Y$. If $y,{y}^{\prime}\in Y$ and $\alpha$ is a scalar, then $\varphi(y+\alpha y^{\prime})=\varphi(y)+\alpha\varphi(y^{\prime})\in X$, so $Y$ is a subspace of $U$. Finally, $f(Y)=Y/\ker\varphi=\{u+\ker\varphi\mid u\in U,\ \varphi(u)\in X\}$; under the identification $U/\ker\varphi\cong V$, which sends $u+\ker\varphi$ to $\varphi(u)$, this set corresponds to $\{\varphi(u)\mid u\in U,\ \varphi(u)\in X\}$, and the latter is exactly $X$ because $\varphi$ is surjective.
Therefore, $f$ is a bijection between $\mathcal{A}$ and $\mathcal{B}$.
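A toy model over $\mathbb{F}_2$ (chosen purely so that subspaces can be enumerated by brute force) makes the correspondence concrete: take $\varphi:\mathbb{F}_2^3\to\mathbb{F}_2^2$ to be projection onto the first two coordinates, and check that the subspaces of the domain containing $\ker\varphi$ biject onto the subspaces of the image.

```python
from itertools import product, combinations

P = 2  # over GF(2), a subset is a subspace iff it contains 0 and is closed under +

def all_subspaces(n):
    """Enumerate subspaces of GF(2)^n by brute force over all subsets."""
    V = list(product(range(P), repeat=n))
    subs = set()
    for r in range(1, len(V) + 1):
        for cand in combinations(V, r):
            S = set(cand)
            if (0,) * n in S and all(
                tuple((x + y) % P for x, y in zip(u, v)) in S for u in S for v in S
            ):
                subs.add(frozenset(S))
    return subs

phi = lambda v: v[:2]                      # projection F^3 -> F^2
ker = frozenset({(0, 0, 0), (0, 0, 1)})    # its kernel

B = {W for W in all_subspaces(3) if ker <= W}        # subspaces containing ker
images = {frozenset(phi(w) for w in W) for W in B}   # their images in F^2

print(len(B), len(images), images == all_subspaces(2))
```

There are five subspaces on each side, and the image map hits each exactly once.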
To say that $V$ is the internal direct sum of the ${V}_{i}$ is to say that $v\in V$ has exactly one expression $v={v}_{1}+\cdots +{v}_{n}$ with each ${v}_{i}\in {V}_{i}$.
Because $V\subset {\sum}_{i}{V}_{i}$, we have that $v\in V$ has at least one such expression. It remains to show that this expression is unique. Therefore, suppose that $v={v}_{1}+\cdots +{v}_{n}={v}_{1}^{\mathrm{\prime}}+\cdots +{v}_{n}^{\mathrm{\prime}}$ with ${v}_{i},{v}_{i}^{\mathrm{\prime}}\in {V}_{i}$ for each $i$. Then we have $$({v}_{1}-{v}_{1}^{\mathrm{\prime}})+\cdots +({v}_{n}-{v}_{n}^{\mathrm{\prime}})=0,$$
and, rearranging, $${v}_{i}-{v}_{i}^{\mathrm{\prime}}=-\sum _{j\mathrm{\ne}i}({v}_{j}-{v}_{j}^{\mathrm{\prime}})\mathrm{.}$$
The left hand side belongs to ${V}_{i}$ while the right hand side belongs to ${\sum}_{j\mathrm{\ne}i}{V}_{j}$. By assumption, those two spaces intersect trivially, so that ${v}_{i}-{v}_{i}^{\mathrm{\prime}}=0$ for each $i$. Hence the two representations are identical, and we are done.
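The uniqueness of decompositions can be verified directly in a small example over $\mathbb{F}_2$ (an illustrative choice, not from the text):

```python
from itertools import product

P = 2
V = set(product(range(P), repeat=2))   # V = F^2
V1 = {(a, 0) for a in range(P)}        # V1 = span{e1}
V2 = {(0, b) for b in range(P)}        # V2 = span{e2}

def n_decompositions(v):
    """Count the ways to write v = w1 + w2 with w1 in V1, w2 in V2."""
    return sum(
        1 for w1 in V1 for w2 in V2
        if tuple((x + y) % P for x, y in zip(w1, w2)) == v
    )

# Every v in V decomposes in exactly one way, so V is the internal direct sum:
print(all(n_decompositions(v) == 1 for v in V))
```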
$V$ is the external direct sum of the ${V}_{i}$, so it looks like $$V=\{({v}_{1},\dots ,{v}_{n})\mid {v}_{i}\in {V}_{i}\}\mathrm{.}$$
The subspaces $\bar{V}_i$ which allow the $i$-th entry to range over $V_i$, while fixing the non-$i$ entries as zero, are the desired subspaces of $V$ isomorphic to $V_i$. The conditions of exercise 4.1.15 are easily satisfied, so that $V$ is the internal direct sum of the $\bar{V}_i$.
(a) That $T$ is a homomorphism is a straightforward exercise.
(b) In the language of matrices, this is the familiar question of when a matrix is invertible; the answer is “when the determinant is non-zero”. How does that come about from direct computation?
Let ${y}_{1},{y}_{2}\in F$ and consider the simultaneous equations $$\alpha {x}_{1}+\beta {x}_{2}={y}_{1},$$
$$\gamma {x}_{1}+\delta {x}_{2}={y}_{2}\mathrm{.}$$
Multiplying the first equation by $\delta $ and the second by $\beta $, and then subtracting the second from the first, we find $$(\alpha \delta -\beta \gamma ){x}_{1}=\delta {y}_{1}-\beta {y}_{2}\mathrm{.}$$
Performing a similar computation, we also find $$(\alpha \delta -\beta \gamma ){x}_{2}=\alpha {y}_{2}-\gamma {y}_{1}\mathrm{.}$$
In order for $T$ to be injective, it must have a trivial kernel. If $({x}_{1},{x}_{2})\in \mathrm{ker}T$, then $$(\alpha \delta -\beta \gamma ){x}_{1}=(\alpha \delta -\beta \gamma ){x}_{2}=0.$$
If $\alpha\delta-\beta\gamma\ne 0$, these equations force $x_1=x_2=0$, so the kernel is trivial. Conversely, if $\alpha\delta-\beta\gamma=0$, then $T$ has a non-trivial kernel: $(\delta,-\gamma)$ and $(\beta,-\alpha)$ both lie in $\ker T$, and at least one of them is non-zero unless all four coefficients vanish (in which case everything is in the kernel). Thus a necessary and sufficient condition for $T$ to be injective is that $\alpha\delta-\beta\gamma\ne 0$. As a result, this is also a necessary condition for $T$ to be an isomorphism.
The same condition is also sufficient for $T$ to be surjective, because the equations $(\alpha \delta -\beta \gamma ){x}_{1}=\delta {y}_{1}-\beta {y}_{2}$ and $(\alpha \delta -\beta \gamma ){x}_{2}=\alpha {y}_{2}-\gamma {y}_{1}$ are solvable for any ${y}_{1},{y}_{2}$ by dividing by $\alpha \delta -\beta \gamma $.
Therefore the necessary and sufficient condition for $T$ to be an isomorphism is that $\alpha \delta -\beta \gamma $ be non-zero.
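The criterion can also be verified exhaustively over a finite field; a sketch assuming $F=\mathbb{F}_5$ (finiteness is my choice, purely for testability):

```python
from itertools import product

P = 5  # work over GF(5) so T can be checked on every point

def is_bijective(a, b, c, d):
    """T(x1, x2) = (a x1 + b x2, c x1 + d x2) on GF(5)^2;
    on a finite set, T is bijective iff its image has full size."""
    image = {((a * x1 + b * x2) % P, (c * x1 + d * x2) % P)
             for x1, x2 in product(range(P), repeat=2)}
    return len(image) == P * P

# T is an isomorphism exactly when the determinant a*d - b*c is non-zero:
ok = all(is_bijective(a, b, c, d) == ((a * d - b * c) % P != 0)
         for a, b, c, d in product(range(P), repeat=4))
print(ok)
```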
I haven’t done this exercise, but I would be surprised if it is different from 4.1.17 in a meaningful way.
Put another way, the exercise is to show that a homomorphism between vector spaces $V$ and $W$ induces a natural homomorphism between their dual spaces ${V}^{\ast}=\mathrm{H}\mathrm{o}\mathrm{m}(V,F)$ and ${W}^{\ast}=\mathrm{H}\mathrm{o}\mathrm{m}(W,F)$.
A diagram helps:
$\begin{array}{ccc} V & \rightarrow & F \\ & \searrow & \uparrow \\ & & W \end{array}$
Here, the map $V\to W$ is provided by $T$, the map $W\to F$ is some representative ${w}^{\ast}\in \mathrm{H}\mathrm{o}\mathrm{m}(W,F)$, and the desired map $V\to F$ can be made in a natural way by composition. That is, we define a map ${T}^{\ast}:\mathrm{H}\mathrm{o}\mathrm{m}(W,F)\to \mathrm{H}\mathrm{o}\mathrm{m}(V,F)$ by $${T}^{\ast}({w}^{\ast})={w}^{\ast}\circ T\mathrm{.}$$
It is easy to check that (1) the resulting ${v}^{\ast}={w}^{\ast}\circ T$ is indeed an element of $\mathrm{H}\mathrm{o}\mathrm{m}(V,F)$, and (2) that ${T}^{\ast}$ is a homomorphism.
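As a concrete sketch, represent $T:F^3\to F^2$ by a matrix and functionals by row vectors; $T^{\ast}(w^{\ast})=w^{\ast}\circ T$ then acts by precomposition, and one can check that it respects linear combinations (the matrix and vectors below are illustrative data):

```python
# T: F^3 -> F^2 is given by a 2x3 matrix A; a functional on F^2 is a row vector r
# acting by the dot product r . v.  All concrete numbers are illustrative.
A = [[1, 2, 0],
     [0, 1, 3]]

def T(v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(2)]

def T_star(w_star):
    """The induced map on duals: (T* w*)(v) = w*(T v)."""
    return lambda v: sum(w_star[i] * T(v)[i] for i in range(2))

w_star, u_star, alpha = [4, -1], [2, 5], 3
v = [1, -2, 7]

# T* respects linear combinations of functionals:
lhs = T_star([w + alpha * u for w, u in zip(w_star, u_star)])(v)
rhs = T_star(w_star)(v) + alpha * T_star(u_star)(v)
print(lhs == rhs)
```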
(a) Looking slightly ahead, the intuition here is that the image of $F$ under a homomorphism will be too low-dimensional. Therefore, consider a supposed isomorphism $\varphi :F\to {F}^{n}$. Because it is surjective, there are ${f}_{1},{f}_{2}\in F$ with $\varphi ({f}_{1})=(1,0,\dots ,0)$ and $\varphi ({f}_{2})=(0,1,0,\dots ,0)$. Now, $f_1\ne 0$ (otherwise $\varphi(f_1)$ would be zero), so there exists $\alpha\in F$ with $f_2=\alpha f_1$, namely $\alpha=f_2 f_1^{-1}$. But then we must have $$(0,1,0,\dots ,0)=\varphi ({f}_{2})=\varphi (\alpha {f}_{1})=\alpha \varphi ({f}_{1})=(\alpha ,0,\dots ,0).$$
This is a contradiction, so we conclude that no such $\varphi $ exists.
(b) Suppose $\varphi :{F}^{2}\to {F}^{3}$ is an isomorphism, $\varphi ((1,0))={v}_{1}\in {F}^{3}$ and $\varphi ((0,1))={v}_{2}\in {F}^{3}$. Then we have $\varphi ((\alpha ,\beta ))=\alpha {v}_{1}+\beta {v}_{2}$ for any $\alpha ,\beta \in F$. Because $\varphi $ is surjective, there must exist ${\alpha}_{i},{\beta}_{i}$ such that $${\alpha}_{1}{v}_{1}+{\beta}_{1}{v}_{2}=(1,0,0),$$
$${\alpha}_{2}{v}_{1}+{\beta}_{2}{v}_{2}=(0,1,0),$$
$${\alpha}_{3}{v}_{1}+{\beta}_{3}{v}_{2}=(0,0,1)\mathrm{.}$$
Taking the first and second equations, and eliminating the ${v}_{2}$ terms, we find that $$({\alpha}_{1}{\beta}_{2}-{\beta}_{1}{\alpha}_{2}){v}_{1}=({\beta}_{2},-{\beta}_{1},0)\mathrm{.}$$
However, taking the first and third equations, and eliminating the ${v}_{2}$ terms, we also find that $$({\alpha}_{1}{\beta}_{3}-{\beta}_{1}{\alpha}_{3}){v}_{1}=({\beta}_{3},0,-{\beta}_{1})\mathrm{.}$$
These two results are inconsistent unless ${\beta}_{1}=0$. In that case, we can explicitly solve for ${v}_{1}=({\alpha}_{1}^{-1},0,0)$ and derive a contradiction: ${\beta}_{2}{v}_{2}=(-\frac{{\alpha}_{2}}{{\alpha}_{1}},1,0)$ forces the third entry of $v_2$ to be zero, while ${\beta}_{3}{v}_{2}=(-\frac{{\alpha}_{3}}{{\alpha}_{1}},0,1)$ forces it to be non-zero.
Thus we have shown that the map $\varphi$ cannot be surjective, and therefore is not an isomorphism.
The laborious arguments above make one appreciate (1) the elegance of doing linear algebra without explicit coordinates/choice of basis, and (2) the simplicity and utility of the concepts of linear independence, basis and dimension, which we eschew here because they are not introduced until the next section of the book.
Let ${V}_{1},\dots ,{V}_{n}$ be proper subspaces of $V$ such that ${\bigcup}_{i}{V}_{i}=V$. We can assume that each ${V}_{i}$ brings something of value to this union, i.e. that $$V_i\not\subset\bigcup_{j\ne i}V_j.$$
In other words, for each $i$, there exists some ${v}_{i}$ which only belongs to ${V}_{i}$ and none of the other subspaces. If this is not the case, then we can omit this ${V}_{i}$: all of its elements are included elsewhere. In this sense, we can assume our set to have minimal size.
Because the subspaces are proper, we know that $n\ge 2$. Consider elements ${v}_{1}\in {V}_{1}$ with $v_1\notin\bigcup_{i\ne 1}V_i$ and ${v}_{2}\in {V}_{2}$ with $v_2\notin\bigcup_{i\ne 2}V_i$. Let $\alpha ,\beta \in F$ be distinct. The elements $x={v}_{1}+\alpha {v}_{2}$ and $y={v}_{1}+\beta {v}_{2}$ belong to $V={\bigcup}_{i}{V}_{i}$, so each belongs to some ${V}_{i}$. Suppose $x,y$ both belong to the same ${V}_{i}$; then so must their difference: $x-y=(\alpha -\beta ){v}_{2}\in {V}_{i}$. By assumption, ${v}_{2}$ only belongs to ${V}_{2}$, so that ${V}_{i}={V}_{2}$. Taking a step back, we see that this would force ${v}_{1}=x-\alpha {v}_{2}$ to also live in ${V}_{2}$, a contradiction. Thus $x$ and $y$ are forced to belong to different subspaces.
Now, we enumerate some infinite subset $\{{\alpha}_{1},{\alpha}_{2},\dots \}$ of $F$ and construct the elements ${x}_{i}={v}_{1}+{\alpha}_{i}{v}_{2}\in V$. Considering the $\{{x}_{i}\}$ pairwise, we see that every one must live in a different subspace from every other one: no finite number of subspaces will suffice. We conclude that no vector space over an infinite field can be realized as the union of finitely many of its proper subspaces.
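The hypothesis that $F$ is infinite is essential: over a finite field the conclusion fails. For instance, $\mathbb{F}_2^2$ is the union of its three proper (one-dimensional) subspaces, as a quick check confirms:

```python
from itertools import product

P = 2
V = set(product(range(P), repeat=2))                            # GF(2)^2
lines = [{(0, 0), (1, 0)}, {(0, 0), (0, 1)}, {(0, 0), (1, 1)}]  # its 1-dim subspaces

# The whole space is covered by finitely many proper subspaces:
print(set().union(*lines) == V)
```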