### Topics in Algebra, Chapter 3.4

###### 2012-11-21

There are no exercises from 3.3, so this section covers both 3.3 (“Homomorphisms”) and 3.4 (“Ideals and Quotient Rings”).

Throughout, $R$ is a ring and $p$ is a prime integer.

### Topics covered: 3.3

• Definition: Let $R,{R}^{\mathrm{\prime }}$ be rings. A ring homomorphism is a mapping $\varphi :R\to {R}^{\mathrm{\prime }}$ such that $\varphi \left(a+b\right)=\varphi \left(a\right)+\varphi \left(b\right)$ and $\varphi \left(ab\right)=\varphi \left(a\right)\varphi \left(b\right)$ for all $a,b\in R$.

• Lemma 3.3.1: If $\varphi :R\to {R}^{\mathrm{\prime }}$ is a ring homomorphism, then $\varphi \left(0\right)=0$ and $\varphi \left(-a\right)=-\varphi \left(a\right)$.

• Definition: With $\varphi :R\to {R}^{\mathrm{\prime }}$ a ring homomorphism, the kernel of $\varphi$ is the set $\left\{a\in R\mid \varphi \left(a\right)=0\right\}$.

• Lemma 3.3.2: With $\varphi :R\to {R}^{\mathrm{\prime }}$ a ring homomorphism, the kernel of $\varphi$ is a subgroup of $R$ under addition and, if $a\in \mathrm{ker}\varphi$ and $r\in R$, then both $ar$ and $ra$ are also in the kernel.

• The zero map is a ring homomorphism whose kernel is the entire domain.

• The identity map is a ring homomorphism with a trivial kernel.

• The set $\left\{m+n\sqrt{2}\mid m,n\in \mathbb{Z}\right\}$ is a ring under standard operations, and conjugation, $\varphi :m+n\sqrt{2}↦m-n\sqrt{2}$, is a ring homomorphism with trivial kernel.

• The natural map $\varphi :\mathbb{Z}\to \mathbb{Z}\mathrm{/}n\mathbb{Z}$ is a ring homomorphism whose kernel consists of the multiples of $n$.

• Definition: A ring isomorphism is an injective ring homomorphism. (Herstein does not require surjectivity here; a bijective homomorphism is then an isomorphism onto.)

• Lemma 3.3.3: A ring homomorphism is an isomorphism if and only if its kernel is trivial.

### Topics covered: 3.4

• Ideals are motivated in analogy to normal subgroups.

• Definition: An ideal of $R$ is a subset $I$ of $R$ such that $I$ is a subgroup of $R$ under addition and, for any $u\in I$ and $r\in R$, both $ur$ and $ru$ are in $I$.

• The kernel of a ring homomorphism is an ideal. In fact, the definition of an ideal is modeled around kernels of homomorphisms.

• Given an ideal $I\subset R$, we consider an equivalence relation $a\sim b$ if $a-b\in I$. The cosets are of the form $r+I=\left\{r+i\mid i\in I\right\}$ for elements $r\in R$. The set of cosets is denoted $R\mathrm{/}I$, again in complete analogy to normal subgroups. In fact, $R\mathrm{/}I$ is a ring, and it is called a quotient ring.

• If $R$ is unital, then $R\mathrm{/}I$ is unital.

• Lemma 3.4.1: If $I$ is an ideal of $R$, then $\varphi :R\to R\mathrm{/}I$ given by $\varphi \left(r\right)=r+I$ is a ring homomorphism.

• Theorem 3.4.1: Let $\varphi :R\to {R}^{\mathrm{\prime }}$ be a surjective ring homomorphism with kernel $I=\mathrm{ker}\varphi$. Then ${R}^{\mathrm{\prime }}\cong R\mathrm{/}I$. There is also a bijection between the set of ideals of ${R}^{\mathrm{\prime }}$ and the set of ideals of $R$ containing $I$: if ${J}^{\mathrm{\prime }}$ is an ideal of ${R}^{\mathrm{\prime }}$ the mapping is the preimage ${J}^{\mathrm{\prime }}↦{\varphi }^{-1}\left({J}^{\mathrm{\prime }}\right)$, and $R\mathrm{/}{\varphi }^{-1}\left({J}^{\mathrm{\prime }}\right)\cong {R}^{\mathrm{\prime }}\mathrm{/}{J}^{\mathrm{\prime }}$.
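As a concrete sanity check of the ideal correspondence (my own illustration, not from the text), consider the natural map $\mathbb{Z}\to \mathbb{Z}\mathrm{/}12\mathbb{Z}$: the ideals of $\mathbb{Z}\mathrm{/}12\mathbb{Z}$ should correspond exactly to the ideals $d\mathbb{Z}$ of $\mathbb{Z}$ containing $12\mathbb{Z}$, i.e. to the divisors $d$ of $12$.

```python
# Sanity check of Theorem 3.4.1 for the natural map Z -> Z/12Z (my own
# illustration). Ideals of Z/12Z correspond to ideals dZ of Z with d | 12,
# i.e. to the ideals of Z containing 12Z.

n = 12

def ideal_generated_by(d, n):
    """The ideal of Z/nZ generated by d: all multiples of d mod n."""
    return frozenset(d * k % n for k in range(n))

# Every additive subgroup of Z/nZ is an ideal (Z/nZ is commutative and
# cyclic), and each one is generated by a divisor of n.
divisors = [d for d in range(1, n + 1) if n % d == 0]
ideals = {d: ideal_generated_by(d, n) for d in divisors}

# The preimage in Z of the ideal generated by d mod n is dZ, which
# contains nZ precisely because d divides n.
for d, I in ideals.items():
    assert I == frozenset(x for x in range(n) if x % d == 0)

print(len(ideals), "ideals of Z/12Z, one per divisor of 12")
```
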

The problems below are paraphrased from/inspired by those given in Topics in Algebra by Herstein. The solutions are my own unless otherwise noted. I will generally try, in my solutions, to stick to the development in the text. This means that problems will not be solved using ideas and theorems presented further on in the book.

### Herstein 3.4.1: Let $I\subset R$ be an ideal such that $1\in I$. Prove that $I=R$.

For any $r\in R$, we have that $r1=r\in I$ so that $R\subset I$ and hence $I=R$.

### Herstein 3.4.2: Prove that the only ideals of a field $F$ are the trivial ones, $\left\{0\right\}$ and $F$ itself.

Let $I\subset F$ be an ideal. If there exists a non-zero element $r\in I$, then ${r}^{-1}r=1\in I$ so that $I=F$ by problem 3.4.1. Otherwise, there is only the zero element, so $I=\left\{0\right\}$ is the zero ideal.

### Herstein 3.4.3: Prove that any ring homomorphism of a field is either the zero map or an isomorphism.

The kernel of such a homomorphism is an ideal of the field, and therefore it can only be trivial or the entire field, by problem 3.4.2. If the kernel is the entire field, then the homomorphism is the zero map. If the kernel is trivial, then the map is an isomorphism by Lemma 3.3.3.

### Herstein 3.4.4: Let $R$ be a commutative ring and let $a\in R$.

### (a) Show that $aR=\left\{ar\mid r\in R\right\}$ is an ideal of $R$.

### (b) Show that $aR$ is not necessarily an ideal if $R$ is not commutative.

(a) For $aR$ to be an ideal, it must be an additive subgroup of the ring and also closed under multiplication on the left and right by arbitrary ring elements. We have $ar+as=a\left(r+s\right)\in aR$ for any $r,s\in R$, so $aR$ forms a subgroup under addition by Lemma 2.4.2. Furthermore, $\left(ar\right)s=a\left(rs\right)\in aR$ and $s\left(ar\right)=a\left(sr\right)\in aR$, the latter because $R$ is commutative. Thus, $aR$ is an ideal.

(b) When $R$ is not commutative, the above reasoning will fail where we relied on commutativity, i.e. when we want $s\left(ar\right)\in aR$ for arbitrary $s$. We will look for an example in one of the simplest non-commutative rings, the $2×2$ matrices over $\mathbb{R}$. Put $a=\left(\begin{array}{cc}1& 0\\ 0& 0\end{array}\right)$

from which we easily see that the generic element in $aR$ is $\left(\begin{array}{cc}1& 0\\ 0& 0\end{array}\right)\left(\begin{array}{cc}x& y\\ z& w\end{array}\right)=\left(\begin{array}{cc}x& y\\ 0& 0\end{array}\right),$

a matrix whose bottom row is zero. However, consider left multiplication by $s=\left(\begin{array}{cc}1& 1\\ 1& 1\end{array}\right)\mathrm{.}$

We find, for instance, $sa=\left(\begin{array}{cc}1& 1\\ 1& 1\end{array}\right)\left(\begin{array}{cc}1& 0\\ 0& 0\end{array}\right)=\left(\begin{array}{cc}1& 0\\ 1& 0\end{array}\right)\notin aR\mathrm{.}$

Hence $aR$ is not a two-sided ideal. It is, of course, still a right ideal.
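The matrix counterexample is easy to verify numerically; a minimal sketch (the helpers `mat_mul` and `in_aR` are mine):

```python
# Quick check (my own verification) of the counterexample: with
# a = [[1,0],[0,0]], the set aR consists of the 2x2 matrices with zero
# bottom row, but left-multiplying by s = [[1,1],[1,1]] escapes aR.

def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

a = ((1, 0), (0, 0))
s = ((1, 1), (1, 1))

def in_aR(M):
    """Members of aR are exactly the matrices with zero bottom row."""
    return M[1] == (0, 0)

ar = mat_mul(a, ((5, 7), (11, 13)))   # an arbitrary right multiple of a
assert in_aR(ar)                       # right multiples of a stay in aR

escaped = mat_mul(s, a)                # s * a has bottom row (1, 0)
print(escaped, in_aR(escaped))
```
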

### Herstein 3.4.5: Let $I$ and $J$ be ideals of $R$. Define $I+J=\left\{i+j\mid i\in I,j\in J\right\}$. Prove that $I+J$ is an ideal of $R$.

Let $i,{i}^{\mathrm{\prime }}\in I$ and $j,{j}^{\mathrm{\prime }}\in J$ and consider $\left(i+j\right)+\left({i}^{\mathrm{\prime }}+{j}^{\mathrm{\prime }}\right)=\left(i+{i}^{\mathrm{\prime }}\right)+\left(j+{j}^{\mathrm{\prime }}\right)$. Because both $I$ and $J$ are closed under addition, this is again an element of $I+J$. Also, for any $r\in R$, we have $r\left(i+j\right)=\left(ri\right)+\left(rj\right)\in I+J$ and $\left(i+j\right)r=\left(ir\right)+\left(jr\right)\in I+J$. Therefore, $I+J$ is an ideal of $R$.

### Herstein 3.4.6: Let $I$ and $J$ be ideals of $R$. Define $IJ$ to be the set of all ring elements formed as finite sums of the form $ij$ with $i\in I$ and $j\in J$. Prove that $IJ$ is an ideal of $R$.

That $IJ$ is an additive subgroup is clear. Let $r\in R$ and consider $r\left({i}_{1}{j}_{1}+\dots +{i}_{n}{j}_{n}\right)=\left(r{i}_{1}\right){j}_{1}+\dots +\left(r{i}_{n}\right){j}_{n}\mathrm{.}$

Because $I$ is closed under such multiplications, this is again an element of $IJ$. Multiplication from the right also remains within $IJ$ because $J$ is closed under such multiplications. Therefore, $IJ$ is an ideal of $R$.

### Herstein 3.4.7: Let $I$ and $J$ be ideals of $R$. Show that $IJ\subset I\cap J$.

Consider ${i}_{1}{j}_{1}+\dots +{i}_{n}{j}_{n}\in IJ$. Because $I$ is an ideal, ${i}_{k}{j}_{k}={i}_{k}^{\mathrm{\prime }}\in I$ for each $k\in \left\{1,\dots ,n\right\}$ and therefore this element belongs to $I$. In precisely the same way, ${i}_{k}{j}_{k}={j}_{k}^{\mathrm{\prime }}\in J$ by virtue of $J$ being an ideal. Then, again, the element belongs to $J$, and so $IJ\subset I\cap J$.

### Herstein 3.4.8: Let $R=\mathbb{Z}$ and let $I$ be the ideal of $R$ made up of all multiples of $17$. Prove that any ideal $J$ of $R$, with $I\subset J\subset R$, must be either $J=I$ or $J=R$. How does this generalize?

The set $I$ is an ideal because $17m+17n=17\left(m+n\right)\in I$ and $\left(17m\right)n=17\left(mn\right)\in I$ for all $m,n\in \mathbb{Z}$. Suppose that the ideal $J$ contains some integer $a$ which is not a multiple of $17$. Then $a$ is coprime to $17$, so there exist $\alpha ,\beta \in \mathbb{Z}$ such that $a\alpha +17\beta =1$ and, by the closure properties of ideals, $J$ must contain it. Therefore, if $I$ is a proper subset of $J$ then $1\in J$ and $J=R$. Otherwise, $J=I$.

Of course, this argument holds with any prime integer in place of $17$. In the integers, any ideal generated by a prime is maximal.
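The Bézout coefficients invoked above can be produced explicitly with the extended Euclidean algorithm; a short sketch (my own illustration of the closure argument):

```python
# Extended Euclidean algorithm, used here to exhibit the Bezout
# coefficients alpha, beta with a*alpha + 17*beta = 1 whenever a is not
# a multiple of 17 (my own illustration, not part of the text).

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

p = 17
a = 25                      # any integer not divisible by 17
g, alpha, beta = extended_gcd(a, p)
assert g == 1               # a is coprime to the prime 17
assert a * alpha + p * beta == 1
print(alpha, beta)          # witnesses that 1 lies in the ideal J
```
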

### Herstein 3.4.9: Let $I$ be an ideal of $R$ and let $r\left(I\right)=\left\{x\in R\mid xi=0$ for each $i\in I\right\}$. Prove that $r\left(I\right)$ is an ideal of $R$.

Suppose $x,y\in r\left(I\right)$ and let $i\in I$ be arbitrary. Then $\left(x+y\right)i=xi+yi=0$ so that $x+y\in r\left(I\right)$. Also, for $r\in R$, $\left(rx\right)i=r\left(xi\right)=0$ and $\left(xr\right)i=x\left(ri\right)=0$ because $ri\in I$ so $x$ annihilates it just as well. Therefore $r\left(I\right)$ is an ideal of $R$.

### Herstein 3.4.10: Let $I$ be an ideal of $R$ and let $\left[R:I\right]=\left\{x\in R\mid rx\in I$ for each $r\in R\right\}$. Prove that $\left[R:I\right]$ is an ideal of $R$ and that $I\subset \left[R:I\right]$.

The second statement follows directly from the fact that $I$ is an ideal, so, for any $i\in I$ and $r\in R$, $ri\in I$ which means that $i\in \left[R:I\right]$ and hence $I\subset \left[R:I\right]$.

Let $x,y\in \left[R:I\right]$ and $r\in R$. Then $r\left(x+y\right)=rx+ry\in I$ since each of $rx$ and $ry$ are elements of $I$, which is an ideal. Therefore $\left[R:I\right]$ is a subgroup of $R$ under addition by Lemma 2.4.2. In addition, with $s\in R$, $r\left(sx\right)=\left(rs\right)x\in I$ and $r\left(xs\right)=is\in I$ where $i=rx\in I$ by assumption. Then $\left[R:I\right]$ is an ideal of $R$.

### Herstein 3.4.11: Let $R$ be a unital ring. Define the operations $a\oplus b=a+b+1$ and $a\cdot b=ab+a+b$ on the set $S$ whose elements are those of $R$.

### (a) Show that $S$ is a ring under $\oplus$ and $\cdot$.

### (b) What is the zero element of $S$?

### (c) What is the unit element of $S$?

### (d) Show that $R\cong S$.

(a) That $S$ is closed under $\oplus$ and $\cdot$ follows from the fact that $R$ is a ring. Let $a,b,c\in S$. The new addition is commutative: $a\oplus b=a+b+1=b+a+1=b\oplus a$. It is associative: $\left(a\oplus b\right)\oplus c=\left(a+b+1\right)\oplus c=a+b+c+2=a\oplus \left(b+c+1\right)=a\oplus \left(b\oplus c\right)\mathrm{.}$

The multiplication is associative: \begin{aligned} (a\cdot b)\cdot c&=(ab+a+b)\cdot c=(ab+a+b)c+ab+a+b+c\\ &=a(bc+b+c)+a+(bc+b+c)=a\cdot(b\cdot c). \end{aligned}

The distributive law also holds: \begin{aligned} a\cdot(b\oplus c)&=a\cdot(b+c+1)=a(b+c+1)+a+b+c+1\\ &=(ab+a+b)+(ac+a+c)+1=(a\cdot b)\oplus(a\cdot c). \end{aligned}

The existence of a zero element and of additive inverses is demonstrated in part (b) below.

(b) The zero element of $S$ is ${0}_{S}$ such that $a\oplus {0}_{S}=a$ for all $a\in S$. Then we must take ${0}_{S}=-1$ (i.e. the additive inverse in $R$ of the unit element of $R$). The additive inverse $\alpha$ of $a\in S$ is such that $a\oplus \alpha ={0}_{S}=-1$. Then $a+\alpha +1=-1$ so that $\alpha =-a-2$. This $\alpha$ is, of course, in $S$, so we have shown that $S$ is a ring with these new operations.

(c) The multiplicative identity element ${1}_{S}$ is such that $a\cdot {1}_{S}=a$ for all $a\in S$. This is satisfied by ${1}_{S}=0$, the zero element of $R$.

(d) We can explicitly construct a homomorphism $\varphi :R\to S$. We must have that $\varphi \left(a\right)=\varphi \left(a+0\right)=\varphi \left(a\right)\oplus \varphi \left(0\right)=\varphi \left(a\right)+\varphi \left(0\right)+1$

for all $a\in R$. This forces $\varphi \left(0\right)=-1$ and we may as well try the map $\varphi \left(a\right)=a-1$. In fact, with $a,b\in R$, $\varphi \left(a+b\right)=a+b-1=\left(a-1\right)+\left(b-1\right)+1=\varphi \left(a\right)\oplus \varphi \left(b\right)\mathrm{.}$

Also, $\varphi \left(ab\right)=ab-1=\left(a-1\right)\left(b-1\right)+\left(a-1\right)+\left(b-1\right)=\varphi \left(a\right)\cdot \varphi \left(b\right)\mathrm{.}$

Therefore this map is truly a homomorphism. Its kernel is trivial, containing just the zero element ${0}_{R}$ (the unique element mapped to ${0}_{S}=-1$), so it is also an isomorphism.
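All four parts are easy to machine-check on a small unital ring; a sketch taking $R=\mathbb{Z}\mathrm{/}8\mathbb{Z}$ (my own choice of test ring, not from the text):

```python
# Machine check (mine) of parts (a)-(d) on the finite example R = Z/8Z.
# S has the same elements as R, with the new operations.

n = 8
R = range(n)

def oplus(a, b):
    return (a + b + 1) % n          # the new addition on S

def odot(a, b):
    return (a * b + a + b) % n      # the new multiplication on S

# (b) the zero of S is -1 and (c) the unit of S is 0:
zero_S, one_S = (-1) % n, 0
assert all(oplus(a, zero_S) == a for a in R)
assert all(odot(a, one_S) == a for a in R)

# the additive inverse of a in S is -a - 2:
assert all(oplus(a, (-a - 2) % n) == zero_S for a in R)

# (d) phi(a) = a - 1 is a bijective homomorphism from R onto S:
phi = lambda a: (a - 1) % n
assert all(phi((a + b) % n) == oplus(phi(a), phi(b)) for a in R for b in R)
assert all(phi((a * b) % n) == odot(phi(a), phi(b)) for a in R for b in R)
assert len({phi(a) for a in R}) == n
print("all checks passed")
```
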

### Herstein 3.4.12*: Let $R={\mathrm{M}\mathrm{a}\mathrm{t}}_{2×2}\left(\mathbb{Q}\right)$, the ring of $2×2$ rational matrices. Prove that $R$ has no ideals besides the zero ideal and $R$ itself.

Let $I$ be an ideal of $R$. If there exists an invertible matrix $a\in I$, then ${a}^{-1}a\in I$, so that $I=R$. Therefore consider an ideal which consists of only non-invertible matrices. Suppose there is a non-zero matrix $a\in I$. The gist of the argument from here is that we can take $a$ and exhibit an invertible matrix in $I$ using the ideal’s closure under addition and external multiplication.

First note that we can make the following sorts of manipulations: $\left(\begin{array}{cc}\alpha & \beta \\ \gamma & \delta \end{array}\right)\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right)=\left(\begin{array}{cc}\alpha & -\beta \\ \gamma & -\delta \end{array}\right),$

so (by taking the sum of the original matrix and the one we’ve produced) we can always isolate a column of a matrix and find it to still be in $I$. In completely analogous ways, we will always be able to isolate an element of a matrix.

By assumption, we have a matrix with at least one non-zero entry. Suppose we follow the above prescription to produce a matrix in $I$ with just one non-zero entry in isolation. For the sake of demonstration, assume that this entry is in the top-left corner (nearly identical methods work in the other three cases). Then we have $\left(\begin{array}{cc}\alpha & 0\\ 0& 0\end{array}\right)+\left(\begin{array}{cc}0& 0\\ 1& 0\end{array}\right)\left(\begin{array}{cc}\alpha & 0\\ 0& 0\end{array}\right)\left(\begin{array}{cc}0& 1\\ 0& 0\end{array}\right)=\alpha \left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)\in I\mathrm{.}$

This is the desired invertible matrix which is in $I$. Therefore if $I$ is to not be the entire ring, then it can contain nothing but the zero matrix.
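The pivot manipulation above can be checked with exact rational arithmetic; a sketch using `fractions.Fraction` (the helper names are mine):

```python
# Verification (mine) of the key identity in the 2x2 rational matrix
# argument: from a matrix with a single non-zero top-left entry alpha,
# the ideal operations produce alpha times the identity matrix.

from fractions import Fraction

def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def mat_add(A, B):
    return tuple(tuple(A[i][j] + B[i][j] for j in range(2)) for i in range(2))

alpha = Fraction(3, 7)                  # any non-zero rational entry
M = ((alpha, 0), (0, 0))                # the isolated top-left entry
E21 = ((0, 0), (1, 0))                  # the matrix units used in the text
E12 = ((0, 1), (0, 0))

result = mat_add(M, mat_mul(mat_mul(E21, M), E12))
print(result)   # alpha times the 2x2 identity matrix
```
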

### Herstein 3.4.13*: Consider the quaternions over $\mathbb{Z}\mathrm{/}p\mathbb{Z}$, with $p$ an odd prime, namely

$R=\left\{{\alpha }_{0}+{\alpha }_{1}i+{\alpha }_{2}j+{\alpha }_{3}k\mid {\alpha }_{0},{\alpha }_{1},{\alpha }_{2},{\alpha }_{3}\in \mathbb{Z}\mathrm{/}p\mathbb{Z}\right\},$

where ${i}^{2}={j}^{2}={k}^{2}=ijk=-1$.

### (a) Prove that $R$ is a ring with ${p}^{4}$ elements whose only ideals are $\left\{0\right\}$ and $R$ itself.

### (b)** Prove that $R$ is not a division ring.

(a) Compute a generic product of elements in $R$: \begin{aligned} (\alpha_0+\alpha_1 i+\alpha_2 j+\alpha_3 k)&(\beta_0+\beta_1 i+\beta_2 j+\beta_3 k)\\ =&(\alpha_0\beta_0-\alpha_1\beta_1-\alpha_2\beta_2-\alpha_3\beta_3)+(\alpha_0\beta_1+\alpha_1\beta_0+\alpha_2\beta_3-\alpha_3\beta_2)i\\ &+(\alpha_0\beta_2+\alpha_2\beta_0+\alpha_3\beta_1-\alpha_1\beta_3)j+(\alpha_0\beta_3+\alpha_3\beta_0+\alpha_1\beta_2-\alpha_2\beta_1)k. \end{aligned}

This is again an element of $R$, as is a generic sum. Thus $R$ forms a ring, and we can easily exhibit ${p}^{4}$ elements, with $p$ choices for each of the four coefficients. There’s no way to break out of the ring using multiplication or addition, so there are at most ${p}^{4}$ elements. Conceivably there could be some elements that get duplicated, but we see that if ${\alpha }_{0}+{\alpha }_{1}i+{\alpha }_{2}j+{\alpha }_{3}k={\beta }_{0}+{\beta }_{1}i+{\beta }_{2}j+{\beta }_{3}k,$

then $\left({\alpha }_{0}-{\beta }_{0}\right)+\left({\alpha }_{1}-{\beta }_{1}\right)i+\left({\alpha }_{2}-{\beta }_{2}\right)j+\left({\alpha }_{3}-{\beta }_{3}\right)k=0+0i+0j+0k$

which forces ${\alpha }_{0}={\beta }_{0}$, etc., so there is no duplication and there are truly ${p}^{4}$ elements.

Again we would like to consider an ideal $I$ of $R$ and show that the existence of any non-zero element in the ideal necessarily leads to the conclusion that $I=R$. Let $a={\alpha }_{0}+{\alpha }_{1}i+{\alpha }_{2}j+{\alpha }_{3}k$ be a non-zero element, presumably with no inverse. We can play around with $a$, computing things such as $ia$, $aj$, and so forth. Eventually, we hit upon $iai=-{\alpha }_{0}-{\alpha }_{1}i+{\alpha }_{2}j+{\alpha }_{3}k$

which almost reminds us of complex conjugation. Of course, $jaj$ and $kak$ look similar. In fact, we prod a little more and find that the combination $a-iai-jaj-kak=4{\alpha }_{0}\in I\mathrm{.}$

Now, supposing ${\alpha }_{0}\mathrm{\ne }0$, this is the desired invertible element belonging to $I$ (thankfully, we have assumed that our prime $p$ is not $2$). On the other hand, if ${\alpha }_{0}=0$, we can rotate a non-zero coefficient into its place by multiplying appropriately by $i$, $j$ or $k$, and then performing the same trick. To sum up, we have shown a way to take any non-zero element of the ideal $I$ and manipulate it into an invertible element which also belongs to $I$, thus proving that a non-zero ideal is actually the entire ring.
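The identity $a-iai-jaj-kak=4{\alpha }_{0}$ is easy to verify mechanically with quaternion multiplication mod $p$; a sketch (the helpers are my own):

```python
# Check (mine) of the identity a - i*a*i - j*a*j - k*a*k = 4*alpha_0 in
# the quaternions over Z/pZ. A quaternion is a 4-tuple (a0, a1, a2, a3).

p = 7  # any odd prime works

def qmul(a, b):
    """Hamilton product of quaternions with coefficients mod p."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (
        (a0 * b0 - a1 * b1 - a2 * b2 - a3 * b3) % p,
        (a0 * b1 + a1 * b0 + a2 * b3 - a3 * b2) % p,
        (a0 * b2 + a2 * b0 + a3 * b1 - a1 * b3) % p,
        (a0 * b3 + a3 * b0 + a1 * b2 - a2 * b1) % p,
    )

def qsub(a, b):
    return tuple((x - y) % p for x, y in zip(a, b))

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

a = (2, 3, 5, 6)                       # an arbitrary quaternion mod 7
lhs = a
for u in (i, j, k):
    lhs = qsub(lhs, qmul(qmul(u, a), u))
print(lhs)                             # (4*a0 mod p, 0, 0, 0)
```
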

(b) To show that $R$ is not a division ring, we must prove that there exists a non-zero element with no multiplicative inverse. Consider conjugation, $a\stackrel{ˉ}{a}=\left({\alpha }_{0}+{\alpha }_{1}i+{\alpha }_{2}j+{\alpha }_{3}k\right)\left({\alpha }_{0}-{\alpha }_{1}i-{\alpha }_{2}j-{\alpha }_{3}k\right)={\alpha }_{0}^{2}+{\alpha }_{1}^{2}+{\alpha }_{2}^{2}+{\alpha }_{3}^{2}\mathrm{.}$

This conjugation evidently offers a path to take a general quaternion and map it to an element of the base field. In particular, if we could find a non-zero element $a$ such that $a\stackrel{ˉ}{a}$ were zero modulo $p$, then we could show that $a$ is not invertible in the ring. Suppose to the contrary that there were an element ${a}^{-1}$ in the ring such that ${a}^{-1}a=1$. Then we would have $\stackrel{ˉ}{a}=\left({a}^{-1}a\right)\stackrel{ˉ}{a}={a}^{-1}\left(a\stackrel{ˉ}{a}\right)=0$, which forces every coefficient of $a$ to vanish, i.e. $a=0$, a contradiction.

Therefore we seek an element $a\in R$ with $a\stackrel{ˉ}{a}=0$ (in other words, $a$ such that the sum of the squares of its coefficients is zero modulo $p$), and if we can produce it then we have shown that $R$ is not a division ring. As an example, with $p=3$, such a non-invertible element is $a=1+i+j$. Can we prove the existence in general? There is a powerful theorem due to Lagrange which says that any positive integer is expressible as the sum of four integer squares. While that surely solves this problem, our statement is considerably weaker, so we search for a more direct way of proving it.

As suggested in this math.stackexchange.com post, we narrow our attention to solutions with one coefficient fixed as $0$ and one fixed as $1$. By the lemma immediately below, we have the existence of ${\alpha }_{0}$ and ${\alpha }_{1}$ such that ${\alpha }_{0}^{2}+{\alpha }_{1}^{2}+1\equiv 0\pmod{p}$. Hence, for any odd prime $p$, there will always exist a non-invertible element of the form ${\alpha }_{0}+{\alpha }_{1}i+j$, proving that $R$ is not a division ring.

#### Lemma: For prime $p$, there exist integers $x,y\in \left\{0,\dots ,p-1\right\}$ such that

${x}^{2}+{y}^{2}+1\equiv 0\pmod{p}\mathrm{.}$

The case $p=2$ is immediately satisfied by $x=1$, $y=0$. Therefore, let $p$ be an odd prime.

First consider the case of two distinct integers $m,n\in \left\{0,\dots ,p-1\right\}$ whose squares are to be congruent modulo $p$. We have ${m}^{2}\equiv {n}^{2}\pmod{p}$ or $\left(m+n\right)\left(m-n\right)\equiv 0\pmod{p}$. As $p$ is a prime, it must divide one of the factors. This isn’t possible for $m-n$ because of the restricted domain for $m$ and $n$. Hence, for the two squares to be congruent, we must have $m+n=p$. The set $\left\{0,\dots ,p-1\right\}$ therefore pairs off into equivalence classes $\left\{1,p-1\right\}$, $\left\{2,p-2\right\}$, etc., with zero the odd man out, and there are exactly $1+\left(p-1\right)\mathrm{/}2=\left(p+1\right)\mathrm{/}2$ distinct values for ${m}^{2}$ modulo $p$.

Now we would like to show the existence of $x,y\in \left\{0,\dots ,p-1\right\}$ such that ${x}^{2}\equiv -1-{y}^{2}\pmod{p}$. There are evidently $\left(p+1\right)\mathrm{/}2$ possible values for the left hand side, and $\left(p+1\right)\mathrm{/}2$ possible values for the right hand side, and both quantities reside in the set $\left\{0,\dots ,p-1\right\}$ which accommodates a total of $p$ numbers. Thus there is no way to fit the values of ${x}^{2}$ modulo $p$ alongside the values of $-1-{y}^{2}$ modulo $p$, a total of $p+1$ numbers, without some overlap (this is the pigeonhole principle). This overlap is the desired solution to the congruence ${x}^{2}+{y}^{2}+1\equiv 0\pmod{p}$, so we are done.
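The pigeonhole search in the lemma can be carried out directly; a short sketch (mine) that finds the promised solution for each tested prime:

```python
# Brute-force confirmation (mine) of the lemma: for each prime p there
# exist x, y in {0, ..., p-1} with x^2 + y^2 + 1 = 0 (mod p).

def solve_congruence(p):
    """Find (x, y) with x*x + y*y + 1 divisible by p, per the lemma."""
    squares = {x * x % p: x for x in range(p)}      # (p+1)//2 values for odd p
    for y in range(p):
        target = (-1 - y * y) % p
        if target in squares:                       # pigeonhole guarantees a hit
            return squares[target], y
    raise AssertionError("lemma violated")          # never reached

for p in (2, 3, 5, 7, 11, 13, 17, 19, 23):
    x, y = solve_congruence(p)
    assert (x * x + y * y + 1) % p == 0
print("solutions found for all tested primes")
```

For $p=3$ this recovers a solution equivalent to the element $1+i+j$ exhibited above.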

### Herstein 3.4.14: With $a\in R$, let $Ra=\left\{xa\mid x\in R\right\}$. Show that $Ra$ is a left ideal of $R$.

Let $x,y\in R$. Then $xa+ya=\left(x+y\right)a\in Ra$ so that $Ra$ is an additive subgroup of $R$ by Lemma 2.4.2. Also, if we multiply on the left by arbitrary $r\in R$, we find $r\left(xa\right)=\left(rx\right)a\in Ra$ because $rx\in R$. Therefore $Ra$ is a left ideal. See also H 3.4.4.

### Herstein 3.4.15: Let $L,M\subset R$ be left ideals. Show that $L\cap M$ is a left ideal of $R$.

$L$ and $M$ are each additive subgroups of $R$ so their intersection is again an additive subgroup of $R$. If $x\in L\cap M$ and $r\in R$, then $rx\in L$ and $rx\in M$, hence $rx\in L\cap M$. Therefore, $L\cap M$ is a left ideal of $R$.

### Herstein 3.4.16: Let $L$ be a left ideal of $R$ and let $M$ be a right ideal of $R$. What can be said of $L\cap M$?

If $x,y\in L\cap M$, then $x+y\in L$ and $x+y\in M$, so $x+y\in L\cap M$. Thus $L\cap M$ is an additive subgroup of $R$. Letting $r\in R$, we have that $rx\in L$ and $xr\in M$, but there is no guarantee that either of those will be shared between the ideals, so we cannot say more in general.

To illustrate this, recall the example presented in problem 3.4.4b, with $R$ the set of $2×2$ real matrices, $a=\left(\begin{array}{cc}1& 0\\ 0& 0\end{array}\right),$

$L=Ra$ and $M=aR$. We can easily see that $L$ consists of the matrices whose second column is zero, and $M$ consists of the matrices whose second row is zero. Their intersection $L\cap M$ is the set of matrices with the top-left corner free to vary but all other entries fixed as zero. This is not a left ideal: $\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right)\left(\begin{array}{cc}x& 0\\ 0& 0\end{array}\right)=\left(\begin{array}{cc}0& 0\\ x& 0\end{array}\right)\notin L\cap M\mathrm{.}$

We can see it’s also not a right ideal by taking the transpose of the above equation.

### Herstein 3.4.17: With $a\in R$, let $r\left(a\right)=\left\{x\in R\mid ax=0\right\}$. Show that $r\left(a\right)$ is a right ideal of $R$.

$r\left(a\right)$ is the set of elements which $a$ annihilates from the left. If $x,y\in r\left(a\right)$, then $a\left(x+y\right)=ax+ay=0$ so that $x+y\in r\left(a\right)$ and $r\left(a\right)$ is an additive subgroup of $R$. With $r\in R$, we also have $a\left(xr\right)=\left(ax\right)r=0$ so that $xr\in r\left(a\right)$. Thus $r\left(a\right)$ is a right ideal of $R$.

### Herstein 3.4.18: Let $L$ be a left ideal of $R$ and let $\lambda \left(L\right)=\left\{x\in R\mid xa=0$ for each $a\in L\right\}$. Show that $\lambda \left(L\right)$ is an ideal (two-sided) of $R$.

$\lambda \left(L\right)$ is the set of elements which annihilate the entire left ideal $L$. If $x,y\in \lambda \left(L\right)$ and $a\in L$, then $\left(x+y\right)a=xa+ya=0$ so that $\lambda \left(L\right)$ is an additive subgroup of $R$. Let $r\in R$ and consider $\left(rx\right)a=r\left(xa\right)=0$, which shows $\lambda \left(L\right)$ to be a left ideal, and $\left(xr\right)a=x\left(ra\right)=x{a}^{\mathrm{\prime }}=0$, which shows $\lambda \left(L\right)$ to be a right ideal, with ${a}^{\mathrm{\prime }}=ra\in L$ because $L$ is a left ideal. Therefore $\lambda \left(L\right)$ is a two-sided ideal.

### Herstein 3.4.19*: Suppose ${x}^{3}=x$ for every $x\in R$. Prove that $R$ is commutative.

This problem is pretty tricky, and I must give credit to this page for filling in some gaps in my argument that I couldn’t fill myself even after considerable effort.

First observe that $2x=\left(2x{\right)}^{3}=8{x}^{3}=8x$ so that $6x=0$ for every $x\in R$. This is not a statement about characteristic, because we do not assume $R$ to be an integral domain. We do not even assume that $R$ is unital, so recall that $6x$ is simply shorthand for $x+x+x+x+x+x$.

Consider $x+{x}^{2}=\left(x+{x}^{2}{\right)}^{3}={x}^{3}+3{x}^{4}+3{x}^{5}+{x}^{6}=4x+4{x}^{2},$

so that we have $3\left(x+{x}^{2}\right)=0$ for any $x\in R$. In particular, plugging in $x+y$ gives \begin{aligned} 0=3(x+y+(x+y)^2)&=3(x+y+x^2+xy+yx+y^2)\\ &=3(x+x^2)+3(y+y^2)+3(xy+yx)=3(xy+yx). \end{aligned}

By the above comment, we can subtract $6yx=0$ from the equation for free, and this leaves us with $3\left(xy-yx\right)=0.$

The simplest example of a ring with ${x}^{3}=x$ for all $x\in R$ is $R=\mathbb{Z}\mathrm{/}3\mathbb{Z}$ which has characteristic $3$, and we immediately see that this result is worthless in that case. Clearly we need an additional result to prove that $R$ is commutative even when we can’t “cancel” the three.

A natural thing to compute is $x±y=\left(x±y{\right)}^{3}=x±y±{x}^{2}y±y{x}^{2}+x{y}^{2}+{y}^{2}x±xyx+yxy\mathrm{.}$

Subtracting the second ($-$) equation from the first ($+$) gives $2{x}^{2}y+2y{x}^{2}+2xyx=0$. If we multiply that equation on the left by $x$ we find $2xy+2xy{x}^{2}+2{x}^{2}yx=0,$

whereas if we multiply by $x$ on the right instead, $2{x}^{2}yx+2yx+2xy{x}^{2}=0.$

Subtracting one equation from the other again, we see that $2\left(xy-yx\right)=0.$

Finally, subtracting that from the previous result, we arrive at the desired statement that $xy=yx$ for all $x,y\in R$, i.e. that $R$ is commutative.
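The sign bookkeeping in the $(x\pm y)^{3}$ expansions is error-prone by hand; the subtraction step can be checked with formal noncommutative words (a throwaway expander of my own):

```python
# Formal check (mine) of the step (x+y)^3 - (x-y)^3: expand both cubes
# as noncommutative words in {x, y}, reduce xxx -> x and yyy -> y, and
# read off the surviving terms 2x^2y + 2xyx + 2yx^2 + 2y.

from collections import Counter
from itertools import product

def cube(sign_y):
    """Expand (x + sign_y*y)^3 into a Counter of 3-letter words."""
    terms = Counter()
    for word in product("xy", repeat=3):
        terms["".join(word)] += sign_y ** word.count("y")
    return terms

diff = cube(+1)
diff.subtract(cube(-1))                # (x+y)^3 minus (x-y)^3

# Apply x^3 = x and y^3 = y before reading off the identity.
reduced = Counter()
for word, c in diff.items():
    reduced[{"xxx": "x", "yyy": "y"}.get(word, word)] += c

# Since (x+y) - (x-y) = 2y, the surviving words must satisfy
# 2*xxy + 2*xyx + 2*yxx = 0, i.e. 2x^2y + 2xyx + 2yx^2 = 0.
print({w: c for w, c in reduced.items() if c != 0})
```
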

With a solution in hand, one still has to wonder why this problem is in section 3.4, ideals and quotient rings. It is worth noting that $\varphi :R\to R$ given by $\varphi \left(x\right)=3x$ is a ring homomorphism, in a not-entirely-trivial way. This may lead somewhere useful.

### Herstein 3.4.20: Let $R$ and $S$ be rings, with $R$ unital, and let $\varphi :R\to S$ be a surjective ring homomorphism. Prove that $\varphi \left(1\right)$ is the unit element of $S$.

For any $s\in S$, there exists $r\in R$ such that $s=\varphi \left(r\right)$, because $\varphi$ is surjective. Then we also must have that $s=\varphi \left(1r\right)=\varphi \left(1\right)\varphi \left(r\right)=\varphi \left(1\right)s$

and that $s=\varphi \left(r1\right)=\varphi \left(r\right)\varphi \left(1\right)=s\varphi \left(1\right)$

so that $\varphi \left(1\right)$ is the unit element of $S$.

Note that $\varphi \left(1\right)$ will be the unit element of the sub-ring $\varphi \left(R\right)\subset S$ regardless, but if $\varphi$ is not surjective then there may be elements of $S$ for which $\varphi \left(1\right)$ does not act as a unit.

### Herstein 3.4.21: Let $R$ and $S$ be rings with $R$ unital and $S$ an integral domain. Let $\varphi :R\to S$ be a homomorphism such that $\mathrm{ker}\varphi \mathrm{\ne }R$. Prove that $\varphi \left(1\right)$ is the unit element of $S$.

Clearly there is a problem if $1\in \mathrm{ker}\varphi$, so first let $r\in R$ be outside the kernel of $\varphi$ and consider $0\mathrm{\ne }\varphi \left(r\right)=\varphi \left(1r\right)=\varphi \left(1\right)\varphi \left(r\right)\mathrm{.}$

This shows that $1\in \mathrm{ker}\varphi$. Now we also see from the above computation that $\varphi \left(1\right)$ acts as a unit element for $\varphi \left(R\right)$ (which is commutative by the definition of integral domain), and the question is whether it also acts as a unit element for the other elements of $S$.

Again letting $r\in \mathrm{ker}\varphi$ and letting $s\in S$ be arbitrary, we have that $\varphi \left(r\right)s=\varphi \left(r\right)\varphi \left(1\right)s$ so that $\varphi \left(r\right)\left(s-\varphi \left(1\right)s\right)=0.$

By assumption, $\varphi \left(r\right)\mathrm{\ne }0$, so, because $S$ is an integral domain, we conclude that $s=\varphi \left(1\right)s$ for all $s\in S$.