Topics in Algebra, Chapter 3 Supplementary Problems, Part 3
This page covers the supplementary problems at the end of chapter 3. This page is split into parts so as to not get excessively long. **See also**: [part 1](/journal/topics-in-algebra-chapter-3-supplementary-a) (3.1 through 3.9), [part 2](/journal/topics-in-algebra-chapter-3-supplementary-b) (3.10 through 3.18) and [part 4](/journal/topics-in-algebra-chapter-3-supplementary-d) (3.25 through 3.28).

The problems below are paraphrased from/inspired by those given in Topics in Algebra by Herstein. The solutions are my own unless otherwise noted. I will generally try, in my solutions, to stick to the development in the text. This means that problems will not be solved using ideas and theorems presented further on in the book.

### Herstein 3.19: Prove that a non-zero ideal in the Gaussian integers $\mathbb{Z}[i]$ must contain a positive integer.

Let $I\subset\mathbb{Z}[i]$ be a non-zero ideal and let $z=a+ib\in I$ with $a,b\in\mathbb{Z}$ not both zero. Then $$z\bar{z}=(a+ib)(a-ib)=a^2+b^2\in I$$ because $\bar{z}\in\mathbb{Z}[i]$ and $I$ is closed under external multiplication. Because at least one of $a$ or $b$ is non-zero, $a^2+b^2$ is a positive integer.

### Herstein 3.20: Let $R$ be a ring such that $x^4=x$ for every $x\in R$. Prove that $R$ is commutative.

Like exercise 3.4.19 (where $x^3=x$ for all $x$ implies commutativity), this problem is hard. We follow a great proof posted online by Steve D.

First observe that $-x=(-x)^4=x^4=x$, so that $2x=0$ for any $x\in R$. Next, we consider the magic combination $x^2+x$, which was also of interest in 3.4.19. We can try to take the fourth power, but it gives no information. However, because $2x^3=0$, we have that $$(x^2+x)^2=x^4+2x^3+x^2=x^4+x^2=x^2+x.$$

Now we make some subtle, and seemingly unrelated, statements.

**(1)** If $x,y\in R$ have $xy=0$, then $yx=0$. Using the special property of $R$, we have $yx=(yx)^4=y(xy)^3x=0$.

**(2)** If $x\in R$ has $x^2=x$, then $x$ commutes with every element of $R$: let $y\in R$ and consider $$0=xy-x^2y=x(y-xy)=(y-xy)x,$$ so that $yx=xyx$.
In the final step, we used property (1). Now do this again: $$0=yx-yx^2=(y-yx)x=x(y-yx),$$ which gives $xy=xyx$. Combining the two, we see that if $x^2=x$ then $xy=xyx=yx$ for any $y\in R$.

We are not done, because not every $x\in R$ satisfies $x^2=x$. Let $r,s\in R$ and expand the equality $$r\left((r+s)^2+(r+s)\right)=\left((r+s)^2+(r+s)\right)r,$$ which holds by (2) because $t=(r+s)^2+(r+s)$ satisfies $t^2=t$. Canceling the identical terms, we are left with $$(r^2+r)s+rs^2=s(r^2+r)+s^2r,$$ but we know that $r^2+r$ commutes with $s$ (again by (2), since $(r^2+r)^2=r^2+r$), so we have $rs^2=s^2r$ for arbitrary $r,s\in R$. Finally, we can make the statement that $$rs=(r+r^2)s-r^2s=s(r+r^2)-sr^2=sr$$ for any $r,s\in R$, so that $R$ is commutative.

### Herstein 3.21: Let $R,R'$ be rings and let $\phi:R\to R'$ be such that (1) $\phi(x+y)=\phi(x)+\phi(y)$ for all $x,y\in R$, and (2) for each pair $x,y\in R$, $\phi(xy)=\phi(x)\phi(y)$ or $\phi(xy)=\phi(y)\phi(x)$. Prove that one of these two options must hold uniformly over the entire ring.

We follow Herstein's hint, which is to fix $a\in R$ and to consider the sets $$W_a=\{x\in R\mid\phi(ax)=\phi(a)\phi(x)\}$$ and $$V_a=\{x\in R\mid\phi(ax)=\phi(x)\phi(a)\}.$$ That is, $W_a$ consists of those $x$ that fall into the first category and $V_a$ of those $x$ that fall into the second. We must have $W_a\cup V_a=R$ and, of course, $a$ belongs to both so that neither is empty. We seek to prove that one or both of the sets is equal to $R$.

Suppose there exists $b\in W_a$ with $b\not\in V_a$, so that $\phi(ab)=\phi(a)\phi(b)$ while $\phi(ab)\ne\phi(b)\phi(a)$. If $c\in R$ is arbitrary, consider $$\phi(a(b+c))=\phi(ab)+\phi(ac)=\phi(a)\phi(b)+\phi(ac).$$ We can evaluate this quantity in another way: either $$\phi(a(b+c))=\phi(a)\phi(b+c)\qquad{\rm or}\qquad\phi(a(b+c))=\phi(b+c)\phi(a).$$ If the first holds, we would have $$\phi(a)\phi(b)+\phi(ac)=\phi(a)\phi(b)+\phi(a)\phi(c)$$ so that $c\in W_a$ as desired.
The second case leads to $$\phi(ac)=\phi(c)\phi(a)+\phi(b)\phi(a)-\phi(a)\phi(b)\ne\phi(c)\phi(a),$$ where we use the fact that $\phi(b)\phi(a)-\phi(a)\phi(b)\ne 0$. Because $\phi(ac)$ must equal one of $\phi(a)\phi(c)$ or $\phi(c)\phi(a)$, we again conclude that $\phi(ac)=\phi(a)\phi(c)$. Therefore if there exists $b\in W_a$ with $b\not\in V_a$, then $W_a=R$. If the circumstance is reversed, so there exists $b\in V_a$ with $b\not\in W_a$, the same argument gives that $V_a=R$. The only other possibility is that $W_a\subset V_a$ or vice versa; say $W_a\subset V_a$, then we know that $R=W_a\cup V_a=V_a$. Therefore, for any fixed $a\in R$, one of $W_a$ or $V_a$ is the entire ring.

We must extend this to a global statement about $R$ itself. Following a hint I found in an online post, we consider the two sets $$A=\{a\in R\mid W_a=R\}\qquad{\rm and}\qquad B=\{a\in R\mid V_a=R\}.$$ As we know, $A\cup B=R$. It is also easy to see that each of $A$ and $B$ is closed under addition: e.g. $a,a'\in A$ implies $$\phi((a+a')x)=\phi(ax)+\phi(a'x)=(\phi(a)+\phi(a'))\phi(x)=\phi(a+a')\phi(x)$$ so that $a+a'\in A$. Each is also closed under negation, because, e.g., $\phi((-a)x)=-\phi(ax)=-\phi(a)\phi(x)=\phi(-a)\phi(x)$.

Suppose that $A\ne R$ and $B\ne R$. Then there exists $a\in A$ with $a\not\in B$, and there exists $b\in B$ with $b\not\in A$. To which set does $a+b$ belong? If $a+b\in A$, then $b=(a+b)-a\in A$ is a contradiction. If $a+b\in B$, then $a=(a+b)-b\in B$ is a contradiction. Therefore we must conclude that one (or both) of $A$ or $B$ is the entire ring, which is the desired result.

**Note**: This very verbose "elementary" argument can be greatly simplified by the result mentioned in that hint. Specifically, if $G$ is a group and $G_1,G_2\le G$ satisfy $G_1\cup G_2=G$, then $G_1=G$ or $G_2=G$. The proof is essentially what was done in the preceding paragraph.
Namely, this argument by contradiction: if $G_1\ne G$ and $G_2\ne G$ but $G_1\cup G_2=G$, then there exists $g_1\in G_1$ with $g_1\not\in G_2$ and there exists $g_2\in G_2$ with $g_2\not\in G_1$. Now if $g_1 g_2\in G_1$, then $g_2=g_1^{-1}(g_1 g_2)\in G_1$ is a contradiction. On the other hand, if $g_1 g_2\in G_2$, then $g_1=(g_1 g_2)g_2^{-1}\in G_2$ is also a contradiction. Hence one or both subgroups is the entire group $G$.

In the context of this exercise, we can use this result twice. First, $W_a$ and $V_a$ are additive subgroups of $R$ whose union is $R$, so one or the other must be the entire ring. Then $A$ and $B$ are also additive subgroups of $R$ whose union is $R$, so one of them must be all of $R$.

### Herstein 3.22: Let $R$ be a unital ring with $(ab)^2=a^2b^2$ for all $a,b\in R$. Show that $R$ is commutative.

This problem is a standard follow-your-nose element manipulation exercise. In light of the subsequent exercises, it's clearly going to be important that $1\in R$, so we start by considering things like $a(1+b)$. Let $a,b\in R$ be arbitrary. We have $$[a(1+b)]^2=(a+ab)^2=a^2+a^2b+aba+(ab)^2$$ but also, using the problem stipulation, $$[a(1+b)]^2=a^2(1+b)^2=a^2+2a^2b+a^2b^2.$$ This simplifies to give $a^2b=aba$. Similarly, consideration of $[(1+a)b]^2$ gives $ab^2=bab$. Finally, if we expand $$[(1+a)(1+b)]^2=(1+a)^2(1+b)^2$$ and cancel the obvious terms, we are left with $$ba+bab+aba=ab+a^2b+ab^2.$$ Using the previous two results, this simplifies to $ab=ba$. Therefore $R$ is commutative.

### Herstein 3.23: Find a non-commutative ring $R$ in which $(ab)^2=a^2b^2$ for all $a,b\in R$.

By exercise 3.22, the ring cannot be unital. The condition can be rewritten as $$a(ba-ab)b=0,$$ so we want most or all of the elements of $R$ to be zero divisors (in some cases $ba-ab$ may be zero, so we can't make a general statement). My go-to examples of non-commutative rings are the quaternions and rings of matrices.
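The rewriting of the condition relies on the identity $(ab)^2-a^2b^2=a(ba-ab)b$, which holds in any ring, so it can be sanity-checked numerically. A quick sketch in plain Python, using random $3\times 3$ integer matrices (the helper names `matmul` and `matsub` are mine, not from the text):

```python
import random

def matmul(X, Y):
    # product of two n-by-n integer matrices stored as nested lists
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matsub(X, Y):
    # entrywise difference X - Y
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

random.seed(1)
for _ in range(100):
    a = [[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]
    b = [[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]
    ab = matmul(a, b)
    # (ab)^2 - a^2 b^2
    lhs = matsub(matmul(ab, ab), matmul(matmul(a, a), matmul(b, b)))
    # a (ba - ab) b
    rhs = matmul(matmul(a, matsub(matmul(b, a), matmul(a, b))), b)
    assert lhs == rhs
```

Of course this is no proof, but it is a cheap way to catch a sign error before hunting for an example ring.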
Making even the simplest computations in the quaternions, we find things like $(ij)^2=-1$ while $i^2j^2=+1$. It is unlikely that a subring of the quaternions will satisfy the condition of this problem, so we set it aside. I tried many things with $2\times 2$ matrices, which all ultimately failed to pan out. For instance, rings generated by simple (single non-zero entry) matrices with entries from the even integers did not satisfy the condition of the problem, and those matrices which square to zero (which easily satisfy the condition) end up generating a commutative subring.

The space of $3\times 3$ matrices is a big one, so it is natural to restrict attention to one famous subring, the upper triangular matrices. It's even a good idea to restrict further still, and only consider matrices with zeroes on the diagonal. It turns out that this works. Let $$A=\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix},\qquad B=\begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix},\qquad C=\begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix}.$$ Then we consider the set $$R=\{\alpha A+\beta B+\gamma C\mid\alpha,\beta,\gamma\in\mathbb{Z}\}.$$ Note that $AC=B$ while every other product of $A,B,C$ is zero. Thus it is trivial that $R$ is closed under multiplication, because $$(\alpha A+\beta B+\gamma C)(\alpha' A+\beta' B+\gamma' C)=\alpha\gamma' B\in R.$$ Furthermore, $R$ is non-commutative, because $$(\alpha' A+\beta' B+\gamma' C)(\alpha A+\beta B+\gamma C)=\alpha'\gamma B$$ will generally not be the same as the previous product. Of course, $R$ is also closed under addition. Therefore $R$ is a non-commutative subring of ${\rm Mat}_{3\times 3}(\mathbb{Z})$. It also satisfies the condition of the problem, because any product of four matrices in $R$ is proportional to $B^2=0$.
More explicitly, $$[(\alpha A+\beta B+\gamma C)(\alpha' A+\beta' B+\gamma' C)]^2=(\alpha\gamma' B)^2=0$$ and $$[(\alpha A+\beta B+\gamma C)]^2[(\alpha' A+\beta' B+\gamma' C)]^2=(\alpha\gamma B)(\alpha'\gamma' B)=0.$$ Note that the ring of coefficients for $R$ is really immaterial. Even $\mathbb{Z}_2$ would suffice, furnishing us with an eight-element ring satisfying the condition of the problem.

### Herstein 3.24:

### (a) Let $R$ be a unital ring with $(ab)^2=(ba)^2$ for all $a,b\in R$. If, for any $x\in R$, $2x=0$ implies $x=0$, then show that $R$ is commutative.

### (b) Let $R$ be a unital ring with $(ab)^2=(ba)^2$ for all $a,b\in R$. Show that $R$ may fail to be commutative if $2x=0$ does not imply $x=0$ for all $x\in R$.

### (c) Let $R$ be a non-unital ring with $(ab)^2=(ba)^2$ for all $a,b\in R$ and such that $2x=0$ implies $x=0$ for all $x\in R$. Provide an example to show that $R$ need not be commutative.

This exercise illustrates just how fragile the conditions in (a) are for ensuring that $R$ is commutative. If they are relaxed in any respect, $R$ no longer needs to be commutative.

**(a)** This is another follow-your-nose elementary manipulation. Based on the problem statement, we would like to end up with a result like $2(ab-ba)=0$. First note that $$[a(1+b)]^2=(a+ab)^2=a^2+a^2b+aba+(ab)^2,$$ but this is also equal to $$[(1+b)a]^2=(a+ba)^2=a^2+aba+ba^2+(ba)^2.$$ Comparing the two, we have $a^2b=ba^2$ for any $a,b\in R$. In other words, a square commutes with anything. Now, in particular, it is true that $(1+a)^2b=b(1+a)^2$. We write $$0=(1+a)^2b-b(1+a)^2=b+2ab+a^2b-b-2ba-ba^2=2(ab-ba),$$ again using the property derived above. Because $2x=0$ implies $x=0$, we have that $ab=ba$ for arbitrary $a,b\in R$ and thus $R$ is commutative.

**(b)** Here we seek to construct a non-commutative ring with $(ab)^2=(ba)^2$ for all $a,b\in R$. By part (a), we have the hint that this ring must contain some non-zero element $x$ with $2x=0$.
This naturally suggests things like $\mathbb{Z}/2\mathbb{Z}$ and $\mathbb{Z}/4\mathbb{Z}$. In order to get non-commutativity, it's then natural to look at matrices over those rings. As in problem 3.23, I tried various rings of $2\times2$ matrices over $\mathbb{Z}/2\mathbb{Z}$ and $\mathbb{Z}/4\mathbb{Z}$, but they always failed. One can easily write down explicit expressions for $2\times2$ matrices $a,b$ with $(ab)^2=(ba)^2$, and the resulting ring always ends up commutative.

Convinced of the futility of $2\times 2$ matrices, we look at $3\times 3$ matrices over $\mathbb{Z}/2\mathbb{Z}$. The solution of 3.23 does not work here because $R$ is required to be unital and the strictly upper-triangular matrices lack an identity element. However, if we include the diagonal, then we have it: recall the notation $A,B,C$ from the solution to 3.23, above, and consider the set $$R=\{x1+\alpha A+\beta B+\gamma C\mid x,\alpha,\beta,\gamma\in\mathbb{Z}/2\mathbb{Z}\}$$ ($1$ is the identity matrix). $R$ is the $16$-element set of those $3\times 3$ upper-triangular matrices over $\mathbb{Z}/2\mathbb{Z}$ with constant diagonal, which is easily checked to be a ring.
It is unital ($x=1$, $\alpha=\beta=\gamma=0$) and non-commutative: $$1+A+B+C=(1+A)(1+C)\ne(1+C)(1+A)=1+A+C.$$ Crucially, it also satisfies the condition $(ab)^2=(ba)^2$ for all $a,b\in R$: letting ${\bf x}=(x,\alpha,\beta,\gamma)$ and ${\bf E}=(1,A,B,C)$, we have $$({\bf x}\cdot{\bf E})({\bf x}'\cdot{\bf E})=xx'+(x\alpha'+\alpha x')A+(x\beta'+\beta x'+\alpha\gamma')B+(x\gamma'+\gamma x')C,$$ $$({\bf x}'\cdot{\bf E})({\bf x}\cdot{\bf E})=xx'+(x\alpha'+\alpha x')A+(x\beta'+\beta x'+\alpha'\gamma)B+(x\gamma'+\gamma x')C,$$ which differ only in their $B$ coefficients. Squaring, $$[({\bf x}\cdot{\bf E})({\bf x}'\cdot{\bf E})]^2=2xx'({\bf x}\cdot{\bf E})({\bf x}'\cdot{\bf E})-(xx')^2+(x\alpha'+\alpha x')(x\gamma'+\gamma x')B,$$ $$[({\bf x}'\cdot{\bf E})({\bf x}\cdot{\bf E})]^2=2xx'({\bf x}'\cdot{\bf E})({\bf x}\cdot{\bf E})-(xx')^2+(x\alpha'+\alpha x')(x\gamma'+\gamma x')B.$$ Because $2=0$, the first terms vanish, the remaining terms agree, and the result is proven. I was stuck on this problem until getting guidance from Jack Schmidt in an online post. A modification to the ring of 3.23 *should* have been an obvious candidate, but hindsight is always 20-20!

**(c)** Consider the ring $R$ from 3.23, the $3\times3$ strictly upper-triangular matrices over $\mathbb{Z}$. It is non-unital, non-commutative, and $2x=0$ implies $x=0$ for any $x\in R$. We also have (see the solution to 3.23) that $(ab)^2=0=(ba)^2$ for all $a,b\in R$, so it satisfies the requirements of this problem.
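Both counterexample rings above, the $16$-element unital ring of (b) and the strictly upper-triangular ring of 3.23/(c), are small enough to verify by brute force. A sketch in plain Python (the helper names `elem` and `matmul_mod` are mine, not from the text):

```python
from itertools import product

N = 3

def matmul_mod(X, Y, m=0):
    # product of two 3x3 matrices; entries reduced mod m when m > 0
    Z = [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
         for i in range(N)]
    return [[z % m for z in row] for row in Z] if m else Z

def elem(x, a, b, c):
    # x*1 + a*A + b*B + c*C, with A, B, C as in the solution to 3.23
    return [[x, a, b], [0, x, c], [0, 0, x]]

# Part (b): the 16-element unital ring, coefficients in Z/2Z.
R = [elem(*coeffs) for coeffs in product(range(2), repeat=4)]
noncommutative = False
for a, b in product(R, repeat=2):
    ab = matmul_mod(a, b, 2)
    ba = matmul_mod(b, a, 2)
    if ab != ba:
        noncommutative = True
    assert matmul_mod(ab, ab, 2) == matmul_mod(ba, ba, 2)  # (ab)^2 = (ba)^2
assert noncommutative

# 3.23 / part (c): strictly upper-triangular matrices, integer coefficients
# (a sample of them), where every product of four elements vanishes.
zero = [[0] * N for _ in range(N)]
S = [elem(0, a, b, c) for a in (-1, 0, 2) for b in (-1, 0, 2) for c in (-1, 0, 2)]
for a, b in product(S, repeat=2):
    ab = matmul_mod(a, b)
    assert matmul_mod(ab, ab) == zero                              # (ab)^2 = 0
    assert matmul_mod(matmul_mod(a, a), matmul_mod(b, b)) == zero  # a^2 b^2 = 0
```

The exhaustive loop over all $16^2$ pairs confirms the algebraic computation for (b), while the second loop checks the $(ab)^2=a^2b^2=0$ claim of 3.23 on a sample of integer coefficients.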