This section covers section 3.11 (“Polynomial Rings over Commutative Rings”).

**Definition**: Let $R$ be a commutative ring with a unit element. $R\left[x\right]$ is defined, as in section 3.9, as the set of all symbols ${a}_{0}+{a}_{1}x+\cdots +{a}_{m}{x}^{m}$ where $m$ is a non-negative integer and ${a}_{0},\dots ,{a}_{m}\in R$. The standard addition and multiplication make $R\left[x\right]$ a ring. It will be shown in exercise 3.11.1 that $R\left[x\right]$ so defined is a commutative ring with unit element.

**Definition**: Let $R$ be a commutative ring with unit element. By problem 3.11.1, $R\left[x\right]$ is again a commutative ring with unit element. Therefore we may recursively define $R[{x}_{1},\dots ,{x}_{n}]$ as $R[{x}_{1},\dots ,{x}_{n-1}]\left[{x}_{n}\right]$.

**Lemma 3.11.1**: If $R$ is an integral domain, then $R\left[x\right]$ is an integral domain.

**Corollary**: If $R$ is an integral domain, then $R[{x}_{1},\dots ,{x}_{n}]$ is an integral domain for any positive integer $n$.

**Definition**: Let $R$ be an integral domain. $R$ is a **unique factorization domain** (UFD) if (1) every non-zero $r\in R$ is either a unit or expressible as a product of a finite number of irreducible elements of $R$, and (2) the decomposition just mentioned is unique up to re-ordering and multiplication by units.

**Lemma 3.11.2**: If $R$ is a UFD and $a,b\in R$, then $(a,b)\in R$, i.e. their greatest common divisor exists in $R$. Additionally, if $(a,b)=1$, then $a\mid bc$ implies that $a\mid c$.

**Corollary**: If $R$ is a UFD and $a\in R$ is irreducible, then $a\mid bc$ implies $a\mid b$ or $a\mid c$.

**Lemma 3.11.3**: If $R$ is a UFD and $f,g\in R\left[x\right]$ are primitive, then $fg\in R\left[x\right]$ is primitive.

**Lemma 3.11.4 (Gauss' Lemma)**: Let $R$ be a UFD and hence an integral domain. By theorem 3.6.1, $R$ embeds in its field of fractions $F$, and so by lemma 3.11.1 the integral domain $R\left[x\right]$ embeds in $F\left[x\right]$. If $f\in R\left[x\right]$ is primitive, then it is irreducible in $R\left[x\right]$ if and only if it is irreducible in $F\left[x\right]$.

**Lemma 3.11.5**: Let $R$ be a UFD and let $f\in R\left[x\right]$ be primitive. Then $f$ may be factored uniquely into irreducible elements in $R\left[x\right]$.

**Theorem 3.11.1**: If $R$ is a UFD, then so is $R\left[x\right]$.

**Corollary**: If $R$ is a UFD, then so is $R[{x}_{1},\dots ,{x}_{n}]$ for any positive integer $n$.

**Corollary**: If $F$ is a field, then $F[{x}_{1},\dots ,{x}_{n}]$ is a UFD.

The problems below are paraphrased from/inspired by those given in Topics in Algebra by Herstein. The solutions are my own unless otherwise noted. I will generally try, in my solutions, to stick to the development in the text. This means that problems will not be solved using ideas and theorems presented further on in the book.

This problem justifies the recursive definition of $R[{x}_{1},\dots ,{x}_{n}]$. It is trivial but tedious (and notationally cumbersome) to show that $R\left[x\right]$ is a commutative ring, so I will take it for granted. The unit element is simply the constant polynomial ${1}_{R}\in R\left[x\right]$.

What follows is a sketch of a proof. First consider a simple illustration using $R[x,y]$ and $R[y,x]$. It is easy to see that the elements of the former all have the form $\sum {a}_{ij}{x}^{i}{y}^{j}$

while elements of the latter have the form $\sum {a}_{ij}{y}^{i}{x}^{j}\mathrm{.}$

Defining these polynomials as “formal symbols” as we do, it is not *a priori* obvious that monomials $r{x}^{i}{y}^{j}$ and $r{y}^{j}{x}^{i}$, with $r\in R$, are in any sense equal. Still, it is tempting to simply move the indeterminates around.

In $R[{x}_{1},\dots ,{x}_{n}]$, the above comments suggest the mapping $\varphi :R[{x}_{1},\dots ,{x}_{n}]\to R[{x}_{\sigma \left(1\right)},\dots ,{x}_{\sigma \left(n\right)}]$

given by $\varphi (\sum {a}_{{i}_{1}\cdots {i}_{n}}{x}_{1}^{{i}_{1}}\cdots {x}_{n}^{{i}_{n}})=\sum {a}_{{i}_{1}\cdots {i}_{n}}{x}_{\sigma \left(1\right)}^{{i}_{\sigma \left(1\right)}}\cdots {x}_{\sigma \left(n\right)}^{{i}_{\sigma \left(n\right)}}\mathrm{.}$

Here, the coefficient $a$ on each monomial remains the same while the positions of the indeterminates are permuted. For simplicity of notation, we can imagine each of ${i}_{1},\dots ,{i}_{n}$ ranging from zero to infinity, although we note that only finitely many of the numbers in the $a$ array can be non-zero (a polynomial has finite degree) so there are no awkward questions of convergence, etc.

We assert that $\varphi $ is an isomorphism, which is easy to believe. The only non-trivial thing to check is that $\varphi $ respects ring multiplication. With correct bookkeeping, one sees that it does, but the details will be skipped here.

Let $f={r}_{0}+{r}_{1}x+\cdots +{r}_{n}{x}^{n}$ and let $g={s}_{0}+{s}_{1}x+\cdots +{s}_{m}{x}^{m}$, where all the coefficients belong to $R$ and ${r}_{n},{s}_{m}\mathrm{\ne}0$. Then the degree of $f$ is $n$ and the degree of $g$ is $m$. The term of largest power in $fg$ is ${r}_{n}{s}_{m}{x}^{m+n}$, and its coefficient is ${r}_{n}{s}_{m}\mathrm{\ne}0$ because $R$ is an integral domain. Hence the degree of $fg$ is $m+n$.
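The degree computation above can be checked mechanically. Here is a small sketch in Python over the integral domain $\mathbb{Z}$, representing a polynomial by its list of coefficients $[r_0, r_1, \dots, r_n]$ (the helper names are my own, not from the text):

```python
def poly_mul(f, g):
    """Multiply two polynomials given as coefficient lists over Z."""
    prod = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += a * b
    return prod

def deg(f):
    """Degree = index of the last non-zero coefficient."""
    return max(i for i, c in enumerate(f) if c != 0)

f = [1, 0, 3]   # 1 + 3x^2, degree 2
g = [-2, 5]     # -2 + 5x,  degree 1

# In an integral domain the leading coefficients cannot multiply to zero,
# so the degrees add.
assert deg(poly_mul(f, g)) == deg(f) + deg(g)
```

Over a ring with zero divisors, e.g. $\mathbb{Z}\mathrm{/}6\mathbb{Z}$, the same assertion can fail, which is exactly why the integral-domain hypothesis matters.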

Let $v\in R\left[x\right]$ be such that $uv=1\in R\left[x\right]$. We have by problem 3.11.3 that $0=\mathrm{d}\mathrm{e}\mathrm{g}\left(uv\right)=\mathrm{d}\mathrm{e}\mathrm{g}\left(u\right)+\mathrm{d}\mathrm{e}\mathrm{g}\left(v\right)$, which forces both $u$ and $v$ to have degree zero, i.e. to be elements of $R$. But then $uv=1\in R$ so that $u\in R$ is a unit. This is a slight abuse of notation, but the idea is clear that $u\in R\left[x\right]$ is a constant polynomial which we identify with its lone coefficient belonging to $R$.

Because $f$ is a zero divisor, there exists $g\in R\left[x\right]$ such that $fg=0$. Then if $g\left(x\right)={b}_{0}+{b}_{1}x+\cdots +{b}_{n}{x}^{n}$, we have the product $\left(fg\right)\left(x\right)=\sum _{k=0}^{m+n}{c}_{k}{x}^{k}$

where ${c}_{k}={a}_{k}{b}_{0}+{a}_{k-1}{b}_{1}+\cdots +{a}_{0}{b}_{k}$. Now this polynomial is identically zero, so each ${c}_{k}=0$. For instance, we see that ${c}_{0}={a}_{0}{b}_{0}=0$ and that ${c}_{1}={a}_{0}{b}_{1}+{a}_{1}{b}_{0}=0$. Because ${a}_{0}{b}_{0}=0$, we can try multiplying the second equation by ${b}_{0}$, in which case we see $0={a}_{0}{b}_{0}{b}_{1}+{a}_{1}{b}_{0}^{2}={a}_{1}{b}_{0}^{2}\mathrm{.}$

This suggests an induction: we claim that ${b}_{0}^{\ell +1}{a}_{\ell }=0$ for all $\ell <k$ and then we consider $0={b}_{0}^{k}{c}_{k}={b}_{0}^{k+1}{a}_{k}+{b}_{0}^{k}{a}_{k-1}{b}_{1}+{b}_{0}^{k}{a}_{k-2}{b}_{2}+\cdots +{b}_{0}^{k}{a}_{0}{b}_{k}\mathrm{.}$

By the induction hypothesis, all terms vanish except the first, leaving $0={b}_{0}^{k+1}{a}_{k}$

which completes the induction.

Returning to the problem at hand, we see that $b={b}_{0}^{m+1}$ is such that $b{a}_{0}=b{a}_{1}=\cdots =b{a}_{m}=0.$

If ${b}_{0}\mathrm{\ne}0$ then we are done, because $b={b}_{0}^{m+1}\mathrm{\ne}0$ by the prohibition on nilpotent elements. The case of ${b}_{0}=0$ is somewhat trivial. If ${b}_{0}=0$, we may rewrite $g$ as $g\left(x\right)={x}^{r}h\left(x\right)$ where the degree-zero coefficient of $h\left(x\right)$ is non-zero, and then the above analysis goes through by using the coefficients of $h$ rather than those of $g$.

Here is a clever and subtle proof of this result due to this math.stackexchange.com post.

Let non-zero $g\in R\left[x\right]$ be of minimal degree such that $fg=0$ and write $g={b}_{0}+{b}_{1}x+\cdots +{b}_{n}{x}^{n}$. Suppose, for contradiction, that there does not exist any non-zero $b\in R$ having $b{a}_{0}=\cdots =b{a}_{m}=0$.

If all of the coefficients ${a}_{i}$, $i=0,\dots ,m$ annihilate $g$, having ${a}_{i}g=0$, then we would certainly have ${a}_{i}{b}_{n}=0$

for every $i$. However, this would lead to ${b}_{n}f={b}_{n}{a}_{0}+\cdots +{b}_{n}{a}_{m}{x}^{m}=0,$

contradicting our supposition that no such $b={b}_{n}$ exists.

Therefore if we accept that no such $b$ exists, then some coefficient fails to annihilate $g$. In particular, there is a largest index $i$ such that ${a}_{i}g\mathrm{\ne}0$. Now, $fg=\left({a}_{0}+\cdots +{a}_{i}{x}^{i}+\cdots +{a}_{m}{x}^{m}\right)\left({b}_{0}+\cdots +{b}_{n}{x}^{n}\right)=\left({a}_{0}+\cdots +{a}_{i}{x}^{i}\right)\left({b}_{0}+\cdots +{b}_{n}{x}^{n}\right)$

because ${a}_{j}g=0$ for $j>i$. Expanding this product, it is seen that ${a}_{i}{b}_{n}=0$ so that $\mathrm{d}\mathrm{e}\mathrm{g}\left({a}_{i}g\right)<n$ because we have previously determined that ${a}_{i}g\mathrm{\ne}0$. We had supposed that $g$ was the non-zero polynomial of least degree which annihilated $f$, but now we are forced to conclude that $f\cdot \left({a}_{i}g\right)=0$ as well. This contradiction proves our result.
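The theorem can be illustrated numerically in a ring that does have zero divisors, say $R=\mathbb{Z}\mathrm{/}6\mathbb{Z}$ (my own example, not from the text): $f\left(x\right)=2+4x$ is a zero divisor in $R\left[x\right]$, and the single ring element $b=3$ annihilates every coefficient of $f$, just as the result promises.

```python
MOD = 6  # working in Z/6Z, which has zero divisors

def poly_mul_mod(f, g, m):
    """Multiply coefficient lists, reducing each coefficient mod m."""
    prod = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] = (prod[i + j] + a * b) % m
    return prod

f = [2, 4]   # f(x) = 2 + 4x in Z_6[x]
g = [3]      # the constant polynomial 3

assert poly_mul_mod(f, g, MOD) == [0, 0]    # fg = 0: f is a zero divisor
assert all((3 * a) % MOD == 0 for a in f)   # b = 3 kills every coefficient of f
```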

The above proof does not exhibit any concrete element of $R$ which sends $f$ to zero. A constructive proof shows that every coefficient of $f$ must indeed annihilate $g$ (contrast with the second paragraph), from which we conclude that ${b}_{n}f=0.$

There is intuition to be gained here from writing $\frac{1}{f}=\frac{1}{{a}_{0}(1+{f}^{\mathrm{\prime}})},$

where ${f}^{\mathrm{\prime}}=(f-{a}_{0})\mathrm{/}{a}_{0}$, and then “Taylor expanding”: $\frac{1}{f}=\frac{1}{{a}_{0}}(1-{f}^{\mathrm{\prime}}+{f}^{\mathrm{\prime}2}-\cdots \text{\hspace{0.17em}})\mathrm{.}$

Roughly speaking, this should only make sense when (1) ${a}_{0}$ is invertible, and (2) the series terminates due to nilpotency. Now we seek to formalize this argument.

One way of doing so is to establish this **lemma**: Let $S$ be a commutative, unital ring and let non-zero $n\in S$ be nilpotent with ${n}^{k}=0$. Then $1+n$ is a unit. **Proof**: Indeed, if we try the “Taylor expansion” as an inverse, we find
$(1+n)(1-n+{n}^{2}-\cdots +(-1{)}^{k-1}{n}^{k-1})=1+(-1{)}^{k-1}{n}^{k}=1,$

a telescoping sum. The series expansion for the inverse of $1+n$ may make sense in a formal context even if it does not terminate, but in our case we don’t need to worry – it terminates thanks to the nilpotency of $n$.$\phantom{\rule{1em}{0ex}}\mathrm{\square}$ Furthermore, if $u\in S$ is any unit, we have $u+n=u(1+{u}^{-1}n)$ is again a unit because ${u}^{-1}n$ is nilpotent.
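The lemma's telescoping inverse can be checked concretely, e.g. in $S=\mathbb{Z}\mathrm{/}8\mathbb{Z}$ where $n=2$ is nilpotent with ${n}^{3}=0$ (a numerical illustration of my own):

```python
MOD = 8
n = 2
k = 3  # smallest k with n**k == 0 mod 8

# The truncated "Taylor series": 1 - n + n^2 - ... + (-1)^(k-1) n^(k-1)
inv = sum((-1) ** j * n ** j for j in range(k)) % MOD

assert n ** k % MOD == 0        # n really is nilpotent in Z_8
assert (1 + n) * inv % MOD == 1 # so 1 + n = 3 is a unit, with inverse inv
```

Here the series $1-2+4=3$ terminates after three terms, and indeed $3\cdot 3=9\equiv 1 \pmod 8$.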

Now suppose that ${a}_{0}$ is a unit in $R$ and ${a}_{1},\dots ,{a}_{n}$ are all nilpotent, say with nilpotency indices ${k}_{i}$ such that ${a}_{i}^{{k}_{i}}=0$. We have that ${f}_{0}={a}_{0}$ is a unit, and, by the lemma, ${f}_{1}={a}_{0}+{a}_{1}x$ is again a unit because ${a}_{1}x$ is nilpotent, $({a}_{1}x{)}^{{k}_{1}}={a}_{1}^{{k}_{1}}{x}^{{k}_{1}}=0$. The lemma applies because $R\left[x\right]$ is a commutative, unital ring by exercise 3.11.1. Now repeating this stacking process $n-1$ more times, we conclude that $f={f}_{n}$ is a unit.

On the other hand, suppose that $f$ were a unit and that $g\left(x\right)={b}_{0}+{b}_{1}x+\cdots +{b}_{m}{x}^{m}$

were such that $fg=1$. Writing out the product explicitly, we have that ${a}_{0}{b}_{0}=1$ so that ${a}_{0}$ must be a unit. At the highest degree terms, we find ${a}_{n}{b}_{m}=0$

and ${a}_{n}{b}_{m-1}+{a}_{n-1}{b}_{m}=0.$

Multiplying the second equation by ${a}_{n}$, the second term in the sum vanishes, and we are left with ${a}_{n}^{2}{b}_{m-1}=0$. This suggests that ever higher powers of ${a}_{n}$ should annihilate each successive coefficient of $g$, counting downwards. Performing the relevant induction, we see that it is true and, as a result, ${a}_{n}^{m+1}$ annihilates every coefficient in $g$. Now ${a}_{n}^{m+1}={a}_{n}^{m+1}\left(fg\right)=f\cdot \left({a}_{n}^{m+1}g\right)=0$

shows that ${a}_{n}$ is nilpotent.

Finally, we consider the polynomial $f-{a}_{n}{x}^{n}$

which is a unit by the lemma, because $f$ is a unit and ${a}_{n}{x}^{n}$ was just shown to be nilpotent. Repeating the argument from above, we conclude that ${a}_{n-1}$ is nilpotent. By induction, the same may be said of each coefficient ${a}_{i}$ with $i\ge 1$.
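Both directions of this characterization can be seen at a glance in $R=\mathbb{Z}\mathrm{/}4\mathbb{Z}$ (my own example): $f\left(x\right)=1+2x$ has unit constant term and nilpotent higher coefficient (${2}^{2}\equiv 0$), and indeed $f$ is a unit in $R\left[x\right]$. In this case it happens to be its own inverse.

```python
MOD = 4  # working in Z/4Z, where 2 is nilpotent

def poly_mul_mod(f, g, m):
    """Multiply coefficient lists, reducing each coefficient mod m."""
    prod = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] = (prod[i + j] + a * b) % m
    return prod

f = [1, 2]                    # f(x) = 1 + 2x: a_0 = 1 is a unit, a_1 = 2 is nilpotent
ff = poly_mul_mod(f, f, MOD)  # (1 + 2x)^2 = 1 + 4x + 4x^2 = 1 in Z_4[x]
assert ff == [1, 0, 0]        # so f is a unit, inverse to itself
```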

Some nudges toward this solution were provided by this excellent site. There is also a very well known, elegant proof that unit $f$ implies nilpotent coefficients. It can be found in this math.stackexchange.com post. However, it is clearly not the solution Herstein had in mind, because prime ideals have not been introduced at this point in the text. Also, to say that a prime ideal exists in an arbitrary commutative ring is a non-trivial statement, requiring a weak form of the axiom of choice (the Boolean prime ideal theorem).

We try the most natural thing, which is to look at the ideal generated by ${x}_{1}$ and ${x}_{2}$. This hasn’t been defined in the text, but we take a stab, writing $I=({x}_{1},{x}_{2})=\left\{a{x}_{1}+b{x}_{2}\mid a,b\in F[{x}_{1},{x}_{2}]\right\}\mathrm{.}$

It is trivial to see that $I$ is indeed an ideal of $F[{x}_{1},{x}_{2}]$.

Let non-zero $d\in I$ be arbitrary. Because $F$ has no zero divisors and $d$ is built out of ${x}_{1}$ and ${x}_{2}$, we must have $\mathrm{d}\mathrm{e}\mathrm{g}\left(d\right)\ge 1$. Here we define the degree of a multivariate monomial as the sum of the powers on its indeterminates, and the degree of a multivariate polynomial as the largest of the degrees of its constituent monomials. If $d$ were to generate the ideal, we would also have $d\mid {x}_{1}$ and $d\mid {x}_{2}$, which is only possible if $d$ is a constant, having degree zero, a contradiction. Therefore the ideal $I$ is not principal and $F[{x}_{1},{x}_{2}]$ is not a principal ideal domain.

**(a)** The idea here is very simple: construct the greatest common divisor as the product of all the primes which $a$ and $b$ share. Formally, suppose that $a=u{\pi}_{1}\cdots {\pi}_{m}$ and $b=v{\rho}_{1}\cdots {\rho}_{n}$ where $u,v\in R$ are units and the $\pi $’s and the $\rho $’s are all irreducible elements of $R$, not necessarily distinct. Because $R$ is a UFD, these decompositions of $a$ and $b$ are finite and unique up to associates. Out of those decompositions, run through the list of ${\pi}_{1},\dots ,{\pi}_{m}$ and pick out those ${\pi}_{i}$ which are associated to some ${\rho}_{j}$. This must be done without repetition, so if such a ${\rho}_{j}$ is found, it is removed from consideration for further $\pi $’s (for instance, consider $a=3\cdot 3$ and $b=3$; we would pick out $3$ only once). Now, define $d\in R$ to be the product of those ${\pi}_{i}$ which were picked out by this algorithm. We claim that $d$ is the greatest common divisor of $a$ and $b$.

It is clear that $d\mid a$ because it was constructed out of a subset of factors of $a$. We also have that $d\mid b$ because each of those factors was associated to a factor of $b$, and we were mindful of repetitions. Now suppose that $c\mid a$ and $c\mid b$ and perhaps $c\nmid d$. Then there would exist an irreducible $\sigma \in R$ with $\sigma \mid c$ and $\sigma \nmid d$. Of course, we must have $\sigma \mid a$ and $\sigma \mid b$, but then $\sigma \nmid d$ contradicts the construction of $d$. Therefore it must be true that $c\mid d$ and as a result $d$ is the greatest common divisor.
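The construction of part (a) can be carried out in the familiar UFD $\mathbb{Z}$, where "picking out shared irreducibles without repetition" is exactly a multiset intersection of prime factorizations (the helper names are my own):

```python
from collections import Counter

def factor(n):
    """Trial-division prime factorization of a positive integer, as a multiset."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_by_factoring(a, b):
    """gcd as the product of shared primes, counted with multiplicity."""
    shared = factor(a) & factor(b)  # Counter & Counter = multiset intersection
    prod = 1
    for p, e in shared.items():
        prod *= p ** e
    return prod

assert gcd_by_factoring(9, 3) == 3    # the a = 3*3, b = 3 example: 3 picked once
assert gcd_by_factoring(84, 90) == 6  # 84 = 2^2*3*7, 90 = 2*3^2*5 share 2*3
```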

**(b)** Let $a=u{\pi}_{1}\cdots {\pi}_{l}$, $b=v{\rho}_{1}\cdots {\rho}_{m}$ and $c=w{\sigma}_{1}\cdots {\sigma}_{n}$ where $u,v,w\in R$ are units and the $\pi $’s, $\rho $’s and $\sigma $’s are all irreducible elements of $R$. For each ${\pi}_{i}$, $a\mid bc$ implies that ${\pi}_{i}$ is associated to some ${\rho}_{j}$ or ${\sigma}_{j}$. Because $(a,b)=1$, we know that ${\pi}_{i}$ is not associated to any ${\rho}_{j}$. Hence it is associated to some $\sigma $. This argument holds for every ${\pi}_{i}$, and thus $a\mid c$.

**(c)** If $(a,b)=1$ then $a\mid c$ by part (b). The only other possibility is $(a,b)=a$, because $a$ is irreducible. In that case, we have $a\mid b$ because $b$ is always divisible by $(a,b)$.

**(a)** Suppose $f\left(x\right)={a}_{0}+{a}_{1}x+\cdots +{a}_{n}{x}^{n}$. By trivial extension of exercise 3.11.9, the greatest common divisor $({a}_{0},\dots ,{a}_{n})$ exists in $R$. Then we may take $a=({a}_{0},\dots ,{a}_{n})$, the content of $f$, and the coefficients of $g$ are those of $f$ with the common irreducible divisors pulled out. By definition of gcd, we know that $g$ is primitive.

**(b)** The corollary to lemma 3.11.3 states that, with $R$ a UFD, the content of a product of polynomials in $R\left[x\right]$ is the product of the contents of the polynomials, modulo multiplication by units.

Suppose $f\left(x\right)=ag\left(x\right)=bh\left(x\right)$ where $a,b\in R$ and $g,h\in R\left[x\right]$ are primitive. Applying the statement above, we have that $a$ is associated to $b$ (and both are associates of the content of $f$). Writing $b=au$ with unit $u\in R$, we see that $0=ag\left(x\right)-bh\left(x\right)=a\left(g\left(x\right)-uh\left(x\right)\right)$

also implies that $g\sim h$. Therefore the decomposition of part (a) is unique up to associates.
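Part (a)'s decomposition is easy to compute in $R=\mathbb{Z}$: the content is the gcd of the coefficients, and the primitive part is what remains after dividing it out (a minimal sketch; the function names are my own):

```python
from functools import reduce
from math import gcd

def content_and_primitive(coeffs):
    """Split non-zero f in Z[x] into (content a, primitive part g) with f = a*g."""
    a = reduce(gcd, (abs(c) for c in coeffs))
    return a, [c // a for c in coeffs]

f = [6, 9, 12]  # f(x) = 6 + 9x + 12x^2
a, g = content_and_primitive(f)

assert (a, g) == (3, [2, 3, 4])                # f = 3 * (2 + 3x + 4x^2)
assert reduce(gcd, (abs(c) for c in g)) == 1   # g is primitive
```

In $\mathbb{Z}$ the content is unique up to sign, matching the "up to associates" uniqueness of part (b).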

Again this is a fairly easy idea just couched in some abstract language. The idea is to clear the denominators. Write $f\left(x\right)=\frac{{r}_{0}}{{s}_{0}}+\frac{{r}_{1}}{{s}_{1}}x+\cdots +\frac{{r}_{n}}{{s}_{n}}{x}^{n}$

with ${r}_{i},{s}_{i}\in R$ and ${s}_{i}$ restricted to non-zero values. The fraction notation is shorthand for equivalence classes in $F$ where, as we recall, $a\mathrm{/}b=c\mathrm{/}d$ if and only if $ad-bc=0$.

The clear thing to consider is $p\left(x\right)=\frac{1}{{s}_{0}\cdots {s}_{n}}({r}_{0}{s}_{1}\cdots {s}_{n}+{s}_{0}{r}_{1}{s}_{2}\cdots {s}_{n}x+\cdots +{s}_{0}\cdots {s}_{n-1}{r}_{n}{x}^{n}),$

which is in the form $p\left(x\right)=g\left(x\right)\mathrm{/}a$ with $a\in R$ and $g\left(x\right)\in R\left[x\right]$. We must establish that $p\left(x\right)=f\left(x\right)$ in $F\left[x\right]$. This is easily worked out for each coefficient individually. For instance, for the zeroth coefficient, we would like to show that $\frac{{r}_{0}}{{s}_{0}}=\frac{{r}_{0}{s}_{1}\cdots {s}_{n}}{{s}_{0}\cdots {s}_{n}},$

and of course ${r}_{0}({s}_{0}\cdots {s}_{n})-{s}_{0}({r}_{0}{s}_{1}\cdots {s}_{n})=0$

so we are correct. All other coefficients follow in the same manner, and we see that the result has been established.
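Denominator clearing is easy to watch in action with $R=\mathbb{Z}$ and $F=\mathbb{Q}$, scaling $f$ by the product of the denominators, as in the proof (my own illustration using exact rational arithmetic):

```python
from fractions import Fraction
from math import prod

# f(x) = 1/2 - (2/3)x + (5/4)x^2 in Q[x], as a coefficient list
f = [Fraction(1, 2), Fraction(-2, 3), Fraction(5, 4)]

a = prod(c.denominator for c in f)  # a = 2 * 3 * 4 = 24, an element of Z
g = [c * a for c in f]              # g = a*f has integer coefficients

assert all(c.denominator == 1 for c in g)  # g lies in Z[x]
assert g == [12, -16, 30]                  # so f(x) = g(x)/a with a = 24
```

Using the least common multiple of the denominators instead of their product would give a smaller $a$, but the plain product is all the proof requires.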

This does not seem to require that $f$ be primitive. Suppose that $f$ were reducible in $R\left[x\right]$, say $f\left(x\right)=g\left(x\right)h\left(x\right)$ with $g,h\in R\left[x\right]$ of positive degree. Then such a factorization also exists in $F\left[x\right]$ because $g$ and $h$ can be considered as elements of $F\left[x\right]$ if we replace each coefficient $a$ by the fraction $a\mathrm{/}1$ (the unit element $1$ exists because $R$ is a UFD, but it is not essential: the fraction $ar\mathrm{/}r$ for arbitrary non-zero $r\in R$ would suffice). So if $f$ is reducible in $R\left[x\right]$ then it must be reducible in $F\left[x\right]$, and the contrapositive is the statement that we want.

Theorem 3.11.1 and its corollary state that if $R$ is a UFD then $R[{x}_{1},\dots ,{x}_{n}]$ is also a UFD. Therefore we must show here that a field $F$ is indeed a UFD. But this is vacuously true of a field. We recall the definition:

“Let $R$ be an integral domain. $R$ is a **unique factorization domain** (UFD) if (1) non-zero $r\in R$ is either a unit or expressible as a product of a finite number of irreducible elements of $R$, (2) the decomposition just mentioned is unique up to re-ordering and multiplication by units.”

A field is an integral domain by exercise 3.2.12. Every non-zero element in a field is invertible under multiplication, i.e. is a unit. The condition (2) is trivial in this case. Therefore $F$ is a UFD and the result holds by Theorem 3.11.1.

Here’s a sketch of a proof which was guided by this PDF. One really has to wonder if there’s some elusive result in the text which simplifies the exercise, because it seems that this should really be a starred problem. To show that $R$ is a UFD we must prove two facts: **(1)** Every element $r\in R$ can be factored into a finite number of irreducible elements of $R$, **(2)** The factorization from (1) is unique up to reordering and multiplication by units.

**(1)** Let $r\in R$ be such that it cannot be factored into finitely many irreducible elements. This means that, no matter how $r$ has been factorized, it can be broken down still further into non-trivial (i.e. non-unit) factors. Now, this also means that there must be some proper factor ${a}_{0}$ of $r$ (that is, ${a}_{0}$ not an associate of $r$) with the same “infinite divisibility” property: if $r$ had a factorization where none of the factors had the property, then $r$ itself would be expressible as a finite product of irreducibles, a contradiction. By the same argument, ${a}_{0}$ has a proper factor ${a}_{1}$ with the infinite divisibility property, and so forth. Now consider the chain of ideals
$\left({a}_{0}\right)\subset \left({a}_{1}\right)\subset \left({a}_{2}\right)\subset \cdots $

where each inclusion is proper. It is easy to show that the set $U=\bigcup _{i=0}^{\mathrm{\infty}}\text{}\left({a}_{i}\right)$

is also an ideal of $R$: first note that if $x\in U$ then $x\in \left({a}_{i}\right)$ for some $i$. If $s\in R$ and $x\in U$, then $x\in \left({a}_{i}\right)$ for some $i$ and $sx\in \left({a}_{i}\right)\subset U$. If $x,y\in U$, with $x\in \left({a}_{i}\right)$ and $y\in \left({a}_{j}\right)$, then, because of the inclusion chain, both $x$ and $y$ belong to $\left({a}_{\mathrm{m}\mathrm{a}\mathrm{x}(i,j)}\right)$ so that $x+y\in \left({a}_{\mathrm{m}\mathrm{a}\mathrm{x}(i,j)}\right)\subset U$.

Finally, we employ the fact that $R$ is a PID. Because $U$ is an ideal, it is principal, say $U=\left(\alpha \right)$ with $\alpha \in R$. Because $\alpha \in U$, we know that $\alpha \in \left({a}_{i}\right)$ for some $i$, and hence $U=\left(\alpha \right)\subseteq \left({a}_{i}\right)$. But then, for $j>i$, the ideal $\left({a}_{j}\right)$ properly contains $\left({a}_{i}\right)\supseteq U$, which is impossible because $U$ contains every $\left({a}_{j}\right)$ by construction. Therefore no element of a PID can have this infinite divisibility property: it can be factored into finitely many irreducible elements.

**(2)** Here we point out that the proof of theorem 3.7.2, uniqueness of prime factorization in Euclidean rings, carries over almost identically to PIDs. The only difficulty is that we have no result in PIDs which states that a prime element of $R$ dividing $ab$ must divide one of $a$ or $b$. The result holds for Euclidean rings (lemma 3.7.6) and for UFDs (exercise 3.11.9c), but not, a priori, for PIDs.

With $R$ a PID, $a,b\in R$ have a greatest common divisor. Consider the ideal $I=\{ax+by\mid x,y\in R\}$, which must be principal, say $I=\left(d\right)$ for $d\in R$. Clearly $d$ is a divisor of both $a$ (with $x=1$ and $y=0$) and $b$ (with $x=0$ and $y=1$). It is also the greatest such divisor: let ${x}_{0},{y}_{0}\in R$ be such that $d=a{x}_{0}+b{y}_{0}$. Then if $c\mid a$ and $c\mid b$, then $c\mid (a{x}_{0}+b{y}_{0})=d$. Therefore $d$ is the greatest common divisor of $a$ and $b$, and $d\in R$.

If $\pi \in R$ is irreducible and $a,b\in R$ with $\pi \mid ab$, we would like to show that $\pi \mid a$ or $\pi \mid b$. If $\pi \nmid a$, then $(\pi ,a)=1$. Therefore there exist $x,y\in R$ with $\pi x+ay=1$ and consequently $\pi xb+aby=b$. Because $\pi \mid ab$, $\pi $ divides the left hand side, and then it also divides the right hand side: $\pi \mid b$. Therefore if $\pi \nmid a$, then $\pi \mid b$. Otherwise $\pi \mid a$. This proves the result, which is a generalization of lemma 3.7.6.
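The Bezout identity $\pi x+ay=1$ underlying this argument can be produced explicitly in the PID $\mathbb{Z}$ via the extended Euclidean algorithm (a standard routine; the variable names are my own):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

pi, a, b = 7, 10, 21  # pi irreducible, pi does not divide a, yet pi | a*b

g, x, y = ext_gcd(pi, a)
assert g == 1                  # (pi, a) = 1, as in the proof
assert pi * x + a * y == 1     # the explicit Bezout identity
assert (a * b) % pi == 0       # pi divides ab ...
assert b % pi == 0             # ... and, as the lemma predicts, pi | b
```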

Now it would be possible to quote the proof of theorem 3.7.2 verbatim here. Instead, we will give the rough sketch of the argument: let $r={\pi}_{1}\cdots {\pi}_{m}={\rho}_{1}\cdots {\rho}_{n}$ be two factorizations of $r$ into irreducibles. Because ${\pi}_{1}$ divides the right hand side, it must be associate to one of the ${\rho}_{i}$. Because we are working in a domain, we can cancel ${\pi}_{1}$ from both sides, leaving a unit on the right hand side but one less irreducible. Continuing this process, we end up with $1$ on the left hand side and, if there are any non-units remaining on the right hand side, then we have a contradiction. Hence we must have $n=m$ and all of the irreducibles in the two factorizations pair off as associates. This proves that the factorization is unique up to rearranging and multiplication by units. Taking (1) and (2) together, we see that any principal ideal domain must be a unique factorization domain.

By corollary 1 to theorem 3.11.1, we have that, when $R$ is a UFD, $R[{x}_{1},\dots ,{x}_{n}]$ is a UFD. The fact that $\mathbb{Z}$ is a UFD is the statement of the fundamental theorem of arithmetic (theorem 1.3.1). Therefore the result follows.