\chapter{Foundations}
\label{foundations}
This foundational chapter introduces the notion of a ring and,
next, that of a module over a ring. These notions will be the focus of
this book. Most of the chapter consists of definitions.
We begin with a few historical remarks. Fermat's last theorem states that the
equation
\begin{equation} \label{ft} x^n + y^n = z^n \end{equation}
has no nontrivial solutions in the integers, for $n \ge 3$. We could try to
prove this by factoring the expression on the left hand side. We can write
\[ (x+y)(x+ \zeta y) (x+ \zeta^2y) \dots (x+ \zeta^{n-1}y) = z^n, \]
where $\zeta$ is a primitive $n$th root of unity. Unfortunately, the factors
lie in $\mathbb{Z}[\zeta]$, not the integers $\mathbb{Z}$. Though
$\mathbb{Z}[\zeta]$ is still a \emph{ring} where we have notions of primes and
factorization, just as in $\mathbb{Z}$, we will see that prime factorization
is not always unique in $\mathbb{Z}[\zeta]$. (If it were always unique, then we
could prove at least one important case of Fermat's last theorem rather easily; see
the introductory chapter of \cite{Wa97} for an argument.)
For instance, consider the ring
$\mathbb{Z}[\sqrt{-5}]$ of complex numbers of the form $a + b\sqrt{-5}$, where
$a, b \in \mathbb{Z}$. Then we have the two factorizations
\[ 6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}). \]
Both of these are factorizations of 6 into irreducible factors, but they
are fundamentally different.
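To see concretely why these factorizations are genuinely different, one standard device (not spelled out in the text above) is the multiplicative norm $N(a + b\sqrt{-5}) = a^2 + 5b^2$: since no element of $\mathbb{Z}[\sqrt{-5}]$ has norm $2$ or $3$, the elements $2$, $3$, and $1 \pm \sqrt{-5}$ (of norms $4$, $9$, $6$) admit no proper factorizations. A small computational sketch:

```python
# Sketch: irreducibility of 2, 3, 1 + sqrt(-5), 1 - sqrt(-5) in Z[sqrt(-5)]
# via the multiplicative norm N(a + b*sqrt(-5)) = a^2 + 5*b^2.
# A proper factor of an element of norm 4, 6, or 9 would have norm 2 or 3;
# but a^2 + 5*b^2 = 2 or 3 forces b = 0 and a^2 in {2, 3}, impossible.

def norm(a, b):
    """Norm of a + b*sqrt(-5)."""
    return a * a + 5 * b * b

# Any element of norm <= 3 must have b = 0 and |a| <= 1,
# so this small search is exhaustive for norms 2 and 3.
small_norms = {norm(a, b) for a in range(-2, 3) for b in range(-1, 2)}
assert 2 not in small_norms and 3 not in small_norms

assert norm(2, 0) == 4       # N(2)
assert norm(3, 0) == 9       # N(3)
assert norm(1, 1) == 6       # N(1 + sqrt(-5))
assert norm(1, -1) == 6      # N(1 - sqrt(-5))
```

Since the factors on the two sides have different norms ($4 \cdot 9$ versus $6 \cdot 6$), neither factorization refines the other.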
In part, commutative algebra grew out of the need to understand this failure
of unique factorization more generally. We shall have more to say on
factorization in the future, but here we just focus on the formalism.
The basic definition for studying this problem is that of a \emph{ring}, which we now
introduce.
\section{Commutative rings and their ideals}
\subsection{Rings}
We shall mostly just work with commutative rings in this book, and consequently
will just say ``ring'' for one such.
\begin{definition}
A \textbf{commutative ring} is a set $R$ with an addition map
$+ : R \times R \to R$ and a multiplication map $\times : R \times R \to R$
that satisfy the following conditions.
\begin{enumerate}
\item $R$ is a group under addition.
\item The multiplication map is commutative and distributes over addition.
This means that $x \times (y+z) = x \times y + x\times z$ and $x \times y = y
\times x$.
\item There is a \textbf{unit} (or \textbf{identity element}), denoted by
$1$, such that $1 \times x = x$ for all $x \in R$.
\end{enumerate}
We shall typically write $xy$ for $x \times y$.
Given a ring, a \textbf{subring} is a subset that contains the identity
element and is closed under addition and multiplication.
\end{definition}
A \emph{noncommutative} (i.e. not necessarily commutative) ring is one
satisfying the above conditions, except possibly for the commutativity
requirement $xy = yx$. For instance, the ring of
$2$-by-$2$ matrices over $\mathbb{C}$ is noncommutative. We shall not work much with noncommutative rings in
the sequel, though many of the basic results (e.g. on modules) do generalize.
\begin{example}
$\mathbb{Z}$ is the simplest example of a ring.
\end{example}
\begin{exercise}\label{polynomial} Let $R$ be a commutative ring.
Show that the set of polynomials in one variable over $R$ is a commutative
ring $R[x]$. Give a rigorous definition of this.
\end{exercise}
\begin{example}
For any ring $R$, we can consider the polynomial ring $R[x_1, \ldots, x_n]$
which consists of the polynomials in $n$ variables with coefficients in $R$.
This can be defined inductively as $(R[x_1, \dots, x_{n-1}])[x_n]$, where the
procedure of adjoining a single variable comes from the previous
\cref{polynomial}.
\end{example}
We shall see a more general form of this procedure in \cref{groupring}.
\begin{exercise}
If $R$ is a commutative ring, recall that an \textbf{invertible element} (or, somewhat
confusingly, a \textbf{unit}) $u \in R$ is an element such
that there exists $v \in R$ with $uv = 1$. Prove that $v$ is necessarily
unique.
\end{exercise}
\begin{exercise} \label{ringoffns}
Let $X$ be a set and $R$ a ring. The set $R^X$ of functions $f:X \to R$ is a
ring. \end{exercise}
\subsection{The category of rings}
The class of rings forms a category. Its morphisms are called ring homomorphisms.
\begin{definition}
A \textbf{ring homomorphism} between two rings $R$ and $S$ is a map
$f : R \to S$ that respects addition and multiplication. That is,
\begin{enumerate}
\item $f(1_R) = 1_S$, where $1_R$ and $1_S$ are the respective identity
elements.
\item $f(a + b) = f(a) + f(b)$ for $a, b \in R$.
\item $f(ab) = f(a)f(b)$ for $a, b \in R$.
\end{enumerate}
There is thus a \emph{category} $\mathbf{Ring}$ whose objects are commutative
rings and whose morphisms are ring-homomorphisms.
\end{definition}
The philosophy of Grothendieck, as expounded in his EGA \cite{EGA}, is that one should
always do things in a relative context. This means that instead of working
with objects, one should work with \emph{morphisms} of objects. Motivated by
this, we introduce:
\begin{definition}
Given a ring $A$, an \textbf{$A$-algebra} is a ring $R$ together with a
morphism of rings (a \textbf{structure morphism}) $A \to R$. There is a category of $A$-algebras, where a
morphism between $A$-algebras is a ring-homomorphism that is required to commute with the structure
morphisms.
\end{definition}
So if $R$ is an $A$-algebra, then $R$ is not only a ring, but there is a way
to multiply elements of $R$ by elements of $A$ (namely, to multiply $a \in A$
with $r \in R$, take the image of $a $ in $R$, and multiply that by $r$).
For instance, any ring is an algebra over any subring.
We can think of an $A$-algebra as an arrow $A \to R$, and a morphism from $A
\to R$ to $A \to S$ as a commutative diagram
\[ \xymatrix{
R \ar[rr] & & S \\
& \ar[lu] A \ar[ru]
}\]
This is a special case of the \emph{undercategory} construction.
If $B$ is an $A$-algebra and $C$ a $B$-algebra, then $C$ is an $A$-algebra in a
natural way. Namely, by assumption we are given morphisms of rings $A \to B$
and $B \to C$, so composing them gives the structure morphism $A \to C$ of $C$
as an $A$-algebra.
\begin{example}
Every ring is a $\mathbb{Z}$-algebra in a natural and unique way. There is a
unique map (of rings) $\mathbb{Z} \to R$ for any ring $R$ because a
ring-homomorphism is required to preserve the identity.
In fact, $\mathbb{Z}$ is the \emph{initial object} in the category of rings:
this is a restatement of the preceding discussion.
\end{example}
\begin{example}
If $R$ is a ring, the polynomial ring $R[x]$ is an $R$-algebra in a natural
manner. Each element of $R$ is naturally viewed as a ``constant polynomial.''
\end{example}
\begin{example}
$\mathbb{C}$ is an $\mathbb{R}$-algebra.
\end{example}
Here is an example that generalizes the case of the polynomial ring.
\begin{example}
\label{groupring}
If $R$ is a ring and $G$ a commutative monoid,\footnote{That is, there is a
commutative multiplication on $G$ with an identity element, but not
necessarily with inverses.} then the set
$R[G]$ of formal finite sums $\sum r_i g_i$ with $r_i \in R, g_i \in G$ is a
commutative ring, called the \textbf{monoid ring}, or the \textbf{group ring} when
$G$ is a group.
Alternatively, we can think of elements of $R[G]$ as infinite sums $\sum_{g \in
G} r_g g$ with $R$-coefficients, such that almost all the $r_g$ are zero.
We can define the multiplication law such that
\[ \left(\sum r_g g\right)\left( \sum s_g g\right) =
\sum_h \left( \sum_{g g' = h} r_g s_{g'} \right) h.
\]
This process is called \emph{convolution.} We can think of the multiplication
law as extending the multiplication law of $G$ (because the product of the
ring-elements corresponding to $g, g'$ is the ring element corresponding to
$gg' \in G$).
The case $G = \mathbb{Z}_{\geq 0}$ recovers the polynomial ring $R[x]$.
In some cases, we can extend this notion to formal infinite sums, as in the
case of the formal power series ring; see \cref{powerseriesring} below.
\end{example}
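The convolution formula above can be made concrete. The following sketch (my own illustration, with hypothetical names \texttt{convolve}, \texttt{p}) implements $R[G]$ for $R = \mathbb{Z}$ and $G = (\mathbb{Z}_{\geq 0}, +)$, representing an element $\sum r_g g$ as a dictionary $g \mapsto r_g$; multiplying two such elements reproduces ordinary polynomial multiplication, as claimed in the example.

```python
from collections import defaultdict

# Monoid ring Z[G] for G = (nonnegative integers, +): elements are
# finitely supported maps g -> coefficient, multiplied by convolution
#   (sum r_g g)(sum s_g g) = sum_h ( sum_{g + g' = h} r_g * s_{g'} ) h.

def convolve(p, q):
    out = defaultdict(int)
    for g, r in p.items():
        for g2, s in q.items():
            out[g + g2] += r * s   # the monoid operation here is g + g'
    # drop zero coefficients so the support stays finite and canonical
    return {h: c for h, c in out.items() if c != 0}

# (1 + x)(1 + x) = 1 + 2x + x^2, exactly as in R[x]
p = {0: 1, 1: 1}
assert convolve(p, p) == {0: 1, 1: 2, 2: 1}
```

Replacing the key operation `g + g2` by any commutative monoid operation gives the general construction.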
\begin{exercise}
\label{integersinitial}
The ring $\mathbb{Z}$ is an \emph{initial object} in the category of rings.
That is, for any ring $R$, there is a \emph{unique} morphism of rings
$\mathbb{Z} \to R$. We discussed this briefly earlier; show more generally that
$A$ is the initial object in the category of $A$-algebras for any ring $A$.
\end{exercise}
\begin{exercise}
The ring where $0=1$ (the \textbf{zero ring}) is a \emph{final object} in the category of rings. That
is, every ring admits a unique map to the zero ring.
\end{exercise}
\begin{exercise}
\label{corepresentable}
Let $\mathcal{C}$ be a category and $F: \mathcal{C} \to \mathbf{Sets}$ a
covariant functor. Recall that $F$ is said to be \textbf{corepresentable} if
$F$ is naturally isomorphic to $X \mapsto \hom_{\mathcal{C}}(U, X)$ for some
object $U \in \mathcal{C}$. For instance, the functor sending everything to a
one-point set is corepresentable if and only if $\mathcal{C}$ admits an
initial object.
Prove that the functor $\mathbf{Rings} \to \mathbf{Sets}$ assigning to each ring its underlying set is
corepresentable. (Hint: use a suitable polynomial ring.)
\end{exercise}
The category of rings is both complete and cocomplete. To show this in full
will take more work, but we can here describe what
certain cases (including all limits) look like.
As we saw in \cref{corepresentable}, the forgetful functor $\mathbf{Rings} \to
\mathbf{Sets}$ is corepresentable. Thus, if we want to look for limits in the
category of rings, here is the approach we should follow: we should take the
limit first of the underlying sets, and then place a ring structure on it in
some natural way.
\begin{example}[Products]
The \textbf{product} of two rings $R_1, R_2$ is the set-theoretic product $R_1
\times R_2$ with the multiplication law $(r_1, r_2)(s_1, s_2) = (r_1 s_1, r_2
s_2)$. It is easy to see that this is a product in the category of rings. More
generally, we can easily define the product of any collection of rings.
\end{example}
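A quick sanity check of the componentwise structure (my own sketch; the names \texttt{add}, \texttt{mul} are not from the text) for $R_1 = R_2 = \mathbb{Z}$. Note that the product ring contains the idempotents $(1,0)$ and $(0,1)$, which cut out the two factors:

```python
# The product ring R1 x R2 with componentwise addition and multiplication,
# illustrated for R1 = R2 = Z.

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    return (x[0] * y[0], x[1] * y[1])

one = (1, 1)                      # the identity is (1_{R1}, 1_{R2})
e1, e2 = (1, 0), (0, 1)           # idempotents corresponding to the factors
assert mul(e1, e1) == e1 and mul(e2, e2) == e2
assert mul(e1, e2) == (0, 0)      # orthogonal idempotents
assert add(e1, e2) == one         # they sum to the identity
```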
To describe the coproduct is more difficult: this will be given by the
\emph{tensor product} to be developed in the sequel.
\begin{example}[Equalizers]
Let $f, g: R \rightrightarrows S$ be two ring-homomorphisms. Then we can
construct the \textbf{equalizer} of $f,g$ as the subring of $R$ consisting of
elements $x \in R$ such that $f(x) = g(x)$. This is clearly a subring, and one
sees quickly that it is the equalizer in the category of rings.
\end{example}
As a result, we find:
\begin{proposition}
$\mathbf{Rings}$ is complete.
\end{proposition}
As we said, we will not yet show that $\mathbf{Rings}$ is cocomplete. But we
can describe filtered colimits. In fact, filtered colimits will be constructed
just as in the set-theoretic fashion. That is, the forgetful functor
$\mathbf{Rings} \to \mathbf{Sets}$ commutes with \emph{filtered} colimits
(though not with general colimits).
\begin{example}[Filtered colimits]
Let $I$ be a filtering category, $F: I \to \mathbf{Rings}$ a functor. We can
construct $\varinjlim_I F$ as follows. An element of the colimit is a pair $(x,i)$ with $i
\in I$ and $x \in F(i)$, modulo equivalence; we say that $(x, i)$ and $(y, j)$
are equivalent if there is a $k \in I$ with maps $i \to k, j \to k$ sending
$x,y$ to the same element of the ring $F(k)$.
To multiply $(x, i)$ and $(y,j)$, we find
some $k \in I$ receiving maps from $i, j$, and replace $x,y$ with elements of
$F(k)$. Then we multiply those two in $F(k)$. One easily sees that this is a
well-defined multiplication law that induces a ring structure, and that what we
have described is in fact the filtered colimit.
\end{example}
\subsection{Ideals}
An \emph{ideal} in a ring is analogous to a normal subgroup of a
group. As we shall see, one may quotient by ideals just as one quotients by
normal subgroups.
The idea is that one wishes to have a suitable \emph{equivalence relation} on a
ring $R$ such that the relevant maps (addition and multiplication) factor
through this equivalence relation. It is easy to check that any such relation
arises via an ideal.
\begin{definition}
Let $R$ be a ring. An \textbf{ideal} in $R$ is a subset $I \subset R$ that
satisfies the following.
\begin{enumerate}
\item $0 \in I$.
\item If $x, y \in I$, then $x + y \in I$.
\item If $x \in I$ and $y \in R$, then $xy \in I$.
\end{enumerate}
\end{definition}
There is a simple way of obtaining ideals, which we now describe.
Given elements $x_1, \ldots, x_n \in R$, we denote by $(x_1, \ldots, x_n) \subset
R$ the subset of linear combinations $\sum r_i x_i$, where $r_i \in R$. This
is clearly an ideal, and in fact the smallest one containing all $x_i$. It is
called the ideal \textbf{generated} by $x_1, \ldots, x_n$. A
\textbf{principal ideal} $(x)$ is one generated by a single $x \in R$.
\begin{example}
Ideals generalize the notion of divisibility. Note that
in $\mathbb{Z}$, the set of elements divisible by $n \in \mathbb{Z}$ forms the
ideal $I = n\mathbb{Z} = (n)$. We shall see that every ideal in $\mathbb{Z}$ is
of this form: $\mathbb{Z}$ is a \emph{principal ideal domain.}
\end{example}
Indeed, one can think of an ideal as axiomatizing the notions that
``divisibility'' ought to satisfy. Clearly, if two elements are divisible by
something, then their sum and product should also be divisible by it. More
generally, if an element is divisible by something, then the product of that
element with anything else should also be divisible. In general, we will extend
(in the chapter on Dedekind domains) much of the ordinary arithmetic with
$\mathbb{Z}$ to arithmetic with \emph{ideals} (e.g. unique factorization).
\begin{example}
We saw in \cref{ringoffns}
that if $X$ is a set and $R$ a ring, then the set $R^X$ of functions $X \to R$
is naturally a ring. If $Y \subset X$ is a subset, then the subset of functions
vanishing on $Y$ is an ideal.
\end{example}
\begin{exercise}
Show that the ideal $(2, 1 + \sqrt{-5}) \subset \mathbb{Z}[\sqrt{-5}]$ is not
principal.
\end{exercise}
\subsection{Operations on ideals}
There are a number of simple operations that one may do with ideals, which we
now describe.
\begin{definition}
The sum $I + J$ of two ideals $I, J \subset R$ is defined as the set of sums
\[ \left\{ x + y : x \in I, y \in J \right\}. \]
\end{definition}
\begin{definition}
The product $IJ$ of two ideals $I, J \subset R$ is defined as the smallest
ideal containing the products $xy$ for all $x \in I, y \in J$. This is just
the set
\[ \left\{ \sum x_i y_i : x_i \in I, y_i \in J \right\}. \]
\end{definition}
We leave the basic verification of properties as an exercise:
\begin{exercise}
Given ideals $I, J \subset R$, verify the following.
\begin{enumerate}
\item $I + J$ is the smallest ideal containing $I$ and $J$.
\item $IJ$ is contained in $I$ and $J$.
\item $I \cap J$ is an ideal.
\end{enumerate}
\end{exercise}
\begin{example}
In $\mathbb{Z}$, we have the following for any $m, n$.
\begin{enumerate}
\item $(m) + (n) = (\gcd\{ m, n \})$,
\item $(m)(n) = (mn)$,
\item $(m) \cap (n) = (\mathrm{lcm}\{ m, n \})$.
\end{enumerate}
\end{example}
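These identities in $\mathbb{Z}$ can be checked computationally. The sketch below (my own, with a hypothetical helper \texttt{bezout}) verifies the first via the extended Euclidean algorithm, which exhibits $\gcd(m,n)$ as an integer combination $am + bn \in (m) + (n)$, and the third by comparing common multiples against multiples of the lcm:

```python
from math import gcd

# (m) + (n) = (gcd(m, n)): every a*m + b*n is a multiple of gcd(m, n),
# and conversely gcd(m, n) = a*m + b*n for suitable integers a, b (Bezout).

def bezout(m, n):
    """Return (g, a, b) with a*m + b*n == g == gcd(m, n)."""
    if n == 0:
        return (m, 1, 0)
    g, a, b = bezout(n, m % n)
    return (g, b, a - (m // n) * b)

m, n = 12, 18
g, a, b = bezout(m, n)
assert g == gcd(m, n) == 6
assert a * m + b * n == g            # so (gcd) is contained in (m) + (n)

# (m) cap (n) = (lcm(m, n)): the common multiples of 12 and 18 below 200
# are exactly the multiples of lcm(12, 18) = 36.
common = {k for k in range(1, 200) if k % m == 0 and k % n == 0}
assert common == {k for k in range(1, 200) if k % 36 == 0}
```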
\begin{proposition}
For ideals $I, J, K \subset R$, we have the following.
\begin{enumerate}
\item Distributivity: $I(J + K) = IJ + IK$.
\item $I \cap (J + K) = I \cap J + I \cap K$ if $I \supset J$ or $I \supset K$.
\item If $I + J = R$, $I \cap J = IJ$.
\end{enumerate}
\begin{proof}
1 is clear. For 2, suppose $I \supset J$. The inclusion $I \cap J + I \cap K
\subset I \cap (J + K)$ is evident; conversely, if $x = j + k \in I$ with $j
\in J$ and $k \in K$, then $j \in J \subset I$ and $k = x - j \in I$, so $x \in
I \cap J + I \cap K$. For 3, note that $(I + J)(I \cap J) = I(I \cap J)
+ J(I \cap J) \subset IJ$; since $I + J = R$, the left-hand side equals $I \cap
J$, so $I \cap J \subset IJ$. As $IJ \subset I \cap J$ always, the result
follows.
\end{proof}
\end{proposition}
\begin{exercise}
There is a \emph{contravariant} functor $\mathbf{Rings} \to \mathbf{Sets}$ that
sends each ring to its set of ideals. Given a map $f: R \to S$ and an ideal $I
\subset S$, we define an ideal $f^{-1}(I) \subset R$; this defines the
functoriality.
This functor is not representable, as it does not send the initial object
in $\mathbf{Rings} $ to the
one-element set. We will later use a \emph{subfunctor} of this functor, the
$\spec$ construction, when we replace ideals with ``prime'' ideals.
\end{exercise}
\subsection{Quotient rings}
We next describe a procedure for producing new rings from old ones.
If $R$ is a ring and $I \subset R$ an ideal, then the quotient group $R/I$
is a ring in its own right. If $a+I, b+I$ are two cosets, then the
multiplication is $(a+I)(b+I) = ab + I$. It is easy to check that this does
not depend on the coset representatives $a,b$. In other words, as mentioned
earlier, the arithmetic operations on $R$ \emph{factor} through the equivalence
relation defined by $I$.
As one easily checks, this gives a well-defined multiplication
\[ R/I \times R/I \to R/I \]
which is commutative and associative, and
whose identity element is $1+I$.
In particular, $R/I$ is a ring under the multiplication $(a+I)(b+I) = ab+I$.
\begin{definition}
$R/I$ is called the \textbf{quotient ring} by the ideal $I$.
\end{definition}
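The key point, well-definedness, can be tested numerically. Below is a small sketch (my own; the name \texttt{coset} is hypothetical) of the quotient ring $\mathbb{Z}/(12)$, with cosets represented by remainders: changing coset representatives does not change the product's coset.

```python
# The quotient ring Z/(n) with cosets a + (n) represented by a % n.
# We check that (a + I)(b + I) = ab + I is independent of the chosen
# representatives a and b.

n = 12

def coset(a):
    """Canonical representative of the coset a + (n)."""
    return a % n

for a, b in [(5, 7), (3, 11)]:
    for i, j in [(1, -2), (3, 5)]:
        a2, b2 = a + i * n, b + j * n     # other representatives of the same cosets
        assert coset(a2 * b2) == coset(a * b)
        assert coset(a2 + b2) == coset(a + b)
```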
The process is analogous to quotienting a group by a normal subgroup: again,
the point is that the equivalence relation induced on the algebraic
structure (the group or the ring) by the subgroup or ideal is
compatible with the algebraic structure, which thus descends to the quotient.
The
reduction map $\phi \colon R \to R/I$ is a ring-homomorphism with a
\emph{universal
property}.
Namely, for any ring $B$, there is a map
\[ \hom(R/I, B) \to \hom(R, B) \]
on the hom-sets
by composing with the ring-homomorphism $\phi$; this map is injective and the
image consists of all homomorphisms $R \to B$ which vanish on $I$.
Stated alternatively, to map out of $R/I$ (into some ring $B$) is the same thing as mapping out of
$R$ while killing the ideal $I \subset R$.
This is best thought out for oneself, but here is the detailed justification:
any map $R/I \to B$ pulls back to a map $R \to R/I \to B$
which annihilates $I$, since $R \to R/I$ annihilates $I$. Conversely, if we have
a map
\[ f: R \to B \]
killing $I$, then we can define $R/I \to B$ by sending $a+I$ to $f(a)$; this is
uniquely defined since $f$ annihilates $I$.
\begin{exercise}
If $R$ is a commutative
ring, an element $e \in R$ is said to be \textbf{idempotent} if $e^2 =
e$. Define a covariant functor $\mathbf{Rings} \to \mathbf{Sets}$ sending a
ring to its idempotents. Prove that it is corepresentable. (Answer: the
corepresenting object is $\mathbb{Z}[X]/(X - X^2)$.)
\end{exercise}
\begin{exercise}
Show that the functor assigning to each ring the set of elements annihilated
by 2 is corepresentable.
\end{exercise}
\begin{exercise}
If $I \subset J \subset R$, then $J/I$ is an ideal of $R/I$, and there is a
canonical isomorphism
\[ (R/I)/(J/I) \simeq R/J. \]
\end{exercise}
\subsection{Zerodivisors}
Let $R$ be a commutative ring.
\begin{definition}
If $r \in R$, then $r$ is called a \textbf{zerodivisor} if there is $s \in R, s
\neq 0$ with $sr = 0$. Otherwise $r$ is called a \textbf{nonzerodivisor.}
\end{definition}
As an example, we prove a basic result on the zerodivisors in a polynomial ring.
\begin{proposition}
Let $A=R[x]$. Let $f=a_nx^n+\cdots +a_0\in A$. If there is a non-zero polynomial $g\in
A$ such that $fg=0$, then there exists $r\in R\smallsetminus\{0\}$ such that $f\cdot
r=0$.
\end{proposition}
So all the coefficients are zerodivisors.
\begin{proof}
Choose $g$ to be of minimal degree among the nonzero polynomials annihilating
$f$, with leading term $bx^d$. If $d = 0$, we may take $r = b$ and are done; so
suppose $d > 0$ and seek a contradiction. First, $f \cdot b \neq 0$: otherwise
$b$ would be a nonzero annihilator of $f$ of degree $0 < d$, contradicting the
minimality of $g$. Next, we must have $a_i g \neq 0$ for some $i$: if $a_i g =
0$ for all $i$, then $a_i b = 0$ for all $i$, and then $fb = 0$, a
contradiction. Now pick $j$ to be the largest integer such that $a_jg\neq
0$. Then $0=fg=(a_0 + a_1x + \cdots + a_jx^j)g$, and looking at the leading coefficient,
we get $a_jb=0$. So $\deg (a_jg)<d$. But then $f\cdot (a_jg) = a_j(fg) = 0$
with $a_j g \neq 0$, contradicting the minimality of $g$.
\end{proof}
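A concrete instance of the proposition (my own example, not from the text): in $(\mathbb{Z}/4)[x]$, the polynomial $f = 2 + 2x$ is killed by the nonzero constant polynomial $g = 2$, and indeed the single ring element $r = 2$ annihilates every coefficient of $f$, as the proposition predicts.

```python
# Zerodivisors in (Z/4Z)[x]: f = 2 + 2x satisfies f*g = 0 for the nonzero
# polynomial g = 2, and the conclusion of the proposition holds with r = 2.

MOD = 4

def polymul(f, g):
    """Multiply polynomials given as coefficient lists (index = degree), mod MOD."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % MOD
    return out

f = [2, 2]   # 2 + 2x
g = [2]      # the nonzero constant polynomial 2
assert all(c == 0 for c in polymul(f, g))       # f * g = 0 in (Z/4)[x]
assert all((2 * c) % MOD == 0 for c in f)       # r = 2 kills every coefficient of f
```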
\begin{exercise}
The product of two nonzerodivisors is a nonzerodivisor, and the product of two
zerodivisors is a zerodivisor. It is, however, not necessarily true that the
\emph{sum} of two zerodivisors is a zerodivisor.
\end{exercise}
\section{Further examples}
We now illustrate a few important examples of
commutative rings. The section is in large measure an advertisement for why
one might care about commutative algebra; nonetheless, the reader is
encouraged at least to skim this section.
\subsection{Rings of holomorphic functions}
The following subsection may be omitted without impairing understanding.
There is a fruitful analogy in number theory between the rings $\mathbb{Z}$ and
$\mathbb{C}[t]$, the latter being the polynomial ring over $\mathbb{C}$ in one
variable (\rref{polynomial}). Why are they analogous? Both of these rings have a theory of unique
factorization: that is, factorization into primes or irreducible polynomials. (In the
latter, the irreducible polynomials have degree one.)
Indeed we know:
\begin{enumerate}
\item Any nonzero integer factors as a product of primes (possibly times $-1$).
\item Any nonzero polynomial factors as a product of an element of
$\mathbb{C}^* =\mathbb{C} - \left\{0\right\}$ and polynomials of the form $t -
a, a \in \mathbb{C}$.
\end{enumerate}
There is another way of thinking of $\mathbb{C}[t]$ in terms of complex
analysis. This is equal to the ring of holomorphic functions on $\mathbb{C}$
which are meromorphic at infinity.
Alternatively, consider the Riemann sphere $\mathbb{C} \cup \{ \infty\}$; then the ring $\mathbb{C}[t]$
consists of meromorphic functions on the sphere whose poles (if any) are at
$\infty$.
This description admits generalizations.
Let $X$ be a
Riemann surface. (Example: take the complex numbers modulo a lattice, i.e. an
elliptic curve.)
Suppose that $x \in X$. Define $R_x$ to be the ring of meromorphic functions on $X$
which are allowed poles only at $x$ (so are everywhere else holomorphic).
\begin{example} Fix the notations of the previous discussion.
Fix a point $y \in X$ with $y \neq x$. Let $R_x$ be, as before, the ring of
meromorphic functions on the Riemann surface $X$ which are holomorphic on $X -
\left\{x\right\}$.
Then the collection of functions in $R_x$ that vanish at $y$ forms an
\emph{ideal} in $R_x$.
There are many other ideals. For instance, fix two
points $y_0, y_1 \neq x$; the functions in $R_x$ that vanish at both $y_0$ and
$y_1$ form an ideal.
\end{example}
\textbf{For any Riemann surface $X$, the conclusion of Dedekind's theorem
(\rref{ded1}) applies. } In other
words, the ring $R_x$ as defined in the example admits unique factorization of
ideals. We shall call such rings \textbf{Dedekind domains} in the future.
\begin{example} Keep the preceding notation.
Let $f \in R_x$ be nonzero. By definition, $f$ may have a pole at $x$, but no poles elsewhere; it vanishes
at finitely many points $y_1, \dots, y_m$. When $X$ is the Riemann sphere,
knowing the zeros of $f$ tells us a great deal about $f$. Indeed, in this case
$f$ is just a
polynomial, and we have a nice factorization of $f$ into functions in $R_x$ each vanishing
at exactly one point. On a general Riemann surface, such a factorization
is not possible. This failure turns out to be very interesting.
Let $X = \mathbb{C}/\Lambda$ be an elliptic curve (for $\Lambda \subset
\mathbb{C}$ a lattice), and suppose $x = 0$. Suppose we
are given nonzero points $y_1, y_2, \dots, y_m \in X$; we ask whether there
exists a function $f \in R_x$ having simple zeros at $y_1, \dots, y_m$ and nowhere else.
The answer is interesting, and turns out to recover the group structure on the
elliptic curve.
\begin{proposition}
A function $f \in R_x$ with simple zeros exactly at the $\left\{y_i\right\}$ exists if and only if $y_1 + y_2 + \dots + y_m = 0$ (modulo $\Lambda$).
\end{proposition}
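A sketch of the simplest nontrivial case, assuming standard facts about the Weierstrass $\wp$-function attached to $\Lambda$: take $m = 2$ and $y_2 = -y_1$, so that the sum vanishes in $\mathbb{C}/\Lambda$. Then
\[ f(z) = \wp(z) - \wp(y_1) \in R_x \]
has a double pole at $x = 0$ and, since $\wp$ is even and of degree two, simple zeros exactly at $\pm y_1$ (provided $y_1$ is not a point of order two).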
So the problem of finding a function with specified zeros is equivalent to
checking that the specified zeros sum to zero under the group structure.
In any case, there might not be such a nice function, but we at least have an
ideal $I$ of functions that have zeros (not necessarily simple) at $y_1, \dots,
y_m$. This ideal factors uniquely into the ideal of functions
vanishing at $y_1$, the ideal of functions vanishing at $y_2$, and so on.
\end{example}
\subsection{Ideals and varieties}
We saw in the previous subsection that ideals can be thought of as
vanishing loci of functions. This interpretation, alongside divisibility,
is particularly fruitful in algebraic geometry.
Recall the ring $\mathbb{C}[t]$ of complex polynomials discussed in the
last subsection. More generally, if $R$ is a ring, we saw in
\rref{polynomial} that the set $R[t]$ of polynomials with coefficients
in $R$
is a ring. This is a construction that
can be iterated to get a polynomial ring in several variables over $R$.
\begin{example}
Consider the polynomial ring $\mathbb{C}[x_1, \dots, x_n]$. Recall that before
we thought of the ring $\mathbb{C}[t]$ as a ring of meromorphic functions.
Similarly each element of the polynomial ring $\mathbb{C}[x_1, \dots, x_n]$
gives a function $\mathbb{C}^n \to \mathbb{C}$; we can think of the polynomial
ring as sitting inside the ring of all functions $\mathbb{C}^n \to \mathbb{C}$.
A question you might ask: What are the ideals in this ring? One way to get an
ideal is to pick a point $a = (a_1, \dots, a_n) \in \mathbb{C}^n$ and consider the
collection of all polynomials $f \in \mathbb{C}[x_1, \dots, x_n]$ which vanish at
$a$; by the usual argument, this is an ideal.
There are, of course, other ideals. More generally, if $Y \subset
\mathbb{C}^n$, consider the collection of polynomial functions $f:
\mathbb{C}^n \to \mathbb{C}$ such that $f \equiv 0$ on
$Y$. This is easily seen to be an ideal in the polynomial ring. We thus have a
way of taking a subset of $\mathbb{C}^n$ and producing an ideal.
Let $I_Y$ be the ideal corresponding to $Y$.
This construction is not injective. One can have $Y \neq Y'$ but $I_Y = I_{Y'}$. For instance, if $Y$ is dense in
$\mathbb{C}^n$, then $I_Y = (0)$, because the only way a continuous function on
$\mathbb{C}^n$ can vanish on $Y$ is for it to be zero.
There is a much closer connection in the other direction. You might ask whether
all ideals can arise in this way. The quick answer is no---not even when $n=1$. The ideal $(x^2) \subset \mathbb{C}[x]$ cannot be obtained
in this way. It is easy to see that the only way we could get this as $I_Y$ is
for $Y=\left\{0\right\}$, but $I_Y$ in this case is just $(x)$, not $(x^2)$.
What's going wrong in this example is that $(x^2)$ is not a \emph{radical}
ideal.
\end{example}
\begin{definition}\label{def-radical-ideal}
An ideal $I \subset R$ is \textbf{radical} if whenever $x^2 \in I$, then $x \in
I$.
\end{definition}
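For a principal ideal $(g)$, membership is just divisibility, so the failure of $(x^2)$ to be radical can be checked by polynomial division. A small sketch, again using sympy (an assumed dependency):

```python
# Membership in a principal ideal (g) of C[x] is divisibility by g,
# i.e. zero remainder under polynomial division.
from sympy import symbols, rem

x = symbols('x')

def in_principal_ideal(f, g):
    """f lies in (g) iff the remainder of f upon division by g is zero."""
    return rem(f, g, x) == 0

# x**2 lies in (x**2), but x does not: (x**2) is not radical.
assert in_principal_ideal(x**2, x**2)
assert not in_principal_ideal(x, x**2)
# The ideal (x), by contrast, contains x along with x**2.
assert in_principal_ideal(x**2, x)
```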
The ideals $I_Y$ in the polynomial ring are all radical: if $f^2$ vanishes on
$Y$, then so does $f$.
You might now ask whether this is the only obstruction. We now state a theorem
that we will prove later.
\begin{theorem}[Hilbert's Nullstellensatz] If $I \subset \mathbb{C}[x_1, \dots,
x_n]$ is a radical ideal, then $I = I_Y$ for some $Y \subset \mathbb{C}^n$. In
fact, the canonical choice of $Y$ is the set of points where all the functions
in $I$ vanish.\footnote{Such a subset is called an algebraic variety.}
\end{theorem}
This will be one of the highlights of the present course. But before we can
get to it, there is much to do.
\begin{exercise}
Assuming the Nullstellensatz, show that any \emph{maximal} ideal in the
polynomial ring $\mathbb{C}[x_1, \dots, x_n]$ is of the form
$(x_1-a_1, \dots, x_n-a_n)$ for $a_1, \dots, a_n \in \mathbb{C}$. An ideal of a
ring is called \textbf{maximal} if it is not the whole ring and the only ideal
strictly containing it is the whole ring.
As a corollary, deduce that if $I \subset \mathbb{C}[x_1, \dots, x_n]$ is a
proper ideal (an ideal is called \textbf{proper} if it is not equal to the
entire ring), then there exists a point $(a_1, \dots, a_n) \in \mathbb{C}^n$ at
which every polynomial in $I$ vanishes. This is
called the \textbf{weak Nullstellensatz.}
\end{exercise}
\section{Modules over a commutative ring}
We will now establish some basic terminology about modules.
\subsection{Definitions}
Suppose $R$ is a commutative ring.
\begin{definition}
An \textbf{$R$-module $M$} is an abelian group $M$ with a map $R \times M \to
M$ (written $(a,m) \mapsto am$) such that
\begin{enumerate}[\textbf{M} 1]
\item $(ab) m = a(bm)$ for $a,b \in R, m \in M$, i.e. there is an associative law.
\item $1m
= m$; the unit acts as the identity.
\item There are distributive laws
on both sides:
$(a+b)m = am + bm$ and $a(m+n) = am + an$ for $a,b \in R, \ m,n \in M$.
\end{enumerate} \end{definition}
Another definition can be given as follows.
\begin{definition}
If $M$ is an abelian group, $\mathrm{End}(M)$ is the set of group homomorphisms $f: M \to M$.
This can be made into a (noncommutative) \emph{ring}.\footnote{A
noncommutative ring is one satisfying all the usual axioms of a ring except
that multiplication is not required to be commutative.} Addition is defined pointwise, and
multiplication is by composition. The identity element is the identity
function $1_M$.
\end{definition}
We made the following definition earlier for commutative rings, but for
clarity we re-state it:
\begin{definition}
If $R, R'$ are rings (possibly noncommutative) then a function $f: R \to R'$ is a
\textbf{ring-homomorphism} or \textbf{morphism} if it is compatible with the
ring structures, i.e.
\begin{enumerate}
\item $f(x+y) = f(x) + f(y)$
\item $f(xy) = f(x)f(y)$
\item $f(1) = 1$.
\end{enumerate}
\end{definition}
The last condition is not redundant: without it, the zero map would
automatically qualify as a homomorphism.
The alternative definition of a module is left to the reader in the following
exercise.
\begin{exercise}
If $R$ is a ring and $R \to \mathrm{End}(M)$ a ring-homomorphism, then $M$ is made into an
$R$-module, and vice versa.
\end{exercise}
\begin{example}
If $R$ is a ring, then $R$ is an $R$-module by multiplication on the left.
\end{example}
\begin{example}
A $\mathbb{Z}$-module is the same thing as an abelian group.
\end{example}
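The identification of $\mathbb{Z}$-modules with abelian groups can be made concrete: the action of $n \in \mathbb{Z}$ is forced to be $n$-fold addition. A toy sketch (the group $\mathbb{Z}/12$ under addition; the names are ours, not the text's):

```python
# The abelian group M = Z/12 under addition, as a Z-module.
MOD = 12

def act(n, m):
    """The only possible Z-action: n.m is m added to itself n times,
    which on Z/12 amounts to multiplication mod 12."""
    return (n * m) % MOD

# Spot-check the module axioms from the definition above.
for a in range(-4, 5):
    for b in range(-4, 5):
        for m in range(MOD):
            assert act(a * b, m) == act(a, act(b, m))              # (ab)m = a(bm)
            assert act(a + b, m) == (act(a, m) + act(b, m)) % MOD  # (a+b)m = am + bm
for m in range(MOD):
    for n in range(MOD):
        assert act(5, (m + n) % MOD) == (act(5, m) + act(5, n)) % MOD  # a(m+n) = am + an
    assert act(1, m) == m                                          # 1m = m
```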
\begin{definition}
If $M$ is an $R$-module, a subset $M_0 \subset M$ is a \textbf{submodule} if it
is a subgroup (closed under addition and inversion) and is closed under
multiplication by elements of $R$, i.e. $aM_0 \subset M_0$ for $a \in R$. A
submodule is a module in its own right. If $M_0 \subset M$ is a submodule,
there is a commutative diagram:
\[ \xymatrix{
R \times M_0 \ar[d] \ar[r] & M_0 \ar[d] \\ R \times M \ar[r] & M
}.\]
Here the horizontal maps are multiplication.
\end{definition}
\begin{example}
Let $R$ be a (\textbf{commutative}) ring; then an ideal in $R$ is the same thing as a
submodule of $R$.
\end{example}
\begin{example}
If $A$ is a ring, an $A$-algebra is an $A$-module in an obvious way. More
generally, if $A$ is a ring and $R$ is an $A$-algebra, any $R$-module becomes
an $A$-module by pulling back the multiplication map via $A \to R$.
\end{example}
Dual to submodules is the notion of a \emph{quotient module}, which we define
next:
\begin{definition} Suppose $M$ is an $R$-module and $M_0$ a
submodule. Then the abelian group $M/M_0$ (of cosets) is an $R$-module,
called the \textbf{quotient module} by $M_0$.
Multiplication is as follows. If
one has a coset $x + M_0 \in M/M_0$, one multiplies this by $a \in R$ to
get the coset $ax
+ M_0$. This does not depend on the coset representative.
\end{definition}
\subsection{The categorical structure on modules}
So far, we have talked about modules, but we have not discussed morphisms
between modules, and have yet to make the class of modules over a given ring
into a category. This we do next.
Let us thus introduce a few more basic notions.
\begin{definition}
Let $R$ be a ring. Suppose $M,N$ are $R$-modules. A map $f: M \to N$
is a \textbf{module-homomorphism} if it preserves all the relevant structures.
Namely, it must be a homomorphism of abelian groups, $f(x+y) = f(x) + f(y)$,
and second it must
preserve multiplication:
$$f(ax) = af(x)$$ for $a \in R, x \in M$.
\end{definition}
A simple way of getting plenty of module-homomorphisms is simply to consider
multiplication by a fixed element of the ring.
\begin{example}
If $a \in R$, then multiplication by $a$ is a module-homomorphism $M
\stackrel{a}{\to} M$ for any $R$-module $M$.\footnote{When one considers
modules over noncommutative rings, this is no longer true.} Such homomorphisms
are called \textbf{homotheties.}
\end{example}
If $M \stackrel{f}{\to} N$ and $N \stackrel{g}{\to} P$ are
module-homomorphisms, their composite $M \stackrel{g \circ f}{\to} P$ clearly
is too.
Thus, for any commutative ring $R$, the class of $R$-modules and
module-homomorphisms forms a \textbf{category}.
\begin{exercise}
The initial object in this category is the zero module, and this is also the
final object.
In general, a category where the initial object and final object are the same
(that is, isomorphic) is called a \emph{pointed category.} The common object
is called the \emph{zero object.} In a pointed category $\mathcal{C}$, there is a morphism
$X \to Y$ for any two objects $X, Y \in \mathcal{C}$: if $\ast$ is the zero
object, then we can take $X \to \ast \to Y$. This is well-defined and is
called the \emph{zero morphism.}
One can easily show that the composition (on the left or the right) of a
zero morphism with any morphism is again a zero morphism (between a possibly
different pair of objects).
In the case of the category of modules, the zero object is clearly the zero
module, and the zero morphism $M \to N$ sends $m \mapsto 0$ for each $m \in M$.
\end{exercise}
\begin{definition} Let $f: M \to N$ be a module homomorphism.
In this case, the \textbf{kernel} $\ker f$ of $f$ is the set of elements $m
\in M$ with $f(m)=0$. This is
a submodule of $M$, as is easy to see.
The \textbf{image} $\im f$ of $f$ (the set-theoretic
image, i.e. the collection of all $f(x), x \in M$) is also a submodule of $N$.
The
\textbf{cokernel} of $f$ is defined by
\( N/\im(f). \)
\end{definition}
\begin{exercise} \label{univpropertykernel}
The universal property of the kernel is as follows. Let $M \stackrel{f}{\to }
N$ be a morphism with kernel $K \subset M$. Let $T \to M$ be a map. Then $T \to M$ factors through the
kernel $K \to M$ if and only if its composition with $f$ (a morphism $T \to N$) is zero.
That is, an arrow $T \to K$ exists in the diagram (where the dotted arrow
indicates we are looking for a map that need not exist)
\[ \xymatrix{
& T \ar@{-->}[ld] \ar[d] \\
K \ar[r] & M \ar[r]^f & N
}\]
if and only if the composite $T \to N$ is zero.
In particular, if we think of the hom-sets as abelian groups (i.e.
$\mathbb{Z}$-modules)
\[ \hom_R( T,K) = \ker\left( \hom_R(T, M) \to \hom_R(T, N) \right). \]
\end{exercise}
In other words, one may think of the kernel as follows. If $X
\stackrel{f}{\to} Y$ is a morphism, then the kernel $\ker(f)$ is the equalizer
of $f$ and the zero morphism $X \stackrel{0}{\to} Y$.
\begin{exercise}
What is the universal property of the cokernel?
\end{exercise}
\begin{exercise} \label{moduleunderlyingsetrepresentable}
On the category of modules, the functor assigning to each module $M$ its
underlying set is corepresentable (cf. \rref{corepresentable}). What
is the corepresenting object?
\end{exercise}
We shall now introduce the notions of \emph{direct sum} and \emph{direct
product}. Let $I$ be a set, and suppose that for each $i \in I$, we are given
an $R$-module $M_i$.
\begin{definition}
The \textbf{direct product} $\prod M_i$ is set-theoretically the cartesian product. It is given
the structure of an $R$-module by addition and multiplication pointwise on
each factor.
\end{definition}
\begin{definition}
The \textbf{direct sum} $\bigoplus_I M_i$ is the set of elements in the direct
product such that all but finitely many entries are zero. The direct sum is a
submodule of the direct product.
\end{definition}
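For an infinite index set $I$, the difference between $\prod$ and $\bigoplus$ is exactly the finite-support condition. Elements of $\bigoplus_I \mathbb{Z}$ can be sketched as finitely supported dictionaries (a toy model of ours, with $R = M_i = \mathbb{Z}$):

```python
# Elements of the direct sum of copies of Z, indexed by any set,
# modeled as dicts {index: nonzero integer} with finite support.

def add(u, v):
    """Pointwise sum of two finitely supported families, dropping
    zero entries so the support stays finite."""
    out = dict(u)
    for i, x in v.items():
        out[i] = out.get(i, 0) + x
        if out[i] == 0:
            del out[i]
    return out

def scale(a, u):
    """Scalar multiplication a.u, again keeping finite support."""
    return {i: a * x for i, x in u.items() if a * x != 0}

u = {0: 1, 5: -2}   # nonzero only in coordinates 0 and 5
v = {5: 2, 7: 3}
assert add(u, v) == {0: 1, 7: 3}   # the entries at index 5 cancel
assert scale(0, u) == {}           # the zero element of the direct sum
```

An element of the full direct product, by contrast, may have infinitely many nonzero entries and admits no such finite representation.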
\begin{example} \label{productcoproduct}
The direct product is a product in the category of modules, and the direct sum
is a coproduct. This is easy to verify: given maps $f_i: M \to M_i$, we
get a unique map $f: M \to \prod M_i$ by taking the product in the category
of sets. The case of the coproduct is dual: given maps $g_i: M_i \to N$, we
get a map $\bigoplus M_i \to N$ by taking the \emph{sum} $g$ of the $g_i$: on a
family $(m_i) \in \bigoplus M_i$, we set $g((m_i)) = \sum_I g_i(m_i)$; this is
well-defined as almost all the $m_i$ are zero.
\end{example}
\cref{productcoproduct} shows that the category of modules over a fixed
commutative ring has products and coproducts. In fact, the category of modules
is both complete and cocomplete (see \cref{completecat} for the definition).
To see this, it suffices to show (by
\cref{coprodcoequalsufficeforcocomplete} and its dual) that this category
admits equalizers and coequalizers.
The equalizer of two maps
\[ M \stackrel{f,g}{\rightrightarrows} N \]
is easily checked to be the submodule of $M$ consisting of $m \in M$ such that
$f(m) = g(m)$, or, in other words, the kernel of $f-g$. The coequalizer of these two maps is the quotient module of $N$
by the submodule $\left\{f(m) - g(m), m \in M\right\}$, or, in other words,
the cokernel of $f-g$.
Thus:
\begin{proposition}
If $R$ is a ring, the category of $R$-modules is complete and cocomplete.
\end{proposition}
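For maps of finitely generated free modules given by integer matrices, the equalizer computation above reduces to linear algebra. A sketch with sympy (an assumed dependency), for two maps $\mathbb{Z}^2 \to \mathbb{Z}$:

```python
# Equalizer of two maps Z^2 -> Z as the kernel of their difference.
from sympy import Matrix

f = Matrix([[1, 2]])   # f(m1, m2) = m1 + 2*m2
g = Matrix([[3, 2]])   # g(m1, m2) = 3*m1 + 2*m2

# The equalizer {m : f(m) = g(m)} is ker(f - g); sympy computes a
# basis over Q, which here happens to consist of integer vectors.
ker = (f - g).nullspace()
assert (f - g) * ker[0] == Matrix([[0]])
# Here f - g = [-2, 0], so the equalizer is spanned by (0, 1).
assert ker[0] == Matrix([0, 1])
```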
\begin{example}
Note that limits in the category of $R$-modules are calculated in the same way
as they are for sets, but colimits are not. That is, the functor from
$R$-modules to $\mathbf{Sets}$, the forgetful functor, preserves limits but not
colimits. Indeed, we will see that the forgetful functor is a right adjoint
(\cref{freeadj}), which implies it preserves limits (by \cref{adjlimits}).
\end{example}
\subsection{Exactness}
Finally, we introduce the notion of \emph{exactness}.
\begin{definition} \label{exactness}
Let $f: M \to N$ be a morphism of $R$-modules. Suppose $g: N \to P$ is another morphism of
$R$-modules.
The pair of maps is a \textbf{complex} if $g \circ f = 0: M \to N \to P$.
This is equivalent to the condition that $\im(f) \subset \ker(g)$.
This complex is \textbf{exact} (or exact at $N$) if $\im(f) = \ker(g)$.
In other words, anything that is killed when mapped to $P$ actually comes from something in
$M$.
\end{definition}
We shall often write pairs of maps as sequences
\[ A \stackrel{f}{\to} B \stackrel{g}{\to} C \]
and say that the sequence is exact if the pair of maps is, as in
\rref{exactness}. A longer (possibly infinite) sequence of modules
\[ A_0 \to A_1 \to A_2 \to \dots \]
will be called a \textbf{complex} if each set of three
consecutive terms is a complex, and \textbf{exact} if it is exact at each step.
\begin{example}
The sequence $0 \to A \stackrel{f}{\to} B$ is exact if and only if the map $f$
is injective. Similarly, $A \stackrel{f}{\to} B \to 0$ is exact if and only if
$f$ is surjective. Thus, $0 \to A \stackrel{f}{\to} B \to 0$ is exact if and
only if $f$ is an isomorphism.
\end{example}
One typically sees this definition applied to sequences of the form
\[ 0 \to M'\stackrel{f}{ \to} M \stackrel{g}{\to} M'' \to 0, \]
which, if exact, is called a \textbf{short exact sequence}.
Exactness here means that $f$ is injective, $g$ is surjective, and $f$ maps
onto the kernel of $g$. So $M''$ can be thought of as the quotient $M/M'$.
\begin{example}
Conversely, if $M$ is a module and $M' \subset M$ a submodule, then there is a
short exact sequence
\[ 0 \to M' \to M \to M/M' \to 0. \]
So every short exact sequence is of this form.
\end{example}
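For small modules, exactness can be verified by brute force. A sketch for the short exact sequence $0 \to \mathbb{Z}/2 \xrightarrow{f} \mathbb{Z}/4 \xrightarrow{g} \mathbb{Z}/2 \to 0$, where $f(a) = 2a$ and $g(b) = b \bmod 2$:

```python
# Verify exactness of 0 -> Z/2 -> Z/4 -> Z/2 -> 0 by enumeration.
Z2 = range(2)
Z4 = range(4)

f = lambda a: (2 * a) % 4   # includes Z/2 into Z/4
g = lambda b: b % 2         # reduces Z/4 onto Z/2

# f injective: exactness at the left.
assert len({f(a) for a in Z2}) == len(list(Z2))
# g surjective: exactness at the right.
assert {g(b) for b in Z4} == set(Z2)
# im(f) = ker(g): exactness in the middle.
assert {f(a) for a in Z2} == {b for b in Z4 if g(b) == 0}
```

Note that this sequence does not split, since $\mathbb{Z}/4 \not\simeq \mathbb{Z}/2 \oplus \mathbb{Z}/2$; split exact sequences are discussed below.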
Suppose $F$ is a functor from the category of $R$-modules to the
category of $S$-modules, where $R, S$ are rings. Then:
\begin{definition}
\begin{enumerate}
\item $F$ is called \textbf{additive} if $F$ preserves direct sums.
\item $F$ is called \textbf{exact} if $F$ is additive and preserves exact sequences.
\item $F$ is called \textbf{left exact} if $F$ is additive and preserves exact sequences of the form
$0 \to M' \to M \to M''$. Equivalently, $F$ preserves kernels.
\item $F$ is \textbf{right exact} if $F$ is additive and $F$ preserves exact
sequences of the form $M' \to M \to M'' \to 0$, i.e. $F$ preserves cokernels.
\end{enumerate}
\end{definition}
The reader should note that much of homological algebra can be developed using the more
general setting of an \emph{abelian category,} which axiomatizes much of the
standard properties of the category of modules over a ring. Such a
generalization turns out to be necessary when many natural categories, such as
the category of chain complexes or the category of sheaves on a topological
space, are not naturally categories of modules.
We do not go into this here, cf. \cite{Ma98}.
A functor $F$ is exact if and only if it is both left and right exact.
This actually requires proof, though it is not hard. Namely, right-exactness implies that $F$
preserves cokernels. Left-exactness implies that $F$ preserves kernels. $F$
thus preserves images, as the image of a morphism is the kernel of its cokernel.
So if
\[ A \to B \to C \]
is an exact sequence, then the kernel of the second map is equal to the
image of the first; we have just seen that both kernels and images are
preserved under $F$.
From this, one can check that left-exactness is equivalent to requiring that $F$ preserve
finite limits (as an additive functor, $F$ automatically preserves products,
and we have just seen that $F$ is left-exact iff it preserves kernels).
Similarly, right-exactness is equivalent to requiring that $F$ preserve
finite colimits.
So, in \emph{any} category with finite limits and colimits, we can talk about
right or left exactness of a functor, but the notion is used most often for
categories with an additive structure (e.g. categories of modules over a ring).
\begin{exercise}
Suppose whenever $0 \to A' \to A \to A'' \to 0$ is short exact, then $FA' \to
FA \to FA'' \to 0$ is exact. Prove that $F$ is right-exact. So we get a
slightly weaker criterion for right-exactness.
Do the same for left-exact functors.
\end{exercise}
\subsection{Split exact sequences}
Let $f: A \to B$ be an injective map of sets, with $A$ nonempty. Then there is a map $g: B
\to A$ such that the composite $g \circ f: A \stackrel{f}{\to} B
\stackrel{g}{\to} A$ is the identity. Namely, we define $g$ to be the inverse
of $f$ on $f(A)$ and arbitrarily on $B-f(A)$.
Conversely, if $f: A \to B$ admits a map $g: B \to A$ such that $g \circ f
= 1_A$, then $f$ is injective. This is easy to see, as any $a \in A$ can be
``recovered'' from $f(a)$ (by applying $g$).
This observation, however, does not carry over to arbitrary
categories.
\begin{definition}
Let $\mathcal{C}$ be a category. A morphism $A \stackrel{f}{\to} B$ is called a
\textbf{split injection} if there is $g: B \to A$ with $g \circ f = 1_A$.
\end{definition}
\begin{exercise}[General nonsense]
Suppose $f: A \to B$ is a split injection. Show that $f$ is a categorical monomorphism.
(Idea: the map $\hom(C,A) \to \hom(C,B)$ becomes a split injection of sets
thanks to $g$.)
\end{exercise}
(Recall that a \emph{categorical monomorphism} is a morphism $f: A \to B$ such
that whenever $g, h: C \to A$ satisfy $f \circ g = f \circ h$, we have $g = h$.)
In the category of sets, we have seen above that \emph{any} monomorphism is a
split injection. This is not true in other categories, in general.
\begin{exercise}
Consider the morphism $\mathbb{Z} \to \mathbb{Z}$ given by multiplication by
2. Show that this is not a split injection: no left inverse $g$ can exist.
\end{exercise}
We are most interested in the case of modules over a ring.
\begin{proposition}
A morphism $f: A \to B$ in the category of $R$-modules is a split injection if
and only if:
\begin{enumerate}
\item $f$ is injective.
\item $f(A)$ is a direct summand in $B$.
\end{enumerate}
\end{proposition}
The second condition means that there is a submodule $B' \subset B$ such that
$B = B' \oplus f(A)$ (internal direct sum). In other words, $B = B' + f(A)$
and $B' \cap f(A) = \left\{0\right\}$.
\begin{proof}
Suppose the two conditions hold, and we have a module $B'$ which is a
complement to $f(A)$.
Then we define a left inverse
\[ B \stackrel{g}{\to} A \]
by letting $g|_{f(A)} = f^{-1}$ (note that $f$ becomes an \emph{isomorphism}
$A \to f(A)$) and $g|_{B'}=0$. It is easy to see that this is indeed a left
inverse, though in general not a right inverse, as $g$ is likely to be
non-injective.
Conversely, suppose $f: A \to B$ admits a left inverse $g: B \to A$. The usual
argument (as for sets) shows that $f$ is injective. The essentially new
observation is that $f(A) $ is a direct summand in $B$. To define the
complement, we take $\ker(g) \subset B$.
It is easy to see (as $g \circ f = 1_A$) that $\ker(g) \cap f(A) =
\left\{0\right\}$. Moreover, $\ker(g) +f(A)$ fills $B$: given $b \in B$, it is
easy to check that
\[ b - f(g(b)) \in \ker(g). \]
Thus we find that the two conditions are satisfied.
\end{proof}
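The decomposition in the proof above can be watched numerically for the split injection $f: \mathbb{Z} \to \mathbb{Z}^2$, $a \mapsto (a, 0)$, with left inverse $g$ the first projection (a toy sketch; the names are ours):

```python
# A split injection Z -> Z^2 and the decomposition B = f(A) + ker(g).
f = lambda a: (a, 0)    # split injection
g = lambda b: b[0]      # left inverse: first projection

# g o f is the identity on (a sample of) Z.
assert all(g(f(a)) == a for a in range(-5, 6))

for b0 in range(-3, 4):
    for b1 in range(-3, 4):
        b = (b0, b1)
        # b decomposes as f(g(b)) + (b - f(g(b))), with the second
        # summand in ker(g), exactly as in the proof.
        head = f(g(b))
        tail = (b[0] - head[0], b[1] - head[1])
        assert g(tail) == 0
        assert (head[0] + tail[0], head[1] + tail[1]) == b
```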
\subsection{The five lemma}
The five lemma will be a useful tool for us in proving that maps are
isomorphisms. Often this argument is used in inductive proofs. Namely, we will
see that often ``long exact sequences'' (extending infinitely in one or both
directions) arise from short exact sequences in a natural way. In such
events, the five lemma
will allow us to prove that certain morphisms are isomorphisms by induction on
the dimension.
\begin{theorem}
Suppose given a commutative diagram
\[ \xymatrix{
A \ar[d] \ar[r] & B \ar[d] \ar[r] & C \ar[d] \ar[r] & D \ar[d] \ar[r] & E \ar[d] \\
A' \ar[r] & B' \ar[r] & C' \ar[r] & D' \ar[r] & E'
}\]
such that the rows are exact and the four vertical maps $A \to A', B \to B', D
\to D', E \to E'$ are isomorphisms. Then $C \to C'$ is an isomorphism.
\end{theorem}
This is the type of proof that goes by the name of ``diagram-chasing,'' and
is best thought out visually for oneself, even though we give a complete proof.
\begin{proof}
We have the diagram
\[
\xymatrix{
A \ar[r]^k \ar[d]^\a & B \ar[r]^l \ar[d]^\b
& C \ar[r]^m \ar[d]^\g & D \ar[r]^n \ar[d]^\d & E \ar[d]^\e \\
F \ar[r]_p & G \ar[r]_q & H \ar[r]_r & I \ar[r]_s & J
}
\]
where the rows are exact at $B, C, D, G, H, I$ and the squares commute. In
addition, suppose that $\a, \b, \d, \e$ are isomorphisms. We will show that
$\g$ is an isomorphism.
\emph{We show that $\g$ is surjective:}
Suppose that $h \in H$. Since $\d$ is surjective, there exists an element
$d \in D$ such that $r(h) = \d(d) \in I$.
By the commutativity of the rightmost square, $s(r(h)) = \e(n(d))$.
The exactness at $I$ means that $\im r = \ker s$, hence
$\e(n(d)) = s(r(h)) = 0$. Because $\e$ is injective, $n(d) = 0$.
Then $d \in \ker(n) = \im(m)$ by exactness at $D$.
Therefore, there is some $c \in C$ such that $m(c) = d$.
Now, $\d(m(c)) = \d(d) = r(h)$ and by the commutativity of squares,
$\d(m(c)) = r(\g(c))$, so therefore $r(\g(c)) = r(h)$. Since $r$ is a
homomorphism, $r(\g(c) - h) = 0$. Hence $\g(c) - h \in \ker r = \im q$ by
exactness at $H$.
Therefore, there exists $g \in G$ such that $q(g) = \g(c) - h$.
$\b$ is surjective, so there is some $b \in B$ such that $\b(b) = g$ and hence
$q(\b(b)) = \g(c) - h$. By the commutativity of squares,
$q(\b(b)) = \g(l(b)) = \g(c) - h$. Hence
$h = \g(c) - \g(l(b)) = \g(c - l(b))$, and therefore $\g$ is surjective.
So far, we've used that $\b$ and $\d$ are surjective, $\e$ is injective, and
exactness at $D$, $H$, $I$.
\emph{We show that $\g$ is injective:}
Suppose that $c \in C$ and $\g(c) = 0$.
Then $r(\g(c)) = 0$, and by the commutativity of squares,
$\d(m(c)) = 0$. Since $\d$ is injective, $m(c) = 0$, so
$c \in \ker m = \im l$ by exactness at $C$.
Therefore, there is $b \in B$ such that $l(b) = c$.
Then $\g(l(b)) = \g(c) = 0$, and by the commutativity of squares,
$q(\b(b)) = 0$. Therefore, $\b(b) \in \ker q$, and by exactness at $G$,
$\b(b) \in \ker q = \im p$.
There is now $f \in F$ such that $p(f) = \b(b)$. Since $\a$ is surjective, this
means that there is $a \in A$ such that $f = \a(a)$, so then
$\b(b) = p(\a(a))$. By commutativity of squares,
$\b(b) = p(\a(a)) = \b(k(a))$, and hence $\b(k(a) - b) = 0$.
Since $\b$ is injective, we have $k(a) -b = 0$, so $k(a) = b$.
Hence $b \in \im k = \ker l$ by exactness at $B$, so $l(b) = 0$.
However, we defined $b$ to satisfy $l(b) = c$, so therefore $c = 0$ and hence
$\g$ is injective.
Here, we used that $\a$ is surjective, $\b, \d$ are injective, and exactness at
$B, C, G$.
Putting the two statements together, we see that $\g$ is both surjective and
injective, so $\g$ is an isomorphism. We only used that $\b, \d$ are
isomorphisms and that $\a$ is surjective, $\e$ is injective, so we can slightly
weaken the hypotheses; injectivity of $\a$ and surjectivity of $\e$ were
unnecessary.
\end{proof}
\section{Ideals}
The notion of an \emph{ideal} has already been defined. Now we will introduce additional terminology related to the theory of ideals.
\subsection{Prime and maximal ideals}
Recall that the notion of an ideal generalizes that of divisibility. In
elementary number theory, though, one finds that questions of divisibility
basically reduce to questions about primes.
The notion of a ``prime ideal'' is intended to generalize the familiar idea of a prime
number.
\begin{definition}
An ideal $I \subset R$ is said to be \textbf{prime} if
\begin{enumerate}[\textbf{P} 1]
\item $1 \notin I$ (by convention, 1 is not a prime number)
\item If $xy \in I$, either $x \in I$ or $y \in I$.
\end{enumerate}
\end{definition}
\begin{example}
\label{integerprimes}
If $R = \mathbb{Z}$ and $p \in R$, then $(p) \subset \mathbb{Z}$ is a prime ideal iff $p$ or $-p$ is a
prime number in $\mathbb{N}$ or if $p$ is zero.
\end{example}
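The two conditions defining a prime ideal are easy to test for principal ideals of $\mathbb{Z}$; a small sketch contrasting $(5)$ with the non-prime $(6)$:

```python
# Testing condition P2 for principal ideals of Z.
def in_ideal(n, p):
    """Membership in the principal ideal (p) of Z."""
    return n % p == 0

# (6) is not prime: 2 * 3 lies in (6) although neither factor does.
assert in_ideal(2 * 3, 6) and not in_ideal(2, 6) and not in_ideal(3, 6)

# For (5), no such counterexample exists in a small search window,
# consistent with 5 being a prime number.
for x in range(1, 50):
    for y in range(1, 50):
        if in_ideal(x * y, 5):
            assert in_ideal(x, 5) or in_ideal(y, 5)
```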
If $R$ is any commutative ring, there are two obvious ideals: the zero
ideal $(0)$ consisting only of the zero element, and the unit ideal $(1)$
consisting of all of $R$.
\begin{definition} \label{maximalideal}
An ideal $I \subset R$ is called \textbf{maximal}\footnote{Maximal with
respect to not being the unit ideal.} if
\begin{enumerate}[\textbf{M} 1]
\item $1 \notin I$
\item Any strictly larger ideal contains $1$ (i.e., is all of $R$).
\end{enumerate}
\end{definition}
So a maximal ideal is a maximal element in the partially ordered set of proper
ideals (an ideal is \textbf{proper} if it does not contain 1).
\begin{exercise}
Find the maximal ideals in $\mathbb{C}[t]$.
\end{exercise}
\begin{proposition}
A maximal ideal is prime.
\end{proposition}
\begin{proof}
First, a maximal ideal does not contain 1.
Let $I \subset R$ be a maximal ideal.
We need to show that if $xy \in I$,
then one of $x,y \in I$. If $x \notin I$, then $(I,x) = I + (x)$ (the ideal
generated by $I$ and $x$) strictly contains $I$, so by maximality contains
$1$. In particular, $1 \in I+(x)$, so we can write
\[ 1 = a + xb \]
where $a \in I, b \in R$. Multiply both sides by $y$:
\[ y = ay + bxy. \]
Both terms on the right here are in $I$ ($a \in I$ and $xy \in I$), so we find
that $y \in I$.
\end{proof}
Given a ring $R$, what can we say about the collection of ideals in $R$?
There
are two obvious ideals in $R$, namely $(0)$ and $(1)$. These are the same if and
only if $0=1$, i.e. $R$ is the zero ring.
So for any nonzero commutative ring, we have at least two distinct ideals.
Next, we show that maximal ideals always \emph{do} exist, except in the case
of the zero ring.
\begin{proposition} \label{anycontainedinmaximal}
Let $R$ be a commutative ring. Let $I \subset R$ be a proper ideal. Then $I$
is contained in a maximal ideal.
\end{proposition}
\begin{proof}
This requires the axiom of choice in the form of Zorn's lemma. Let
$P$ be the collection of all ideals $J \subset R$ such that $I
\subset J$ and $J \neq R$. Then $P$ is a poset with respect to inclusion. $P$ is
nonempty because it contains $I$. Note that given a (nonempty) linearly ordered
collection of ideals $J_{\alpha} \in P$, the union $\bigcup J_{\alpha} \subset
R$ is an ideal: this is easily seen in view of the linear ordering (if $x,y
\in \bigcup J_{\alpha}$, then both $x,y$ belong to some $J_{\gamma}$, so $x+y
\in J_{\gamma}$; multiplicative closure is even easier). The union is not all
of $R$ because it does not contain $1$.
By Zorn's lemma, $P$ therefore has a maximal element, which we call
$\mathfrak{M}$; it is a proper ideal containing $I$. We claim that
$\mathfrak{M}$ is a maximal ideal: any ideal strictly containing
$\mathfrak{M}$ cannot lie in $P$ (by maximality of $\mathfrak{M}$), so it must
be all of $R$.
\end{proof}
\begin{corollary}
Let $R $ be a nonzero commutative ring. Then $R$ has a maximal ideal.
\end{corollary}
\begin{proof}
Apply \rref{anycontainedinmaximal} to the zero ideal.
\end{proof}
\begin{corollary}
Let $R$ be a nonzero commutative ring. Then $x \in R$ is invertible if and
only if it belongs to no maximal ideal $\mathfrak{m} \subset R$.
\end{corollary}
\begin{proof}
Indeed, $x$ is invertible if and only if $(x) = (1)$, that is, if and only if
$(x)$ is not a proper ideal; now \rref{anycontainedinmaximal}
finishes the argument.
\end{proof}
\subsection{Fields and integral domains}
Recall:
\begin{definition}
A commutative ring $R$ is called a \textbf{field} if $1 \neq 0$ and for every $x \in R -
\left\{0\right\}$ there exists an \textbf{inverse} $x^{-1} \in R$ such that $xx^{-1} =
1$.
\end{definition}
This condition has an obvious interpretation in terms of ideals.
\begin{proposition}
A commutative ring with $1 \neq 0$ is a field iff it has only the two ideals $(1),
(0)$.
\end{proposition}
Alternatively, a ring is a field if and only if $(0)$ is a maximal ideal.
\begin{proof}
Assume $R$ is a field. Suppose $I \subset R$ is an ideal. If $I \neq (0)$, then there is
a nonzero $x \in I$. Then there is an inverse $x^{-1}$. We have $x^{-1} x =1
\in I$, so $I = (1)$.
In a field, there is thus no room for ideals other than $(0)$ and $(1)$.
To prove the converse, assume every ideal of $R$ is $(0)$ or $(1)$. Then for
each $x \in R$, $(x) = (0)$ or $(1)$. If $x \neq 0$, the first cannot happen, so
that means that the ideal generated by $x$ is the unit ideal. So $1$ is a
multiple of $x$, implying that $x$ has a multiplicative inverse.
\end{proof}
So fields also have an uninteresting ideal structure.
\begin{corollary} \label{maximalfield}
If $R$ is a ring and $I \subset R$ is an ideal, then $I$ is maximal if and only
if $R/I$ is a field.
\end{corollary}
\begin{proof}
The basic point here is that there is a bijection between the ideals of $R/I$
and ideals of $R$ containing $I$.
Denote by $\phi: R \to R/I$ the reduction map. There is a
construction mapping ideals of $R/I$ to ideals of $R$. This sends an ideal in
$R/I$ to
its inverse image. This is easily seen to map to ideals of $R$ containing $I$.
The map from ideals of $R/I$ to ideals of $R$ containing $I$ is a bijection,
as one checks easily.
It follows that $R/I$ is a field precisely if
$R/I$ has precisely two ideals, i.e. precisely if there are precisely two
ideals in $R$ containing $I$. These ideals must be $(1)$ and $I$, so this
holds if and only if $I$ is maximal.
\end{proof}
There is a similar characterization of prime ideals.
\begin{definition}
A commutative ring $R$ with $1 \neq 0$ is an \textbf{integral domain} if for all $ x,y \in R$,
$x \neq 0 $ and $y \neq 0$ imply $xy \neq 0$.
\end{definition}
\begin{proposition} \label{primeifdomain}
An ideal $I \subset R$ is prime iff $R/I$ is a domain.
\end{proposition}
\begin{exercise}
Prove \rref{primeifdomain}.
\end{exercise}
Any field is an integral domain. This is because in a field, nonzero elements
are invertible, and the product of two invertible elements is invertible, hence
nonzero. Via \rref{maximalfield} and \rref{primeifdomain}, this translates to
the statement that a maximal ideal is prime.
Finally, we include an example that describes what \emph{some} of the prime
ideals in a polynomial ring look like.
\begin{example}
Let $R$ be a ring and $P$ a prime ideal. We claim that $P[x] = PR[x] \subset R[x]$,
the ideal of polynomials all of whose coefficients lie in $P$, is a prime ideal.
Consider the map $\tilde{\phi}:R[x]\rightarrow(R/P)[x]$ with
$\tilde{\phi}(a_0+\cdots+a_nx^n)=(a_0+P)+\cdots+(a_n+P)x^n$. This is clearly
a homomorphism because $\phi:R\rightarrow R/P$ is, and its kernel consists
of those polynomials $a_0+\cdots+a_nx^n$ with $a_0,\ldots,a_n\in P$, which is
precisely $P[x]$. Thus $R[x]/P[x]\simeq (R/P)[x]$, which is an integral domain
because $R/P$ is an integral domain. Thus $P[x]$ is a prime ideal.
However, if
$P$ is a maximal ideal, then $P[x]$ is never a maximal ideal because the ideal
$P[x]+(x)$ (the polynomials with constant term in $P$) always strictly contains
$P[x]$ (because if $x\in P[x]$ then $1\in P$, which is impossible). Note
that $P[x]+(x)$ is the kernel of the composition of $\tilde{\phi}$ with
evaluation at 0, i.e. $(\text{ev}_0\circ\tilde{\phi}):R[x]\rightarrow R/P$,
and this map is a surjection and $R/P$ is a field, so that $P[x]+(x)$ is
a maximal ideal of $R[x]$ containing $P[x]$.
\end{example}
\begin{exercise} \label{quotfld1}
Let $R$ be a domain. Consider the set of formal quotients $a/b$ with $a, b \in R$
and $b \neq 0$, where $a/b$ and $a'/b'$ are identified if $ab' = a'b$. Define
addition and multiplication using the usual rules. Show
that the resulting object $K(R)$ is a ring, and in fact a \emph{field}. The
natural map $R \to K(R)$, $r \mapsto r/1$, has a universal property: if $R
\hookrightarrow L$ is an injection of $R$ into a field $L$, then there is a
unique morphism $K(R) \to L$ of fields extending $R \to L$. This construction
will be generalized when we consider \emph{localization.}
This construction is called the \textbf{quotient field.}
Note that a non-injective map $R\to L$ will \emph{not} factor through the
quotient field!
\end{exercise}
\begin{exercise}\label{Jacobson}
Let $R$ be a commutative ring. Then the \textbf{Jacobson radical} of $R$ is
the intersection $\bigcap \mathfrak{m}$ of all maximal ideals $\mathfrak{m}
\subset R$. Prove that an element $x$ is in the Jacobson radical if and only
if $1 - yx$ is invertible for all $y \in R$.
\end{exercise}
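For intuition, here are two standard instances of this notion:
\begin{example}
The Jacobson radical of $\mathbb{Z}$ is $\bigcap_p (p) = (0)$, the intersection
ranging over all primes $p$. At the other extreme, if $R$ has a unique maximal
ideal $\mathfrak{m}$ (such rings are called \emph{local}), then the Jacobson
radical is $\mathfrak{m}$ itself, and the criterion of \rref{Jacobson} recovers
the fact that every element of $1 + \mathfrak{m}$ is invertible in such a ring.
\end{example}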
\subsection{Prime avoidance}
The following fact will come in handy occasionally. We will, for instance, use
it much later to show that an ideal consisting of zerodivisors on a module $M$ is
contained in an associated prime.
\begin{theorem}[Prime avoidance] \label{primeavoidance}
Let $I_1,\dots, I_n \subset R$ be ideals. Let $A\subset R$ be a subset which is closed
under addition and multiplication. Assume that at least $n-2$ of the ideals are
prime. If $A\subset I_1\cup \cdots \cup I_n$, then $A\subset I_j$ for some $j$.
\end{theorem}
The result is frequently used in the following specific case: if an ideal $I$
is contained in a finite union $\bigcup \mathfrak{p}_i$ of primes, then $I
\subset \mathfrak{p}_i$ for some $i$.
\begin{proof}
Induct on $n$. If $n=1$, the result is trivial. The case $n=2$ is an easy argument:
choose $a_1\in A\smallsetminus I_1$ and $a_2\in A\smallsetminus I_2$. If either of
these avoids both ideals, we are done; otherwise $a_1\in I_2$ and $a_2\in I_1$, and
then $a_1+a_2\in A\smallsetminus (I_1\cup I_2)$.
Now assume $n\ge 3$. We may assume that for each $j$, $A\not\subset I_1\cup \cdots
\cup \hat I_j\cup \cdots \cup I_n$ (otherwise we are done by the inductive
hypothesis).\footnote{The hat means omit $I_j$.} Fix an element
$a_j\in A\smallsetminus (I_1\cup \cdots \cup \hat I_j\cup \cdots \cup I_n)$. Then this
$a_j$ must be contained in $I_j$ since $A\subset \bigcup I_j$. Since $n\ge 3$, one
of the $I_j$ must be prime. We may assume that $I_1$ is prime. Define
$x=a_1+a_2a_3\cdots a_n$, which is an element of $A$. Let's show that $x$ avoids
\emph{all} of the $I_j$. If $x\in I_1$, then $a_2a_3\cdots a_n\in I_1$; since $I_1$
is prime, some $a_i$ with $i\geq 2$ would lie in $I_1$, contradicting $a_i\not\in
I_j$ for $i\neq j$. If $x\in I_j$ for some $j\ge 2$, then $a_2a_3\cdots a_n\in I_j$
(as $a_j$ is one of the factors), so $a_1\in I_j$, which contradicts $a_i\not\in I_j$ for
$i\neq j$.
\end{proof}
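The primality hypothesis cannot be dropped entirely. Here is a standard
counterexample in which none of the ideals is prime:
\begin{example}
Let $R = \mathbb{F}_2[x,y]/(x,y)^2$ and let $I_1 = (x)$, $I_2 = (y)$, $I_3 =
(x+y)$. The ideal $A = (x,y)$ consists of the four elements $0, x, y, x+y$, so
$A \subset I_1 \cup I_2 \cup I_3$; but $A$ is not contained in any single
$I_j$. Here $n = 3$, so the theorem would require at least one of the $I_j$ to
be prime, and none of them is.
\end{example}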
\subsection{The Chinese remainder theorem}
Let $m,n$ be relatively prime integers. Suppose $a, b \in \mathbb{Z}$; then
one can show that the two congruences $x \equiv a \mod m$
and $x \equiv b \mod n$ can be solved simultaneously in $x \in \mathbb{Z}$.
The solution is unique, moreover, modulo $mn$.
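For instance, the congruences $x \equiv 2 \mod 3$ and $x \equiv 3 \mod 5$ are
simultaneously satisfied by $x = 8$, and the full solution set is $8 + 15
\mathbb{Z}$.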
The Chinese remainder theorem generalizes this fact:
\begin{theorem}[Chinese remainder theorem] Let $I_1, \dots, I_n$ be ideals in a
ring $R$ which satisfy $I_i + I_j = R$ for $i \neq j$. Then we have $I_1 \cap
\dots \cap I_n = I_1 \dots I_n$ and the morphism of rings
\[ R \to \bigoplus R/I_i \]
is an epimorphism with kernel $I_1 \cap \dots \cap I_n$.
\end{theorem}
\begin{proof}
First, note that for any two ideals $I_1$ and $I_2$, we
have $I_1I_2\subset I_1\cap I_2$ and $(I_1+I_2)(I_1\cap I_2)\subset
I_1I_2$ (because any element of $I_1+I_2$ multiplied by any element of
$I_1\cap I_2$ will clearly be a sum of products of elements from both $I_1$
and $I_2$). Thus, if $I_1$ and $I_2$ are coprime, i.e. $I_1+I_2=(1)=R$,
then $(1)(I_1\cap I_2)=(I_1\cap I_2)\subset I_1I_2\subset I_1\cap I_2$,
so that $I_1\cap I_2=I_1I_2$. This establishes the result for $n=2$.
If the
ideals $I_1,\ldots,I_n$ are pairwise coprime and the result holds for $n-1$,
then $$\bigcap_{i=1}^{n-1} I_i=\prod_{i=1}^{n-1}I_i.$$ Because $I_n+I_i=(1)$
for each $1\leq i\leq n-1$, there must be $x_i\in I_n$ and $y_i\in I_i$ such
that $x_i+y_i=1$. Thus, $z_n=\prod_{i=1}^{n-1}y_i=\prod_{i=1}^{n-1}(1-x_i)\in
\prod_{i=1}^{n-1} I_i$, and clearly $z_n+I_n=1+I_n$ since each $x_i\in
I_n$. Thus $I_n+\prod_{i=1}^{n-1}I_i=I_n+\bigcap_{i=1}^{n-1}I_i=(1)$,
and we can now apply the $n=2$ case to conclude that $\bigcap_{i=1}^n
I_i=\prod_{i=1}^n I_i$.
Note that for any $i$, we can construct a $z_i$
with $z_i\in I_j$ for $j\neq i$ and $z_i+I_i=1+I_i$ via the same procedure.
Define $\phi:R\rightarrow\bigoplus R/I_i$
by $\phi(a)=(a+I_1,\ldots,a+I_n)$. The kernel of $\phi$ is
$\bigcap_{i=1}^n I_i$, because $a+I_i=0+I_i$ iff $a\in I_i$, so that
$\phi(a)=(0+I_1,\ldots,0+I_n)$ iff $a\in I_i$ for all $i$, that is,
$a\in\bigcap_{i=1}^n I_i$. Combined with our previous result, the kernel
of $\phi$ is $\prod_{i=1}^n I_i$.
Finally, recall that we constructed
$z_i\in R$ such that $z_i+I_i=1+I_i$ and $z_i+I_j=0+I_j$ for all $j\neq
i$, so that $\phi(z_i)=(0+I_1,\ldots,1+I_{i},\ldots,0+I_n)$. Thus,
$\phi(a_1z_1+\cdots+a_nz_n)=(a_1+I_1,\ldots,a_n+I_n)$ for all $a_i\in R$,
so that $\phi$ is onto. By the first isomorphism theorem, we have that
$R/I_1\cdots I_n\simeq \bigoplus_{i=1}^nR/I_i$.
\end{proof}
\section{Some special classes of domains}
\subsection{Principal ideal domains}
\begin{definition}
A ring $R$ is a \textbf{principal ideal domain} or \textbf{PID} if $R \neq 0$, $R$ is not a
field, $R$ is a domain, and every ideal of $R$ is principal.
\end{definition}
These have the next simplest theory of ideals.
Each ideal is very simple---it's principal---though there might be a lot of ideals.
\begin{example}
$\mathbb{Z}$ is a PID. The only nontrivial fact to check here is that:
\begin{proposition}
Any ideal $I \subset \mathbb{Z}$ is principal.
\end{proposition}
\begin{proof}
If $I = (0)$, then this is obvious. Else there is $n \in I -
\left\{0\right\}$; replacing $n$ by $-n$ if necessary, we can assume $n>0$. Choose such an $n \in I$ as small as possible.
Then I claim that the ideal $I$ is generated by $n$. Indeed, we have $(n)
\subset I$ obviously. If $m \in I$ is another integer, then divide $m$ by $n$,
to find $m = nb + r$ for $r \in [0, n)$. We find that $r = m - nb \in I$ and $0 \leq r <
n$, so minimality of $n$ forces $r=0$, and $m$ is divisible by $n$. Thus $I \subset (n)$,
so $I = (n)$.
\end{proof}
\end{example}
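Concretely, the proof shows that a nonzero ideal of $\mathbb{Z}$ is generated
by the greatest common divisor of its elements:
\begin{example}
In $\mathbb{Z}$, we have $(12, 18) = (6)$: indeed $6 = 18 - 12 \in (12,18)$,
so $(6) \subset (12,18)$, while $6$ divides both generators, so $(12,18)
\subset (6)$.
\end{example}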
A module $M$ is said to be \emph{finitely generated} if there exist elements
$x_1, \dots, x_n \in M$ such that any element of $M$ is a linear combination
(with coefficients in $R$) of the $x_i$. (We shall define this more formally
below.)
One reason that PIDs are so convenient is:
\begin{theorem}[Structure theorem] \label{structurePID}
If $M$ is a finitely generated module over a principal ideal domain $R$, then
$M$ is isomorphic to a direct sum
\[ M \simeq \bigoplus_{i=1}^n R/(a_i), \]
for various $a_i \in R$ (possibly zero).
\end{theorem}
\add{at some point, the proof should be added. This is important!}
\subsection{Unique factorization domains}
The integers $\mathbb{Z}$ are especially nice because of the fundamental
theorem of arithmetic, which states that every integer has a unique
factorization into primes. This is not true for every integral domain.
\begin{definition}
A nonzero, non-unit element of a domain $R$ is \textbf{irreducible} if it cannot be written
as the product of two non-unit elements of $R$.
\end{definition}
\begin{example}
Consider the integral domain $\mathbb{Z}[\sqrt{-5}]$. We saw earlier that
\[
6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}),
\]
which exhibits $6$ as a product of two non-unit elements in two essentially
different ways. Hence $\mathbb{Z}[\sqrt{-5}]$ does not have unique factorization.
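To verify this, one can use the norm $N(a + b\sqrt{-5}) = a^2 + 5b^2$, which is
multiplicative. An element is a unit exactly when its norm is $1$, i.e. when it
is $\pm 1$. Since $N(2) = 4$, $N(3) = 9$, and $N(1 \pm \sqrt{-5}) = 6$, while
the equations $a^2 + 5b^2 = 2$ and $a^2 + 5b^2 = 3$ have no integer solutions,
each of the four factors is irreducible; and $2$ is not an associate of $1 \pm
\sqrt{-5}$, as their norms differ.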
\end{example}
\begin{definition} \label{earlyUFD}
A domain $R$ is a \textbf{unique factorization domain} or \textbf{UFD} if every
nonzero non-unit $x \in R$ satisfies
\begin{enumerate}
\item $x$ can be written as a product $x = p_1 p_2 \cdots p_n$ of
irreducible elements $p_i \in R$
\item if $x = q_1 q_2 \cdots q_m$ where $q_i \in R$ are irreducible
then the $p_i$ and $q_i$ are the same up to order and multiplication by units.
\end{enumerate}
\end{definition}
\begin{example}
$\mathbb{Z}$ is a UFD, while $\mathbb{Z}[\sqrt{-5}]$ is not. In fact, many of
our favorite domains have unique factorization. We will prove that all PIDs
are UFDs. In particular, in \rref{gaussianintegersareprincipal} and
\rref{polyringisprincipal}, we saw that $\mathbb{Z}[i]$ and $F[t]$ are PIDs,
so they also have unique factorization.
\end{example}
\begin{theorem} \label{PIDUFD}
Every principal ideal domain is a unique factorization domain.
\end{theorem}
\begin{proof}
Suppose that $R$ is a principal ideal domain and $x$ is a nonzero element of $R$. We
first demonstrate that $x$ can be factored into irreducibles.
If $x$ is a unit or an irreducible, then we are done. Therefore, we can assume
that $x$ is reducible, which means that $x = x_1 x_2$ for non-units
$x_1, x_2 \in R$. If these are irreducible, then we are again done, so we
assume that they are reducible and repeat this process. We need to show that
this process terminates.
Suppose that this process continued infinitely. Then we have an infinite
ascending chain of ideals, where all of the inclusions are proper:
$(x) \subset (x_1) \subset (x_{11}) \subset \cdots \subset R$.
We will show that this is impossible because any infinite ascending chain of
ideals $I_1 \subset I_2 \subset \cdots \subset R$ of a principal ideal domain
eventually becomes stationary, i.e. for some $n$, $I_k = I_n$ for $k \geq n$.
Indeed, let $I = \bigcup_{i=1}^\infty I_i$. This is an ideal because the $I_i$
form an ascending chain, so it is
principally generated as $I = (a)$ for some $a$. Since $a \in I$, we must have
$a \in I_N$ for some $N$, which means that the chain stabilizes after $I_N$.
It remains to prove that this factorization of $x$ is unique. We induct on
the number of irreducible factors $n$ of $x$. If $n = 0$, then $x$ is a unit,
which has unique factorization up to units. Now, suppose that
$x = p_1 \cdots p_n = q_1 \cdots q_m$ for some $m \ge n$. Since $p_1$ divides
$x$, it must divide the product $q_1 \cdots q_m$ and by irreducibility, one of
the factors $q_i$. Reorder the $q_i$ so that $p_1$ divides $q_1$. However,
$q_1$ is irreducible, so this means that $p_1$ and $q_1$ are the same up to
multiplication by a unit $u$. Canceling $p_1$ from each of the two
factorizations, we see that $p_2 \cdots p_n = u q_2 \cdots q_m = q_2' \cdots
q_m$. By induction, this shows that the factorization of $x$ is unique up to
order and multiplication by units.
\end{proof}
\subsection{Euclidean domains}
A euclidean domain is a special type of principal ideal domain. In practice,
it will often happen that one has an explicit proof that a given domain is
euclidean, while it might not be so trivial to prove that it is a UFD without
the general implication below.
\begin{definition}
An integral domain $R$ is a \textbf{euclidean domain} if there is a function
$|\cdot |:R\to \mathbb{Z}_{\geq 0}$ (called the norm) such that the following hold.
\begin{enumerate}
\item $|a|=0$ iff $a=0$.
\item For any $a,b\in R$ with $a\neq 0$, there exist $q,r\in R$ such that $b=aq+r$ and $|r|<|a|$.
\end{enumerate}
In other words, the norm is compatible with division with remainder.
\end{definition}
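\begin{example}
$\mathbb{Z}$ is a euclidean domain, with $|\cdot|$ the ordinary absolute value:
given $a, b \in \mathbb{Z}$ with $a \neq 0$, division with remainder produces
$q, r$ with $b = aq + r$ and $0 \leq r < |a|$.
\end{example}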
\begin{theorem}\label{EDPID}
A euclidean domain is a principal ideal domain.
\end{theorem}
\begin{proof}
Let $R$ be a euclidean domain and $I\subset R$ an ideal. If $I=(0)$, it is trivially principal, so let $b$ be a nonzero element of smallest norm in $I$.
Suppose $a\in I$. Then we can write $a = qb + r$ with $|r| < |b|$. Since $r = a - qb \in I$ and $b$ has minimal norm among nonzero elements of $I$, we get $r = 0$ and $b\mid a$. Thus $I=(b)$ is principal.
\end{proof}
As we will see, this implies that any euclidean domain admits \emph{unique
factorization.}
\begin{proposition} \label{polyringED}
Let $F$ be a field. Then the polynomial ring $F[t]$ is a euclidean domain.
In particular, it is a PID.
\end{proposition}
\begin{proof}
We define \add{}
\end{proof}
\begin{exercise} \label{gaussianintegersareprincipal}
Prove that $\mathbb{Z}[i]$ is principal.
(Define the norm as $N(a+ib) = a^2 + b^2$.)
\end{exercise}
\begin{exercise} \label{polyringisprincipal}
Prove that the polynomial ring $F[t]$ for $F$ a field is principal.
\end{exercise}
It is \emph{not} true that a PID is necessarily euclidean. Nevertheless, it
was shown in \cite{Gre97} that the converse is ``almost'' true. Namely,
\cite{Gre97} defines the notion of an \textbf{almost euclidean domain.}
A domain $R$ is almost euclidean if there is a function $d: R \to
\mathbb{Z}_{\geq 0}$ such that
\begin{enumerate}
\item $d(a) = 0$ iff $a = 0$.
\item $d(ab) \geq d(a)$ if $b \neq 0$.
\item If $a,b \in R - \left\{0\right\}$, then either $b \mid a$ or there is
$r \in (a,b)$ with $d(r)<d(b)$.
\end{enumerate}
It is easy to see by the same argument that an almost euclidean domain is a PID.
(Indeed, let $R$ be an almost euclidean domain, and $I \subset R$ a nonzero
ideal. Then choose $x \in I - \left\{0\right\}$ such that $d(x)$ is minimal among elements in
$I$. Then if $y \in I - \left\{0\right\}$, either $x \mid y$ or $(x,y) \subset I$ contains an
element with smaller $d$. The latter cannot happen, so the former does.)
However, in fact:
\begin{proposition}[\cite{Gre97}] \label{almosteuclidean}
A domain is a PID if and only if it is almost euclidean.
\end{proposition}
\begin{proof}
Indeed, let $R$ be a PID. Then $R$ is a UFD (\rref{PIDUFD}), so for any $x \in R$,
there is a factorization into prime elements, unique up to units. If $x$
factors into $n$ elements, we define $d(x)=n$; we set $d(0)=0$.
The first two conditions for an almost euclidean domain are then evident.
Let $x = p_1 \dots p_m$ and $y = q_1 \dots q_n$ be two nonzero elements of $R$,
factored into irreducibles. Suppose $x \nmid y$. Choose a generator $b$ of the (principal) ideal $(x,y)$; then obviously $b
\mid x$, so $d(b) \leq d(x)$. But if $d(b) = d(x)$, then the
number of irreducible factors of $b$ and $x$ is the same, so $b \mid x$ would imply
that $x$ and $b$ are associates; then $(x,y) = (x)$, so $x \mid y$. This is a contradiction, and implies that
$d(b)<d(x)$; since $b \in (x,y)$, the third condition in the definition is verified.
\end{proof}
\begin{remark}
We have thus seen that a euclidean domain is a PID, and a PID is a UFD. Both
converses, however, fail. By Gauss's lemma (\rref{}), the
polynomial ring $\mathbb{Z}[X]$ has unique factorization, though the ideal
$(2, X)$ is not principal.
In \cite{Ca88}, it is shown that the ring $\mathbb{Z}[\frac{1+
\sqrt{-19}}{2}]$ is a PID but not euclidean (i.e. there is \emph{no} euclidean
norm on it).
\end{remark}
According to \cite{Cl11}, sec. 8.3, \cref{almosteuclidean} actually goes back to Hasse
(and these norms are sometimes called ``Dedekind-Hasse norms'').
\section{Basic properties of modules}
\subsection{Free modules}
We now describe a simple way of constructing modules over a ring, and an
important class of modules.
\begin{definition}
\label{freemoduledef}
A module $M$ is \textbf{free} if it is isomorphic to $\bigoplus_I R$ for some
index set $I$. The cardinality of $I$ is called the \textbf{rank}.
\end{definition}
\begin{example}
$R$ is the simplest example of a free module.
\end{example}
Free modules have a \emph{universal property}.
Namely, recall that if $M$ is an $R$-module, then to give a homomorphism
\[ R \to M \]
is equivalent to giving an element $m \in M$ (the image of $1$).
By the universal property of the direct sum (which is the coproduct in the
category of modules), it follows that to give a map
\[ \bigoplus_I R \to M \]
is the same as giving a map of \emph{sets} $I \to M$.
In particular:
\begin{proposition} \label{freeadj}
The functor $I \mapsto \bigoplus_I R$ from $\mathbf{Sets}$ to
$R$-modules is the \emph{left adjoint} to the forgetful functor from
$R$-modules to $\mathbf{Sets}$.
\end{proposition}
The claim now is that the notion of ``rank'' is well-defined for a free
module. To see this, we will have to use the notion
of a \emph{maximal ideal} (\rref{maximalideal}) and
\rref{maximalfield}.
Indeed, suppose
$\bigoplus_I R$ and $\bigoplus_J R$ are isomorphic; we must show that $I$ and
$J$ have the same cardinality. Choose a maximal ideal $\mathfrak{m}
\subset R$. Then, by applying the functor $M \mapsto
M/\mathfrak{m}M$, we find that the $R/\mathfrak{m}$-\emph{vector spaces}
\[ \bigoplus_I R/\mathfrak{m}, \quad \bigoplus_J R/\mathfrak{m} \]
are isomorphic. By linear algebra, $I$ and $J$ have the same cardinality.
Free modules have a bunch of nice properties. The first is that it is very
easy to map out of a free module.
\begin{example}
Let $I$ be an indexing set, and $M$ an $R$-module. Then to give a morphism
\[ \bigoplus_I R \to M \]
is equivalent to picking an element of $M$ for each $i \in I$. Indeed, given
such a collection of elements $\left\{m_i\right\}$, we send the generator of $\bigoplus_I R$ with a 1
in the $i$th spot and zero elsewhere to $m_i$.
\end{example}
\begin{example}
In a domain, every principal ideal (other than zero) is a free module of rank
one.
\end{example}
Another way of saying this is that the free module $\bigoplus_I R$ represents
the functor on modules sending $M$ to the \emph{set} $ M^I$. We have already seen a special case of this for $I$ a
one-element set (\rref{moduleunderlyingsetrepresentable}).
The next claim is that free modules form a reasonably large class of the
category of $R$-modules.
\begin{proposition} \label{freesurjection}
Given an $R$-module $M$, there is a free module $F$ and a surjection
\[ F \twoheadrightarrow M. \]
\end{proposition}
\begin{proof}
We let $F$ be the free $R$-module on the elements $e_m$, one for each $m
\in M$. We define the map
\[ F \to M \]
by describing the image of each of the generators $e_m$: we just send each
$e_m$ to $m \in M$. It is clear that this map is surjective.
\end{proof}
We close by making a few remarks on matrices.
Let $M$ be a free module of rank $n$, and fix an isomorphism $M \simeq R^n$.
Then we can do linear algebra with $M$, even though we are working over a
ring and not necessarily a field, at least to some extent.
For instance, we can talk about $n$-by-$n$ matrices over the ring $R$, and
then each of them induces a transformation, i.e. a module-homomorphism, $M \to
M$; it is easy to see that every module-homomorphism between free modules is
of this form. Moreover, multiplication of matrices corresponds to composition
of homomorphisms, as usual.
\begin{example} Let us consider the question of when the transformation
induced by an $n$-by-$n$ matrix is invertible. The answer is similar to the
familiar one from linear algebra in the case of a field. Namely, the condition
is that the determinant be invertible.
Suppose that an $n \times n$ matrix $A$ over a ring $R$ is invertible. This
means that there exists $A^{-1}$ so that $A A^{-1} = I$, and hence
$1 = \det I = \det(A A^{-1}) = (\det A) (\det A^{-1})$. Therefore,
$\det A$ must be a unit in $R$.
Suppose instead that an $n \times n$ matrix $A$ over a ring $R$ has an
invertible determinant. Then, using Cramer's rule, we can actually construct
the inverse of $A$.
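Explicitly, the adjugate $\operatorname{adj}(A)$ (the transpose of the cofactor
matrix) satisfies
\[ A \operatorname{adj}(A) = \operatorname{adj}(A) A = (\det A) I, \]
an identity valid over any commutative ring, so if $\det A$ is a unit, then
$A^{-1} = (\det A)^{-1} \operatorname{adj}(A)$.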
\end{example}
We next show that if $R$ is a commutative ring, the category of modules over
$R$ contains enough information to reconstruct $R$. This is a small part of the
story of \emph{Morita equivalence,} which we shall not enter into here.
\begin{example}
Suppose $R$ is a commutative ring, and let $\mathcal{C}$ be the category of
$R$-modules. The claim is that $\mathcal{C}$, as an \emph{abstract} category,
determines $R$. Indeed, the claim is that $R$ is canonically the ring of
endomorphisms of the identity functor $1_{\mathcal{C}}$.
Such an \emph{endomorphism} is given by a natural transformation
$\phi: 1_{\mathcal{C}} \to 1_{\mathcal{C}}$. In other words, one requires for
each $R$-module $M$, a homomorphism of $R$-modules $\phi_M : M \to M$ such that
if $f: M \to N$ is any homomorphism of modules, then there is a commutative
square
\[ \xymatrix{
M \ar[d]^f \ar[r]^{\phi_M} & M \ar[d] \\
N \ar[r]^{\phi_N} & N.
}\]
Here is a simple way of obtaining such endomorphisms. Given $r \in R$, we
consider the map $r: M \to M$ which just multiplies each element by $r$. This
is a homomorphism, and it is clear that it is natural in the above sense. There
is thus a map $R \to \mathrm{End}(1_\mathcal{C})$ (note that multiplication
corresponds to composition of natural transformations).
This map is clearly injective; different $r, s \in R$ lead to different natural
transformations (e.g. on the $R$-module $R$).
The claim is that \emph{any} natural transformation of $1_{\mathcal{C}}$ is
obtained in this way.
Namely, let $\phi: 1_{\mathcal{C}} \to 1_{\mathcal{C}}$ be such a natural
transformation. On the $R$-module $R$, $\phi$ must be multiplication by some
element $r \in R$
(because $\hom_R(R, R)$ is given by such homotheties).
Consequently, one sees by drawing commutative diagrams that $\phi: R^{\oplus S}
\to R^{\oplus S}$ is of this form for any set $S$. So $\phi$ is multiplication
by $r$ on any free $R$-module.
Since any module $M$ is a quotient of a free module $F$, we can draw a diagram
\[ \xymatrix{
F\ar[d] \ar[r]^{\phi_F} & F \ar[d] \\
M \ar[r]^{\phi_M} & M.
}\]
Since the vertical arrows are surjective and $\phi_F$ is multiplication by $r$,
we find that $\phi_M$ must be given by multiplication by $r$ too.
\end{example}
\subsection{Finitely generated modules}
The notion of a ``finitely generated'' module is analogous to that of a
finite-dimensional vector space.
\begin{definition}
An $R$-module $M$ is \textbf{finitely generated} if there exists a surjection
$R^n \to M$ for some $n$. In other words, it has a finite number of elements
whose ``span'' contains $M$.
\end{definition}
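\begin{example}
The $\mathbb{Z}$-module $\mathbb{Q}$ is not finitely generated: any finite set
of fractions admits a common denominator $N$, so the submodule it generates is
contained in $\frac{1}{N}\mathbb{Z} \subsetneq \mathbb{Q}$.
\end{example}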
The basic properties of finitely generated modules follow from the fact that
they are stable under extensions and quotients.
\begin{proposition} \label{exact-fingen}
Let $0 \to M' \to M \to M'' \to 0$ be an exact sequence. If $M', M''$ are
finitely generated, so is $M$.
\end{proposition}
\begin{proof}
Suppose $0\rightarrow
M'\stackrel{f}{\rightarrow}M\stackrel{g}{\rightarrow}M''\rightarrow0$
is exact. Then $g$ is surjective, $f$ is injective, and
$\text{ker}(g)=\text{im}(f)$. Now suppose $M'$ is finitely generated,
say by $\{a_1,\ldots,a_s\}$, and $M''$ is finitely generated, say by
$\{b_1,\ldots,b_t\}$. Because $g$ is surjective, each $g^{-1}(b_i)$ is
non-empty. Thus, we can fix some $c_i\in g^{-1}(b_i)$ for each $i$.
For any
$m\in M$, we have $g(m)=r_1b_1+\cdots+r_tb_t$ for some $r_i\in R$ because
$g(m)\in M''$ and $M''$ is generated by the $b_i$. Thus $g(m)=r_1g(c_1)+\cdots+
r_tg(c_t)=g(r_1c_1+\cdots+r_tc_t)$, and because $g$ is a homomorphism
we have $m-(r_1c_1+\cdots+r_tc_t)\in\text{ker}(g)=\text{im}(f)$. But
$M'$ is generated by the $a_i$, so the submodule $\text{im}(f)\subset
M$ is finitely generated by the $d_i=f(a_i)$.
Thus, any $m\in
M$ satisfies $m-(r_1c_1+\cdots+r_tc_t)=r_{t+1}d_1+\cdots+r_{t+s}d_s$
for some $r_{t+1},\ldots,r_{t+s}\in R$, and so $M$ is finitely generated by
$c_1,\ldots,c_t,d_1,\ldots,d_s$.
\end{proof}
The converse is false. It is possible for finitely generated modules to have
submodules which are \emph{not} finitely generated. As we shall see in
\rref{noetherian}, this does not happen over \emph{noetherian} rings.
\begin{example}
Consider the ring $R=\mathbb{C}[X_1, X_2, \dots]$ and the ideal $(X_1, X_2,
\dots)$. This ideal is a submodule of the finitely generated $R$-module $R$,
but it is not finitely generated.
\end{example}
\begin{exercise}
Show that a quotient of a finitely generated module is finitely generated.
\end{exercise}
\begin{exercise}
Consider a \emph{split} exact sequence $0 \to M' \to M \to M'' \to 0$. In this
case, show that if $M$ is finitely generated, so is $M'$.
\end{exercise}
\subsection{Finitely presented modules}
Over messy rings, the notion of a finitely presented module is often a good
substitute for that of a finitely generated one. In fact, we are going to see
(\rref{}), that there is a general method of reducing questions about finitely
presented modules over arbitrary rings to finitely generated modules over
finitely generated $\mathbb{Z}$-algebras.
Throughout, fix a ring $R$.
\begin{definition}
An $R$-module $M$ is \textbf{finitely presented} if there is an exact sequence
\[ R^m \to R^n \to M \to 0. \]
\end{definition}
The point of this definition is that $M$ is the quotient of a free module
$R^n$ by the ``relations'' given by the images of the vectors in $R^m$.
Since $R^m$ is finitely generated, $M$ can be represented via finitely many
generators \emph{and} finitely many relations.
The reader should compare this with the definition of a \textbf{finitely
generated} module; there we only require an exact sequence
\[ R^n \to M \to 0. \]
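For instance:
\begin{example}
Let $I \subset R$ be an ideal. Then the cyclic module $R/I$ is finitely
presented if and only if $I$ is finitely generated. Indeed, a surjection $R^m
\to I$ yields an exact sequence $R^m \to R \to R/I \to 0$; conversely, the next
proposition shows that if $R/I$ is finitely presented, then the kernel $I$ of
the surjection $R \to R/I$ is finitely generated.
\end{example}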
We now establish the usual properties of finitely presented modules.
We start by showing that if a finitely presented module $M$ is generated by
finitely many elements, the ``module of relations'' among these generators is
finitely generated itself. The condition of finite presentation only states that
there is \emph{one} such set of generators whose module of relations
is finitely generated.
\begin{proposition}
Suppose $M$ is finitely presented. Then if $R^m \twoheadrightarrow M$ is a
surjection, the kernel is finitely generated.
\end{proposition}
\begin{proof} Let $K$ be the kernel of $R^m \twoheadrightarrow M$.
Consider an exact sequence
\[ F' \to F \to M \to 0 \]
where $F', F$ are finitely generated and free, which we can do as $M$ is
finitely presented.
Draw a commutative and exact diagram
\[
\xymatrix{
& F' \ar[r] & F \ar[r] \ar@{-->}[d] & M \ar[r] \ar[d] & 0 \\
0 \ar[r] & K \ar[r] & R^m \ar[r] & M \ar[r] & 0
}
\]
The dotted arrow $F \to R^m$ exists as $F$ is free, hence projective. There is an induced
map $F' \to K$.
We get a commutative and exact diagram
\[
\xymatrix{
& F' \ar[r]\ar[d]^f & F \ar[r] \ar[d]^g & M \ar[r] \ar[d] & 0 \\
0 \ar[r] & K \ar[r] & R^m \ar[r] & M \ar[r] & 0
},
\]
to which we can apply the snake lemma. There is an exact sequence
\[ 0 \to \coker(f) \to \coker(g) \to 0, \]
which gives an isomorphism $\coker(f) \simeq \coker(g)$.
However, $\coker(g)$ is finitely generated, as a quotient of $R^m$.
Thus $\coker(f)$ is too.
Since we have an exact sequence
\[ 0 \to \im(f) \to K \to \coker(f) \to 0, \]
and $\im(f)$ is finitely generated (as the image of a finitely generated
object, $F'$), we find by \rref{exact-fingen} that $K$ is finitely generated.
\end{proof}
\begin{proposition} \label{exact-finpres}
Given an exact sequence
\[ 0 \to M' \to M \to M'' \to 0, \]
if $M', M''$ are finitely presented, so is $M$.
\end{proposition}
In general, it is not true that if $M$ is finitely presented, then $M'$ and
$M''$ are. For instance, a submodule of the free, finitely
generated module $R$ (i.e. an ideal) might fail to be finitely generated. We
shall see in \rref{noetherian} that this does not happen over a
\emph{noetherian} ring.
\begin{proof}
Indeed, suppose we have exact sequences
\[ F_1' \to F_0' \to M' \to 0 \]
and
\[ F_1'' \to F_0'' \to M'' \to 0 \]
where the $F$'s are finitely generated and free.
We need to get a similar sequence for $M$.
Let us stack these into a diagram
\[ \xymatrix{
& F_1' \ar[d] & & F_1'' \ar[d] \\
& F_0' \ar[d] & & F_0'' \ar[d] \\
0 \ar[r] & M' \ar[r] & M \ar[r] & M'' \ar[r] & 0
}\]
However, now, using general facts about projective modules (\rref{}), we can
splice these presentations into a resolution
\[ F_1' \oplus F_1'' \to F_0' \oplus F_0'' \to M \to 0, \]
which proves the assertion.
\end{proof}
\begin{corollary}
The (finite) direct sum of finitely presented modules is finitely presented.
\end{corollary}
\begin{proof}
Immediate from \rref{exact-finpres}.
\end{proof}
\subsection{Modules of finite length}
A much stronger condition on modules than that of finite generation is that of \emph{finite
length}. Here, basically any operation one does will eventually terminate.
Let $R$ be a commutative ring, $M$ an $R$-module.
\begin{definition}
$M$ is \textbf{simple} if $M \neq 0$ and $M$ has no submodules other than $0$ and $M$.
\end{definition}
\begin{exercise}
A torsion-free abelian group is never a simple $\mathbb{Z}$-module.
\end{exercise}
\begin{proposition}
$M$ is simple if and only if it is isomorphic to $R/\mathfrak{m}$ for $\mathfrak{m} \subset
R$ a maximal ideal.
\end{proposition}
\begin{proof} Let $M$ be simple. Then
$M$ must contain a cyclic submodule $Rx$ generated by some $x \in
M - \left\{0\right\}$. So it must contain a submodule isomorphic to $R/I$
for some ideal $I$, and
simplicity implies that $M \simeq R/I$ for some $I$. If $I$ is not maximal,
say properly contained in $J$,
then we will get a nontrivial submodule $J/I$ of $R/I \simeq M$. Conversely,
it is easy to see
that $R/\mathfrak{m}$ is simple for $\mathfrak{m}$ maximal.
\end{proof}
\begin{exercise}[Schur's lemma] Let $f: M \to N$ be a module-homomorphism,
where $M, N$ are both simple. Then either $f =0$ or $f$ is an isomorphism.
\end{exercise}
\begin{definition}
$M$ is of \textbf{finite length} if there is a finite filtration $0 = M^0
\subset \dots \subset M^n = M$ where each $M^i/M^{i-1}$ is simple.
\end{definition}
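\begin{example}
Let $p$ be a prime. The $\mathbb{Z}$-module $\mathbb{Z}/p^n$ has finite
length: in the filtration
\[ 0 = p^n\mathbb{Z}/p^n\mathbb{Z} \subset p^{n-1}\mathbb{Z}/p^n\mathbb{Z}
\subset \dots \subset \mathbb{Z}/p^n\mathbb{Z}, \]
each successive quotient is isomorphic to the simple module $\mathbb{Z}/p$.
\end{example}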
\begin{exercise}
Modules of finite length are closed under extensions (that is, if $0 \to M'
\to M \to M'' \to 0$ is an exact sequence, then if $M', M''$ are of finite
length, so is $M$).
\end{exercise}
In the next result (which will not be used in this chapter), we shall use the
notions of a \emph{noetherian} and an \emph{artinian} module. These notions
will be developed at length in \cref{chnoetherian}, and we refer the reader
there for more explanation.
A module is \emph{noetherian} if every ascending chain $M_1 \subset M_2 \subset
\dots$ of submodules stabilizes, and it is \emph{artinian} if every descending chain
stabilizes.
\begin{proposition}
$M$ is of finite length if and only if $M$ is both noetherian and artinian.
\end{proposition}
\begin{proof}
Any simple module is obviously both noetherian and artinian: it has only two
submodules, $0$ and itself. So if $M$ is of finite length, then the finite filtration with simple
quotients implies that $M$ is noetherian and artinian, since these two
properties are stable under extensions (\rref{exactnoetherian}
and \rref{exactartinian} of \rref{noetherian}).
Suppose $M \neq 0$ is noetherian and artinian. Let $M_1 \subset M$ be a minimal
nonzero submodule, which exists as $M$ is artinian. This is necessarily simple. Then we have a filtration
\[ 0 = M_0 \subset M_1. \]
If $M_1 = M$, then the filtration goes up to $M$, and we have that $M$ is of
finite length. If not, choose a submodule $M_2$
minimal among submodules properly containing $M_1$, which again exists as $M$ is
artinian; then the quotient $M_2/M_1$ is simple. We have the filtration
\[ 0 = M_0 \subset M_1 \subset M_2, \]
which we can keep continuing until at some point we reach $M$. Note that since
$M$ is noetherian, we cannot continue this strictly ascending chain forever.
\end{proof}
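Neither condition alone suffices. For example, $\mathbb{Z}$ is a noetherian
$\mathbb{Z}$-module but not artinian, because of the descending chain
\[ \mathbb{Z} \supsetneq p\mathbb{Z} \supsetneq p^2\mathbb{Z} \supsetneq \dots, \]
while the Pr\"ufer group $\mathbb{Z}[p^{-1}]/\mathbb{Z}$ is artinian but not
noetherian; so neither is of finite length.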
\begin{exercise}
In particular, any submodule or quotient module of a finite length module is
of finite length. Note that the analogous statement for submodules fails for finitely generated
modules unless the ring in question is noetherian.
\end{exercise}
Our next goal is to show that the length of a filtration of a module with
simple quotients is well-defined.
For this, we need:
\begin{lemma} \label{simplefiltrationint}
Let $0 = M_0 \subset M_1 \subset \dots \subset M_n = M$ be a filtration of
$M$ with simple quotients. Let $N \subset M$. Then the filtration
$0 = M_0 \cap N \subset M_1 \cap N \subset \dots \subset M_n \cap N = N$ has simple or zero
quotients.
\end{lemma}
\begin{proof}
Indeed, for each $i$, the inclusion $N \cap M_i \hookrightarrow M_i$ induces an injection
$(N \cap M_i)/(N \cap M_{i-1}) \hookrightarrow M_i / M_{i-1}$, because $(N \cap M_i)
\cap M_{i-1} = N \cap M_{i-1}$; so each quotient is either zero or simple.
\end{proof}
\begin{theorem}[Jordan-H\"older]\label{lengthexists} Let $M$ be a module of
finite length.
Then any two filtrations
on $M$ with simple quotients have the same length.
\end{theorem}
\begin{definition}
This number is called the \textbf{length} of $M$ and is denoted $\ell(M)$.
\end{definition}
\begin{proof}[Proof of \rref{lengthexists}]
Let us introduce a temporary definition: $l(M)$ is the length of a
\emph{shortest} filtration on $M$ with simple quotients. We will show that any filtration of $M$ (with
simple quotients) is of length
$l(M)$. This is the theorem in another form.
The proof of this claim is by induction on $l(M)$. Suppose we have a filtration
\[ 0 = M_0 \subset M_1 \subset \dots \subset M_n = M \]
with simple quotients. We would like to show that $n = l(M)$. By definition of
$l(M)$, there is another filtration
\[ 0 = N_0 \subset \dots \subset N_{l(M)} = M. \]
If $l(M) = 0,1$, then $M$ is zero or simple, which will necessarily imply that $n=0,1$
respectively. So we can assume $l(M) \geq 2$. We can also assume that the
result is known for strictly smaller submodules of $M$.
There are two cases:
\begin{enumerate}
\item $M_{n-1} = N_{l(M) -1 } $. Then the submodule $M_{n-1} = N_{l(M)-1}$ satisfies
$l(M_{n-1}) \leq l(M)-1$. Thus by the inductive hypothesis any two filtrations on $M_{n-1}$
have the same length, so $n-1 = l(M) -1$, implying what we want.
\item We have $M_{n-1} \cap N_{l(M) - 1} \subsetneq M_{n-1}, N_{l(M)-1}$.
Call this intersection $K$.
Now we have two filtrations, with simple quotients, of the modules $M_{n-1}$ and
$N_{l(M)-1}$. We shall modify them so that the
second-to-last term of each is $K$.
To do this, consider the filtrations
\[ 0 = M_0 \cap K \subset M_1 \cap K \subset \dots \subset M_{n-1} \cap K = K
\subset M_{n-1} \]
and
\[ 0 = N_0 \cap K \subset N_1 \cap K \subset \dots \subset N_{l(M)-1} \cap K = K
\subset N_{l(M)-1} . \]
These filtrations have simple or zero quotients by
\rref{simplefiltrationint}. Moreover, $ M_{n-1}/K =
M_{n-1}/(M_{n-1} \cap N_{l(M)-1}) \simeq (M_{n-1} + N_{l(M)-1})/N_{l(M)-1} = M/N_{l(M)-1}$
is simple; here $M_{n-1} + N_{l(M)-1} = M$ because it properly contains $M_{n-1}$
(as $K \subsetneq N_{l(M)-1}$) and $M/M_{n-1}$ is simple. Similarly
$N_{l(M)-1}/K \simeq M/M_{n-1}$ is simple. We can throw out redundancies to eliminate
the zero quotients.
So we get two new filtrations of $M_{n-1}$ and $N_{l(M)-1}$ whose second-to-last
term is $K$.
By the
inductive hypothesis, any two filtrations with simple quotients on either of the proper submodules $M_{n-1},
N_{l(M)-1} $, or on $K$,
have the same length.
In particular, the new filtration of $M_{n-1}$ has length $l(K)+1$, while the old
one has length $n-1$; so $n-1 = l(K)+1$ by
the inductive hypothesis. Similarly $l(M)-1 = l(K)+1$, by comparing the two
filtrations of $N_{l(M)-1}$. Thus $n = l(M)$, which is what we wanted.
\end{enumerate}
\end{proof}
\begin{exercise}
Prove that the successive quotients $M_i/M_{i-1}$ are also determined, up to
permutation and isomorphism.
\end{exercise}
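For example, $\mathbb{Z}/6\mathbb{Z}$ admits the two filtrations
\[ 0 \subset 2\mathbb{Z}/6\mathbb{Z} \subset \mathbb{Z}/6\mathbb{Z}, \qquad
0 \subset 3\mathbb{Z}/6\mathbb{Z} \subset \mathbb{Z}/6\mathbb{Z}, \]
whose successive quotients are $\mathbb{Z}/3\mathbb{Z}, \mathbb{Z}/2\mathbb{Z}$
and $\mathbb{Z}/2\mathbb{Z}, \mathbb{Z}/3\mathbb{Z}$ respectively: the same simple
modules, in a different order.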
|