TITLE: What's an infinite dimensional or function version of a tensor? QUESTION [7 upvotes]: A function $f$ is like an infinite dimensional vector with the norm $\|f\| = \left(\int^b_a f(t)^2 \, \mathrm{d} t\right)^{1/2}$ and dot product $f \cdot g = \int^b_a f(t) g(t) \, \mathrm{d} t $, where appropriate boundaries have to be chosen, or you have to restrict the functions, so that problems with infinities don't pop up. A linear operator, integral transform or its kernel $K$ is like an infinite dimensional matrix with the application operation being $Kf = \int^b_a K(s, t) f(t) \, \mathrm{d} t$. Matrix multiplication is $KL = \int^b_a K(s, t) L(t, u) \, \mathrm{d} t$. Again, you have to be careful to deal with problems with infinities. How do we generalize to the next level? What is the tensor version? In particular, what do tensor contraction and the tensor product mean? I know tensors are composed out of linear combinations of tensor products of vectors (which are functions in this case). But I'm confused about what a linear combination means in this case. Suppose one has a tensor: $$ T = \sum^n_{k=0} c_k (u_k \otimes v_k) \otimes w_k $$ What exactly does a linear combination mean here? REPLY [7 votes]: Any finite-dimensional vector space $V$ is isomorphic to a coordinate vector space $\mathbb{R}^n$, which can be thought of as the vector space of functions $\{1,\cdots,n\}\to \mathbb{R}$. Generalizing this, we can use a different index set than $\{1,\cdots,n\}$; for instance, we can use a continuous interval as an index set. Then a "coordinate vector" whose indices are from $[0,1]$ should be thought of as a function $[0,1]\to\mathbb{R}$. The dot product $f\cdot g=\sum_i f_i g_i$ (I will put all my indices downstairs) for $\{1,\cdots,n\}$-indexed coordinate vectors can then be generalized to $\langle f,g\rangle=\int_0^1 f(x)g(x)\,\mathrm{d}x$.
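The discrete-to-continuous analogy above can also be checked numerically: on a sample grid, the integral inner product becomes a dot product weighted by the grid spacing, and a kernel becomes an ordinary matrix. A minimal sketch (the grid size and the test functions are arbitrary choices of mine, not anything from the question):

```python
import numpy as np

# "Vectors indexed by [0,1]": sample each function on a uniform grid.
n = 1001
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

f = np.sin(np.pi * x)   # f : [0,1] -> R as a coordinate vector
g = x**2

# <f, g> = integral of f(x) g(x) dx  ~  a dot product weighted by dx.
inner = np.dot(f, g) * dx   # analytic value: (pi^2 - 4) / pi^3

# A kernel A(x, y) is an "infinite matrix": the application
# (Ag)(x) = integral of A(x, y) g(y) dy becomes a matrix-vector product.
A = np.outer(x, x)          # separable ("pure tensor") kernel A(x, y) = x*y
Ag = A @ g * dx             # (Ag)(x) = x * integral of y * y^2 dy = x / 4
```

Here the separable kernel $A(x,y)=xy$ plays the role of a pure tensor $u\otimes v$; the grid computation reproduces $(Ag)(x)=x/4$ up to discretization error.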
Similarly, the equation $f=Ag$ (where $f,g$ are column vectors and $A$ is a matrix), written with indices as $f_i=\sum_j a_{ij}g_j$, with integration replacing summation becomes the kernel $f(x)=\int_0^1 A(x,y)g(y)\,\mathrm{d}y$. With this perspective in mind, tensor contraction should generalize in an obvious way. For instance, the tensor contraction $\sum_{i,j,k} a_{ijk}b_{ir}c_{js}e_{kl}$ is now $\iiint a(i,j,k)b(i,r)c(j,s)e(k,l) \,\mathrm{d}i\,\mathrm{d}j\,\mathrm{d}k$, which is a function of $r$, $s$ and $l$. Indeed, the original kind of tensor contraction with a discrete set of indices can be considered a special case, since summation is just integration over a finite measure space. Now let's consider tensor products. If $\mathbb{R}^X$ denotes the vector space of functions $X\to\mathbb{R}$, there is a canonical identification $\mathbb{R}^X\otimes\mathbb{R}^Y\cong\mathbb{R}^{X\times Y}$. There is a bilinear map $\mathbb{R}^X\times\mathbb{R}^Y\to\mathbb{R}^{X\times Y}$ where the pair $(f,g)\in\mathbb{R}^X\times\mathbb{R}^Y$ is sent to the function $X\times Y\to\mathbb{R}$ defined by $h(x,y):=f(x)g(y)$, and this extends to an isomorphism $\mathbb{R}^X\otimes\mathbb{R}^Y\to \mathbb{R}^{X\times Y}$. This works even if $X$ and $Y$ aren't discrete. Pure tensors $u\otimes v$, then, correspond to separable functions $u(x)v(y)$. So then any arbitrary linear combination $\sum_i c_i(u_i\otimes v_i\otimes w_i)$ would correspond to a linear combination $\sum_i c_iu_i(x)v_i(y)w_i(z)$ of separable functions. (For example in differential equations, we sometimes find the separable solutions first, then superpose them to get the whole solution space.)<|endoftext|> TITLE: Deriving the mean of the Gumbel Distribution QUESTION [5 upvotes]: I'm trying to determine an expected value of a random variable related to the Gumbel/Extreme Value Type 1 distribution. 
I think the answer follows the same process as the expected value of the Gumbel itself, but I can't figure out the derivation of the expected value of the Gumbel. I found here a derivation, but there's a step in the middle where magic happens. I need to understand what's going on there to see if I can apply it to my other problem. Recall that the density of the Gumbel distribution is $f(x) = e^{-e^{-x}}e^{-x}$. The derivation at the link shows that $$ \int_{-\infty}^{\infty}x e^{-x} e^{-e^{-x}}dx = - \int_{0}^{\infty}{\ln y}e^{-y}dy\quad [y=e^{-x}]\\ = -\frac{d}{d\alpha}\int_0^\infty y^\alpha e^{-y}dy\bigg|_{\alpha=0}\\ =-\frac{d}{d\alpha}\Gamma(\alpha+1)\bigg|_{\alpha=0}\\ =-\Gamma'(1) = \gamma \approx 0.577... $$ The jump from the first line to the second is the one I can't follow. I've tried doing integration by parts on one or the other to demonstrate the equivalence, but I end up with a floating 1 or infinity. Thanks in advance! REPLY [3 votes]: The step in question employs a trick called "differentiating under the integral." The idea is to introduce a parameter in the integrand, and express the integrand as a derivative with respect to this parameter, then change the order of integration and differentiation (assuming certain regularity conditions hold). Explicitly, suppose we let $f(y,\alpha) = y^\alpha e^{-y}.$ Then $$\frac{\partial f}{\partial \alpha} = y^\alpha \log y \, e^{-y}.$$ Then letting $\alpha = 0$ gives us the integrand in the first line of the solution; hence $$-\int_{y=0}^\infty \log y \, e^{-y} \, dy = -\int_{y=0}^\infty \frac{\partial}{\partial \alpha}\left[y^\alpha e^{-y}\right]_{\alpha = 0} \, dy = -\frac{\partial}{\partial \alpha} \left[\int_{y=0}^\infty y^\alpha e^{-y} \, dy \right]_{\alpha = 0}.$$<|endoftext|> TITLE: Proving a contraction mapping is a Cauchy sequence QUESTION [6 upvotes]: Let $\phi(x):[a,b]\rightarrow [a,b]$ be a continuous function.
Show that if $\phi(x)$ is a contraction mapping on $[a,b]$ then the sequence $\{x^{(k)}\}$ defined by $x^{(k+1)} = \phi(x^{(k)})$ is a Cauchy sequence. Attempted solution - Since $\phi(x)$ is a contraction mapping we have $$|x^{(k+1)} - x^{(k)}| = |\phi(x^{(k)}) - \phi(x^{(k-1)})|\leq L|x^{(k)} - x^{(k-1)}|$$ Applying this idea repeatedly we get $$|x^{(k+1)} - x^{(k)}|\leq L^k|x^{(1)} - x^{(0)}|$$ Now consider the term that must be bounded in order to be a Cauchy sequence \begin{align*} |x^{(m)} - x^{(m+n)}| &= |(x^{(m)} - x^{(m+1)}) + (x^{(m+1)} - x^{(m+2)}) + \ldots + (x^{(m+n-1)} - x^{(m+n)})|\\ &\leq |(x^{(m)} - x^{(m+1)})| + |(x^{(m+1)} - x^{(m+2)})| + \ldots + |(x^{(m+n-1)} - x^{(m+n)})|\\ &\leq (L^m + L^{m+1} + \ldots + L^{m+n-1})|x^{(1)} - x^{(0)}| \end{align*} I am not sure how to proceed and show that for some $M$ we can get this inequality to be less than some $\epsilon$. Any suggestions are greatly appreciated. REPLY [6 votes]: You are very close to a complete proof. All you need is the "final step". Here is your proof, completed with the "final step" (in detail). Let $\phi(x):[a,b]\rightarrow [a,b]$ be a continuous function. Show that if $\phi(x)$ is a contraction mapping on $[a,b]$ then the sequence $\{x^{(k)}\}$ defined by $x^{(k+1)} = \phi(x^{(k)})$ is a Cauchy sequence. Proof - Since $\phi(x)$ is a contraction mapping we have $$|x^{(k+1)} - x^{(k)}| = |\phi(x^{(k)}) - \phi(x^{(k-1)})|\leq L|x^{(k)} - x^{(k-1)}|$$ where $0\leq L< 1$.
It follows by induction (that is, by applying this idea repeatedly) that $$|x^{(k+1)} - x^{(k)}|\leq L^k|x^{(1)} - x^{(0)}|$$ Now consider the term that must be bounded in order to be a Cauchy sequence \begin{align*} |x^{(m)} - x^{(m+n)}| &= |(x^{(m)} - x^{(m+1)}) + (x^{(m+1)} - x^{(m+2)}) + \ldots + (x^{(m+n-1)} - x^{(m+n)})|\\ &\leq |(x^{(m)} - x^{(m+1)})| + |(x^{(m+1)} - x^{(m+2)})| + \ldots + |(x^{(m+n-1)} - x^{(m+n)})|\\ &\leq (L^m + L^{m+1} + \ldots + L^{m+n-1})|x^{(1)} - x^{(0)}|= \\ & =\left( \sum_{k=m}^{m+n-1}L^k \right) |x^{(1)} - x^{(0)}| \leq \left( \sum_{k=m}^{\infty}L^k \right) |x^{(1)} - x^{(0)}| = \\ & = \frac{L^m}{1-L}|x^{(1)} - x^{(0)}| \end{align*} Given $\varepsilon >0$, since $0\leq L <1$, there is $M\in \mathbb{N}$ such that for all $m>M$ and all $n \in \mathbb{N}$, $$|x^{(m)} - x^{(m+n)}| \leq \frac{L^m}{1-L}|x^{(1)} - x^{(0)}| \leq \varepsilon$$ So the sequence $\{x^{(k)}\}$ is a Cauchy sequence.<|endoftext|> TITLE: Feasible point of a system of linear inequalities QUESTION [5 upvotes]: Let $P$ denote the set of $(x,y,z)\in \mathbb R^3$ satisfying the inequalities: $$-2x+y+z\leq 4$$ $$x \geq 1$$ $$y\geq2$$ $$ z \geq 3 $$ $$x-2y+z \leq 1$$ $$ 2x+2y-z \leq 5$$ How do I find an interior point in $P$? Is there a specific method, or should I just try some random combinations and then logically find an interior point? REPLY [2 votes]: You could also use the simplex method to solve the following problem: $$ \min\limits_{x,y,z,e,a}\quad Z= a_1+a_2+a_3 $$ subject to $$-2x+y+z+e_1 =4$$ $$x -e_2+a_1= 1$$ $$y-e_3+a_2=2$$ $$ z-e_4+a_3 = 3 $$ $$x-2y+z +e_5= 1$$ $$ 2x+2y-z +e_6= 5$$ $$ x,y,z,e,a\ge 0 $$ If $\min \{a_1+a_2+a_3\} = 0$, then you have a feasible point (with $a_1=a_2=a_3=0$).<|endoftext|> TITLE: X={1,2,3}. Give a list of topologies on X such that every topology on X is homeomorphic to exactly one on your list. QUESTION [5 upvotes]: I'm teaching myself topology with the aid of a book. I'm trying to do the following problem: Let X={1,2,3}.
Give a list of topologies on X such that every topology on X is homeomorphic to exactly one on your list. I'm not sure if I totally understand what is being asked, but I'm going to attempt to list every topology in groups that are homeomorphic to one another. I want to know if this is correct. (A) trivial topology. $\mathscr{T}=${X,$\varnothing$}; I can't think of anything else that is homeomorphic to this one. (B) "singles" (B1): $\mathscr{T}=${X,$\varnothing$,{1}}; (B2): $\mathscr{T}=${X,$\varnothing$,{2}}; (B3): $\mathscr{T}=${X,$\varnothing$,{3}}; (C) "doubles" (C1): $\mathscr{T}=${X,$\varnothing$,{1,2}}; (C2): $\mathscr{T}=${X,$\varnothing$,{2,3}}; (C3): $\mathscr{T}=${X,$\varnothing$,{3,1}}; (D) "single-doubles" (D1): $\mathscr{T}=${X,$\varnothing$,{1},{1,2}}; (D2): $\mathscr{T}=${X,$\varnothing$,{1},{1,3}}; (D3): $\mathscr{T}=${X,$\varnothing$,{2},{2,1}}; (D4): $\mathscr{T}=${X,$\varnothing$,{2},{2,3}}; (D5): $\mathscr{T}=${X,$\varnothing$,{3},{3,1}}; (D6): $\mathscr{T}=${X,$\varnothing$,{3},{3,2}}; (D') "single-doubles (disjoint)" (D'1): $\mathscr{T}=${X,$\varnothing$,{3},{1,2}}; (D'2): $\mathscr{T}=${X,$\varnothing$,{2},{1,3}}; (D'3): $\mathscr{T}=${X,$\varnothing$,{1},{2,3}}; (E) "single-single-doubles" (E1): $\mathscr{T}=${X,$\varnothing$,{1},{2},{1,2}}; (E2): $\mathscr{T}=${X,$\varnothing$,{1},{3},{1,3}}; (E3): $\mathscr{T}=${X,$\varnothing$,{2},{3},{2,3}}; (F) "single-double-doubles" (F1): $\mathscr{T}=${X,$\varnothing$,{1},{1,2},{1,3}}; (F2): $\mathscr{T}=${X,$\varnothing$,{2},{2,1},{3,2}}; (F3): $\mathscr{T}=${X,$\varnothing$,{3},{3,2},{3,1}}; (G) "single-single-double-doubles" (G1): $\mathscr{T}=${X,$\varnothing$,{1},{2},{1,2},{2,3}}; (G2): $\mathscr{T}=${X,$\varnothing$,{1},{2},{1,2},{3,1}}; (G3): $\mathscr{T}=${X,$\varnothing$,{1},{3},{1,2},{3,1}}; (G4): $\mathscr{T}=${X,$\varnothing$,{1},{3},{2,3},{3,1}}; (G5): $\mathscr{T}=${X,$\varnothing$,{2},{3},{2,3},{3,1}}; (G6): $\mathscr{T}=${X,$\varnothing$,{2},{3},{1,2},{2,3}}; (H) power set:
$\mathscr{T}=${X,$\varnothing$,{1}, {2},{3},{1,2},{2,3},{3,1}}; I can't think of anything else that is homeomorphic to this one. Is this a complete list of all topologies on X? REPLY [6 votes]: You’re missing the ones homeomorphic to $\big\{\varnothing,X,\{1\},\{2,3\}\big\}$; there are $3$ of those. Also, your (E) group lists one topology twice: (E1) and (E3) are the same. The question wants you to list one topology from each of the $9$ groups (including the group that I just added).<|endoftext|> TITLE: Any undirected graph on 9 vertices with minimum degree at least 5 contains a subgraph $K_4$? QUESTION [5 upvotes]: Let $G$ be a simple undirected graph on $9$ vertices in which every vertex has degree at least $5$. Prove or disprove that $G$ contains $K_4$ as a subgraph. I came up with this question when I was trying to find the Ramsey number $R(4,3)$. I think my conjecture is correct, but I am unable to prove it. If anyone has any idea, please share it with me. Thank you in advance! REPLY [2 votes]: If $\chi(G)=3$, the graph cannot contain any $K_4$ as a subgraph: The above graph is just a $K_9$ with only $9$ edges removed, in particular a $K_{3,3,3}$, with chromatic number $3$, as clear from the picture (no pair of vertices with the same colour is joined by an edge, but the graph has plenty of embedded triangles). On the other hand, it is not difficult to show that any graph on $9$ vertices with more than $27$ edges has a subgraph isomorphic to $K_4$. Quite disappointing and not even surprising, since a graph fulfilling such constraints is full of triangles and almost complete: REPLY [2 votes]: Clearly, since the sum of degrees across the 9 vertices must be even, they cannot all have degree exactly $5$, so there must be a vertex with degree at least $6$, meaning that at most $2$ vertices are not adjacent to it. Choose the vertex of highest degree (degree $6$ or higher) and label it as $A$, and the vertices connected to it, the "linked set", as $\{B,C,D,E,F,G\}$.
Each of these must make at least $4$ further links in addition to the link to $A$, so in each case at least $2$ of them are within the linked set. So this gives 6 vertices and a minimum of 6 edges in the linked set, which can be connected as a ring to avoid triangles. Then the other two vertices, $\{H,I\}$, can be connected to each point on the ring (but not to each other) to avoid any $K_4$. A diagram of this:<|endoftext|> TITLE: Why does $(128)!$ equal the product of these binomial coefficients $128! = \binom{128}{64}\binom{64}{32}^2 \dots \binom21^{64}$? QUESTION [9 upvotes]: I'm working through some combinatorics practice sets and found the following problem that I can't make heads or tails of. It asks to prove the following: $$128! = \binom{128}{64}\binom{64}{32}^2\binom{32}{16}^4\binom{16}8^8\binom 84^{16}\binom 42^{32}\binom{2}{1}^{64}$$ Weird, huh? The first thing I noticed is that the exponents mirror the $r$ variables. I would normally just re-express each statement in $\frac{n!}{(n-r)!r!}$ form, but the exponents throw me for a loop. Are there any intuitions about factorials or nCr I should be considering here? REPLY [10 votes]: Divide $128$ items in half, and assign one half a $1$ bit in the first digit and the other a $0$ bit. Then divide each half in half again, and in each half assign one half a $1$ bit in the second digit and the other a $0$ bit. Continue until the halves consist of single elements. Now each element has been assigned a binary number from $0$ to $127$. The left-hand side counts the number of ways of assigning the numbers, and the right-hand side counts the number of ways of performing the subdivisions.<|endoftext|> TITLE: Linear transformation $T$ such that for every extension $\overline{T}$, $\|\overline{T}\|>\|T\|$. QUESTION [8 upvotes]: Let $E$ and $F$ be normed spaces such that $\dim F < \infty$, $G$ a subspace of $E$ and $T:G\rightarrow F$ a continuous linear map.
I know that there exists a continuous linear extension $\overline{T}:E\rightarrow F$. Also, if $E$ is a Hilbert space, then $\overline{T}$ can be chosen in such a way that $\|\overline{T}\|=\|T\|$. Problem: Find an example of $E$, $F$, $G$ and $T$ (like above) such that every continuous linear extension $\overline{T}$ has a greater norm, i.e. $\|\overline{T}\|>\|T\|$. Now, $F$ must be at least a 2-dimensional space, otherwise I could use Hahn-Banach to find an extension with equal norm. My professor told me it could be done with $E$ of finite dimension. Of course, I tried to come up with an example of $E$ with a norm that doesn't satisfy the parallelogram law. For example, $E=\left(\mathbb{R}^3,\|\cdot\|_{\infty}\right)$ and $F=\left(\mathbb{R}^2,\|\cdot\|_1\right)$. But I couldn't prove that it works with any example I tried using those spaces. Can somebody help me to find an example and verify that it really has that property? EDIT: Apparently, it can't be done with $E$ of finite dimension nor with $F$ equipped with the $\sup$ norm, as @Hamza proved below. REPLY [4 votes]: Let $E=\mathbb R^3$, equipped with the sup norm, namely $$ \|(x,y,z)\|_\infty = \max\{|x|, |y|, |z| \}, $$ and let $$ F=G=\{(x, y, z)\in E : x+y+z=0\}, $$ equipped with the induced norm from $E$. Also let $$ T:G\to F $$ be the identity function. Then clearly $\|T\|=1$, but any extension $\bar T:E\to F$ has norm strictly bigger than one. The reason is that any such $\bar T$ is necessarily a projection from $E$ to $G$ and there is no such projection with norm $1$. The best way to convince oneself of this fact is to make a cardboard model of this cube, cut it along the red line, place one of the two halves on top of the table with the red hexagon down, and attempt to shine a flashlight so that the shadow is restricted to within the hexagon. It is impossible! This is based on another answer I recently gave to this question.<|endoftext|> TITLE: How can I rewrite recursive function as single formula?
QUESTION [7 upvotes]: There is the following recursive function $$ \begin{equation} a_n= \begin{cases} -1, & \text{if}\ n = 0 \\ 1, & \text{if}\ n = 1\\ 10a_{n-1}-21a_{n-2}, & \text{if}\ n \geq 2 \end{cases} \end{equation} $$ I know this can be rewritten as $$ a_n=7^n-2\cdot3^n $$ But how can I reach that statement? I found this problem on some particular website. My skills are not enough to solve such things. Someone told me I have to read about generating functions, but it didn't help me. I would be thankful if someone explained it to me. REPLY [4 votes]: This is a homogeneous linear recurrence relation with constant coefficients. From $$ a_n = 10 a_{n-1} -21 a_{n-2} $$ you can infer the order $d=2$ and the characteristic polynomial: $$ p(t) = t^2 - 10 t + 21 $$ Calculating the roots: $$ 0 = p(t) = (t - 5)^2 - 25 + 21 \iff t = 5 \pm 2 $$ This gives the general solution $$ a_n = k_1 3^n + k_2 7^n $$ The two constants have to be determined from two initial conditions: $$ -1 = a_0 = k_1 + k_2 \\ 1 = a_1 = 3 k_1 + 7 k_2 $$ This leads to $4 = 4 k_2$ or $k_2 = 1$ and thus $k_1 = -2$. So we get $$ a_n = -2 \cdot 3^n + 7^n $$<|endoftext|> TITLE: Prove that Standard Deviation is always $\geq$ Mean Absolute Deviation QUESTION [5 upvotes]: Where $$s = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2}$$ and $$ M = \frac{1}{n} \sum_{i=1}^{n} |x_i - \bar{x}|$$ I came up with a sketchy proof for the case of $2$ values, but I would like a way to generalize (my "proof" unfortunately doesn't, as far as I can tell). Proof for $2$ values (I would appreciate feedback on this as well): $$\frac{1}{\sqrt{2}} \sqrt{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2} \geq \frac{1}{2} (|x_1- \bar{x}| + |x_2- \bar{x}|)$$ Now let $|x_1- \bar{x}| = a$ and $|x_2- \bar{x}| = b$ be the $2$ legs of a right triangle and $\sqrt{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2} = c$ its hypotenuse. And let $\theta$ be the angle between $c$ and either $a$ or $b$.
Then $\sin{\theta} + \cos{\theta} = \frac{a}{c} + \frac{b}{c} = \frac{a+b}{c} = \sqrt{2}\,\cdot\frac{\frac{1}{2} (|x_1- \bar{x}| + |x_2- \bar{x}|)}{\frac{1}{\sqrt{2}} \sqrt{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2}}$, so the claimed inequality is equivalent to $$ \sqrt{2} \geq \sin{\theta} + \cos{\theta}$$ And we know that $\max(\sin{\theta} + \cos{\theta}) = \sqrt{2}$. QED? I have no idea how to prove the general case, though. REPLY [5 votes]: Let $y_i = x_i - \bar{x}$; then by Cauchy-Schwarz we have the following: $\sum_{i=1}^n |y_i| \frac{1}{n} \leq \|y\|_2 \sqrt{\sum_{i=1}^n \frac{1}{n^2}} = \sqrt{\frac{1}{n}\sum_{i=1}^n |x_i - \bar{x}|^2}$<|endoftext|> TITLE: Does associativity imply closure? QUESTION [6 upvotes]: Does associativity of a binary operation imply closure under this operation? Sometimes definitions of semigroup, group or vector space omit the axiom of closure under the corresponding operations and sometimes they don't. One of the arguments for omitting the axiom that I found is that associativity implies closure. As a possible proof, let + be a binary operation on a set A. Assume a, b and c are elements of A. Also assume (b + c) is in the set but (a + b) is not. Then a + (b + c) is well-defined (even though the result can be outside the set). However, if we assume that + is associative, we will get: a + (b + c) = (a + b) + c But that is not true, because the second "addition" on the right-hand side is not defined, since (a + b) is outside the set. So for associativity, (a + b) must be in the set. Does this argument make sense? Is it true? UPDATE: In the possible proof above, the error is in the first assumption. If + is a binary operation on A and a and b are in A, then (a + b) must be in the set by the definition of a binary operation (A x A -> A). REPLY [4 votes]: Associativity does not imply closure - both are characteristics required of a set to form a group.
Both (usually) need to be verified to show that a set forms a group under said binary operation, although it is the definition of the binary operation that usually implies the closure property.<|endoftext|> TITLE: Polynomial ring with arbitrarily many variables in ZF QUESTION [14 upvotes]: For a given field $k$ and a set $X$ we want to define the ring $k[X]$ of polynomials with $X$ as the set of variables. We do not assume $X$ to be finite. And we want to do this without employing the axiom of choice. Informally, the elements of $k[X]$ will be finite sums of monomials of the form $cx_1^{k_1}\dots x_n^{k_n}$, where each monomial is determined by a coefficient $c\in k$, finitely many elements $x_1,\dots,x_n\in X$ and the exponents $k_1,\dots,k_n$, which are positive integers. Addition and multiplication of polynomials from $k[X]$ will be defined in the natural way. However, we also should be able to describe this algebraic structure more formally. Especially if we are trying to use it in some proof in the axiomatic system ZF. In this case it is also important to check that we have not used AC anywhere in the proof. (Using the Axiom of Choice can easily be overlooked, especially if somebody is used to working in ZFC rather than ZF, i.e., without the restriction that AC should be avoided.) To explain a bit better what I mean, this is similar to defining the polynomial ring $k[x]$ of polynomials in a single variable $x$. Informally, we view polynomials as expressions of the form $a_nx^n+\dots+a_1x+a_0$ (with $a_i\in k$). And we will also write them in this way. But formally they are sequences of elements of $k$ with finite support. I will also provide below a suggestion of how to construct $k[X]$ in ZF. I would be interested in any comments on my approach, but also if there are different ways to do this, I'd be glad to hear about them. This cropped up in a discussion with some colleagues of mine.
Transfinite induction and direct limit One colleague suggested the following approach, which clearly uses AC (in the form of the well-ordering theorem). But he said that this is the construction of $k[X]$ which seems the most natural to him. We take any well-ordering of the set $X=\{x_\beta; \beta<\alpha\}$. By transfinite induction we define rings $k_\beta$ for $\beta\le\alpha$ and also embeddings $k_\beta \hookrightarrow k_{\beta'}$ for any $\beta<\beta'<\alpha$. The ring $k_\beta$ is supposed to represent the polynomials using only variables $x_\gamma$ for $\gamma\le\beta$. We put $k_0=k[x_0]$. Similarly, if $\beta$ is a successor ordinal we can define $k_\beta=k_{\beta-1}[x_\beta]$. If $\beta$ is a limit ordinal, then we can take $k_\beta$ as the direct limit of $k_\gamma$, $\gamma<\beta$. Then the ring $k_\alpha$ is the $k[X]$ which we wanted to construct. It is not immediately clear to me whether the proof can be simplified in the way that the direct limit can be replaced by a union. However, I do not consider this to be an important difference, since using a direct limit (especially in such a simple case, with a linear order and embeddings) seems to me to be a rather standard approach for this type of construction. And anybody with enough mathematical maturity to study proofs of this level will probably not have a problem with the notion of direct limit. The fact that this is indeed a ring (or even an integral domain) follows from the fact that these properties are preserved by this simple version of direct limits. (I.e., a direct limit based on a linearly ordered system of rings with embeddings between them. This does not differ substantially from the proof that a union of a chain of rings is a ring.) Functions with finite support I have suggested this approach, which is more closely modeled after the case of the ring in a single variable. Unless I missed something, this can be done in ZF, i.e., without use of ZFC.
Let us first try to define the set $M$ of all monomials of the form $x_1^{k_1}\dots x_n^{k_n}$. (I.e., the monomials with the coefficient $1$.) Every such monomial is uniquely determined by a finite subset $F\subseteq X$ and a function $g: F\to\mathbb N$, where $\mathbb N=\{1,2,\dots\}$. Or, if you will, $\mathbb N=\omega\setminus\{0\}$. (Since we are talking about finite sets, it might be worth mentioning that there are several notions of finite set in ZF. We take the standard one, which is sometimes called Tarski-finite or Kuratowski-finite. This notion of finiteness is well behaved. For our purposes it is important to know that a union of a finite set of finite sets is again finite, and the same is true for a Cartesian product.) So we can get $M$ as the set of all pairs $(F,g)$ with the properties described above. Existence of such a set can be proved in ZF in a rather straightforward manner. (All properties of $F$ and $g$ can be described by a formula in the language of set theory. Clearly $F\in\mathcal P(X)$. Or we can use the set $\mathcal P^{<\omega}(X)$ of finite subsets of $X$ instead. The function $g$ belongs to the set of all functions from such $F$'s to $\mathbb N$. For each $F$ we have the set $\mathbb N^F$ consisting of all functions $F\to\mathbb N$. Then we can simply take the union $G=\bigcup\limits_{F\in\mathcal P(X)} \mathbb N^F$, based on the axiom of union. Then we use the axiom schema of specification to get only those pairs from $\mathcal P(X)\times G$ which have the required properties.) Now we have the set $M$. We want to model somehow the finite sums of elements from $M$ multiplied by coefficients from $k$. To this end we simply take the functions from $M$ to $k$ with finite support. So far we have only defined the underlying set $k[X]$. We still need to define addition and multiplication and verify that this is an integral domain.
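To make the encoding above concrete, here is a small Python sketch of monomials as the pairs $(F,g)$ and of polynomials as finite-support functions $M\to k$. The representation choices (frozensets of variable-exponent pairs, dicts for the finite support) are mine, purely for illustration, and not part of the set-theoretic construction itself:

```python
# A monomial (F, g): a finite F of variables with exponents g : F -> {1, 2, ...},
# encoded as a hashable frozenset of (variable, exponent) pairs.
def monomial(**exps):
    """monomial(x=2, y=1) stands for x^2 * y; exponents must be >= 1."""
    assert all(k >= 1 for k in exps.values())
    return frozenset(exps.items())

# A polynomial: a finite-support function M -> k, i.e. a dict
# mapping monomials to nonzero coefficients.
def poly_add(p, q):
    """Pointwise addition of coefficients, dropping zeros so the support stays finite."""
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) + c
        if out[m] == 0:
            del out[m]
    return out

p = {monomial(x=2): 3, monomial(y=1): 1}   # 3x^2 + y
q = {monomial(y=1): -1, monomial(): 5}     # -y + 5  (the empty monomial is 1)
# poly_add(p, q) represents 3x^2 + 5
```

Note that dropping zero coefficients is exactly what keeps the support finite, mirroring the requirement that elements of $k[X]$ be finite sums.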
However, any polynomial $p\in k[X]$ only uses finitely many variables, since it is a finite sum of monomials and each of them contains only finitely many variables. If we are verifying closure under addition or multiplication, or some property of an integral domain such as associativity or distributivity, then any such condition involves only finitely many polynomials and thus only finitely many variables. So we can look at this condition as a property of polynomials in $k[F]$, where $F$ is some finite subset of $X$. Assuming we already know that a polynomial ring in finitely many variables over a field $k$ is an integral domain, this argument can be used to show that $k[X]$ is an integral domain, too. The above discussion occurred in connection with the proof of Andreas Blass' result that the existence of a Hamel basis for every vector space over an arbitrary field implies the Axiom of Choice. This proof can be found for example in the references below. It is also briefly described in this answer. In this proof the polynomials from $k[X]$ are used. (Then the field $k(X)$ of all rational functions in variables from $X$ is created - in other words, the quotient field of $k[X]$. And the proof then uses the existence of a Hamel basis of $k(X)$ considered as a vector space over a particular subfield of $k(X)$.) Unless I missed something, the proofs given there do not discuss whether $k[X]$ can be constructed without AC, which suggests that the authors considered this point to be simple enough to be filled in by the reader. So I assume that the proof of this fact should not be too difficult. (Of course, if you know of another reference for a proof of this result which also discusses this issue, I'd be glad to learn about it.) A. Blass: Existence of bases implies the axiom of choice. Contemporary Mathematics, 31:31–33, 1984. Available on the author's website. Theorem 5.4 in L. Halbeisen: Combinatorial Set Theory, Springer, 2012. The book is freely available on the author's website. Theorem 4.44 in H.
Herrlich Axiom of choice, Springer, 2006, (Lecture Notes in Mathematics 1876). There are these related questions: Polynomial ring with uncountable indeterminates. Polynomial ring indexed by an arbitrary set. The answers given there can be considered somewhat similar to the approach I suggested above. However, it is not discussed there whether AC was used somewhere in this construction. REPLY [10 votes]: Your "functions with finite support" approach is the standard approach to take to this, and is very likely what any author who considers the question trivial has in mind. The correct notion of "finite" to take when defining monomials is the usual one, as you have done (this is necessary for the polynomial ring you get to be the free commutative $k$-algebra on its variables). Just to give some evidence that this approach is standard, this is the definition of $k[X]$ used in Lang's Algebra, for instance. (Lang describes this construction rather tersely on page 106 as an example of the more general monoid algebra construction described on pages 104-5. Lang's construction of the set $M$ of monomials is a little different from yours: he defines it as a subset of the free abelian group on $X$, which he constructs as the group of finite-support functions $X\to\mathbb{Z}$ on page 38.) Note that it is not actually necessary to observe that every polynomial involves only finitely many variables to define or verify the properties of addition and multiplication. Indeed, addition can just be defined pointwise on the coefficients and makes sense for arbitrary functions from $M$ to $k$. Multiplication also makes sense for arbitrary functions from $M$ to $k$ (which you can think of as formal power series): you just need to define the coefficient of a given monomial $x_1^{k_1}\dots x_n^{k_n}$ in a product, and you can do this by the usual convolution formula. 
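The convolution formula just mentioned can also be sketched in code, in a toy model of my own (monomials encoded as frozensets of variable-exponent pairs, polynomials as finite-support dicts): multiplying monomials adds exponents, and the coefficient of a monomial $m$ in $p\cdot q$ sums $p(m_1)q(m_2)$ over the finitely many factorizations $m=m_1m_2$ arising from the two supports.

```python
from collections import Counter

def mono_mul(m1, m2):
    """Multiply monomials by adding exponents variable-by-variable."""
    e = Counter(dict(m1))
    e.update(dict(m2))          # Counter.update adds counts
    return frozenset(e.items())

def poly_mul(p, q):
    """Convolution product of two finite-support coefficient functions."""
    out = {}
    for m1, c1 in p.items():     # finitely many terms in each factor,
        for m2, c2 in q.items(): # so the product has finite support too
            m = mono_mul(m1, m2)
            out[m] = out.get(m, 0) + c1 * c2
    return {m: c for m, c in out.items() if c != 0}

# (x + y) * (x - y) = x^2 - y^2
x, y = frozenset({("x", 1)}), frozenset({("y", 1)})
prod = poly_mul({x: 1, y: 1}, {x: 1, y: -1})
```

The double loop over the two supports makes the finite-support claim visible: a product of two finite sums is again a finite sum, with cancelling terms discarded.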
You then just have to check that if two functions $M\to k$ each have finite support, then so does their product; this is straightforward (a monomial in the support of the product must be the product of monomials in the support of each of the factors). The verification of the basic properties of multiplication then works exactly as it does in the case of finitely many variables.<|endoftext|> TITLE: Number of solutions to this nice equation $\varphi(n)+\tau(n^2)=n$ QUESTION [5 upvotes]: How many natural numbers $n$ satisfy the equation$$\varphi(n)+\tau(n^2)=n$$where $\varphi$ is the Euler's totient function and $\tau$ is the divisor function i.e. number of divisors of an integer. I made this equation and I think it is not hard. I haven't solved this completely yet, so I want you to work on this along with me. I'd love to see your solutions! REPLY [2 votes]: Hint: You can show that $n$ is odd and has at most two (distinct) prime factors. Then, it follows that $n=21$ and $n=25$ are the only solutions. Further Hint: Suppose $n=p^aq^br^cm$ where $p,q,r,a,b,c\in\mathbb{N}$ with $p,q,r$ being pairwise distinct primes, and $m\in\mathbb{N}$ is not divisible by $p$, $q$, or $r$. Prove that $$n-\phi(n)> n\left(\frac{1}{2p}+\frac{1}{q}+\frac{1}{r}\right)>\tau\left(n^2\right)\,,$$ provided that $p,q,r$ are the smallest primes dividing $n$ with $2