The G/Z Theorem

Theorem: Let G be a group and let Z(G) be the center of G. If G/Z(G) is cyclic, then G is Abelian.

Proof: Suppose that G/Z(G) is cyclic. Then there is some aZ(G)\in G/Z(G) such that \left<aZ(G)\right>=G/Z(G).

Let x,y\in G. We wish to show that xy=yx. Since xZ(G),yZ(G)\in G/Z(G)=\left<aZ(G)\right>, there exist integers j,k such that xZ(G)=(aZ(G))^j and yZ(G)=(aZ(G))^k. So there exist z_1,z_2\in Z(G) such that x=a^jz_1 and y=a^kz_2. Now consider their product:

xy=(a^jz_1)(a^kz_2)

=a^j(z_1a^k)z_2

=(a^ja^k)(z_1z_2)

=(a^ka^j)(z_2z_1)

=(a^kz_2)(a^jz_1)=yx

Note that in passing from the second line to the third above we are able to commute a^k and z_1 since z_1\in Z(G). From the third line to the fourth we use that powers of a commute with one another and that z_1 and z_2, being central, commute with one another; and from the fourth line to the fifth we similarly commute a^j past z_2 since z_2\in Z(G).

Thus, we have shown that G is Abelian.

\Box

Reflection: It is essential that the subgroup we quotient by is the center of G; the whole argument rests on being able to commute z_1 and z_2 past the powers of a, which is exactly what membership in Z(G) allows. Beyond that, the proof followed mainly from definitions and a helpful manipulation of coset representatives.
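As a quick sanity check of the contrapositive (a sketch of my own, not part of the original proof): the dihedral group D_4 of order 8 is non-Abelian, so the theorem forces D_4/Z(D_4) to be non-cyclic. The Python sketch below encodes D_4 as vertex permutations of a square, computes the center, and confirms that every coset of Z(D_4) squares to the identity coset, so the order-4 quotient has no generator.

```python
def compose(p, q):
    # (p o q)(i) = p(q(i)) for permutations of the square's vertices 0, 1, 2, 3
    return tuple(p[q[i]] for i in range(4))

e = (0, 1, 2, 3)
r = (1, 2, 3, 0)                                     # rotation by 90 degrees
s = (3, 2, 1, 0)                                     # a reflection
rot = [e, r, compose(r, r), compose(r, compose(r, r))]
G = rot + [compose(p, s) for p in rot]               # the 8 elements of D_4

# Z(G): the elements commuting with every element of G
Z = [z for z in G if all(compose(z, g) == compose(g, z) for g in G)]

def coset(g):
    return frozenset(compose(g, z) for z in Z)

print(len(Z), len({coset(g) for g in G}))            # 2 4: |Z(D_4)| = 2, |G/Z(G)| = 4
print(all(coset(compose(g, g)) == coset(e) for g in G))  # True: every coset has order at most 2
```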


Gallian: Ch. 8 #11

Problem Statement: How many elements of order 4 does \mathbb{Z}_4\oplus\mathbb{Z}_4 have? Explain why \mathbb{Z}_4\oplus\mathbb{Z}_4 has the same number of elements of order 4 as \mathbb{Z}_{8000000}\oplus\mathbb{Z}_{400000}. Generalize to the case of \mathbb{Z}_m\oplus\mathbb{Z}_n.

Solution: First let’s find how many elements there are of order 4 in \mathbb{Z}_4\oplus\mathbb{Z}_4. An element (a, b)\in\mathbb{Z}_4\oplus\mathbb{Z}_4 has order 4 if and only if lcm(\left|a\right|,\left|b\right|)=4. This results in five possible cases we need to consider:

First: If \left|a\right|=1, \left|b\right|=4. Then there is one choice for a since \phi(1)=1 and there are two choices for b since \phi(4)=2. So we have a total of 2 possible (a,b).

Second: If \left|a\right|=4, \left|b\right|=4. Then there are two choices for a and there are two choices for b since \phi(4)=2. So we have a total of 4 possible (a,b).

Third: If \left|a\right|=2, \left|b\right|=4. Then there is one choice for a since \phi(2)=1 and there are two choices for b since \phi(4)=2. So we have a total of 2 possible (a,b).

Fourth: If \left|a\right|=4, \left|b\right|=2. Then there are two choices for a and there is one choice for b. So we have a total of 2 possible (a,b).

Fifth: If \left|a\right|=4, \left|b\right|=1. Then there are two choices for a and one choice for b. So we have a total of 2 possible (a,b).

Now, adding up all these possible (a,b) we see that there are 12 elements of order 4 in \mathbb{Z}_4\oplus\mathbb{Z}_4.

\mathbb{Z}_4\oplus\mathbb{Z}_4 has the same number of elements of order 4 as \mathbb{Z}_{8000000}\oplus\mathbb{Z}_{400000} because the count depends only on how many elements of each order dividing 4 sit in each cyclic factor, and a cyclic group of order n contains exactly \phi(d) elements of order d for each divisor d of n (and none when d does not divide n). Since 4 divides 4, 8000000, and 400000, every divisor of 4 divides the orders of all of these groups, and so the case-by-case count outlined above is exactly the same for \mathbb{Z}_{8000000}\oplus\mathbb{Z}_{400000}.

In general, the number of elements of order d in \mathbb{Z}_m\oplus\mathbb{Z}_n is the same for every choice of m and n so long as d divides both m and n.
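The claim is easy to spot-check by brute force (a sketch of my own, not part of the original solution; the helper names are mine). For any m and n that 4 divides, the count of elements of order 4 comes out to 12:

```python
from math import gcd, lcm   # math.lcm needs Python 3.9+

def order_in_Zn(a, n):
    # order of a in the cyclic group Z_n under addition
    return n // gcd(a, n)

def count_of_order(m, n, d):
    # number of pairs (a, b) in Z_m (+) Z_n whose order lcm(|a|, |b|) equals d
    return sum(1 for a in range(m) for b in range(n)
               if lcm(order_in_Zn(a, m), order_in_Zn(b, n)) == d)

for m, n in [(4, 4), (8, 4), (12, 20), (16, 8)]:     # 4 divides every m and n here
    print(m, n, count_of_order(m, n, 4))             # 12 each time
```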

Reflection: One more key thing to note is that this counting argument relies on the factors of the direct product being cyclic, since it uses the fact that a cyclic group of order n has exactly \phi(d) elements of order d for each divisor d of n. Another way one could approach this problem is by writing out the elements of the group; since we are only dealing with 16 elements that is not hard to do, but for larger groups, like \mathbb{Z}_{8000000}\oplus\mathbb{Z}_{400000}, it is impractical. This method is also helpful when trying to count cyclic subgroups of order d. Suppose we wanted to know how many cyclic subgroups of order 4 there are in \mathbb{Z}_4\oplus\mathbb{Z}_4. First we find how many elements of order 4 there are (we have shown above that there are 12) and then divide that number by \phi(4)=2, since each cyclic subgroup of order 4 contains exactly \phi(4) generators; this gives 6 such subgroups. In general, to count the cyclic subgroups of order d you first count the elements of order d and then divide by \phi(d), as in the sketch below. Again, you could do this by writing out all of the elements, but that is far more time-consuming for large groups.
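For the subgroup count just mentioned, a similar check (again my own sketch) confirms the arithmetic 12/\phi(4)=6: generate the cyclic subgroup \left<(a,b)\right> for each element of order 4 in \mathbb{Z}_4\oplus\mathbb{Z}_4 and discard duplicates.

```python
from math import gcd, lcm

def generated(a, b, m=4, n=4):
    # the cyclic subgroup <(a, b)> of Z_m (+) Z_n
    return frozenset(((k * a) % m, (k * b) % n) for k in range(lcm(m, n)))

elements = [(a, b) for a in range(4) for b in range(4)
            if lcm(4 // gcd(a, 4), 4 // gcd(b, 4)) == 4]
subgroups = {generated(a, b) for (a, b) in elements}

print(len(elements), len(subgroups))   # 12 6: twelve elements of order 4, six cyclic subgroups
```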


Gallian: Ch. 3 #35

Problem Statement: Prove that a group of even order must have an element of order 2.

Proof: Let G be a group such that \left|G\right|=2k and consider the set S=\{g\in G:g\neq g^{-1}\}. That is, S is the set of all elements of G that are not equal to their own inverse, i.e. the elements of order greater than 2.

I claim that \left|S\right| is even. Let s\in S. Then s\neq s^{-1}, and since s=(s^{-1})^{-1} we also have s^{-1}\neq(s^{-1})^{-1}, so s^{-1}\in S as well. So the elements of S pair off into disjoint two-element sets \{s,s^{-1}\}, and thus \left|S\right| is even. Let \left|S\right|=2j.

Note that any element of G is either in S or in G-S, by construction. Furthermore, S\cap(G-S)=\varnothing and S\cup(G-S)=G, and so it follows that \left|G\right|=\left|S\right|+\left|G-S\right|. This implies that 2k=2j+\left|G-S\right|, and so \left|G-S\right|=2k-2j must be even. Let \left|G-S\right|=2m.

Since e=e^{-1}, the identity belongs to G-S, so \left|G-S\right|\geq 1; being even, \left|G-S\right|\geq 2, that is, m\geq 1. Write G-S=\{e, b_2, b_3, \dots, b_{2m}\}. Since m\geq 1 it follows that there is always some b\in G-S that is not the identity element. Since b\in G-S it follows that b=b^{-1}, which implies that b^2=e, and since b\neq e we conclude that \left|b\right|=2.

Thus, any group of even order must have at least one element of order 2.

\Box

Reflection: The key to this proof was breaking the group into two subsets: the elements that equal their own inverse (the identity together with the elements of order 2) and those that do not. The other major idea is that every element is the inverse of its inverse, i.e. g=(g^{-1})^{-1}. This forces our set S to have even order, which in turn forces the set we are interested in, G-S, to have even order. So, even though we only know for certain that the identity is in G-S, the evenness guarantees at least one other, non-identity element in G-S as well, and any such element has order 2.
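A concrete illustration (my own sketch, not part of the original proof): take G=S_4, a group of even order 24, encode its elements as permutation tuples, and split off the set S of elements that differ from their inverses. What is left over is the identity together with the elements of order 2, and its size is forced to be even.

```python
from itertools import permutations

def inverse(p):
    # inverse of a permutation given in one-line notation
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(4)))        # S_4, a group of even order 24
identity = tuple(range(4))

S = [g for g in G if g != inverse(g)]   # these pair off with their (distinct) inverses
order_two = [g for g in G if g == inverse(g) and g != identity]

print(len(G), len(S), len(order_two))   # 24 14 9: |S| is even, so |G - S| is even too,
                                        # and G - S holds a non-identity element of order 2
```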


Rudin: Ch. 5 #4

Problem Statement: If c_0+\dfrac{c_1}{2}+\dfrac{c_2}{3}+\dots+\dfrac{c_{n-1}}{n}+\dfrac{c_n}{n+1}=0 where c_0,\dots,c_n are real constants, prove that c_0+c_1x+c_2x^2+\dots+c_nx^n=0 has at least one real root between 0 and 1.

Proof: Consider the polynomial p(x)=c_0x+\dfrac{c_1}{2}x^2+\dfrac{c_2}{3}x^3+\dots+\dfrac{c_n}{n+1}x^{n+1} on [0,1]. Then p(0)=0 and p(1)=c_0+\dfrac{c_1}{2}+\dots+\dfrac{c_n}{n+1}, which equals 0 by our assumption. Furthermore, since p(x) is a polynomial with real coefficients it is continuous on [0,1] and differentiable on (0,1), so we may apply the Mean Value Theorem to p on the interval [0,1]. Applying the MVT we see that there exists at least one point c\in(0,1) such that

p'(c)=\dfrac{p(1)-p(0)}{1-0}=\dfrac{0-0}{1}=0

So there is some c\in(0,1) such that p'(c)=0. But we may compute p'(x) since we know p(x).

p'(x)=c_0+c_1x+c_2x^2+\dots+c_nx^n

Thus, we have shown that c_0+c_1x+c_2x^2+\dots+c_nx^n=0 has at least one real root, namely c, in the open interval (0,1).

\Box

Reflection: What a nice, sweet, and simple proof. I have to admit, when I first saw this problem I had no clue what to do. The trick came in remembering that I was in the chapter on differentiation and the MVT, but that won’t happen on the qual… After completing the proof I understand a bit better how I would “see” it in the future. Once I noticed that each denominator is one more than the index of its numerator (c_k sits over k+1), I started to see that an antiderivative could be helpful here. I think the hardest part of this proof is realizing that this is one of those times where you want a “helpful function”. After writing out p(x) and recognizing that p(1)=0=p(0), the rest fell out.
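For a concrete instance (my own example, not part of the problem): take n=1 with c_0=1 and c_1=-2, so that c_0+\dfrac{c_1}{2}=0. Then p(x)=x-x^2 satisfies p(0)=p(1)=0, and p'(x)=1-2x vanishes at x=\dfrac{1}{2}, which is exactly the root of c_0+c_1x=1-2x in (0,1).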


Suz and Mike had a question

Problem Statement: If f(x) is bounded with finitely many discontinuities on [a,b] then f is Riemann Integrable on [a,b].

Proof: Let N be the number of discontinuities of f on [a,b] and let M\in\mathbb{R} be such that \left|f(x)\right|<M for every x\in [a,b]. We know such an M exists since f is bounded. Since f is bounded we will be using the sup-inf definition of Riemann Integrable.

Denote the discontinuities by d_1<d_2<\dots<d_N. Let d=min\{\dfrac{\left|d_j-d_k\right|}{3}: j,k\in[1,N] and j\neq k\}. Let I_1=[a,d_1-d], I_2=[d_1+d,d_2-d],\dots, I_{N+1}=[d_N+d,b]. By construction there are no discontinuities in any I_j, and so f is Riemann Integrable on each I_j.

Let \varepsilon>0. Since f is Riemann Integrable on each I_j, for each I_j there exists a \delta_j>0 such that for any partition \pi of I_j with \|\pi\|<\delta_j it follows that \left|\sum\limits_{k}(sup-inf)\Delta x_k\right|<\dfrac{\varepsilon}{2(N+1)}, where the sup and the inf are taken over the k-th subinterval of \pi and \Delta x_k denotes its length.

Let \delta=min\{\delta_1,\dots,\delta_{N+1},d,\dfrac{\varepsilon}{8NM}\} and let \pi be a partition of [a,b] with \|\pi\|<\delta. Consider the sum below; we may separate it into two kinds of pieces: the subintervals which contain no discontinuity and those which do.

\left|\sum\limits_{k=1}^{n}(sup-inf)\Delta x_k\right|=\sum\limits_{\text{subintervals containing no }d_j}(sup-inf)\Delta x_k+\sum\limits_{\text{subintervals containing some }d_j}(sup-inf)\Delta x_k

<(N+1)\dfrac{\varepsilon}{2(N+1)}+2N\left(2M\delta\right)

Note that the subintervals containing no discontinuity fall into N+1 groups, one alongside each I_j, and since \|\pi\|<\delta\leq\delta_j the choice of \delta_j bounds each of these groups by \dfrac{\varepsilon}{2(N+1)}; this gives the factor of N+1 on the first term. For the second term, on a subinterval containing a discontinuity the difference between the sup and the inf of f is at most 2M, and the subinterval has length less than \delta, so each such subinterval contributes less than 2M\delta; since each of the N discontinuities lies in at most two subintervals of \pi, there are at most 2N such contributions. Now we use the fact that \delta\leq\dfrac{\varepsilon}{8NM} to simplify our inequality further.

\leq\dfrac{\varepsilon}{2}+2N\left(\dfrac{2M\varepsilon}{8NM}\right)

=\dfrac{\varepsilon}{2}+\dfrac{\varepsilon}{2}=\varepsilon

Thus, we have shown that f is Riemann Integrable on [a,b].

\Box
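A numerical illustration (my own sketch, using a hypothetical step function; it is not part of the proof above): for a bounded function on [0,1] with a single jump at x=0.5, the gap between the upper and lower sums over a uniform partition is governed by the one or two subintervals meeting the jump, and it shrinks like a constant times the mesh, just as the 2M\delta terms in the proof suggest.

```python
def f(x):
    # bounded on [0, 1], continuous except for one jump at x = 0.5
    return 0.0 if x < 0.5 else 1.0

def upper_minus_lower(n):
    # gap between upper and lower Riemann sums over the uniform partition of
    # [0, 1] into n subintervals (sup and inf estimated by sampling)
    gap = 0.0
    for k in range(n):
        a, b = k / n, (k + 1) / n
        samples = [f(a + (b - a) * t / 50) for t in range(51)]
        gap += (max(samples) - min(samples)) * (b - a)
    return gap

for n in (10, 100, 1000):
    print(n, upper_minus_lower(n))   # roughly 0.1, 0.01, 0.001: shrinks with the mesh
```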


Rudin: Ch. 3 #13

Problem Statement: Given \sum a_n and \sum b_n define the product to be \sum c_n where c_n=\sum\limits_{k=0}^{n}a_kb_{n-k}. Suppose that \sum a_n converges to A absolutely and \sum b_n converges to B absolutely. Prove that \sum c_n converges to a value C absolutely.

Proof: Since \sum a_n and \sum b_n converge absolutely, we know that their product converges to AB (this is by Rudin Theorem 3.50, which requires only that one of the series converge absolutely in order to guarantee that the product converges to AB; that theorem does not guarantee that the product converges absolutely).

First we show that \sum\left|c_n\right| converges. Let A^{*}=\sum\limits_{n=0}^{\infty}\left|a_n\right| and B^{*}=\sum\limits_{n=0}^{\infty}\left|b_n\right|, both of which exist since the two series converge absolutely. For every n,

\sum\limits_{k=0}^{n}\left|c_k\right|\leq\sum\limits_{k=0}^{n}\sum\limits_{j=0}^{k}\left|a_j\right|\left|b_{k-j}\right|\leq\left(\sum\limits_{j=0}^{n}\left|a_j\right|\right)\left(\sum\limits_{i=0}^{n}\left|b_i\right|\right)\leq A^{*}B^{*}

since every term \left|a_j\right|\left|b_i\right| with i+j\leq n appears among the (nonnegative) terms of the expanded product in the middle. So the partial sums of \sum\left|c_n\right| form an increasing sequence that is bounded above, and therefore \sum\left|c_n\right| converges; that is, \sum c_n converges absolutely. Next we verify directly that the value of the sum is C=AB. Define C_n=\sum\limits_{k=0}^{n}c_k where c_k is as defined previously. Also define \beta_n=B_n-B where B_n=\sum\limits_{k=0}^{n}b_k. Similarly let A_n=\sum\limits_{k=0}^{n}a_k. Then it follows that:

C_n=\sum\limits_{k=0}^{n}c_k

=a_0b_0+\left(a_0b_1+a_1b_0\right)+\dots+\left(a_0b_n+\dots+a_nb_0\right)

=a_0(b_0+\dots+b_n)+a_1(b_0+\dots+b_{n-1})+\dots+a_nb_0

=a_0B_n+a_1B_{n-1}+\dots+a_nB_0

=a_0(\beta_n+B)+a_1(\beta_{n-1}+B)+\dots+a_n(\beta_0+B)

=(a_0+\dots+a_n)B+\left(a_0\beta_n+\dots+a_n\beta_0\right)

=A_nB+\left(a_0\beta_n+\dots+a_n\beta_0\right)

Recall that we are trying to show that C_n\rightarrow AB, for then \sum c_n=AB. So let \gamma_n=a_0\beta_n+\dots+a_n\beta_0, so that C_n=A_nB+\gamma_n. Since A_n\rightarrow A we know that A_nB\rightarrow AB, and so C_n\rightarrow AB if and only if \gamma_n\rightarrow 0.

First note that \beta_n\rightarrow 0 as n\rightarrow\infty, since B_n\rightarrow B. So let \varepsilon>0. (We may assume A^{*}>0; otherwise every a_n=0, hence every c_n=0, and there is nothing to prove.) Then there is an N\in\mathbb{N} such that for n>N it follows that \left|B_n-B\right|<\dfrac{\varepsilon}{A^{*}}. But by our definition of \beta_n this means that \left|\beta_n\right|<\dfrac{\varepsilon}{A^{*}} for all n>N.

Now consider n> N:

\left|\gamma_n\right|=\left|a_0\beta_n+\dots+a_n\beta_0\right|

\leq\left|a_0\beta_n+\dots+a_{n-N-1}\beta_{N+1}\right|+\left|a_{n-N}\beta_N+\dots+a_n\beta_0\right|

\leq\left(\left|a_0\right|+\dots+\left|a_{n-N-1}\right|\right)\dfrac{\varepsilon}{A^{*}}+\left|a_{n-N}\beta_N\right|+\dots+\left|a_n\beta_0\right|

\leq A^{*}\dfrac{\varepsilon}{A^{*}}+\left|a_{n-N}\beta_N\right|+\dots+\left|a_n\beta_0\right|

=\varepsilon+\left|a_{n-N}\beta_N\right|+\dots+\left|a_n\beta_0\right|

In the second line we used the triangle inequality together with the bound \left|\beta_j\right|<\dfrac{\varepsilon}{A^{*}}, which applies to each of \beta_{N+1},\dots,\beta_n since their indices exceed N. Now keep N fixed and let n\rightarrow\infty. The leftover sum \left|a_{n-N}\beta_N\right|+\dots+\left|a_n\beta_0\right| consists of only N+1 terms, and since the series \sum a_n converges we know a_k\rightarrow 0, so each of those terms tends to 0; hence the leftover sum tends to 0. It follows that \limsup\limits_{n\rightarrow\infty}\left|\gamma_n\right|\leq\varepsilon, and since \varepsilon may be made arbitrarily small, \gamma_n\rightarrow 0.

Thus, C_n\rightarrow AB, and so \sum c_n converges to AB. Combined with the first part of the proof, which showed that \sum\left|c_n\right| converges, we conclude that \sum c_n converges absolutely, with C=AB.

\Box

Reflection: The key to this proof is that both of the original series converge absolutely. The necessity of this isn’t immediately clear, but it enters twice: absolute convergence of both series gives the bound A^{*}B^{*} on the partial sums of \sum\left|c_n\right|, and absolute convergence of \sum a_n gives the bound A^{*} needed when showing that \gamma_n\rightarrow 0.
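A quick numerical check (my own sketch with two hypothetical geometric series, not part of the original solution): a_n=(-1/2)^n sums to A=2/3 and b_n=(1/3)^n sums to B=3/2, both absolutely. The Cauchy product terms below sum to approximately AB=1, and the sum of the \left|c_n\right| stabilizes at a finite value, consistent with absolute convergence.

```python
N = 60  # enough terms for the geometric tails to be negligible

a = [(-0.5) ** n for n in range(N)]        # sums to A = 2/3, absolutely
b = [(1.0 / 3.0) ** n for n in range(N)]   # sums to B = 3/2, absolutely

# Cauchy product terms c_n = sum_{k=0}^{n} a_k * b_{n-k}
c = [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]

print(sum(c))                    # approximately A * B = 1.0
print(sum(abs(x) for x in c))    # finite and stable: the product converges absolutely
```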


LADR: Ch. 5 #12

Problem Statement: Suppose T\in\mathcal{L}(V) is such that every v\in V is an eigenvector of T. Prove that T is a scalar multiple of the identity operator.

Proof: First suppose v,w\in V are nonzero and linearly independent. By assumption there exist scalars \alpha,\beta such that T(v)=\alpha v and T(w)=\beta w. Now consider T(v+w). Since V is a vector space, v+w\in V (and v+w\neq 0 by independence), so there exists a scalar \gamma such that T(v+w)=\gamma (v+w). But T is a linear operator, so T(v+w)=T(v)+T(w)=\alpha v+\beta w. Setting these two expressions equal we see that \alpha v+\beta w=\gamma(v+w)=\gamma v+\gamma w, i.e. (\alpha-\gamma)v+(\beta-\gamma)w=0. Since v and w are linearly independent this forces \alpha=\gamma and \beta=\gamma, and so \alpha=\beta=\gamma. Thus any two linearly independent vectors are eigenvectors for the same eigenvalue.

Now we must consider the case where v,w are nonzero and linearly dependent, say w=\eta v with \eta\neq 0, and T(v)=\mu v. Then T(w)=T(\eta v)=\eta T(v)=\eta\mu v=\mu\eta v=\mu w, so w is an eigenvector for the same eigenvalue \mu as v in this case as well (and since w\neq 0, that eigenvalue is unique). Combining the two cases: fix any nonzero u\in V with T(u)=\lambda u; every other nonzero v\in V is either linearly independent of u or a scalar multiple of u, and in both cases its eigenvalue is \lambda. Hence T(v)=\lambda v for every v\in V (trivially including v=0).

Thus, T is a scalar multiple of the identity operator.

\Box
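A tiny illustration of the key step (my own sketch, not from the text): if v and w are eigenvectors for two different eigenvalues, their sum is not an eigenvector, so an operator for which every vector is an eigenvector is forced to use a single eigenvalue.

```python
def T(vec):
    # the diagonal operator T = diag(2, 3) on R^2: T(e1) = 2*e1, T(e2) = 3*e2
    x, y = vec
    return (2 * x, 3 * y)

v, w = (1, 0), (0, 1)
vw = tuple(x + y for x, y in zip(v, w))   # v + w = (1, 1)
print(T(vw))                              # (2, 3), which is not a scalar multiple of (1, 1)
```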
