SOLUTION: Let L:V→W be a linear transformation. Let {X1, X2, …, Xn} ⊆ V. If {L(X1), L(X2), …, L(Xn)} is linearly dependent, then {X1, X2, …, Xn} is linearly dependent.

Question 1031968: Let L:V→W be a linear transformation. Let {X1, X2, …, Xn} ⊆ V. If {L(X1), L(X2), …, L(Xn)} is linearly dependent, then {X1, X2, …, Xn} is linearly dependent.
Found 2 solutions by ikleyn, robertb:
Answer by ikleyn(52800):
.
It is not necessarily true.
-----------------------------------

I have said it before, and I will repeat it once more:

>>> IT IS NOT NECESSARILY TRUE. <<<

It is true only in the case when the operator L is non-degenerate (has zero kernel),
which is not always the case for linear transformations.
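A small numerical sketch (not part of the original answers) of the zero-kernel case: an injective linear map, here a hypothetical embedding of R^2 into R^3 chosen for illustration, does carry independent vectors to independent vectors, which can be checked with numpy rank computations.

```python
import numpy as np

# An injective L: R^2 -> R^3 (its columns are independent, so kernel = {0}).
# The specific matrix is an illustrative choice, not from the original answers.
L = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Two linearly independent vectors in R^2, as rows.
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])

images = X @ L.T  # rows are L(X1), L(X2) in R^3

print(np.linalg.matrix_rank(X))       # 2 -> independent originals
print(np.linalg.matrix_rank(images))  # 2 -> images still independent
```

With zero kernel, L(v) = 0 forces v = 0, which is exactly the step the implication needs.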

The proof of the other tutor is wrong, unfortunately.

A counter-example:


Take 3 linearly independent vectors in R^3.

Let the operator L be the projection of R^3 onto R^2.

Any three vectors in R^2 are linearly dependent.

So the projections of the original vectors from R^3 are dependent in R^2.

But the original vectors were chosen to be linearly independent.
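The counter-example above can be verified numerically (this demo is not part of the original answer; numpy's rank function stands in for the independence check):

```python
import numpy as np

# Three linearly independent vectors in R^3: the standard basis, as rows.
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# L: projection of R^3 onto R^2 (drop the third coordinate).
# Its kernel is the z-axis, so L is degenerate.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

images = X @ P.T  # rows are L(X1), L(X2), L(X3) in R^2

# Originals have rank 3 (independent); three vectors in R^2 have
# rank at most 2, so the images must be dependent.
print(np.linalg.matrix_rank(X))       # 3 -> independent
print(np.linalg.matrix_rank(images))  # 2 -> dependent
```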

This is elementary linear algebra.

Again: the fact that the images are linearly dependent DOES NOT IMPLY that the pre-images are necessarily linearly dependent.


Answer by robertb(5830):
The problem asks to show that: If {L(X1),L(X2),…, L(Xn)} is linearly dependent, then {X1,X2,...,Xn} is linearly dependent.
It would be easier to prove the contrapositive of this statement:
If {X1,X2,...,Xn} is linearly independent, then {L(X1),L(X2),…, L(Xn)} is linearly independent as well. This is quite easy to prove.
DEFINITION: Let c1X1 + c2X2 + ... + cnXn = θ_V, where θ_V is the zero vector in V.
Then linear independence of the set implies that only c1 = c2 = ... = c(n-1) = cn = 0 will satisfy the previous equation.

Now let
d1L(X1) + d2L(X2) + ... + dnL(Xn) = θ_W   <----- Equation A.
(θ_W is the zero vector in W and the d constants are arbitrary.)
By the property of the linear transformation L,
Equation A is equivalent to
L(d1X1) + L(d2X2) + ... + L(dnXn) = θ_W, or
L(d1X1 + d2X2 + ... + dnXn) = θ_W.
==> d1X1 + d2X2 + ... + dnXn = θ_V,
or the left-hand side linear combination would be an element of the kernel of L.
==> d1 = d2 = ... = dn = 0,
by virtue of the linear independence of the X vectors.
Therefore, {L(X1),L(X2),…, L(Xn)} is linearly independent as well, and the theorem is proved.