\begin{displaymath}
0.6 I + 0.3 S
\end{displaymath}
while the population of the suburbs is
\begin{displaymath}
0.4 I + 0.7 S.
\end{displaymath}
After two years, the population of the inner city is
\begin{displaymath}
0.6 (0.6 I + 0.3 S) + 0.3 (0.4 I + 0.7 S)
\end{displaymath}
and the suburban population is given by
\begin{displaymath}
0.4 (0.6 I + 0.3 S) + 0.7 (0.4 I + 0.7 S).
\end{displaymath}
Let us represent the two populations in a single table (a column with two entries):
\begin{displaymath}
\left(\begin{array}{c}
I\\
S\\
\end{array}\right)
\end{displaymath}
So after one year the table which gives the two populations is
\begin{displaymath}
\left(\begin{array}{c}
0.6 I + 0.3 S\\
0.4 I + 0.7 S\\
\end{array}\right)
\end{displaymath}
If we consider the following rule (the product of two matrices)
\begin{displaymath}
\left(\begin{array}{cc}
a&b\\
c&d\\
\end{array}\right)
\left(\begin{array}{c}
I\\
S\\
\end{array}\right)
=
\left(\begin{array}{c}
aI + bS\\
cI + dS\\
\end{array}\right),
\end{displaymath}
then the populations after one year are given by the formula
\begin{displaymath}
\left(\begin{array}{cc}
0.6&0.3\\
0.4&0.7\\
\end{array}\right)
\left(\begin{array}{c}
I\\
S\\
\end{array}\right).
\end{displaymath}
After two years the populations are
\begin{displaymath}
\left(\begin{array}{cc}
0.6&0.3\\
0.4&0.7\\
\end{array}\right)
\Bigg(
\left(\begin{array}{cc}
0.6&0.3\\
0.4&0.7\\
\end{array}\right)
\left(\begin{array}{c}
I\\
S\\
\end{array}\right)
\Bigg).
\end{displaymath}
Combining this formula with the above result, we get
\begin{displaymath}
\left(\begin{array}{cc}
0.6&0.3\\
0.4&0.7\\
\end{array}\right)
\left(\begin{array}{cc}
0.6&0.3\\
0.4&0.7\\
\end{array}\right)
=
\left(\begin{array}{cc}
0.6 \times 0.6 + 0.3 \times 0.4 & 0.6 \times 0.3 + 0.3 \times 0.7\\
0.4 \times 0.6 + 0.7 \times 0.4 & 0.4 \times 0.3 + 0.7 \times 0.7\\
\end{array}\right).
\end{displaymath}
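Carrying out the arithmetic (a quick side computation), the product on the right evaluates to
\begin{displaymath}
\left(\begin{array}{cc}
0.48 & 0.39\\
0.52 & 0.61\\
\end{array}\right),
\end{displaymath}
which is the matrix that carries the populations two years forward.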
In other words, we have
\begin{displaymath}
\left(\begin{array}{cc}
a&b\\
c&d\\
\end{array}\right)
\left(\begin{array}{cc}
e&f\\
g&h\\
\end{array}\right)
=
\left(\begin{array}{cc}
ae + bg & af + bh\\
ce + dg & cf + dh\\
\end{array}\right)
\end{displaymath}
![](http://upload.wikimedia.org/wikipedia/en/thumb/e/eb/Matrix_multiplication_diagram_2.svg/313px-Matrix_multiplication_diagram_2.svg.png)
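If you want to experiment with this rule on a computer, here is a minimal NumPy sketch (not part of the original discussion): it builds the transition matrix of the population example, compares NumPy's matrix product `@` with the entry-by-entry formula above, and applies the matrix to some made-up starting populations.

```python
import numpy as np

# Transition matrix of the population example: column (I, S) -> next year's (I, S).
P = np.array([[0.6, 0.3],
              [0.4, 0.7]])

# Entry-by-entry rule (ae+bg, af+bh; ce+dg, cf+dh) written out for P times P.
a, b, c, d = 0.6, 0.3, 0.4, 0.7
e, f, g, h = 0.6, 0.3, 0.4, 0.7
by_hand = np.array([[a*e + b*g, a*f + b*h],
                    [c*e + d*g, c*f + d*h]])

print(P @ P)                        # [[0.48 0.39], [0.52 0.61]] up to rounding
print(np.allclose(P @ P, by_hand))  # True

# Made-up starting populations, just to watch the matrix act on a column.
x0 = np.array([100_000.0, 200_000.0])   # (I, S)
print(P @ x0)         # populations after one year
print(P @ (P @ x0))   # populations after two years
```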
The two matrices do not have to be the same size; for example, a 2×3 matrix times a 3×1 column gives
\begin{displaymath}
\left(\begin{array}{ccc}
a&b&c\\
d&e&f\\
\end{array}\right)
\left(\begin{array}{c}
\alpha\\
\beta\\
\nu\\
\end{array}\right)
=
\left(\begin{array}{c}
a\alpha + b\beta + c\nu\\
d\alpha + e\beta + f\nu\\
\end{array}\right).
\end{displaymath}
Remember that although we were able to perform the above multiplication, it is not possible to perform the multiplication
\begin{displaymath}
\left(\begin{array}{c}
\alpha\\
\beta\\
\nu\\
\end{array}\right)
\left(\begin{array}{ccc}
a&b&c\\
d&e&f\\
\end{array}\right).
\end{displaymath}
So we have to be very careful when multiplying matrices. Sentences like "multiply the two matrices A and B" do not make sense on their own. You must know which of the two matrices will be on the right (of the multiplication) and which one will be on the left; in other words, we have to know whether we are asked to perform $A \times B$ or $B \times A$.
For example, consider the matrices
\begin{displaymath}
\left(\begin{array}{cc}
0&1\\
0&0\\
\end{array}\right)
\;\mbox{and}\;
\left(\begin{array}{cc}
0&0\\
1&0\\
\end{array}\right).
\end{displaymath}
We have
\begin{displaymath}
\left(\begin{array}{cc}
0&1\\
0&0\\
\end{array}\right)
\left(\begin{array}{cc}
0&0\\
1&0\\
\end{array}\right)
=
\left(\begin{array}{cc}
1&0\\
0&0\\
\end{array}\right)
\end{displaymath}
and
\begin{displaymath}
\left(\begin{array}{cc}
0&0\\
1&0\\
\end{array}\right)
\left(\begin{array}{cc}
0&1\\
0&0\\
\end{array}\right)
=
\left(\begin{array}{cc}
0&0\\
0&1\\
\end{array}\right).
\end{displaymath}
So what is the conclusion behind this example? Matrix multiplication is not commutative: the order in which matrices are multiplied matters. In fact, this little setback is a major issue when working with matrices, and it is something you must always be careful about. Let us show you another setback. We have
\begin{displaymath}
\left(\begin{array}{cc}
0&1\\
0&0\\
\end{array}\right)
\left(\begin{array}{cc}
0&1\\
0&0\\
\end{array}\right)
=
\left(\begin{array}{cc}
0&0\\
0&0\\
\end{array}\right);\;\mbox{i.e.},
\end{displaymath}
the product of two non-zero matrices may be equal to the zero matrix.
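Both warnings are easy to reproduce on a computer. Here is a short NumPy check, using the same matrices as above (illustration only):

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])

print(A @ B)   # [[1 0], [0 0]]
print(B @ A)   # [[0 0], [0 1]]  different result, so the order matters

print(A @ A)   # [[0 0], [0 0]]  two non-zero matrices whose product is the zero matrix
```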
Properties involving Addition. Let A, B, and C be m×n matrices. We have
- 1. A + B = B + A
- 2. (A + B) + C = A + (B + C)
- 3. $A + \mathcal{O} = A$, where $\mathcal{O}$ is the m×n zero matrix (all its entries are equal to 0);
- 4. $A + B = \mathcal{O}$ if and only if B = -A.
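These four properties are easy to confirm on an example; here is a quick NumPy check with arbitrary 2×3 matrices (illustration only):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
B = np.array([[0., -1., 2.],
              [3., 0., -2.]])
C = np.array([[1., 1., 1.],
              [1., 1., 1.]])
O = np.zeros((2, 3))   # the 2x3 zero matrix

print(np.array_equal(A + B, B + A))              # property 1
print(np.array_equal((A + B) + C, A + (B + C)))  # property 2
print(np.array_equal(A + O, A))                  # property 3
print(np.array_equal(A + (-A), O))               # property 4
```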
Properties involving Multiplication.
- 1. Let A, B, and C be three matrices. If you can perform the products AB, (AB)C, BC, and A(BC), then we have
(AB)C = A (BC)
Note, for example, that if A is 2×3, B is 3×3, and C is 3×1, then the above products are possible (in this case, (AB)C is a 2×1 matrix); a numerical sketch of exactly this case follows the list.
- 2. If $\alpha$ and $\beta$ are numbers, and A is a matrix, then we have
$\alpha(\beta A) = (\alpha \beta) A$
- 3. If $\alpha$ is a number, and A and B are two matrices such that the product AB is possible, then we have
$\alpha(AB) = (\alpha A)B = A(\alpha B)$
- 4. If A is an n×m matrix and $\mathcal{O}$ is the m×k zero matrix, then
$A\,\mathcal{O} = \mathcal{O}$
Note that $A\,\mathcal{O}$ is the n×k zero matrix. So if n is different from m, the two zero matrices are different.
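To illustrate property 1 with the sizes mentioned there (A is 2×3, B is 3×3, C is 3×1), here is a short NumPy sketch; the entries are random because only the shapes matter:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2, 3))   # 2x3
B = rng.random((3, 3))   # 3x3
C = rng.random((3, 1))   # 3x1

left = (A @ B) @ C
right = A @ (B @ C)
print(left.shape)                # (2, 1): (AB)C is a 2x1 matrix
print(np.allclose(left, right))  # True: (AB)C = A(BC)
```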
Properties involving Addition and Multiplication.
- 1. Let A, B, and C be three matrices. If you can perform the appropriate products, then we have
(A+B)C = AC + BC
and
A(B+C) = AB + AC
- 2. If $\alpha$ and $\beta$ are numbers, and A and B are matrices, then we have
$\alpha(A + B) = \alpha A + \alpha B$
and
$(\alpha + \beta)A = \alpha A + \beta A$
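Again, these rules can be tested on arbitrary examples; a minimal NumPy check (illustrative matrices and numbers only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((2, 3))
B = rng.random((2, 3))
C = rng.random((3, 2))
alpha, beta = 2.0, -3.0

print(np.allclose((A + B) @ C, A @ C + B @ C))             # (A+B)C = AC + BC
print(np.allclose(alpha * (A + B), alpha*A + alpha*B))     # alpha(A+B) = alpha A + alpha B
print(np.allclose((alpha + beta) * A, alpha*A + beta*A))   # (alpha+beta)A = alpha A + beta A
```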
Example. Consider the matrices
\begin{displaymath}
A = \left(\begin{array}{cc}
0&1\\
-1&0\\
\end{array}\right),\;
B = \left(\begin{array}{c}
2\\
-1\\
\end{array}\right),\;
\mbox{and}\; C = \left(\begin{array}{ccc}
0&1&5\\
\end{array}\right).
\end{displaymath}
Evaluate (AB)C and A(BC). Check that you get the same matrix.
Answer. We have
\begin{displaymath}
AB = \left(\begin{array}{c}
-1\\
-2\\
\end{array}\right)
\end{displaymath}
so
\begin{displaymath}
(AB)C = \left(\begin{array}{c}
-1\\
-2\\
\end{array}\right)
\left(\begin{array}{ccc}
0&1&5\\
\end{array}\right)
=
\left(\begin{array}{ccc}
0&-1&-5\\
0&-2&-10\\
\end{array}\right).
\end{displaymath}
On the other hand, we have
\begin{displaymath}
BC = \left(\begin{array}{ccc}
0&2&10\\
0&-1&-5\\
\end{array}\right)
\end{displaymath}
so
\begin{displaymath}
A(BC) = \left(\begin{array}{cc}
0&1\\
-1&0\\
\end{array}\right)
\left(\begin{array}{ccc}
0&2&10\\
0&-1&-5\\
\end{array}\right)
=
\left(\begin{array}{ccc}
0&-1&-5\\
0&-2&-10\\
\end{array}\right).
\end{displaymath}
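If you would like to double-check this example on a computer, the whole computation is a few lines of NumPy:

```python
import numpy as np

A = np.array([[0, 1],
              [-1, 0]])
B = np.array([[2],
              [-1]])
C = np.array([[0, 1, 5]])

print((A @ B) @ C)
# [[  0  -1  -5]
#  [  0  -2 -10]]
print(np.array_equal((A @ B) @ C, A @ (B @ C)))  # True
```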
Example. Consider the matrices
\begin{displaymath}
X = \left(\begin{array}{c}
a\\
b\\
c\\
\end{array}\right)\;
\mbox{and}\; Y = \left(\begin{array}{cccc}
\alpha & \beta & \nu & \gamma\\
\end{array}\right).
\end{displaymath}
It is easy to check that
\begin{displaymath}
X = a \left(\begin{array}{c}
1\\
0\\
0\\
\end{array}\right)
+ b \left(\begin{array}{c}
0\\
1\\
0\\
\end{array}\right)
+ c \left(\begin{array}{c}
0\\
0\\
1\\
\end{array}\right)
\end{displaymath}
and
\begin{displaymath}
Y = \alpha \left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
\end{array}\right)
+ \beta \left(\begin{array}{cccc}
0 & 1 & 0 & 0\\
\end{array}\right)
+ \nu \left(\begin{array}{cccc}
0 & 0 & 1 & 0\\
\end{array}\right)
+ \gamma \left(\begin{array}{cccc}
0 & 0 & 0 & 1\\
\end{array}\right).
\end{displaymath}
These two formulas are called linear combinations. More on linear combinations will be discussed on a different page.
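As a small numerical illustration of the first formula, here is a NumPy check with made-up values for a, b, and c:

```python
import numpy as np

a, b, c = 2.0, -1.0, 5.0   # arbitrary illustrative values
X = np.array([[a], [b], [c]])

e1 = np.array([[1.0], [0.0], [0.0]])
e2 = np.array([[0.0], [1.0], [0.0]])
e3 = np.array([[0.0], [0.0], [1.0]])

print(np.array_equal(X, a*e1 + b*e2 + c*e3))  # True: X is a linear combination of these columns
```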
We have seen that matrix multiplication is different from ordinary multiplication of numbers. Are there any similarities? For example, is there a matrix which plays a role similar to that of the number 1? The answer is yes. Indeed, consider the n×n matrix
\begin{displaymath}
I_n = \left(\begin{array}{ccccc}
1&0&0&\cdots&0\\
0&1&0&\cdots&0\\
\vdots&&\ddots&&\vdots\\
0&0&0&\cdots&1\\
\end{array}\right).
\end{displaymath}
In particular, we have
\begin{displaymath}
I_2 = \left(\begin{array}{cc}
1&0\\
0&1\\
\end{array}\right)
\;\mbox{and}\; I_3 = \left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&1\\
\end{array}\right).
\end{displaymath}
The matrix $I_n$ behaves much like the number 1. Indeed, for any n×n matrix A, we have
\begin{displaymath}
A\, I_n = I_n\, A = A.
\end{displaymath}
The matrix $I_n$ is called the Identity Matrix of order n.
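In NumPy the identity matrix $I_n$ is produced by `np.eye(n)`; here is a quick check of the rule above with an arbitrary 2×2 matrix:

```python
import numpy as np

A = np.array([[2., 5.],
              [0., 3.]])   # any 2x2 matrix will do
I2 = np.eye(2)             # the 2x2 identity matrix

print(np.array_equal(A @ I2, A))  # True
print(np.array_equal(I2 @ A, A))  # True
```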
Example. Consider the matrices
\begin{displaymath}
A = \left(\begin{array}{cc}
1&2\\
-1&-1\\
\end{array}\right)
\;\mbox{and}\; B = \left(\begin{array}{cc}
-1&-2\\
1&1\\
\end{array}\right).
\end{displaymath}
Then it is easy to check that
\begin{displaymath}
AB = I_2 \;\;\mbox{and}\;\; BA = I_2.
\end{displaymath}
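You can confirm this directly with a two-line NumPy check:

```python
import numpy as np

A = np.array([[1., 2.],
              [-1., -1.]])
B = np.array([[-1., -2.],
              [1., 1.]])

print(A @ B)   # [[1. 0.], [0. 1.]]
print(B @ A)   # [[1. 0.], [0. 1.]]
```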
The identity matrix behaves like the number 1 not only among square n×n matrices. Indeed, for any n×m matrix A, we have
\begin{displaymath}
I_n A = A \;\;\mbox{and}\;\; A I_m = A.
\end{displaymath}
In particular, we have
\begin{displaymath}
I_4 \left(\begin{array}{c}
a\\
b\\
c\\
d\\
\end{array}\right)
=
\left(\begin{array}{c}
a\\
b\\
c\\
d\\
\end{array}\right).
\end{displaymath}