The Matrix Version of Linear Programming
In general, we will be dealing with systems in which we have lots of variables that can influence an outcome we are interested in. To qualify for a linear programming approach, these variables must relate to the outcome "linearly." This means the influence of a change in a variable is proportional to that change (and not to its square root or its cube):
Effect on outcome = proportionality constant × change in variable
The following are linear expressions:
y=mx+b
income=23×years_of_education +30
degrees_centigrade =(degrees_fahrenheit-32)×5/9
This is easier to see if we rearrange:
degrees_centigrade =5/9×degrees_fahrenheit-(5×32)/9
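As a quick sanity check, the rearranged form should agree with the original at every temperature. A minimal Python sketch (the function names are my own):

```python
# Original form of the conversion.
def c_from_f(f):
    return (f - 32) * 5 / 9

# Rearranged form: slope 5/9, intercept -(5*32)/9.
def c_from_f_rearranged(f):
    return 5 / 9 * f - (5 * 32) / 9

# The two forms agree at a few familiar temperatures.
for f in (32, 98.6, 212):
    assert abs(c_from_f(f) - c_from_f_rearranged(f)) < 1e-9

print(c_from_f(212))  # → 100.0
```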
y=c_1 x_1+c_2 x_2+c_3 x_3+c_4 x_4
This last one is the most general. It has two parts. The first is a list of "coefficients" (the c's) which tell us how much each variable contributes to our outcome. The second is the list of variables (the x's). The equation says "you can compute the outcome (y) by taking the list of variable values and multiplying each one by the appropriate coefficient and then adding the products together":
Let's represent the c's like this $(c_1\ c_2\ c_3\ c_4)$ and the x's like this $(x_1\ x_2\ x_3\ x_4)$. We call a set of numbers like this a "vector" (it's basically an extension of an "ordered pair" $(x, y)$), and often you'll see (or hear) the term n-vector, meaning a vector with n elements. We can then write the above equation like this:

(1)
\begin{align} y=\left(c_1 c_2 c_3 c_4\right) \times \left(\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4\end{array}\right) \end{align}


This multiplication happens element by element scanning across the first vector and down the second adding the products as we go ($c_1$ times $x_1$ plus $c_2$ times $x_2$ etc.). We'll later see why this rule makes sense (that is, why we can't just multiply two vertical vectors together) – for now, just take it as the way it's done.
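The scan-and-sum rule described above can be sketched directly as a loop. This is just an illustration of the rule, not a library routine:

```python
def dot(c, x):
    """Multiply element by element, scanning across c and down x,
    adding the products as we go."""
    assert len(c) == len(x)
    total = 0
    for c_i, x_i in zip(c, x):
        total += c_i * x_i
    return total

# y = c1*x1 + c2*x2 + c3*x3 + c4*x4 with illustrative numbers:
print(dot([1, 2, 3, 4], [10, 0, 5, 1]))  # 1*10 + 2*0 + 3*5 + 4*1 = 29
```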
By convention we represent vectors by boldface letters, like this:
\begin{align} \mathbf{c}=\left(\begin{array}{c} c_1 \\ c_2 \\ c_3 \\ c_4 \end{array}\right) \qquad \mathbf{x}=\left(\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \end{array}\right) \end{align}
To get a vector written vertically into horizontal form (or vice versa) we "transpose" it and this is represented by a superscript T:
\begin{align} \mathbf{c}^T=\left(c_1\ c_2\ c_3\ c_4\right) \end{align}
Thus, our equation looks like this:
\begin{align} \mathbf{c}^T \mathbf{x}=c_1 x_1+c_2 x_2+c_3 x_3+c_4 x_4 \end{align}
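In practice this product is computed in one step. A sketch using NumPy's `@` operator, with made-up coefficient and variable values:

```python
import numpy as np

c = np.array([23.0, 5.0, 1.0, 2.0])   # illustrative coefficients
x = np.array([2.0, 1.0, 4.0, 3.0])    # illustrative variable values

y = c @ x          # c^T x as a single dot product
print(y)           # 23*2 + 5*1 + 1*4 + 2*3 = 61.0
```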
Now let's look at our constraints. Typically, we have one or more linear inequalities relating one or more of our variables to a constant. Let's assume we have three constraints and four variables. These would look like this:

(2)
\begin{align} a_{11} x_1+a_{12} x_2+a_{13} x_3+a_{14} x_4 \leq b_1 \\ a_{21} x_1+a_{22} x_2+a_{23} x_3+a_{24} x_4 \leq b_2 \\ a_{31} x_1+a_{32} x_2+a_{33} x_3+a_{34} x_4 \leq b_3 \\ \end{align}

The "a" coefficients are double subscripted. The first subscript corresponds to which constraint equation we are in and the second one tells us which variable the coefficient goes with. The b's are the numbers that are a part of each constraint. The a-coefficients can be pulled out of each of these equations just like we did above as an "A vector times an X vector":
\begin{align} \left(a_{11}\ a_{12}\ a_{13}\ a_{14}\right) \left(\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \end{array}\right) \leq b_1 \end{align}
We can stack the three A vectors together in what we call a matrix:

\begin{align} A=\left(\begin{array}{cccc} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{array}\right) \end{align}

And we can write the b's as


\begin{align} \mathbf{b}=\left(\begin{array}{c} b_1 \\ b_2 \\ b_3 \end{array}\right) \end{align}

And so we can write the constraints as

\begin{align} A\mathbf{x} \leq \mathbf{b} \end{align}
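One matrix-vector product evaluates all three left-hand sides at once, so checking the constraints is a single comparison. A sketch with made-up numbers for A, b, and a candidate x:

```python
import numpy as np

# Illustrative 3-constraint, 4-variable system (numbers are made up).
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 3.0, 1.0],
              [2.0, 0.0, 1.0, 1.0]])
b = np.array([10.0, 8.0, 9.0])
x = np.array([1.0, 2.0, 1.0, 1.0])

lhs = A @ x                 # all three constraint left-hand sides at once
print(lhs)                  # [6. 6. 4.]
print(np.all(lhs <= b))     # True → this x satisfies Ax ≤ b
```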

STANDARD FORM

So the standard maximization problem can be stated like this: find an n-vector $\mathbf{x}=(x_1,\dots,x_n)^T$ to maximize

(3)
\begin{equation} \mathbf{c}^T \mathbf{x}=c_1 x_1+\cdots+c_n x_n \end{equation}

subject to the constraints

\begin{align} A\mathbf{x} \leq \mathbf{b} \end{align}
and
\begin{align} \mathbf{x} \geq \mathbf{0} \end{align}
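A problem in this standard form can be handed straight to a solver. A minimal sketch using SciPy's `linprog` (requires SciPy; the numbers are a made-up two-variable example). Note that `linprog` minimizes, so we pass $-\mathbf{c}$ and negate the optimum back:

```python
import numpy as np
from scipy.optimize import linprog  # requires SciPy

# Maximize 3*x1 + 2*x2 subject to x1 + x2 <= 4, x1 + 3*x2 <= 6, x >= 0.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

# linprog minimizes c^T x, so maximize by minimizing -c^T x.
# Its default bounds are x >= 0, matching the standard form above.
res = linprog(-c, A_ub=A, b_ub=b)
print(res.x)      # optimal x, here [4. 0.]
print(-res.fun)   # maximum value of c^T x, here 12.0
```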