{{OtherSidebar
| due_dates = 13 October 2010
| dates_alt_text = Due date(s)
| questions_PDF = Assignment-2-2010.pdf
| questions_text_alt = Assignment questions
| solutions_PDF = Assignment-2-2010-Solution.pdf
| solutions_text_alt = Assignment solutions
| other_instructions =
}}

'''Comprehensive solutions prepared by both Ali and Elliot.'''

<rst>
<rst-options: 'toc' = False/>
<rst-options: 'reset-figures' = False/>
.. |m3| replace:: m\ :sup:`3`

Question 1 [1]
================

The Gauss elimination method is an effective algorithm for solving a set of linear equations, :math:`Ax = b`.  Solve this set of equations by hand, showing the steps taken.

.. math::

\left\{\begin{array}{r}
x_1 + x_2 + x_3 + x_4 = 0\hphantom{-}\\
x_1 - 2x_2 - 2x_3 - 2x_4 = 4\hphantom{-}\\
x_1 + 4x_2 - 4x_3 + x_4 = 2\hphantom{-}\\
x_1 - 5x_2 - 5x_3 - 3x_4 = -4\\
\end{array}\right.
 
Solution
---------
 
We start by writing the system of equations in the matrix form :math:`Ax = b`.
 
.. math::
 
\begin{array}{ccccccccc}
x_{1} & + &  x_{2} & + &  x_{3} & + &  x_{4} & = & 0\hphantom{-}\\
x_{1} & - & 2x_{2} & - & 2x_{3} & - & 2x_{4} & = & 4\hphantom{-}\\
x_{1} & + & 4x_{2} & - & 4x_{3} & + &  x_{4} & = & 2\hphantom{-}\\
x_{1} & - & 5x_{2} & - & 5x_{3} & - & 3x_{4} & = & -4\\
\end{array} \;\; \Longrightarrow \;\;
\left[\begin{array}{cccc}
1 &  1 &  1 &  1\\
1 & -2 & -2 & -2\\
1 &  4 & -4 &  1\\
1 & -5 & -5 & -3\\
\end{array} \right] \left[\begin{array}{c}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{c}
0\\
4\\
2\\
-4\\
\end{array} \right]
 
**Forward elimination**
 
Recall that the basic algorithm for the forward elimination step of naive Gauss elimination is:
 
#. Divide the current row :math:`i` by the :math:`i\text{th}` diagonal element.
#. Eliminate all elements in the :math:`i\text{th}` column below the :math:`i\text{th}` diagonal element (i.e. in rows :math:`j > i`) by subtracting :math:`n` times the :math:`i\text{th}` row from the :math:`j\text{th}` row, where :math:`n` is the current value in the :math:`i\text{th}` column of the :math:`j\text{th}` row, i.e. element :math:`(j, i)`. Do not forget to subtract from the :math:`b` vector elements as well.
#. Move to the next row.
#. Repeat steps 1 - 3 until the A matrix is upper triangular (i.e. you have eliminated all elements below the diagonal).
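In code, the four steps above can be sketched as follows (a minimal NumPy illustration, not part of the original solution, applied to this question's system):

```python
import numpy as np

A = np.array([[1.0,  1,  1,  1],
              [1,   -2, -2, -2],
              [1,    4, -4,  1],
              [1,   -5, -5, -3]])
b = np.array([0.0, 4, 2, -4])

N = len(b)
for i in range(N):
    # Step 1: divide row i by its diagonal element
    piv = A[i, i]
    A[i, :] /= piv
    b[i] /= piv
    # Step 2: eliminate every element below the diagonal in column i
    for j in range(i + 1, N):
        n_ji = A[j, i]           # the multiplier n for row j
        A[j, :] -= n_ji * A[i, :]
        b[j] -= n_ji * b[i]

print(A)   # upper triangular, with a unit diagonal
print(b)
```

The printed result matches the triangular system derived below by hand: :math:`A` ends as the unit upper triangular matrix with last column :math:`[1, 1, 3/8, 1]`, and :math:`b` becomes :math:`[0, -4/3, -3/4, -6]`.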
 
Therefore our first step is to divide row 1 by its diagonal element (in this case the first value), which is equal to 1 (easy enough):
 
.. math::
 
\left[\begin{array}{rrrr}
\frac{1}{(1)} &  \frac{1}{(1)} &  \frac{1}{(1)} &  \frac{1}{(1)}\\
1 & -2 & -2 & -2\\
1 &  4 & -4 &  1\\
1 & -5 & -5 & -3\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
\frac{0}{(1)}\\
4\\
2\\
-4\\
\end{array} \right] \;\; \Longrightarrow \;\;
\left[\begin{array}{rrrr}
1 &  1 &  1 &  1\\
1 & -2 & -2 & -2\\
1 &  4 & -4 &  1\\
1 & -5 & -5 & -3\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
0\\
4\\
2\\
-4\\
\end{array} \right]
 
Next we subtract row 1 from rows 2, 3, and 4, using :math:`n` values equal to the coefficients in the first column of each row. In this case all the column coefficients are equal to 1, which makes our lives easy again.
 
.. math::
 
\left[\begin{array}{rrrr}
1 &  1 &  1 &  1\\
1 - (1)*(1) & -2 - (1)*(1) & -2 - (1)*(1) & -2 - (1)*(1)\\
1 - (1)*(1) &  4 - (1)*(1) & -4 - (1)*(1) &  1 - (1)*(1)\\
1 - (1)*(1) & -5 - (1)*(1) & -5 - (1)*(1) & -3 - (1)*(1)\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
0\\
4 - (1)*(0)\\
2 - (1)*(0)\\
-4 - (1)*(0)\\
\end{array} \right] \;\; \Longrightarrow \;\;
\left[\begin{array}{rrrr}
1 &  1 &  1 &  1\\
0 & -3 & -3 & -3\\
0 &  3 & -5 &  0\\
0 & -6 & -6 & -4\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
0\\
4\\
2\\
-4\\
\end{array} \right]
 
Having eliminated all elements in the first column, we move to the second row/column. Once again we start by dividing the current row (row 2) by its diagonal element.
 
.. math::
 
\left[\begin{array}{rrrr}
1 &  1 &  1 &  1\\
0 & \frac{-3}{(-3)} & \frac{-3}{(-3)} & \frac{-3}{(-3)}\\
0 &  3 & -5 &  0\\
0 & -6 & -6 & -4\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
0\\
\frac{4}{(-3)}\\
2\\
-4\\
\end{array} \right] \;\; \Longrightarrow \;\;
\left[\begin{array}{rrrr}
1 &  1 &  1 &  1\\
0 &  1 &  1 &  1\\
0 &  3 & -5 &  0\\
0 & -6 & -6 & -4\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
0\\
-4/3\\
2\\
-4\\
\end{array} \right]
 
Next we eliminate all elements below row 2 in column 2 by subtracting 3 times row 2 from row 3 and (-6) times row 2 from row 4.
 
.. math::
 
\left[\begin{array}{rrrr}
1 &  1 &  1 &  1\\
0 &  1 &  1 &  1\\
0 &  3 - (3)*(1) & -5 - (3)*(1) &  0 - (3)*(1)\\
0 & -6 - (-6)*(1) & -6 - (-6)*(1) & -4 - (-6)*(1)\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
0\\
-4/3\\
2 - (3)*(-4/3)\\
-4 - (-6)*(-4/3)\\
\end{array} \right] \;\; \Longrightarrow \;\;
\left[\begin{array}{rrrr}
1 & 1 &  1 &  1\\
0 & 1 &  1 &  1\\
0 & 0 & -8 & -3\\
0 & 0 &  0 &  2\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
0\\
-4/3\\
6\\
-12\\
\end{array} \right]
 
Moving to row 3 we notice that the previous elimination step removed all values below the 3rd column diagonal. Therefore all we do is divide row 3 by its diagonal element and move on.
 
.. math::
 
\left[\begin{array}{rrrr}
1 & 1 &  1 &  1\\
0 & 1 &  1 &  1\\
0 & 0 & \frac{-8}{(-8)} & \frac{-3}{(-8)}\\
0 & 0 &  0 &  2\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
0\\
-4/3\\
\frac{6}{(-8)}\\
-12\\
\end{array} \right] \;\; \Longrightarrow \;\;
\left[\begin{array}{rrrr}
1 & 1 & 1 & 1\\
0 & 1 & 1 & 1\\
0 & 0 & 1 & 3/8\\
0 & 0 & 0 & 2\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
0\\
-4/3\\
-3/4\\
-12\\
\end{array} \right]
 
Similarly, since row 4 is the final row of our matrix, there exist no elements under the diagonal to eliminate so we simply divide through by the row diagonal element and then move on to perform backwards substitution.
 
.. math::
 
\left[\begin{array}{rrrr}
1 & 1 & 1 & 1\\
0 & 1 & 1 & 1\\
0 & 0 & 1 & 3/8\\
0 & 0 & 0 & \frac{2}{(2)}\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
0\\
-4/3\\
-3/4\\
\frac{-12}{(2)}\\
\end{array} \right] \;\; \Longrightarrow \;\;
\left[\begin{array}{rrrr}
1 & 1 & 1 & 1\\
0 & 1 & 1 & 1\\
0 & 0 & 1 & 3/8\\
0 & 0 & 0 & 1\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
0\\
-4/3\\
-3/4\\
-6\\
\end{array} \right]
 
**Backward substitution**
 
Recall that the basic algorithm for the backwards substitution step of naive Gauss elimination is:
 
#. Eliminate all elements in the :math:`i\text{th}` column above the :math:`i\text{th}` diagonal element (i.e. in rows :math:`j < i`) by subtracting :math:`n` times the :math:`i\text{th}` row from the :math:`j\text{th}` row, where :math:`n` is the current value in the :math:`i\text{th}` column of the :math:`j\text{th}` row, i.e. element :math:`(j, i)`. Do not forget to subtract from the :math:`b` vector elements as well.
#. Move to the next row.
#. Repeat steps 1 - 2 until the A matrix becomes the identity matrix (i.e. you have eliminated all elements above the diagonal).
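The backward pass can likewise be sketched in NumPy (a minimal illustration, not part of the original solution, starting from the triangular system produced by the forward elimination above):

```python
import numpy as np

# Triangular system from the end of the forward elimination
A = np.array([[1.0, 1, 1, 1],
              [0,   1, 1, 1],
              [0,   0, 1, 3/8],
              [0,   0, 0, 1]])
b = np.array([0.0, -4/3, -3/4, -6])

N = len(b)
for i in range(N - 1, 0, -1):
    # Eliminate every element above the diagonal in column i
    for j in range(i - 1, -1, -1):
        n_ji = A[j, i]           # the multiplier n for row j
        A[j, :] -= n_ji * A[i, :]
        b[j] -= n_ji * b[i]

print(b)   # A is now the identity, so b holds the solution x
```

This reproduces :math:`x = [4/3,\; 19/6,\; 3/2,\; -6]`, the solution obtained by hand below.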
 
Therefore we start by subtracting 1 times row 4 from row 1, 1 times row 4 from row 2, and (3/8) times row 4 from row 3.
 
.. math::
 
\left[\begin{array}{rrrr}
1 & 1 & 1 & 1 - (1)*(1)\\
0 & 1 & 1 & 1 - (1)*(1)\\
0 & 0 & 1 & 3/8 - (3/8)*(1)\\
0 & 0 & 0 & 1\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
0 - (1)*(-6)\\
-4/3 - (1)*(-6)\\
-3/4 - (3/8)*(-6)\\
-6\\
\end{array} \right] \;\; \Longrightarrow \;\;
\left[\begin{array}{rrrr}
1 & 1 & 1 & 0\\
0 & 1 & 1 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
6\\
14/3\\
3/2\\
-6\\
\end{array} \right]
 
Moving to the next row, in order to eliminate all elements above the 3rd diagonal element we subtract 1 times row 3 from row 1 and 1 times row 3 from row 2.
 
.. math::
 
\left[\begin{array}{rrrr}
1 & 1 & 1 - (1)*(1) & 0\\
0 & 1 & 1 - (1)*(1) & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
6 - (1)*(3/2)\\
14/3 - (1)*(3/2)\\
3/2\\
-6\\
\end{array} \right] \;\; \Longrightarrow \;\;
\left[\begin{array}{rrrr}
1 & 1 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
9/2\\
19/6\\
3/2\\
-6\\
\end{array} \right]
 
Finally, we move to row 2 and eliminate all elements above the 2nd diagonal element by subtracting 1 times row 2 from row 1.
 
.. math::
 
\left[\begin{array}{rrrr}
1 & 1 - (1)*(1) & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
9/2 - (1)*(19/6)\\
19/6\\
3/2\\
-6\\
\end{array} \right] \;\; \Longrightarrow \;\;
\left[\begin{array}{rrrr}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
\end{array} \right] \left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
4/3\\
19/6\\
3/2\\
-6\\
\end{array} \right]
 
Therefore the solution, arrived at using naive Gauss elimination, is:
 
.. math::
 
\left[\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}\\
\end{array} \right] = \left[\begin{array}{r}
4/3\\
19/6\\
3/2\\
-6\\
\end{array} \right]
 
 
Question 2 [2]
================
 
#. Derive, by hand, the lower-triangular matrix :math:`L` (with ones on the diagonal), and upper triangular matrix :math:`U` so that :math:`A = LU`, using the matrix :math:`A` from question 1.  Afterwards, show that :math:`A = LU`.
 
#. Use MATLAB or Python's built-in functions to solve for :math:`L` and :math:`U`.  Show your code and show the matrices from the software.  Explain any disagreement.  See `the software tutorial on matrix operations <http://modelling3e4.connectmv.com/wiki/Software_tutorial/Matrix_operations>`_ for help.
 
#. Use **your** matrices for :math:`L` and :math:`U` from the first part of this question, and calculate the inverse of :math:`A`.  Compare your inverse with the inverse calculated by MATLAB or Python - does it agree with the software's values?  Also, does :math:`A A^{-1} = I` for your inverse?
 
Solution
---------
 
#.
The :math:`LU` decomposition of a square matrix :math:`A` is defined as a pair of lower triangular (i.e. no elements above the diagonal) and upper triangular (i.e. no elements below the diagonal) matrices that, when multiplied together, produce :math:`A`. In the simplest case, if :math:`A` develops no zero quantities along its diagonal during forward elimination (i.e. partial pivoting is not necessary), then matrices :math:`L` and :math:`U` exist for :math:`A` and are directly calculable from :math:`A`. More generally, an :math:`LU` decomposition exists for any square matrix :math:`A` if partial pivoting is allowed. In that case the decomposition has an associated permutation matrix :math:`P`, and one has really factored the permuted system :math:`PA=LU`, but the difference in how these decompositions are used in further calculations (for example, to calculate the inverse of a matrix) is minimal. To solve a linear system with the permuted factors, simply multiply the system by :math:`P`: from :math:`Ax = b` we obtain :math:`PAx = Pb`, i.e. :math:`LUx = Pb`, and the solution of the system is exactly the same. Finally, if one specifies that either the :math:`L` **or** :math:`U` matrix must have all ones along its diagonal (in this problem we specify that :math:`L` must have a unit diagonal), then the :math:`L` and :math:`U` pair is unique to :math:`A`.
 
So how do we derive the :math:`LU` decomposition of a matrix :math:`A`? The answer lies in applying elementary row operations. There are three classic elementary row operations in linear algebra:
 
* Row Switching:
 
Performed by left multiplying a matrix :math:`A` by an identity matrix that has had its rows :math:`i` and :math:`j` switched, where :math:`i` and :math:`j` are the rows to be switched in :math:`A`. For example, if :math:`A` were a 4x4 matrix and you wanted to switch rows 2 and 3, you would left multiply :math:`A` by the following "row switching" matrix :math:`E_{1}` (i.e. form the product :math:`E_{1}A`):
 
.. math::
E_{1} = \left[\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
\end{array}\right]
* Multiplication of a row by a constant:
 
This one is pretty straightforward. Multiplying a row by some constant :math:`c` multiplies every element of that row by :math:`c`, on an element by element basis. This may be represented in elementary matrix form by left multiplying :math:`A` by an elementary matrix :math:`E_{2}` that is an identity matrix except for the :math:`i\text{th}` diagonal element, which contains the value :math:`c` (where :math:`i` is the row we wish to multiply by :math:`c`). So, for example, if we had a 4x4 matrix :math:`A` and we wished to multiply row 4 by 5, then we would left multiply :math:`A` by the following elementary matrix :math:`E_{2}`:
 
.. math::
E_{2} = \left[\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 5 \\
\end{array}\right]
 
* Addition of a multiple of one row to another:
 
Many will recognize this as the essential operation in solving a linear system of equations. The addition of one row :math:`i`, multiplied by some constant :math:`c`, to another row :math:`j` may be represented in elementary matrix form by an identity matrix with an off-diagonal element equal to :math:`c` in the :math:`i\text{th}` column and the :math:`j\text{th}` row. So if, for example, we wanted to add 3 times row 3 to row 1, we would left multiply :math:`A` by the following elementary matrix :math:`E_{3}` (go ahead, check for yourself, everything works out):
 
.. math::
E_{3} = \left[\begin{array}{cccc}
1 & 0 & 3 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{array}\right]
 
The final pieces of information we require before we may derive the :math:`LU` decomposition of :math:`A` are given below (derivation not provided):
 
* The inverse of :math:`E_{1}` is equal to :math:`E_{1}^{T}`
* The inverse of :math:`E_{2}` is equal to a matrix :math:`E_{2'}` whose diagonal elements are equal to the inverse of those in :math:`E_{2}` (i.e. 1 over the elements in :math:`E_{2}`)
* The inverse of :math:`E_{3}` is equal to :math:`E_{3}` with all its off-diagonal elements multiplied by (-1)
* A lower triangular matrix multiplied by a lower triangular matrix is also lower triangular (the same holds for upper triangular matrices)
* The inverse of a lower triangular matrix is also lower triangular
 
Now that we have the required background information we may derive the :math:`LU` decomposition of :math:`A`. Note that this derivation deals with the simplified case that no intermediate partial pivoting is required during the derivation.
 
So let us start by asking a simple question: if one wanted to derive an upper triangular matrix from :math:`A`, how would one do it? The obvious answer is to perform forward elimination on the matrix until no elements below the diagonal remain. But how would this look in terms of elementary row operations? Essentially we would be repeatedly left multiplying by elementary matrices of form :math:`E_{3}`. Furthermore, since we are only concerned with eliminating elements below the diagonal, every elementary matrix applied adds a multiple of some row :math:`i` to some row :math:`j` where :math:`j > i`. Recall that the off-diagonal element of each such :math:`E_{3}` then sits in row :math:`j`, column :math:`i`, i.e. **below** the diagonal, so every elimination matrix is lower triangular with a unit diagonal. Writing the full elimination as :math:`E_{k} \cdots E_{1} A = U`, we have :math:`A = E_{1}^{-1} \cdots E_{k}^{-1} U = LU`, where :math:`L = E_{1}^{-1} \cdots E_{k}^{-1}` is lower triangular with a unit diagonal (each inverse is lower triangular with ones on the diagonal, and products of lower triangular matrices remain lower triangular). In fact, the below-diagonal entries of :math:`L` are simply the multipliers used during forward elimination.
 
Using the forward elimination steps from Question 1 to obtain :math:`U`, we then only need to collect the elimination multipliers to form the :math:`L` matrix.
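For reference, carrying the elimination multipliers through (without the row normalizations used in Question 1's steps) gives the factors below. This worked result is our own addition; you can verify by direct multiplication that :math:`LU = A`.

.. math::
L = \left[ \begin{array}{rrrr}
1 &  0 & 0 & 0\\
1 &  1 & 0 & 0\\
1 & -1 & 1 & 0\\
1 &  2 & 0 & 1\\
\end{array} \right], \qquad
U = \left[ \begin{array}{rrrr}
1 &  1 &  1 &  1\\
0 & -3 & -3 & -3\\
0 &  0 & -8 & -3\\
0 &  0 &  0 &  2\\
\end{array} \right]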
 
MATLAB's and Python's built-in LU routines will **not** return the same :math:`L` and :math:`U` as derived by hand: they use partial pivoting, and so return factors of the permuted system :math:`PA = LU`.
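A minimal Python sketch of the built-in factorization (this uses ``scipy.linalg.lu``; the variable names are ours):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[1.0,  1,  1,  1],
              [1,   -2, -2, -2],
              [1,    4, -4,  1],
              [1,   -5, -5, -3]])

# SciPy returns a permuted factorization: A = P @ L @ U
P, L, U = lu(A)
print(L)
print(U)
print(np.allclose(A, P @ L @ U))   # True
```

Because of the partial pivoting, the returned :math:`L` and :math:`U` generally differ from the hand-derived pair, but the product :math:`PLU` reproduces :math:`A` exactly.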
 
 
Question 3 [4]
================
 
.. note:: This is an example of the type of question you will get in the take-home midterm.
 
An impure gas stream can be cleaned, by absorbing the impurity into the liquid phase of an appropriate liquid solvent.  For example, carbon dioxide can be removed from flue gas by scrubbing it with monoethanolamine; or NO\ :sub:`2` can be absorbed into water to produce nitric acid.
 
In this question we will derive the model equations for a sequence of absorber stages (trays), in which a component in the gas phase is absorbed into the liquid phase.
 
First consider the :math:`n^\text{th}` stage in an absorber:
 
.. figure:: images/absorber-tray.png
:alt: images/math
:scale: 16
:align: center
 
where the molar gas flow is :math:`G` and the liquid molar flow is :math:`L`, and assume these are constant throughout the column.  Let the number of moles held-up in the liquid phase of the tray be :math:`H_n`, the hold-up. And let's only consider a binary system, where the species we are interested in is called A, and the absorbing liquid is species B.

The mol **fraction** of species A, in the gas phase, leaving the tray is :math:`y_{A,n}`, and we also have :math:`y_{B,n}`, the mol fraction of B leaving:  this gives of course :math:`y_{A,n} + y_{B,n} = 1`.  But since we are interested only in species A in this question, and because we can always calculate B by subtraction from 1.0, we will drop the A and B subscripts, and only write -- for example -- :math:`y_n` to denote the gas mol fraction of A leaving the :math:`n^\text{th}` stage of the absorber.

Similarly, the mol fraction of A, in the liquid phase, leaving the stage is :math:`x_n`.  And we will denote the liquid mol fraction of A entering as :math:`x_{n+1}`, while the gas-phase mol fraction of A entering is defined as :math:`y_{n-1}`.

Using Raoult's Law, and assuming fairly ideal conditions, and that the liquid and vapour phase are well-mixed and in equilibrium, we may relate the concentrations in the gas and liquid phase as:

.. math::
y_n P_\text{total} &= x_n P_\text{A} \gamma_A \\
              y_n  &= \beta x_n
 
#. Write down a **dynamic** mass balance for species A, in terms of the nomenclature above, for this :math:`n^\text{th}` stage in the absorber tower.
#. Now write down the **steady-state** mass-balance equation for the above, but only in terms of liquid-phase mol fractions.  Also divide the steady-state tray balance equations through by :math:`\beta G` and simplify them using the term :math:`\delta = \displaystyle \frac{L}{\beta G}`
#. Now repeat this previous step, and write the steady-state mass-balance equations for the five-stage system shown here.
 
.. figure:: images/absorber-column.png
:scale: 25
:align: center
#. Rewrite these mass-balance equations in matrix form, :math:`Ax = b`, where:
 
.. math::
x = \left[\begin{array}{c} y_F \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_F \\ y_5 \end{array}\right]
#. Write a function that will generate the :math:`A` matrix and :math:`b` vector, given the variables in the system.  Use this function and solve the system of equations for the two cases below:
 
======= ========= ========= ============= =========== ===========
Case    :math:`G` :math:`L` :math:`\beta` :math:`x_F` :math:`y_F`
======= ========= ========= ============= =========== ===========
Case I  100       150       0.6           0.05        0.35
------- --------- --------- ------------- ----------- -----------
Case II 100       50        0.6           0.05        0.35
======= ========= ========= ============= =========== ===========
 
In each case, we are obviously interested in the liquid-phase concentration profile along the column, :math:`x_F` down to :math:`x_1`.  Report these values and explain whether your answers make physical sense, or not, for the two cases.
 
Solution
---------
 
#. The dynamic mass balance for the :math:`n^\text{th}` tray:
 
.. math::
H \frac{dx_n}{dt} &= L x_{n+1} + Gy_{n-1} - Lx_n - Gy_n \\
H \frac{dx_n}{dt} &= L(x_{n+1} - x_n) + G(y_{n-1} - y_n)
#. At steady state we have :math:`\displaystyle \frac{dx_n}{dt} = 0`.  Further, using that :math:`y_n = \beta x_n`, and the definition of :math:`\delta = \displaystyle \frac{L}{G\beta}`, we can simplify the previous equation to:
 
.. math::
0 &= L(x_{n+1} - x_n) + G \beta (x_{n-1} - x_n) \\
0 &= x_{n-1} - \delta x_n - x_n + \delta x_{n+1} \\
0 &= x_{n-1} - (\delta + 1) x_n + \delta x_{n+1}
#. For the system with 5 trays, we have:
 
#. :math:`0 = \displaystyle \frac{y_F}{\beta} - (\delta + 1)x_1 + \delta x_2`
#. :math:`0 = x_1 - (\delta + 1)x_2 + \delta x_3`
#. :math:`0 = x_2 - (\delta + 1)x_3 + \delta x_4`
#. :math:`0 = x_3 - (\delta + 1)x_4 + \delta x_5`
#. :math:`0 = x_4 - (\delta + 1)x_5 + \delta x_F`
#. We require a square matrix for :math:`A`, so we add 3 other equations to the above 5 tray balances. These three equations correspond to the known fractions of :math:`A` in the inlet gas and liquid streams, as well as the equation that relates :math:`y_5` to :math:`x_5`.
 
 
.. math::
Ax &= b \\
\left(
\begin{array}{cccccccc}
1                            & 0  & 0 & 0 & 0 & 0 & 0 & 0 \\
\displaystyle \frac{1}{\beta} & -(\delta + 1) & \delta & 0 & 0 & 0 & 0 & 0 \\
0               & 1 & -(\delta + 1) & \delta & 0 & 0 & 0 & 0 \\
0               & 0 & 1 & -(\delta + 1) & \delta & 0 & 0 & 0 \\
0               & 0 & 0 & 1 & -(\delta + 1) & \delta & 0 & 0 \\
0               & 0 & 0 & 0 & 1 & -(\delta + 1) & \delta& 0 \\
0               & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0               & 0 & 0 & 0 & 0 & -\beta & 0 & 1 \\
\end{array}
\right)
\left(
\begin{array}{c}
y_F \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_F \\ y_5
\end{array}
\right)
&=
\left(
\begin{array}{c}
y_F \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ x_F \\ 0
\end{array}
\right)
 
**Note**: Assuming :math:`y_5=0` (complete separation), instead of making the equilibrium assumption :math:`y_5-\beta x_5=0`, changes the last row of the above matrix to have a zero element in place of :math:`-\beta`. This choice only affects the computed value of :math:`y_5` itself, because :math:`y_5` does not appear in any other equation. **Both solutions will be accepted**.
#. The code below defines functions for the :math:`A` matrix and the :math:`b` vector, and then solves the system for the two cases. The liquid concentration profiles through the column are obtained as follows:
 
Case I (:math:`L=150`):  Impurity [%] in the liquid stream, top to bottom: 5.00, 5.33, 6.15, 8.21, 13.3, 26.2; :math:`y_5 = 3.20`
 
Case II (:math:`L=50`): Impurity [%] in the liquid stream, top to bottom: 5.00, 18.4, 29.5, 38.8, 46.5, 53.0; :math:`y_5 = 11.0`
 
**Discussion**: The only difference between Case I and case II is the amount of the liquid solvent :math:`L` used in the absorber.  One expects that if a lower flow rate of solvent is used, yet the hold-up on each tray is the same, that the concentrations of the impurity in the liquid phase will be higher on each tray.  This is indeed shown in the results.  However, the outlet concentration of impurity in the gas phase is going to be higher (less impurity is removed) at the lower liquid flow rate, since absorption of the impurity into the liquid phase is inversely proportional to the impurity already in the liquid phase (due to concentration gradients).
 
You can verify for yourself that as the liquid flow rate is increased, keeping the gas flow constant, that the limiting outlet concentration in the gas phase is 3.0%, the amount that is in equilibrium with an "infinite" liquid composition of 5%.  Also, the outlet liquid-phase concentration tends to 5% across all trays as :math:`L \rightarrow \infty`.
 
.. twocolumncode::
    :code1: ../che3e4/Assignments/Assignment-2/code/absorber_question.m
    :language1: matlab
    :header1: MATLAB code
    :code2: ../che3e4/Assignments/Assignment-2/code/absorber-question.py
    :language2: python
    :header2: Python code
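For readers without access to the linked files, here is a self-contained sketch of the same computation (the function name ``absorber_system`` and the unknown ordering :math:`[y_F, x_1, \ldots, x_5, x_F, y_5]` follow the matrix shown above):

```python
import numpy as np

def absorber_system(G, L, beta, xF, yF):
    """Build A and b for the 5-tray absorber; unknowns are
    ordered as [y_F, x1, x2, x3, x4, x5, x_F, y5]."""
    d = L / (beta * G)                 # delta = L/(beta*G)
    A = np.zeros((8, 8))
    b = np.zeros(8)
    A[0, 0] = 1.0                      # known inlet gas fraction
    b[0] = yF
    A[1, 0] = 1.0 / beta               # tray 1 balance
    A[1, 1] = -(d + 1.0)
    A[1, 2] = d
    for k in range(2, 6):              # tray k balances, k = 2..5
        A[k, k - 1] = 1.0
        A[k, k] = -(d + 1.0)
        A[k, k + 1] = d
    A[6, 6] = 1.0                      # known inlet liquid fraction
    b[6] = xF
    A[7, 5] = -beta                    # equilibrium: y5 = beta*x5
    A[7, 7] = 1.0
    return A, b

for Lflow in (150.0, 50.0):
    A, b = absorber_system(G=100.0, L=Lflow, beta=0.6, xF=0.05, yF=0.35)
    x = np.linalg.solve(A, b)
    print(Lflow, np.round(x, 4))
```

Solving both cases reproduces the concentration profiles reported above, with the higher liquid rate (Case I) giving the lower liquid-phase impurity on each tray.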
 
Question 4 [3]
===============
 
Write a MATLAB or Python function that implements the Jacobi (simultaneous update) method for solving a general system of equations, :math:`Ax = b`.  The function should accept the following inputs:
 
* :math:`A`: any :math:`N \times N` matrix.
* :math:`b`: any :math:`N \times 1` vector.
* :math:`x_0`: any :math:`N \times 1` vector that is the initial guess for :math:`x`.
* :math:`\text{tol}`: a small number, so that we will stop the algorithm when the relative difference between successive iterations is small: i.e. stop when :math:`\displaystyle \frac{||x^{(k)} - x^{(k-1)}|| }{||x^{(k)}||} < \text{tol}`, where the norms are assumed to be the Euclidean norm.
* :math:`\text{maxiter}`: an integer, for example, 200, representing an upper bound on the number of iterations we are willing to perform.
 
The function does not have to perform pivoting to achieve diagonal dominance.  The function should calculate and print out, at every iteration, the relative difference between the :math:`k^\text{th}` and :math:`k-1^\text{th}` iteration.
 
Consider this system of equations:
 
.. math::
 
y_F &= 0.5\\
G/L y_F - (\delta+1)x_1 + x_2 &= 0 \\
\delta x_1  - (\delta+1)x_2 + x_3 &= 0 \\
\delta x_2  - (\delta+1)x_3 + x_F &= 0 \\
x_F &= 0.1
where :math:`G/L = 1.6`, and :math:`\delta = 0.8`
 
Your answer should include:
 
#. A print out of the function
#. The relative difference between the estimates for the first 5 iterations of solving the system of **five** equations, start from :math:`y^{(0)}_F = x^{(0)}_1 = x^{(0)}_2 = x^{(0)}_3 = x^{(0)}_F = 0`.
#. The number of iterations required, and the final solution, using a termination tolerance of :math:`\epsilon = 1 \times 10^{-9}`.
#. What is the condition number of :math:`A`?
 
Solution
------------
 
Given :math:`Ax = b`, the Jacobi method can be derived as shown in class; an alternative derivation, given here, leads to a slightly cleaner implementation.  Let :math:`A = D + R`, where :math:`D` is a diagonal matrix containing the diagonal elements of :math:`A` and :math:`R` contains the off-diagonal elements. Then :math:`x^{(k+1)}=D^{-1}\left(b-Rx^{(k)}\right)`. This procedure is generically implemented as shown in the MATLAB code below, and gives exactly the same solution as the Python code (the Python code implements the version shown in the course notes).
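The :math:`D + R` splitting leads to a very compact implementation. The sketch below is our own minimal version applied to the five-equation system of this question (with :math:`G/L = 1.6` and :math:`\delta = 0.8`), not the assignment's submitted code:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-9, maxiter=200):
    """Jacobi iteration using the splitting A = D + R."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(A)                 # diagonal entries of A
    R = A - np.diag(D)             # off-diagonal part of A
    x = np.asarray(x0, dtype=float)
    for k in range(maxiter):
        x_new = (b - R @ x) / D    # simultaneous update
        rel = np.linalg.norm(x_new - x) / np.linalg.norm(x_new)
        print(k, x_new, rel)       # relative difference each iteration
        x = x_new
        if rel < tol:
            break
    return x, k + 1

A = np.array([[1.0, 0,    0,    0,    0],
              [1.6, -1.8, 1,    0,    0],
              [0,   0.8,  -1.8, 1,    0],
              [0,   0,    0.8,  -1.8, 1],
              [0,   0,    0,    0,    1.0]])
b = np.array([0.5, 0, 0, 0, 0.1])
x, iters = jacobi(A, b, np.zeros(5))
```

The iterates converge to :math:`x=[0.5, 0.695, 0.451, 0.256, 0.1]`, matching the solution reported below.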
 
Note that the termination criterion is either exceeding the maximum number of iterations **or** achieving the desired tolerance.  Now for the given system of 5 equations, we simply call the function we have written, providing the input values given in the problem:
 
.. twocolumncode::
    :code1: ../che3e4/Assignments/Assignment-2/code/jacobi.m
    :language1: matlab
    :header1: MATLAB code
    :code2: ../che3e4/Assignments/Assignment-2/code/jacobi.py
    :language2: python
    :header2: Python code
 
The relative difference :math:`\displaystyle \frac{||x^{(k)} - x^{(k-1)}|| }{||x^{(k)}||}` for the first five iterations is given as::
 
            X-vector updates                                    Tolerance
    Iter 0 : [ 0.5,  0.        , -0.        , -0.        ,  0.1], 1.0
    Iter 1 : [ 0.5,  0.44444444, -0.        ,  0.05555556,  0.1], 0.65995
    Iter 2 : [ 0.5,  0.44444444,  0.22839506,  0.05555556,  0.1], 0.31895
    Iter 3 : [ 0.5,  0.57133059,  0.22839506,  0.15706447,  0.1], 0.19952
    Iter 4 : [ 0.5,  0.57133059,  0.34118275,  0.15706447,  0.1], 0.13224
 
It is seen that the relative difference decreases with each iteration.  A total of 58 iterations is required to achieve a relative error of :math:`8.604 \times 10^{-10}`, below the required :math:`1 \times 10^{-9}`.
 
The solution at the 58th iteration is :math:`x=[0.500000, 0.695122, 0.451220, 0.256098, 0.100000]`, which agrees with the solution found using built-in functions.
 
The condition number of :math:`A` can be obtained by computing :math:`||A||\,||A^{-1}||`, or using MATLAB's built-in function ``cond(A)``. The result is :math:`||A||\,||A^{-1}||=8.7259`; note that this is the 2-norm condition number.
 
Question 5 [2]
=================
 
.. note:: From a previous exam, 5 marks out of 50, 3 hours.  This question should be solved by hand, and is a typical question you can expect in the midterm and final.
 
For the following system of linear algebraic equations:
 
.. math::
-5x_1 + 0x_2 + 12x_3 =&\ 80 \\
4x_1 -x_2 - x_3 =&\ -2 \\
6x_1 + 8x_2  +0x_3 =&\ 45
 
#. Use the Jacobi method to compute :math:`x^{(1)}` starting from :math:`x^{(0)} = \bf 0`.
#. Use the Gauss-Seidel method to compute :math:`x^{(1)}` starting from :math:`x^{(0)} = \bf 0`.
#. Use the Gauss-Seidel method with relaxation, :math:`\omega=0.9`, to compute :math:`x^{(1)}` starting from :math:`x^{(0)} = \bf 0`.
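Although this question is to be solved by hand, the three one-step updates are easy to check numerically. The sketch below is our own check, and it assumes the equations are first reordered so that every diagonal element is nonzero (otherwise the update for :math:`x_3` divides by zero):

```python
import numpy as np

# Rows reordered so each equation has a nonzero diagonal entry:
#   4x1 -  x2 -  x3 = -2
#   6x1 + 8x2       = 45
#  -5x1       + 12x3 = 80
A = np.array([[4.0, -1, -1],
              [6,    8,  0],
              [-5,   0, 12]])
b = np.array([-2.0, 45, 80])
x0 = np.zeros(3)

# One Jacobi step: every component uses the old iterate
x_j = (b - (A - np.diag(np.diag(A))) @ x0) / np.diag(A)

# One Gauss-Seidel step: use updated components as soon as available
x_gs = x0.copy()
for i in range(3):
    x_gs[i] = (b[i] - A[i, :i] @ x_gs[:i] - A[i, i+1:] @ x_gs[i+1:]) / A[i, i]

# One relaxed Gauss-Seidel step, omega = 0.9
omega = 0.9
x_r = x0.copy()
for i in range(3):
    gs = (b[i] - A[i, :i] @ x_r[:i] - A[i, i+1:] @ x_r[i+1:]) / A[i, i]
    x_r[i] = omega * gs + (1 - omega) * x_r[i]

print(x_j, x_gs, x_r)
```

Under this reordering, one step from :math:`x^{(0)} = \bf 0` gives :math:`x^{(1)} \approx [-0.5, 5.625, 6.667]` for Jacobi, :math:`[-0.5, 6.0, 6.458]` for Gauss-Seidel, and :math:`[-0.45, 5.366, 5.831]` with relaxation.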
 
Bonus question [1]
===================
 
Use the pseudo code developed in the course notes to write a MATLAB or Python function that implements Gauss elimination, without pivoting.  The function should take :math:`A` and :math:`b` as inputs, and return vector :math:`x`.
 
Use your function to solve the set of equations from question 1, and print the updated :math:`A` matrix and :math:`b` vector after every pivot step.
 
 
Solution
-----------
 
A function that implements the Gauss elimination without pivoting is provided below.
 
.. twocolumncode::
    :code1: ../che3e4/Assignments/Assignment-2/code/guass.m
    :language1: matlab
    :header1: MATLAB code
    :code2: ../che3e4/Assignments/Assignment-2/code/guass.py
    :language2: python
    :header2: Python code
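For readers without access to the linked files, a minimal self-contained sketch of such a function (our own version, following the structure of the pseudo code in the course notes) is:

```python
import numpy as np

def gauss(A, b):
    """Naive Gauss elimination (no pivoting), then back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    # Forward elimination
    for i in range(n - 1):
        for j in range(i + 1, n):
            m = A[j, i] / A[i, i]
            A[j, :] -= m * A[i, :]
            b[j] -= m * b[i]
        print(np.column_stack((A, b)))   # C = [A  b] after this pivot step
    # Back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

x = gauss([[1, 1, 1, 1],
           [1, -2, -2, -2],
           [1, 4, -4, 1],
           [1, -5, -5, -3]],
          [0, 4, 2, -4])
print(x)
```

The printed :math:`C` matrices match the :math:`C^1`, :math:`C^2`, and :math:`C^3` updates shown below, and the returned solution agrees with Question 1.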
 
Now we use this function to solve the system of equations in Question 1. The commented lines in the MATLAB code show how the function is used. The result is :math:`x = [1.3333, 3.1667, 1.5000, -6.0000]`.
Also, the :math:`A` and :math:`b` (denoted as :math:`C=[A\quad b]` below) are updated as:
 
.. math::
C^1=
\left[ \begin{array}{ccccc}
      1  &  1  &  1  &  1  &  0\\
    0  &  -3  &  -3  &  -3  &  4\\
    0  &  3  &  -5  &  0  &  2\\
      0  &  -6 &  -6  &  -4  &  -4
\end{array} \right]
 
.. math::
C^2=
\left[ \begin{array}{ccccc}
    1  &  1  &  1  &  1 &  0\\
    0  &  -3  &  -3  &  -3 &  4\\
    0  &  0  &  -8  &  -3 &  6\\
    0  &  0  &  0  &  2 &  -12
\end{array} \right]
 
.. math::
C^3=
\left[ \begin{array}{ccccc}
      1  &  1  &  1  &  1  &  0\\
      0  &  -3  &  -3  &  -3  &  4\\
    0  &  0  &  -8  &  -3  &  6\\
    0  &  0  &  0  &  2  & -12
\end{array} \right]
 
.. raw:: latex
 
\vspace{0.5cm}
\hrule
\begin{center}END\end{center}
</rst>
