Modelling and scientific computing

==Process modelling slides==

Please download the lecture slides:
* 13 September 2010 (slides 1 to 8)
* 15 September 2010 (slides 9 to 15)
* 16 September 2010 (slides 16 to 19)
* 20 September 2010 (slides 20 to the end)


==Approximation and computer representation==

Please download the lecture slides: 22 and 23 September (updated).

'''Calculating relative error'''

<syntaxhighlight lang="python">
import numpy as np

y = 13.0
n = 3                            # number of significant figures
rel_error = 0.5 * 10 ** (2-n)    # relative error calculation

x = y / 2.0
x_prev = 0.0
iter = 0
while abs(x - x_prev)/x > rel_error:
    x_prev = x
    x = (x + y/x) / 2.0
    print(abs(x - x_prev)/x)
    iter += 1

print('Used %d iterations to calculate sqrt(%f) = %.20f; '
      'true value = %.20f\n ' % (iter, y, x, np.sqrt(y)))
</syntaxhighlight>
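For reference, the loop above is Newton's (Heron's) method for the square root: each pass replaces the estimate by \( x_{k+1} = \tfrac{1}{2}\left(x_k + \tfrac{y}{x_k}\right) \), and the iteration stops once the approximate relative error \( \left|\tfrac{x_{k+1} - x_k}{x_{k+1}}\right| \) falls below the stopping criterion \( 0.5 \times 10^{2-n} \) computed at the top of the code for \( n \) significant figures.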


'''Working with integers'''

<syntaxhighlight lang="python">
import numpy as np

# 32767 is the largest value a signed 16-bit integer can hold;
# the next two calls overflow that range
print(np.int16(32767))
print(np.int16(32767+1))
print(np.int16(32767+2))

# Smallest and largest 16-bit integer
print(np.iinfo(np.int16).min, np.iinfo(np.int16).max)

# Smallest and largest 32-bit integer
print(np.iinfo(np.int32).min, np.iinfo(np.int32).max)
</syntaxhighlight>
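The printed limits follow from the two's-complement representation: an \( N \)-bit signed integer stores values from \( -2^{N-1} \) to \( 2^{N-1}-1 \), i.e. \(-32768\) to \(32767\) for 16 bits and \(-2147483648\) to \(2147483647\) for 32 bits.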

'''Working with floats'''

<syntaxhighlight lang="python">
import numpy as np

help(np.finfo)      # Read what the np.finfo function does

f = np.float32      # single precision, 32-bit float, 4 bytes
f = np.float64      # double precision, 64-bit float, 8 bytes

print('machine precision = eps = %.10g' % np.finfo(f).eps)
print('number of exponent bits = %.10g' % np.finfo(f).iexp)
print('number of significand bits = %.10g' % np.finfo(f).nmant)
print('smallest floating point value = %.10g' % np.finfo(f).min)
print('largest floating point value = %.10g' % np.finfo(f).max)

# Approximate number of decimal digits to which this kind
# of float is precise.
print('decimal precision = %.10g' % np.finfo(f).precision)
</syntaxhighlight>
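As a quick check of what ''machine precision'' means (a small illustrative sketch, not part of the lecture code): eps is the spacing between 1.0 and the next larger representable double-precision value, so adding anything smaller than about half of eps to 1.0 is rounded away.

<syntaxhighlight lang="python">
import numpy as np

eps = np.finfo(np.float64).eps

# eps is exactly the gap between 1.0 and the next representable double
print(np.nextafter(1.0, 2.0) - 1.0 == eps)   # True

print(1.0 + eps == 1.0)        # False: eps is large enough to change 1.0
print(1.0 + eps/2.0 == 1.0)    # True: half of eps is rounded away
</syntaxhighlight>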

'''Special numbers'''

<syntaxhighlight lang="python">
import numpy as np

# Infinities
print(np.inf, -np.inf)
print(np.float64(1E400))    # inf: the number exceeds the maximum value that
                            # is possible with a 64-bit float (overflow)
print(np.inf * -4.0)        # -inf
print(np.divide(2.4, 0.0))  # inf

# NaN's
a = np.float64(-2.3)
print(np.sqrt(a))           # nan
print(np.log(a))            # nan

# Negative zeros
a = np.float64(0.0)
b = np.float64(-4.0)
c = a/b
print(c)                    # -0.0
print(c * c)                # 0.0

eps = np.finfo(np.float64).eps
e = eps/3.0    # create a number smaller than machine precision
# Question: why can we create a number smaller than eps?
print(e)

# Interesting property: addition is no longer associative when
# working with values smaller than eps. Why?
# The printout here should be "True", but it prints "False"
print((1.0 + (e + e)) == (1.0 + e + e))
</syntaxhighlight>
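A short explanation of that last result: with \( e = \mathrm{eps}/3 \), the grouped sum \( e + e = 2\,\mathrm{eps}/3 \) is larger than \( \mathrm{eps}/2 \), so \( 1.0 + (e + e) \) rounds up to the next representable number, \( 1 + \mathrm{eps} \). Evaluated left to right, however, \( 1.0 + e \) rounds back down to \( 1.0 \) (because \( e < \mathrm{eps}/2 \)), and adding \( e \) again still gives \( 1.0 \), so the two expressions compare unequal.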

==Practice questions==

# From the Hangos and Cameron reference (available here; accessible from McMaster computers only):
#* Work through example 2.4.1 on page 33
#* Exercise A 2.1 and A 2.2 on page 37
#* Exercise A 2.4: which controlling mechanisms would you consider?
# Homework problem, similar to the case presented on slide 18, except:
#* Use two inlet streams \(\sf F_1\) and \(\sf F_2\), and assume they are volumetric flow rates
#* An irreversible reaction occurs, \(\sf A + 3B \stackrel{r}{\rightarrow} 2C\)
#* The reaction rate for A is \(\sf -r_A = kC_\text{A} C_\text{B}^3\)
## Derive the time-varying component mass balance for species B.
##* \( V\frac{dC_B}{dt} = F^{\rm in}_1 C^{\rm in}_{\sf B,1} + F^{\rm in}_2 C^{\rm in}_{\sf B,2} - F^{\rm out} C_{\sf B} + 0 - 3 kC_{\sf A} C_{\sf B}^3 V \)
## What is the steady state value of \(\sf C_B\)? Can it be calculated without knowing the steady state value of \(\sf C_A\)?
##* \( 0 = F^{\rm in}_1 C^{\rm in}_{\sf B,1} + F^{\rm in}_2 C^{\rm in}_{\sf B,2} - F^{\rm out} \overline{C}_{\sf B} - 3 k \overline{C}_{\sf A} \overline{C}^3_{\sf B} V \); we require the steady state value of \(C_{\sf A}\), denoted \(\overline{C}_{\sf A}\), to calculate \(\overline{C}_{\sf B}\) (a small numerical sketch of solving the two steady-state balances together follows below).
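To make that last point concrete, here is a small numerical sketch. The flow rates, volume, rate constant and inlet concentrations below are made-up illustration values (they are not from the course notes), the outlet flow is assumed equal to the total inlet flow (constant volume), and the steady-state balance for species A is written by the same reasoning as the one for B. Solving the two balances simultaneously yields both \(\overline{C}_{\sf A}\) and \(\overline{C}_{\sf B}\).

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import fsolve

# Hypothetical values, chosen only for illustration
F1, F2 = 0.5, 0.3        # inlet volumetric flow rates
Fout = F1 + F2           # outlet flow (constant reactor volume assumed)
V = 2.0                  # reactor volume
k = 0.1                  # rate constant
CA1, CA2 = 1.0, 0.8      # inlet concentrations of A in streams 1 and 2
CB1, CB2 = 3.0, 2.5      # inlet concentrations of B in streams 1 and 2

def steady_state(c):
    CA, CB = c
    # Component balances at steady state (accumulation = 0)
    balance_A = F1*CA1 + F2*CA2 - Fout*CA - k*CA*CB**3 * V
    balance_B = F1*CB1 + F2*CB2 - Fout*CB - 3*k*CA*CB**3 * V
    return [balance_A, balance_B]

CA_ss, CB_ss = fsolve(steady_state, x0=[0.5, 1.0])
print('Steady state: CA = %.4f, CB = %.4f' % (CA_ss, CB_ss))
</syntaxhighlight>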

More exercises available here