Modelling and scientific computing
Process modelling slides
Please download the lecture slides. 13 September 2010 (slides 1 to 8)
15 September 2010 (slides 9 to 15)
16 September 2010 (slides 16 to 19)
20 September 2010 (slides 20 to the end)
Approximation and computer representation
Please download the lecture slides. 22 and 23 September (updated)
- More about the Ariane 5 rocket 16-bit overflow: http://www.around.com/ariane.html (a short sketch of this failure mode follows the Working with integers code below)
- Python code used in class for ...
Calculating relative error
import numpy as np

# Heron's (Newton's) method for computing sqrt(y), stopping once the
# approximate relative error falls below a tolerance based on n
# significant figures.
y = 13.0
n = 3                            # number of significant figures
rel_error = 0.5 * 10 ** (2 - n)  # stopping tolerance
x = y / 2.0                      # initial guess
x_prev = 0.0
n_iter = 0
while abs(x - x_prev) / x > rel_error:
    x_prev = x
    x = (x + y / x) / 2.0        # Newton update for f(x) = x**2 - y
    print(abs(x - x_prev) / x)   # approximate relative error at this iteration
    n_iter += 1
print('Used %d iterations to calculate sqrt(%f) = %.20f; '
      'true value = %.20f' % (n_iter, y, x, np.sqrt(y)))
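
As a variation on the code above, here is a minimal sketch (not from the lecture notes) that wraps the same iteration in a reusable function; the name heron_sqrt and the argument n_sig are choices made here for illustration.

import numpy as np

def heron_sqrt(y, n_sig=3):
    # Approximate sqrt(y) to roughly n_sig significant figures using
    # the same Heron/Newton iteration and stopping rule as above.
    tol = 0.5 * 10 ** (2 - n_sig)
    x, x_prev = y / 2.0, 0.0
    while abs(x - x_prev) / x > tol:
        x_prev = x
        x = (x + y / x) / 2.0
    return x

for y in (13.0, 2.0, 1e6):
    print(y, heron_sqrt(y), np.sqrt(y))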
Working with integers
import numpy as np
# 32767 is the largest value a signed 16-bit integer can hold.
print(np.int16(32767))
# Adding to it overflows and wraps around to negative values
# (NumPy may warn about the overflow but still returns a result).
print(np.int16(32767) + np.int16(1))   # -32768
print(np.int16(32767) + np.int16(2))   # -32767
# Smallest and largest 16-bit integer
print(np.iinfo(np.int16).min, np.iinfo(np.int16).max)
# Smallest and largest 32-bit integer
print(np.iinfo(np.int32).min, np.iinfo(np.int32).max)
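
The Ariane 5 link in the bullet list above describes a 64-bit floating-point value being converted into a 16-bit signed integer that could not hold it. Below is a minimal sketch of that failure mode; the value 40000.0 is an arbitrary number chosen here for illustration, not a figure from the actual flight.

import numpy as np

# A 64-bit float larger than 32767 cannot be represented as a signed
# 16-bit integer; the converted result is meaningless.
velocity = np.float64(40000.0)
converted = np.int16(velocity)      # out of range for int16
print(velocity, '->', converted)    # garbage, not 40000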
Working with floats
import numpy as np
help(np.finfo) # Read what the np.finfo function does
# Pick one precision to inspect; the last assignment is the one used below.
f = np.float32 # single precision, 32-bit float, 4 bytes
f = np.float64 # double precision, 64-bit float, 8 bytes
print('machine precision = eps = %.10g' % np.finfo(f).eps)
print('number of exponent bits = %.10g' % np.finfo(f).iexp)
print('number of significand bits = %.10g' % np.finfo(f).nmant)
print('smallest floating point value = %.10g' % np.finfo(f).min)
print('largest floating point value = %.10g' % np.finfo(f).max)
# Approximate number of decimal digits to which this kind
# of float is precise.
print('decimal precision = %.10g' % np.finfo(f).precision)
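
A short additional sketch, not from the slides, connecting eps to the significand size reported above: for 64-bit floats eps equals 2 raised to the power -nmant, and anything smaller than about half of eps is lost when added to 1.0.

import numpy as np

# eps for float64 is 2**(-nmant), i.e. 2**-52: the gap between 1.0
# and the next representable double.
eps = np.finfo(np.float64).eps
print(eps, 2.0 ** (-np.finfo(np.float64).nmant), eps == 2.0 ** -52)

# Adding eps to 1.0 gives a different number; adding half of eps rounds away.
print(1.0 + eps == 1.0)        # False
print(1.0 + eps / 2.0 == 1.0)  # True: the addition is lost to rounding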
Special numbers
import numpy as np
# Infinities
print(np.inf, -np.inf)
# inf: the literal 1E400 exceeds the largest value that is
# possible with a 64-bit float, so it overflows to infinity
print(np.float64(1E400))
print(np.inf * -4.0)        # -inf
print(np.divide(2.4, 0.0))  # inf (with a divide-by-zero warning)
# NaN's
a = np.float64(-2.3)
print(np.sqrt(a))  # nan
print(np.log(a))   # nan
# Negative zeros
a = np.float64(0.0)
b = np.float64(-4.0)
c = a/b
print(c)      # -0.0
print(c * c)  # 0.0
eps = np.finfo(np.float64).eps
# Create a number smaller than machine precision
e = eps/3.0
# Question: why can we create a number smaller than eps?
print(e)
# Interesting property: floating-point addition is not associative
# when values smaller than eps are involved. Why?
# In exact arithmetic this would print True, but it prints False.
print((1.0 + (e + e)) == (1.0 + e + e))
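
One further sketch, not from the slides, showing how to test for these special values; equality comparisons cannot detect NaN because NaN is not equal to itself.

import numpy as np

# Use np.isnan / np.isinf / np.isfinite to detect special values.
x = np.array([1.0, np.inf, -np.inf, np.nan])
print(np.nan == np.nan)   # False: NaN never equals anything
print(np.isnan(x))        # [False False False  True]
print(np.isinf(x))        # [False  True  True False]
print(np.isfinite(x))     # [ True False False False]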
Practice questions
- From the Hangos and Cameron reference (available here, accessible from McMaster computers only):
- Work through example 2.4.1 on page 33
- Exercise A 2.1 and A 2.2 on page 37
- Exercise A 2.4: which controlling mechanisms would you consider?
- Homework problem, similar to the case presented on slide 18, except:
- Use two inlet streams \(\sf F_1\) and \(\sf F_2\), and assume they are volumetric flow rates
- An irreversible reaction occurs, \(\sf A + 3B \stackrel{r}{\rightarrow} 2C\)
- The rate of consumption of A is \( -r_{\sf A} = k C_{\sf A} C_{\sf B}^3 \)
- Derive the time-varying component mass balance for species B.
- \( V\frac{dC_B}{dt} = F^{\rm in}_1 C^{\rm in}_{\sf B,1} + F^{\rm in}_2 C^{\rm in}_{\sf B,2} - F^{\rm out} C_{\sf B} + 0 - 3 kC_{\sf A} C_{\sf B}^3 V \)
- What is the steady state value of \(\sf C_B\)? Can it be calculated without knowing the steady state value of \(\sf C_A\)?
- \( 0 = F^{\rm in}_1 C^{\rm in}_{\sf B,1} + F^{\rm in}_2 C^{\rm in}_{\sf B,2} - F^{\rm out} \overline{C}_{\sf B} - 3 k \overline{C}_{\sf A} \overline{C}^3_{\sf B} V \) - we require the steady state value of \(C_{\sf A}\), denoted \(\overline{C}_{\sf A}\), to calculate \(\overline{C}_{\sf B}\), so the two steady-state balances must be solved together (see the numerical sketch after this list).
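
To illustrate that last point numerically, here is a minimal sketch that solves the two steady-state component balances for A and B simultaneously. All parameter values below (flow rates, inlet concentrations, rate constant, volume) are invented purely for illustration, and SciPy's fsolve is assumed to be available; none of these numbers come from the course notes.

import numpy as np
from scipy.optimize import fsolve

# Hypothetical parameter values, chosen only to make the sketch runnable.
F1, F2 = 0.5, 0.3          # inlet volumetric flow rates [m^3/s]
Fout = F1 + F2             # outlet flow (constant volume and density assumed)
CA1, CA2 = 2.0, 1.0        # inlet concentrations of A [mol/m^3]
CB1, CB2 = 3.0, 2.5        # inlet concentrations of B [mol/m^3]
k = 0.05                   # rate constant
V = 10.0                   # reactor volume [m^3]

def steady_state(c):
    # Steady-state component balances for A and B; both must equal zero.
    CA, CB = c
    r = k * CA * CB**3                        # reaction rate for A + 3B -> 2C
    bal_A = F1*CA1 + F2*CA2 - Fout*CA - 1*r*V
    bal_B = F1*CB1 + F2*CB2 - Fout*CB - 3*r*V
    return [bal_A, bal_B]

CA_ss, CB_ss = fsolve(steady_state, x0=[1.0, 1.0])
print('steady state: C_A = %.4f, C_B = %.4f' % (CA_ss, CB_ss))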