Assignment 4 (2010) solutions
Due date: 08 November 2010
Solutions, prepared by Ali and Kevin
Other instructions: Hand in at class.
Question 1 [2]
A new type of thermocouple is being investigated by your group. These devices produce an almost linear voltage (millivolt) response at different temperatures. In practice, though, it is used the other way around: the millivolt reading is used to predict the temperature. The process of fitting this linear model is called calibration.
Use the following data to calibrate a linear model:
Temperature [K]  273  293  313  333  353  373  393  413  433  453
Reading [mV]  0.01  0.12  0.24  0.38  0.51  0.67  0.84  1.01  1.15  1.31

Show the linear model and provide the predicted temperature when reading 1.00 mV.
Are you satisfied with this model, based on the coefficient of determination (\(R^2\)) value?
What is the model's standard error? Now, are you satisfied with the model's prediction ability, given that temperatures can usually be recorded to an accuracy of \(\pm 0.1\) K with most thermocouples?
Solution
The linear model is used to predict temperature given the reading in millivolts. The reason is that in modelling, in general, we specify as \(X\) the variable(s) we always have available, while \(y\) is the variable we would like to predict from the \(X\)'s.
The model has the form: \(T = a_0 + a_1V\), where \(T\) is temperature and \(V\) is the voltage reading. Coefficients in the linear model are:
\[T = 278.6 + 135.3 V\]indicating that an increase of 0.1 mV leads to an increase, on average, of 13.53 K in the temperature prediction.
The following code was used to fit the model and draw the plot.
MATLAB code:

x = [0.01, 0.12, 0.24, 0.38, 0.51, 0.67, 0.84, 1.01, 1.15, 1.31];
y = [273, 293, 313, 333, 353, 373, 393, 413, 433, 453];
y = y(:);
n = numel(x);                    % the number of observations
X = [ones(n,1) x(:)];
a = inv(X'*X)*X'*y;              % Contains a_0 = a(1) and a_1 = a(2)
%a = regress(y, X);              % only if you have the Statistics Toolbox

% Plot the data along with the fitted line:
plot(x, y, 'o')
hold('on')
grid('on')
plot(x, X*a, 'r')
xlabel('Voltage [mV]')
ylabel('Temperature [K]')
legend({'Original data', 'Fitted line'}, 'Location', 'Best')

% Additional calculations
resids = y - X*a;                % resids = e = y - Xa
RSS = resids' * resids;          % residual sum of squares
TSS = sum((y - mean(y)).^2);     % total sum of squares
R2 = 1 - RSS/TSS;
std_error = sqrt(RSS/(n - numel(a)));
std_y = sqrt(TSS/(n-1));         % just the same as std(y)
R2_adj = 1 - (std_error/std_y)^2;
Python code:

import numpy as np
from matplotlib.pyplot import *

x = np.array([0.01, 0.12, 0.24, 0.38, 0.51, 0.67, 0.84, 1.01, 1.15, 1.31])
y = np.array([273, 293, 313, 333, 353, 373, 393, 413, 433, 453])
n = np.max(x.shape)              # the number of observations
X = np.vstack([np.ones(n), x]).T

# More complex, and less accurate in some cases:
y = y.reshape((n,1))
XTX_inv = np.linalg.inv(np.dot(X.T, X))
a = np.dot(np.dot(XTX_inv, X.T), y)

# Simpler, and more accurate way:
a = np.linalg.lstsq(X, y)[0]

# Additional calculations
resids = y - np.dot(X,a)         # e = y - Xa
RSS = sum(resids**2)             # residual sum of squares
TSS = sum((y - np.mean(y))**2)   # total sum of squares
R2 = 1 - RSS/TSS
std_error = np.sqrt(RSS/(n - len(a)))
std_y = np.sqrt(TSS/(n-1))
R2_adj = 1 - (std_error/std_y)**2

# Plot the data along with the fitted line:
fig = figure()
plot(x, y, 'o', label='Original data', markersize=10)
plot(x, np.dot(X,a), 'r', label='Fitted line')
xlabel('Voltage [mV]')
ylabel('Temperature [K]')
text(0.8, 325, 'Standard error = %0.1f K' % std_error)
grid('on')
legend(loc=0)
plot(x, np.dot(X,a) + 2*std_error, 'r--')
plot(x, np.dot(X,a) - 2*std_error, 'r--')
fig.savefig('../images/voltage-linear-model.png')
If you used R to fit the model, you would have written something like this:

> V <- c(0.01, 0.12, 0.24, 0.38, 0.51, 0.67, 0.84, 1.01, 1.15, 1.31)
> T <- c(273, 293, 313, 333, 353, 373, 393, 413, 433, 453)
> model <- lm(T ~ V)
> summary(model)

Call:
lm(formula = T ~ V)

Residuals:
    Min      1Q  Median      3Q     Max
-6.9272 -2.1212 -0.1954  2.7480  5.4239

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  278.574      2.204  126.39 1.72e-14 ***
V            135.298      2.922   46.30 5.23e-11 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 3.916 on 8 degrees of freedom
Multiple R-squared: 0.9963, Adjusted R-squared: 0.9958
F-statistic: 2144 on 1 and 8 DF, p-value: 5.229e-11
The \(R^2\) value from this linear fit is \(R^2 = 0.996\). Being so close to 1.0, this implies the model fits the data very well, but that is all it implies. One cannot be satisfied with an \(R^2\) value alone: as explained in class, it says nothing about the model's prediction accuracy, so we cannot judge the prediction ability from this number.
The model's standard error is 3.9 K. If we assume the prediction error is normally distributed around the linear fit, this corresponds to one standard deviation. So roughly 95% of our prediction error lies within a range of \(\pm 2\times 3.92\), or \(\pm 7.8\) K. These are the dashed red lines drawn on the figure. (Please note: the true error intervals are not parallel to the regression line; they are curved. However, the \(\pm 2S_E\) limits are good enough for this course and most engineering applications.)
This prediction ability of \(\pm 8\) K is probably not satisfactory for most engineering applications, since we can predict temperatures far more accurately over the range from 273 K to 453 K using off-the-shelf commercial thermocouples.
The main purpose of this question is to point out the misleading nature of \(R^2\): this value looks really good, 99.6%, yet the actual purpose of the model, the ability to predict temperature from the millivolt reading, has no relationship at all to this \(R^2\) value.
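The point can be made numerically: the sketch below recomputes both statistics side by side (np.polyfit is used here for brevity; the solution code above builds the normal equations explicitly):

```python
import numpy as np

x = np.array([0.01, 0.12, 0.24, 0.38, 0.51, 0.67, 0.84, 1.01, 1.15, 1.31])
y = np.array([273, 293, 313, 333, 353, 373, 393, 413, 433, 453], dtype=float)

a1, a0 = np.polyfit(x, y, 1)      # slope (~135.3) and intercept (~278.6)
resids = y - (a0 + a1 * x)
RSS = np.sum(resids ** 2)
TSS = np.sum((y - y.mean()) ** 2)
R2 = 1 - RSS / TSS                # ~0.996: looks excellent
SE = np.sqrt(RSS / (len(x) - 2))  # ~3.9 K: the number that matters

# Predicted temperature at a reading of 1.00 mV:
print(a0 + a1 * 1.00)             # roughly 414 K, give or take ~7.8 K
```

Both numbers come from the same residuals, yet only the standard error translates into a usable prediction interval.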
Question 2 [2]
Batch reactors are often used to calculate reaction rate data, since the dynamic balances are a direct function of the reaction rate:
assuming constant volume and assuming all energy-related terms are negligible. We are unsure about the order of our reaction rate, \(\alpha\), and we also don't know the reaction rate constant, \(k\), in the expression \(r_A = -k C_{\sf A}^{\alpha}\).
Integrating the above differential equation from time \(t=0\) to time \(t=t\) gives:
The following concentrations are collected as a function of time
Time [hours]  0  0.5  1  2  3  5  7.5  10  17 
Concentration [mol/L]  3.5  1.5  0.92  0.47  0.29  0.16  0.09  0.06  0.03 
 Use these concentrationtime data to determine the values of \(\alpha\) and \(k\), using a least squares model.
 Once you have the rate constants, simulate the batch reactor (use the ode45 or a Python integrator) and superimpose the data from the above table on your concentration-time trajectory. Do they agree? Which regions of the plot have the worst agreement? (This is where you would collect more data in the future.)
Solution
There are two approaches to this problem. One is to calculate finite differences \(\left(\frac{dC_{\sf A}(t)}{dt}\right)_k\) and regress these against the assumed rate expression \(\left(-k C_{\sf A}^{\alpha}\right)_k\). The other approach is to first integrate the differential equation, as partly shown in the question, and then plot some function of concentration against time.
The former approach is usually less accurate, because we use concentration measurements (already made with error) to estimate finite differences (which adds further error), and then fit a linear model (adding error again). The latter approach, called the integral method in reactor design, is more accurate because it works directly with the concentration and time measurements. In this case, however, the integral method is really messy, so we will use the easier finite-difference method.
The finite difference approximations are:
\[\begin{split}\begin{array}{llll} t & \text{Lowest error method} & \text{Calculation} & \text{Result}\\ \hline 0.5 & \text{Central diff} & (0.92-3.5)/1 & -2.58\\ 1 & \text{Backward diff} & (0.92-1.5)/0.5 & -1.16\\ 2 & \text{Central diff} & (0.29-0.92)/2 & -0.315 \\ 3 & \text{Backward diff} & (0.29-0.47)/1 & -0.18 \\ 5 & \text{Backward diff} & (0.16-0.29)/2 & -0.065 \\ 7.5 & \text{Central diff} & (0.06-0.16)/5 & -0.02 \\ 10 & \text{Backward diff} & (0.06-0.09)/2.5& -0.012\\ 17 & \text{Backward diff} & (0.03-0.06)/7 & -0.0043 \\\hline \end{array}\end{split}\]The linear model can now be derived as follows:
\[\begin{split}\frac{dC_{\sf A}(t)}{dt} &= -k C_{\sf A}^{\alpha} \\ \ln\left(-\frac{dC_{\sf A}(t)}{dt}\right) &= \ln(k) + \alpha \ln(C_{\sf A}) \\ y_k &= a_0 + a_1 x_k\end{split}\]Then in R one can write:

> log_dCdt <- log(c(2.58, 1.16, 0.315, 0.18, 0.065, 0.02, 0.012, 0.0043))
> log_C <- log(c(1.5, 0.92, 0.47, 0.29, 0.16, 0.09, 0.06, 0.03))
> model <- lm(log_dCdt ~ log_C)
> summary(model)

Call:
lm(formula = log_dCdt ~ log_C)

Residuals:
     Min       1Q   Median       3Q      Max
-0.17327 -0.04749  0.04231  0.06375  0.10477

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.23971    0.06295   3.808  0.00888 **
log_C        1.65222    0.03165  52.202 3.32e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.1139 on 6 degrees of freedom
Multiple R-squared: 0.9978, Adjusted R-squared: 0.9974
F-statistic: 2725 on 1 and 6 DF, p-value: 3.316e-09
or in MATLAB and Python we can find the same result:
\[\begin{split}\ln\left(-\frac{dC_{\sf A}(t)}{dt}\right) &= \ln(k) + \alpha \ln(C_{\sf A}) \\ \ln\left(-\frac{dC_{\sf A}(t)}{dt}\right) &= 0.2397 + 1.652 \ln(C_{\sf A}) \\ k &= {\bf 1.271\,\,\text{hours}^{-1}}\\ \alpha &= {\bf 1.65}\end{split}\]We can simulate this and superimpose it on the given data to give the following plot. The agreement is good, probably OK for the purpose we want it for: knowing the rate constant and reaction order. The agreement is worst where the change in concentration is greatest (probably due to error in our finite difference approximations).
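The slope and intercept can be cross-checked against the R output in a few lines (np.polyfit is used here for brevity; the full solution code below builds the X matrix explicitly):

```python
import numpy as np

# Magnitudes of the finite-difference estimates of -dC/dt from the table
neg_dCdt = np.array([2.58, 1.16, 0.315, 0.18, 0.065, 0.02, 0.012, 0.0043])
C = np.array([1.5, 0.92, 0.47, 0.29, 0.16, 0.09, 0.06, 0.03])

# ln(-dC/dt) = ln(k) + alpha*ln(C): a straight line in log-log coordinates
alpha, ln_k = np.polyfit(np.log(C), np.log(neg_dCdt), 1)
k = np.exp(ln_k)
print(k, alpha)   # ~1.27 and ~1.65, matching the R fit
```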
We should collect more data in the region between \(t=0\) and \(t=5\) hours, using equally spaced time steps, so we can accurately estimate the derivatives using the central difference approximation.
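The reason for preferring central differences where possible can be shown on a function whose derivative is known exactly (a small illustrative sketch; the exponential-decay test function is an arbitrary choice, not part of the assignment):

```python
import numpy as np

f = lambda t: np.exp(-t)     # test function with a known derivative
t, h = 1.0, 0.5
exact = -np.exp(-t)          # f'(t) = -exp(-t)

backward = (f(t) - f(t - h)) / h          # first order: error ~ O(h)
central = (f(t + h) - f(t - h)) / (2*h)   # second order: error ~ O(h^2)

print(abs(backward - exact), abs(central - exact))
```

Even with this coarse step the central estimate is several times more accurate, which is why the table above uses central differences wherever the time points allow it.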
The code to calculate the linear model and simulate the system is also provided.
MATLAB code:

% Build the linear model first
log_dCdt = log([2.58, 1.16, 0.315, 0.18, 0.065, 0.02, 0.012, 0.0043])';
log_C = log([1.5, 0.92, 0.47, 0.29, 0.16, 0.09, 0.06, 0.03]);
log_C = log_C(:);
n = max(size(log_C));            % number of observations
X = [ones(n,1), log_C];
a = inv(X' * X) * X' * log_dCdt;

% Plot the data along with the fitted line:
plot(log_C, log_dCdt, 'o')
hold('on')
plot(log_C, X*a, 'r')
xlabel('ln(C)')
ylabel('ln(-dC/dt)')
grid('on')
legend('Original data', 'Fitted data', 'Location', 'Best')

% Additional calculations to display on plot
resids = log_dCdt - X*a;
RSS = resids' * resids;
TSS = sum((log_dCdt - mean(log_dCdt)).^2);
R2 = 1 - RSS/TSS;
std_error = sqrt(RSS/(n - numel(a)));
std_y = sqrt(TSS/(n-1));         % just the same as std(y)
R2_adj = 1 - (std_error/std_y)^2;

k = exp(a(1));
alpha = a(2);

% This shows how to call an ODE function with additional parameters.
% Create two function handles. The second one is used by the ode45
% function, which calls the first one with the current values of
% k and alpha.
batch_reactor = @(t, y, k, alpha) -k * power(y(1), alpha);
rate_ode = @(t, y) batch_reactor(t, y, k, alpha);

% Integrate the ODE
% -----------------
% Set the time range:
t_start = 0;
t_final = 20.0;

% Set the initial condition(s):
CA_t_zero = 3.5;

% Integrate the ODE(s):
[t, y] = ode45(rate_ode, [t_start, t_final], [CA_t_zero]);

% All done! Plot the trajectories:
time_raw = [0, 0.5, 1, 2, 3, 5, 7.5, 10, 17];
conc_raw = [3.5, 1.5, 0.92, 0.47, 0.29, 0.16, 0.09, 0.06, 0.03];
figure()
plot(t, y(:,1))
hold('on')
plot(time_raw, conc_raw, 'ko')
xlim([t_start-0.3, t_final])
ylim([-0.1, 3.6])
xlabel('Time [hours]')
ylabel('Concentration [mol/L]')
grid('on')
legend('Simulated trajectory', 'Raw data', 'Location', 'Best')

Python code:

import numpy as np
from scipy import integrate
from matplotlib.pylab import *

def batch_tank(t, y, params):
    """ Dynamic balance for the batch reactor. """
    # Assign some variables for convenience of notation
    CA = y[0]
    k, alpha = params        # extract the k and alpha parameter values

    # Output from ODE function must be a COLUMN vector, with n rows
    n = len(y)
    dydt = np.zeros((n,1))
    dydt[0] = -k * np.power(CA, alpha)
    return dydt

# Build the linear model first
log_dCdt = np.log(np.array([2.58, 1.16, 0.315, 0.18, 0.065, 0.02, 0.012, 0.0043]))
log_C = np.log(np.array([1.5, 0.92, 0.47, 0.29, 0.16, 0.09, 0.06, 0.03]))
n = np.max(log_C.shape)      # number of observations
X = np.vstack([np.ones(n), log_C]).T
a = np.linalg.lstsq(X, log_dCdt)[0]

# Plot the data along with the fitted line:
fig = figure()
plot(log_C, log_dCdt, 'o', label='Original data', markersize=10)
plot(log_C, np.dot(X,a), 'r', label='Fitted line')
xlabel('ln(C)')
ylabel('ln(-dC/dt)')
grid('on')
legend(loc=0)

# Additional calculations to display on plot
resids = log_dCdt - np.dot(X,a)               # e = y - Xa
RSS = sum(resids**2)                          # residual sum of squares
TSS = sum((log_dCdt - np.mean(log_dCdt))**2)  # total sum of squares
R2 = 1 - RSS/TSS
std_error = np.sqrt(RSS/(n - len(a)))
std_y = np.sqrt(TSS/(n-1))
R2_adj = 1 - (std_error/std_y)**2
text(-1.5, -4, 'S_E = %0.2f' % std_error)
text(-1.5, -4.5, 'R2 = %0.3f' % R2)
fig.savefig('../images/rate-fitting-linear-model.png')

k = np.exp(a[0])
alpha = a[1]

# Set the integrator
r = integrate.ode(batch_tank).set_integrator('vode', method='bdf')
r.set_f_params((k, alpha))

# Set the time range
t_start = 0.0
t_final = 20.0
delta_t = 0.1

# Number of time steps: 1 extra for initial condition
num_steps = int(np.floor((t_final - t_start)/delta_t)) + 1

# Set initial condition(s): for integrating variable and time!
CA_t_zero = 3.5              # mol/L
r.set_initial_value([CA_t_zero], t_start)

# Additional Python step: create vectors to store trajectories
t = np.zeros((num_steps, 1))
CA = np.zeros((num_steps, 1))
t[0] = t_start
CA[0] = CA_t_zero

# Integrate the ODE(s) across each delta_t timestep.
# (Reusing k as a loop counter is safe here: its value was
# already passed to the integrator via set_f_params.)
k = 1
while r.successful() and k < num_steps:
    r.integrate(r.t + delta_t)
    # Store the results to plot later
    t[k] = r.t
    CA[k] = r.y[0]
    k += 1

# All done! Plot the trajectories:
time_raw = np.array([0, 0.5, 1, 2, 3, 5, 7.5, 10, 17])
conc_raw = np.array([3.5, 1.5, 0.92, 0.47, 0.29, 0.16, 0.09, 0.06, 0.03])
fig = figure()
plot(t, CA, lw=3, label='Simulated trajectory')
plot(time_raw, conc_raw, 'ko', ms=10, label="Raw data")
xlim(t_start-0.3, t_final)
ylim([-0.1, 3.6])
xlabel('Time [hours]')
ylabel('Concentration [mol/L]')
grid('on')
legend(loc=0)
fig.savefig('../images/rate-fitting-simulation.png')
Question 3 [3]
The yield from your lab-scale bioreactor, \(y\), is a function of reactor temperature, impeller speed and reactor type (one with baffles and one without). You have collected these data from various experiments.
Temp = \(T\) [°C]  Speed = \(S\) [RPM]  Baffles = \(B\) [Yes/No]  Yield = \(y\) [g]
82  4300  No  51
90  3700  Yes  30
88  4200  Yes  40
86  3300  Yes  28
80  4300  No  49
78  4300  Yes  49
82  3900  Yes  44
83  4300  No  59
65  4200  No  61
73  4400  No  59
60  4400  No  57
60  4400  No  62
101  4400  No  42
92  4900  Yes  38
Use a builtin software function in MATLAB or Python (please don't use Excel) to fit a linear model that predicts the bioreactor yield from the above variables. Your model should be of the form:
\[\hat{y} = a_0 + a_\text{T} T + a_\text{S} S + a_\text{B} B\]Hint: convert the No/Yes variables to integers with 0 = No and 1 = Yes.
Interpret the meaning of each coefficient in the model and comment on the standard error.
Calculate and show the \(\mathbf{X}^T\mathbf{X}\) and \(\mathbf{X}^T\mathbf{y}\) matrices for this linear model in order to calculate the 4 coefficients in \({\bf a} = \left[a_0, a_\text{T}, a_\text{S}, a_\text{B}\right]\) yourself. Your answer must include the MATLAB or Python code that computes these matrices and the \({\bf a}\) solution vector.
Solution
Note
Initially, I assumed the lab computers had the Statistics Toolbox in MATLAB, so that you could use the built-in regress.m function. Python users should use the built-in np.linalg.lstsq(...) function. As a result, all parts of the question are answered together below.
After converting the Yes/No variable to integers you can fit a linear model of the form \(\hat{y} = a_0 + a_\text{T} T + a_\text{S} S + a_\text{B} B\). In R:
> y_yield <- c(51,30,40,28,49,49,44,59,61,59,57,62,42,38)
> x_temp <- c(82,90,88,86,80,78,82,83,65,73,60,60,101,92)
> x_speed <- c(4300,3700,4200,3300,4300,4300,3900,4300,4200,4400,4400,4400,4400,4900)
> x_baffles <- c(0,1,1,1,0,1,1,0,0,0,0,0,0,1)
> model = lm(y_yield ~ x_temp + x_speed + x_baffles)
> summary(model)
Call:
lm(formula = y_yield ~ x_temp + x_speed + x_baffles)
Residuals:
     Min      1Q  Median      3Q     Max
 -6.0790 -3.4265 -0.7974  2.2026  7.9710

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 54.804336  18.404567   2.978  0.01386 *
x_temp      -0.486872   0.121692  -4.001  0.00251 **
x_speed      0.008520   0.003784   2.252  0.04804 *
x_baffles   -9.271748   3.073260  -3.017  0.01296 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 4.708 on 10 degrees of freedom
Multiple R-squared: 0.8647, Adjusted R-squared: 0.8241
F-statistic: 21.3 on 3 and 10 DF, p-value: 0.0001156
which indicates that the model is given by:

\[\hat{y} = 54.80 - 0.4869\, T + 0.00852\, S - 9.272\, B\]
The coefficients are interpreted as:
 \(a_\text{S} = 0.0085\): a 100 rpm increase in impeller speed increases yield, on average, by 0.85 g, keeping all other variables constant
 \(a_\text{B} = -9.3\): the use of baffles decreases yield, on average, by 9.3 g, keeping all other variables constant
 \(a_\text{T} = -0.49\): each one-degree increase in temperature lowers yield, on average, by 0.49 g, keeping all other variables constant
This information can be quite valuable: we can now decide, using process economics, which is the most costeffective way to boost the yield from our bioreactor.
The standard error is 4.7 g. Since we aren't given any information on the accuracy of the yield measurement, it is hard to quantify whether this is reasonable or not. In practice one would need to run several experiments under the same conditions and measure the yield. The standard deviation of those repeated experimental yields would tell us what the noise in our measurement is.
We can fit the model in MATLAB or Python. Let the coefficient vector be \(\mathrm{a} = [a_0, a_\text{T}, a_\text{S}, a_\text{B}]\); then we can write down the following \(\mathrm{X}\) matrix to estimate it:
\[\begin{split}\mathrm{X} = \begin{bmatrix} 1 & 82 & 4300 & 0\\ 1 & 90 & 3700 & 1\\ 1 & 88 & 4200 & 1\\ 1 & 86 & 3300 & 1\\ 1 & 80 & 4300 & 0\\ 1 & 78 & 4300 & 1\\ 1 & 82 & 3900 & 1\\ 1 & 83 & 4300 & 0\\ 1 & 65 & 4200 & 0\\ 1 & 73 & 4400 & 0\\ 1 & 60 & 4400 & 0\\ 1 & 60 & 4400 & 0\\ 1 & 101 & 4400 & 0\\ 1 & 92 & 4900 & 1 \end{bmatrix}\end{split}\]
Then we can construct the \(\mathrm{X}^T\mathrm{X}\) matrix (called the correlation matrix), and the \(\mathrm{X}^T\mathrm{y}\) vector (called the covariance vector).
\[\begin{split}\mathrm{X}^T\mathrm{X} = \begin{bmatrix} 14 & 1120 & 59000 & 6 \\ 1120 & 91480 & 4712500 & 516 \\ 59000 & 4712500 & 250480000 & 24300 \\ 6 & 516 & 24300 & 6 \end{bmatrix} \qquad \text{and} \qquad \mathrm{X}^T\mathrm{y} = \begin{bmatrix} 669 \\ 52207 \\ 2847800 \\ 229 \end{bmatrix}\end{split}\]
Using these matrices to solve for \(\mathrm{a}\)
\[\begin{split}\mathrm{a} = \left(\mathrm{X}^T\mathrm{X} \right)^{-1}\mathrm{X}^T\mathrm{y} = \begin{bmatrix} 54.80 \\ -0.4869 \\ 0.00852 \\ -9.271 \end{bmatrix}\end{split}\]
This result matches the results from R. The MATLAB and Python code is shown below.
MATLAB code  Python code 

% Build the linear model. Note that each of these are
% column vectors!
y_yield = [51,30,40,28,49,49,44,59,61,59,57,62,42,38]';
x_temp = [82,90,88,86,80,78,82,83,65,73,60,60,101,92]';
x_speed = [4300,3700,4200,3300,4300,4300,3900,4300, ...
4200,4400,4400,4400,4400,4900]';
x_baffles = [0,1,1,1,0,1,1,0,0,0,0,0,0,1]';
n = max(size(x_baffles)); % number of observations
X = [ones(n,1), x_temp, x_speed, x_baffles];
% If you have the Statistics Toolbox:
a = regress(y_yield, X);
% otherwise ....
XTX = X' * X;
XTX_inv = inv(XTX);
XTy = X' * y_yield;
a = XTX_inv * XTy;
disp(a)

import numpy as np
# Build the linear model
y_yield = np.array([51,30,40,28,49,49,44,59,61,59,57,62,42,38])
x_temp = np.array([82,90,88,86,80,78,82,83,65,73,60,60,101,92])
x_speed = np.array([4300,3700,4200,3300,4300,4300,3900,4300,
4200,4400,4400,4400,4400,4900])
x_baffles = np.array([0,1,1,1,0,1,1,0,0,0,0,0,0,1])
n = np.max(x_baffles.shape) # number of observations
X = np.vstack([np.ones(n), x_temp, x_speed, x_baffles]).T
a = np.linalg.lstsq(X, y_yield)[0]
y_yield = y_yield.reshape((n,1))
XTX = np.dot(X.T, X)
XTX_inv = np.linalg.inv(XTX)
XTy = np.dot(X.T, y_yield)
a = np.dot(XTX_inv, XTy)
print(a)

Question 4 [2]
The viscosity of sulphuric acid, \(\nu\), varies with purity, \(p\), in the following manner:
\(p\) [%]  20  60  80 
\(\nu\) [millipascal]  1.40  5.37  17.4 
 Express \(\nu(p)\) as a quadratic function using Lagrange interpolating polynomials. Do not simplify the polynomial.
 Express \(\nu(p)\) as a quadratic function using Newton interpolating polynomials. Do not simplify the polynomial.
 Fit a cubic spline through the data points: clearly show your \({\bf Xa = y}\) linear system of equations, then solve them using computer software; finally report your spline coefficients.
 Use computer software to plot:
 the Newton interpolating polynomial
 the cubic spline,
 and the 3 data points on the same graph.
 What is the estimated viscosity at \(p = 40\%\) purity using linear interpolation?
 Which of the estimation procedures that you used above has the closest estimate to the true value of 2.51 millipascal?
Solution
Recall that an \(n-1\) order Lagrange interpolating polynomial, for a set of \(n\) data points, is given by the following relation:
\[P_{n-1}(x) = y_{1}\ell_{1}(x) + y_{2}\ell_{2}(x) + \ldots + y_{n}\ell_{n}(x) = \sum_{i=1}^{n}y_{i}\ell_{i}(x) \qquad \text{where} \qquad \ell_{i}(x) = \prod_{j=1, j\neq i}^{n} \frac{x - x_{j}}{x_{i} - x_{j}}\]For our system the quadratic Lagrange interpolating polynomial is the following:
\[\begin{split}P_{2}(x) &= y_{1}\ell_{1}(x) + y_{2}\ell_{2}(x) + y_{3}\ell_{3}(x) \\ &= 1.40 \cdot \frac{(x-60)(x-80)}{(20-60)(20-80)} + 5.37 \cdot \frac{(x-20)(x-80)}{(60-20)(60-80)} + 17.4 \cdot \frac{(x-20)(x-60)}{(80-20)(80-60)}\\\end{split}\]Recall that an \(n-1\) order Newton interpolating polynomial, for a set of \(n\) data points, is given by the following relation:
\[P_{n-1}(x) = b_{0}+b_{1}(x-x_{1})+b_{2}(x-x_{1})(x-x_{2})+ \ldots +b_{n-1}(x-x_{1})\cdots(x-x_{n-1})\]The coefficients, \(b_i\), are best calculated in table form, known as "Newton's divided differences":
\[\begin{split}\begin{array}{cccc} \hline \hline p_{i} & \nu_{i} & \nu[p_{i},p_{i-1}] & \nu[p_{i}, p_{i-1}, p_{i-2}] \\ \hline \hline 20 & \textbf{1.40} & \frac{5.37-1.40}{60-20}=\textbf{0.09925} & \frac{0.6015-0.09925}{80-20}=\textbf{0.008371} \\ 60 & 5.37 & \frac{17.4-5.37}{80-60}=0.6015 & \\ 80 & 17.4 & & \\ \hline \hline \end{array}\end{split}\]So the Newton interpolating polynomial is:
\[P_{2}(x) = 1.40 + 0.09925(x-x_{1}) + 0.008371(x-x_{1})(x-x_{2})\]The cubic spline through 3 data points consists of two cubic polynomials: one connecting points 1 and 2, and another connecting points 2 and 3.
\[\begin{split}s_3(x) = \left\{ \begin{array}{l} a_1 + b_1x + c_1x^2 + d_1x^3 \qquad\,\,x_1 \leq x \leq x_2 \\ a_2 + b_2x + c_2x^2 + d_2x^3 \qquad\,\,x_2 \leq x \leq x_3 \end{array} \right.\end{split}\]So we have 8 unknown coefficients in these 2 polynomials. The 8 linear equations that can be used to solve for the coefficients are:
\[\begin{split}\begin{array}{ll} \text{Starting knot} & s_3(x_1) = y_1 = a_1 + b_1x_1 + c_1x_1^2 + d_1x_1^3\\ \text{End knot} & s_3(x_3) = y_3 = a_2 + b_2x_3 + c_2x_3^2 + d_2x_3^3\\ \text{Continuity} & s_3(x_2) = \left\{ \begin{array}{l} y_2 = a_1 + b_1x_2 + c_1x_2^2 + d_1x_2^3 \\ y_2 = a_2 + b_2x_2 + c_2x_2^2 + d_2x_2^3 \end{array} \right.\\ \text{1st derivative continuity} & s'_3(x_2) = b_1 + 2c_1x_2 + 3d_1x_2^2 = b_2 + 2c_2x_2 + 3d_2x_2^2 \\ \text{2nd derivative continuity} & s''_3(x_2) = 2c_1 + 6d_1x_2 = 2c_2 + 6d_2x_2 \\ \text{$s''_3(x)=0$ at the ends} & s''_3(x_1) = 0 = 2c_1 + 6d_1x_1 \\ & s''_3(x_3) = 0 = 2c_2 + 6d_2x_3 \end{array}\end{split}\]Expressing this as a linear system of equations, \({\bf Xa = y}\):
\[\begin{split}{\bf Xa} &= {\bf y} \\ \left( \begin{array}{cccccccc} 0 & 0 & 2 & 6x_1 \\ 1 & x_1 & x_1^2 & x_1^3 \\ 1 & x_2 & x_2^2 & x_2^3 \\ 0 & 1 & 2x_2 & 3x_2^2 & 0 & -1 & -2x_2 & -3x_2^2 \\ 0 & 0 & 2 & 6x_2 & 0 & 0 & -2 & -6x_2 \\ & & & & 1 & x_2 & x_2^2 & x_2^3 \\ & & & & 1 & x_3 & x_3^2 & x_3^3 \\ & & & & & & 2 & 6x_3 \end{array}\right) \left( \begin{array}{c} a_1 \\ b_1 \\ c_1 \\ d_1 \\ a_2 \\ b_2 \\ c_2 \\ d_2 \end{array} \right) &= \left( \begin{array}{c} 0 \\ y_1 \\ y_2 \\ 0 \\ 0 \\ y_2 \\ y_3 \\ 0 \end{array} \right) \\ \left( \begin{array}{cccccccc} 0 & 0 & 2 & 120 \\ 1 & 20 & 400 & 8000 \\ 1 & 60 & 3600 & 216000 \\ 0 & 1 & 120 & 10800 & 0 & -1 & -120 & -10800 \\ 0 & 0 & 2 & 360 & 0 & 0 & -2 & -360 \\ & & & & 1 & 60 & 3600 & 216000 \\ & & & & 1 & 80 & 6400 & 512000 \\ & & & & 0 & 0 & 2 & 480 \end{array}\right) \left( \begin{array}{c} a_1 \\ b_1 \\ c_1 \\ d_1 \\ a_2 \\ b_2 \\ c_2 \\ d_2 \end{array} \right) &= \left( \begin{array}{c} 0 \\ 1.40 \\ 5.37 \\ 0 \\ 0 \\ 5.37 \\ 17.4 \\ 0 \end{array} \right)\end{split}\]Solving this system of equations with computer software gives:
\[\begin{split}s_3(x) = \left\{ \begin{array}{l} 1.93 + 0.0579x - 6.28\times10^{-3} x^2 + 1.05\times10^{-4}x^3 \qquad\,\,x_1 \leq x \leq x_2 \\ 69.73 - 3.33x + 5.02\times10^{-2} x^2 - 2.09\times10^{-4}x^3 \qquad\,\,x_2 \leq x \leq x_3 \end{array} \right.\end{split}\]A plot of the Lagrange (or Newton) polynomial, the spline, and the linear interpolation (not required), superimposed on the 3 original data points, is shown below.
The code used to generate this plot is similar to that shown on course software tutorial.
The linear interpolation for \(\nu(p=40)\) has slope = \(\displaystyle \frac{5.37 - 1.40}{60 - 20} = 0.09925\) and intercept = \(1.40 - 0.09925 \times 20 = -0.585\). So \(\nu(p=40) = 0.09925 \times 40 - 0.585 = 3.385\) millipascal.
The question was a bit unclear; the intention was to compare all estimates of \(\nu(p=40)\):
 Lagrange polynomial estimate = \(1.40 \cdot \frac{(40-60)(40-80)}{(20-60)(20-80)}+ 5.37 \cdot \frac{(40-20)(40-80)}{(60-20)(60-80)} + 17.4 \cdot \frac{(40-20)(40-60)}{(80-20)(80-60)} = 1.40(0.3333) + 5.37(1.0) + 17.4(-0.3333) = 0.0366\) millipascals.
 Newton polynomial estimate = \(1.40 + 0.09925(40-20) + 0.008371(40-20)(40-60) = 0.0366\) millipascals.
 Cubic spline estimate = \(1.93 + 0.0579\times 40 - 6.28\times10^{-3} \times 40^2 + 1.05\times10^{-4} \times 40^3 = 0.918\) millipascals.
 Linear interpolation = 3.385 millipascals.
All estimates had large error when compared to the true value of 2.51, because the viscosity is changing fairly rapidly in this neighbourhood. The estimates would be improved if there was another data point close by. The Lagrange/Newton estimates are especially poor, since they are negative in the range from approximately 30% to 40% purity.
Question 5 [2]
The following data are collected from a bioreactor experiment, during the growth phase.
Time [hours]  0  1.0  2.0  4.0  6.0 
\(C\) [g/L]  0.1  0.341  1.102  4.95  11.24 
Fit a natural cubic spline for these data and use it to estimate the number of cells at time 3, 5, and 7 hours.
Show your matrix derivation for the linear system of equations, and solve it using computer software. Plot the cubic spline over the time range 0 to 8 hours.
Solution
The cubic spline through 5 data points consists of four cubic polynomials: one connecting points 1 and 2, one connecting 2 and 3, one connecting 3 and 4, and one connecting 4 and 5.
\[\begin{split}s_3(x) = \left\{ \begin{array}{l} a_1 + b_1x + c_1x^2 + d_1x^3 \qquad\,\,x_1 \leq x \leq x_2 \\ a_2 + b_2x + c_2x^2 + d_2x^3 \qquad\,\,x_2 \leq x \leq x_3\\ a_3 + b_3x + c_3x^2 + d_3x^3 \qquad\,\,x_3 \leq x \leq x_4\\ a_4 + b_4x + c_4x^2 + d_4x^3 \qquad\,\,x_4 \leq x \leq x_5 \end{array} \right.\end{split}\]So we have 16 unknown coefficients in these 4 polynomials. The 16 linear equations that can be used to solve for the coefficients are:
\[\begin{split}\begin{array}{ll} \text{Starting knot} & s_3(x_1) = y_1 = a_1 + b_1x_1 + c_1x_1^2 + d_1x_1^3\\ \text{End knot} & s_3(x_5) = y_5 = a_4 + b_4x_5 + c_4x_5^2 + d_4x_5^3\\ \text{Continuity} & s_3(x_2) = \left\{ \begin{array}{l} y_2 = a_1 + b_1x_2 + c_1x_2^2 + d_1x_2^3 \\ y_2 = a_2 + b_2x_2 + c_2x_2^2 + d_2x_2^3 \end{array} \right.\\ \text{Continuity} & s_3(x_3) = \left\{ \begin{array}{l} y_3 = a_2 + b_2x_3 + c_2x_3^2 + d_2x_3^3 \\ y_3 = a_3 + b_3x_3 + c_3x_3^2 + d_3x_3^3 \end{array} \right.\\ \text{Continuity} & s_3(x_4) = \left\{ \begin{array}{l} y_4 = a_3 + b_3x_4 + c_3x_4^2 + d_3x_4^3 \\ y_4 = a_4 + b_4x_4 + c_4x_4^2 + d_4x_4^3 \end{array} \right.\\ \text{1st derivative continuity} & s'_3(x_2) = b_1 + 2c_1x_2 + 3d_1x_2^2 = b_2 + 2c_2x_2 + 3d_2x_2^2 \\ \text{1st derivative continuity} & s'_3(x_3) = b_2 + 2c_2x_3 + 3d_2x_3^2 = b_3 + 2c_3x_3 + 3d_3x_3^2 \\ \text{1st derivative continuity} & s'_3(x_4) = b_3 + 2c_3x_4 + 3d_3x_4^2 = b_4 + 2c_4x_4 + 3d_4x_4^2 \\ \text{2nd derivative continuity} & s''_3(x_2) = 2c_1 + 6d_1x_2 = 2c_2 + 6d_2x_2 \\ \text{2nd derivative continuity} & s''_3(x_3) = 2c_2 + 6d_2x_3 = 2c_3 + 6d_3x_3 \\ \text{2nd derivative continuity} & s''_3(x_4) = 2c_3 + 6d_3x_4 = 2c_4 + 6d_4x_4 \\ \text{$s''_3(x)=0$ at the ends} & s''_3(x_1) = 0 = 2c_1 + 6d_1x_1 \\ & s''_3(x_5) = 0 = 2c_4 + 6d_4x_5 \end{array}\end{split}\]Expressing this as a linear system of equations, \({\bf Xa = y}\) (see the lecture notes for the matrix pattern):
\[\begin{split}{\bf Xa} &= {\bf y} \\ \left( \begin{array}{cccccccccccccccc} 0 & 0 & 2 & 6x_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & x_1 & x_1^2 & x_1^3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & x_2 & x_2^2 & x_2^3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 2x_2 & 3x_2^2 & 0 & -1 & -2x_2 & -3x_2^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 6x_2 & 0 & 0 & -2 & -6x_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & x_2 & x_2^2 & x_2^3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & x_3 & x_3^2 & x_3^3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 2x_3 & 3x_3^2 & 0 & -1 & -2x_3 & -3x_3^2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 2 & 6x_3 & 0 & 0 & -2 & -6x_3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & x_3 & x_3^2 & x_3^3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & x_4 & x_4^2 & x_4^3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2x_4 & 3x_4^2 & 0 & -1 & -2x_4 & -3x_4^2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 6x_4 & 0 & 0 & -2 & -6x_4 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & x_4 & x_4^2 & x_4^3 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & x_5 & x_5^2 & x_5^3 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 6x_5 \end{array}\right) \left( \begin{array}{c} a_1 \\ b_1 \\ c_1 \\ d_1 \\ a_2 \\ b_2 \\ c_2 \\ d_2 \\ a_3 \\ b_3 \\ c_3 \\ d_3 \\ a_4 \\ b_4 \\ c_4 \\ d_4 \end{array} \right) &= \left( \begin{array}{c} 0 \\ y_1 \\ y_2 \\ 0 \\ 0 \\ y_2 \\ y_3 \\ 0 \\ 0 \\ y_3 \\ y_4 \\ 0 \\ 0 \\ y_4 \\ y_5 \\ 0 \\ \end{array} \right) \\ \left( \begin{array}{cccccccccccccccc} 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 2 & 3 & 0 & -1 & -2 & -3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 6 & 0 & 0 & -2 & -6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 2 & 4 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 4 & 12 & 0 & -1 & -4 & -12 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 2 & 12 & 0 & 0 & -2 & -12 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 4 & 8 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 4 & 16 & 64 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 8 & 48 & 0 & -1 & -8 & -48 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 24 & 0 & 0 & -2 & -24 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 4 & 16 & 64 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 6 & 36 & 216 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 36 \end{array}\right) \left( \begin{array}{c} a_1 \\ b_1 \\ c_1 \\ d_1 \\ a_2 \\ b_2 \\ c_2 \\ d_2 \\ a_3 \\ b_3 \\ c_3 \\ d_3 \\ a_4 \\ b_4 \\ c_4 \\ d_4 \end{array} \right) &= \left( \begin{array}{c} 0 \\ 0.1 \\ 0.341 \\ 0 \\ 0 \\ 0.341 \\ 1.102 \\ 0 \\ 0 \\ 1.102 \\ 4.95 \\ 0 \\ 0 \\ 4.95 \\ 11.24 \\ 0 \\ \end{array} \right) \\\end{split}\]Solving this system of equations with computer software gives:
\[\begin{split}s_3(x) = \left\{ \begin{array}{l} 0.1000 + 0.1457x + 0\times x^2 + 0.0953x^3 \qquad\,\,0 \leq x \leq 1 \\ 0.1521 - 0.0106x + 0.1562 x^2 + 0.0433x^3 \qquad\,\,1 \leq x \leq 2\\ 0.5809 - 0.6537x + 0.4778x^2 - 0.0103x^3 \qquad\,\,2 \leq x \leq 4\\ 3.6951 - 2.9895x + 1.0617x^2 - 0.0590x^3 \qquad\,\,4 \leq x \leq 6 \end{array} \right.\end{split}\]
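As an illustrative sketch (not part of the original solution; the helper functions ``basis``, ``dbasis`` and ``ddbasis`` and all variable names are our own), the 16-by-16 system above can be assembled and solved numerically to reproduce these coefficients:

```python
import numpy as np

# Nodes (time, concentration) from the experimental data
x = np.array([0.0, 1.0, 2.0, 4.0, 6.0])
y = np.array([0.1, 0.341, 1.102, 4.95, 11.24])
n = len(x) - 1                     # number of cubic pieces (4)
A = np.zeros((4 * n, 4 * n))
b = np.zeros(4 * n)

def basis(t):                      # [1, t, t^2, t^3]
    return np.array([1.0, t, t**2, t**3])

def dbasis(t):                     # first derivative of the basis
    return np.array([0.0, 1.0, 2*t, 3*t**2])

def ddbasis(t):                    # second derivative of the basis
    return np.array([0.0, 0.0, 2.0, 6*t])

row = 0
# Natural end condition at the left end: s_1''(x_1) = 0
A[row, 0:4] = ddbasis(x[0]); row += 1
for i in range(n):
    c = 4 * i                      # column offset for piece i
    A[row, c:c+4] = basis(x[i]);   b[row] = y[i];   row += 1   # s_i(x_i) = y_i
    A[row, c:c+4] = basis(x[i+1]); b[row] = y[i+1]; row += 1   # s_i(x_{i+1}) = y_{i+1}
    if i < n - 1:                  # slope and curvature continuity at interior nodes
        A[row, c:c+4] = dbasis(x[i+1]);  A[row, c+4:c+8] = -dbasis(x[i+1]);  row += 1
        A[row, c:c+4] = ddbasis(x[i+1]); A[row, c+4:c+8] = -ddbasis(x[i+1]); row += 1
# Natural end condition at the right end: s_4''(x_5) = 0
A[row, 4*(n-1):4*n] = ddbasis(x[-1])

a = np.linalg.solve(A, b)          # [a_1, b_1, c_1, d_1, ..., a_4, b_4, c_4, d_4]
print(np.round(a, 4))
```

The rows are generated in the same order as written in the matrix above: one natural end condition, then interpolation and smoothness conditions for each piece, then the other end condition.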
Now we can estimate the number of cells at times 3, 5, and 7 hours. All we need is to see which of the above cubic polynomials is appropriate for each point:
\(2\leq x=3\leq 4\rightarrow\) \(C(3)\approx 0.5809 - 0.6537(3) + 0.4778(3)^2 - 0.0103(3)^3=2.6411\)
\(4\leq x=5\leq 6\rightarrow\) \(C(5)\approx 3.6951 - 2.9895(5) + 1.0617(5)^2 - 0.0590(5)^3=7.9180\)
The last point, \(x=7\), is not within the domain of interpolation of any of the intervals. So, we need to extrapolate using the last polynomial (which is the closest to \(x=7\)):
\(C(7)\approx 3.6951 - 2.9895(7) + 1.0617(7)^2 - 0.0590(7)^3=14.5620\)
The cubic spline is plotted below, followed by the code used to generate the estimated values.
Note: Polynomial coefficients in MATLAB and Python are ordered from the highest power to the lowest, which is the reverse of the order in which we wrote the polynomials above. So, make sure you flip the coefficients (into decreasing order of power) when using these polynomials in software.
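For example, this small illustrative check evaluates the first spline polynomial at the node \(x=1\) with NumPy, flipping the coefficient list before calling ``polyval``:

```python
import numpy as np

# s1(x) = 0.1000 + 0.1457x + 0x^2 + 0.0953x^3, written lowest power first.
# np.polyval expects coefficients from the highest power down to the
# constant term, so the list must be reversed before evaluating:
coef_low_to_high = [0.1000, 0.1457, 0.0, 0.0953]
print(np.polyval(coef_low_to_high[::-1], 1.0))   # ~ 0.341, the data value at x = 1
```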
MATLAB code  Python code 

function y = cubic_spline_5pts(x, xx)
% Evaluates the cubic spline that was built from 5 nodes.
% Can be extended to splines built from more data nodes.
% >>> y = cubic_spline_5pts(x, xx)
%
% INPUTS:
%     x:  New value(s) of x at which to evaluate the
%         polynomial.  Can be a scalar, or a vector.
%     xx: The x-values of the nodes at which the cubic
%         spline was built.

% Since ``x`` can be a vector, we must iterate through the vector
% and evaluate the polynomial at each entry in ``x``.
y = zeros(size(x));
k = 1;
for val = x
    % Find which polynomial to use, based on the value of ``val``:
    % a cascade of if-elseif-...-else tests
    if val < xx(2)
        y(k) = s1(val);
    elseif val < xx(3)
        y(k) = s2(val);
    elseif val < xx(4)
        y(k) = s3(val);
    else
        y(k) = s4(val);
    end
    k = k + 1;
end

function y = s1(x)
% Spline connecting nodes 1 and 2
coef = [0.0953  0  0.1457  0.1000];
% Note: polynomial coefficients in MATLAB go from the
% highest power to the lowest power.
y = polyval(coef, x);

function y = s2(x)
% Spline connecting nodes 2 and 3
coef = [0.0433  0.1562  -0.0106  0.1521];
y = polyval(coef, x);

function y = s3(x)
% Spline connecting nodes 3 and 4
coef = [-0.0103  0.4778  -0.6537  0.5809];
y = polyval(coef, x);

function y = s4(x)
% Spline connecting nodes 4 and 5
coef = [-0.0590  1.0617  -2.9895  3.6951];
y = polyval(coef, x);

import numpy as np
from matplotlib.pylab import *

def cubic_spline_5pts(x, xx):
    """
    Evaluates the cubic spline that was built from 5 nodes.
    Can be extended to splines built from more data nodes.

    >>> y = cubic_spline_5pts(x, xx)

    INPUTS:
        x:  New value(s) of x at which to evaluate the
            polynomial.  Can be a scalar, or a vector.
        xx: The x-values of the nodes at which the cubic
            spline was built.
    """
    def s1(x):
        """ Spline connecting nodes 1 and 2 """
        coef = np.array([0.0953, 0, 0.1457, 0.1000])
        # Note: polynomial coefficients in Python go from the
        # highest power to the lowest power.
        return np.polyval(coef, x)

    def s2(x):
        """ Spline connecting nodes 2 and 3 """
        coef = np.array([0.0433, 0.1562, -0.0106, 0.1521])
        return np.polyval(coef, x)

    def s3(x):
        """ Spline connecting nodes 3 and 4 """
        coef = np.array([-0.0103, 0.4778, -0.6537, 0.5809])
        return np.polyval(coef, x)

    def s4(x):
        """ Spline connecting nodes 4 and 5 """
        coef = np.array([-0.0590, 1.0617, -2.9895, 3.6951])
        return np.polyval(coef, x)

    # Since ``x`` can be a vector, we must iterate through the vector
    # and evaluate the polynomial at each entry in ``x``.  Use the
    # ``enumerate`` function in Python.
    y = np.zeros(x.shape)
    for k, val in enumerate(x):
        # Find which polynomial to use, based on the value of ``val``:
        if val < xx[1]:
            y[k] = s1(val)
        elif val < xx[2]:
            y[k] = s2(val)
        elif val < xx[3]:
            y[k] = s3(val)
        else:
            y[k] = s4(val)
    return y

if __name__ == '__main__':
    # Experimental data points
    nodes = [0.0, 1.0, 2.0, 4.0, 6.0]
    C_exp = [0.1, 0.341, 1.102, 4.95, 11.24]

    time = np.arange(0.0, 8.0, 0.1)
    plot(time, cubic_spline_5pts(time, nodes), 'g-', linewidth=2)
    plot(nodes, C_exp, 'bo', markersize=10)
    grid('on')
    xlabel('Time (hr)')
    ylabel('C (g/L)')
    title('Cubic spline using 5 data points')
    axis([-0.1, 8.1, -0.1, 18.0])
    legend(['Spline polynomial', 'Data points'], loc='best')
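As a quick cross-check (a sketch, not part of the original solution; it assumes SciPy is available), scipy.interpolate.CubicSpline with bc_type='natural' imposes the same zero second-derivative end conditions used in the linear system above, and should reproduce the estimates at 3, 5 and 7 hours:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Same experimental nodes as above
nodes = [0.0, 1.0, 2.0, 4.0, 6.0]
C_exp = [0.1, 0.341, 1.102, 4.95, 11.24]

# 'natural' boundary conditions: second derivative is zero at both ends
cs = CubicSpline(nodes, C_exp, bc_type='natural')

# The last value lies outside the data range; CubicSpline extrapolates
# with the last polynomial piece, exactly as done by hand above.
vals = cs([3.0, 5.0, 7.0])
print(np.round(vals, 4))
```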

Bonus question [0.5]
Use the cubic spline from the previous question and find the time where the cell count was approximately 10.0 g/L. Do not solve the equation by hand, but investigate MATLAB's or Python's polynomial root finding function: roots(...)
Solution
The value of \(C=10\) falls between 4.95 (at \(t=4\)) and 11.24 (at \(t=6\)) in the table of experimental data. So, we use the last spline polynomial, which represents the points in this range:
\(C(t)\approx 3.6951 - 2.9895t + 1.0617t^2 - 0.0590t^3=10.0 \rightarrow\)
\(-6.3049 - 2.9895t + 1.0617t^2 - 0.0590t^3=0\)
This is a simple polynomial root finding problem which can be solved using either
 MATLAB:
roots([-0.0590, 1.0617, -2.9895, -6.3049])
 Python:
np.roots([-0.0590, 1.0617, -2.9895, -6.3049])
Since it is a cubic polynomial, three roots are returned: \(t_1 = 13.7417, t_2 = 5.6336, t_3 = -1.3804\).
In order to decide which one is the desired solution, we again look at the tabulated data. Since the \(C\) value falls between 4.95 and 11.24, the corresponding \(t\) value must also fall between 4.0 and 6.0. So, the desired root is \(t_2 = 5.6336\).
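The root selection can also be automated. This illustrative sketch (variable names are our own) forms the shifted polynomial \(s_4(t) - 10\) and keeps only the real root inside the interval \([4, 6]\):

```python
import numpy as np

# s4(t) - 10 = 0, with coefficients from the highest power to the constant
coef = [-0.0590, 1.0617, -2.9895, 3.6951 - 10.0]
r = np.roots(coef)

# Keep only real roots that fall in the interval of the last spline piece
t = [v.real for v in r if abs(v.imag) < 1e-9 and 4.0 <= v.real <= 6.0]
print(t)   # ~ [5.6336]
```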