Assignment 7 - 2014
Due date(s): 07 April 2014, in class.
Assignment questions (PDF)
Assignment solutions (full solutions) (PDF)
Note
Source code will not be graded in this assignment. The purpose of this assignment is to show your understanding of the software output.
Assignment objectives
Question 1 [11]
Your company is developing a microgel-hydrogel composite, used for controlled drug delivery with a magnetic field. A previous employee did the experimental work but she has since left the company. You have been asked to analyze the existing experimental data.
The response variable is \(y\) = sodium fluorescein (SF) released [mg per gram of gel]. The data collected, in the original units, are:
Experiment  Order  M = microgel weight [%]  H = hydrogel weight [%]  \(y\)
 1           4      4                        10                      119
 2           1      8                        10                       93
 3           6      4                        16                      154
 4           3      8                        16                       89
 5           2      6                        13                       85
 6           5      6                        13                       88
 7           9      3.2                      13                      125
 8           7      8.8                      13                      111
 9          10      6                        17.2                    136
10           8      6                         8.8                     98
What was likely the reason the experimenter added experiments 5 and 6?
Why might the experimenter have added experiments 7, 8, 9 and 10 after the first six? Provide a rough sketch of the design, and all necessary calculations to justify your answer.
What is the name of the type of experimental design chosen by the employee for all 10 experiments in the table?
Using these data, you wish to estimate a nonlinear approximation of the response surface using a model with quadratic terms. Write out the equation of such a model that can be calculated from these 10 experiments (also read the next question).
Write out
- the \(\mathbf{X}\) matrix,
- the corresponding symbolic entries in \(\mathbf{b}\)
- and the \(\mathbf{y}\) vector
that you would use to solve the set of linear equations \(\mathbf{b} = \left(\mathbf{X}^T \mathbf{X} \right)^{-1} \mathbf{X}^T \mathbf{y}\) to obtain the parameter estimates of the model you proposed in the previous part. You must use data from all 10 experiments.
How many degrees of freedom will be available to estimate the standard error and confidence intervals?
Now calculate the coefficients in the linear model using computer software. Which terms in the model are statistically significant?
Note: "linear" implies the model is linear in the coefficients, not in the terms; the coefficients in front of the nonlinear terms can still be found by solving a set of linear equations.
Brief answers are provided.
These are centerpoint (baseline) runs. They may have been run for some of the following reasons:
- To give degrees of freedom for calculating the standard error and then confidence intervals for the slopes.
- As trial runs; however, they were not run first, so this is unlikely.
- To obtain baseline values for later response surface optimization.
- To test the factorial model's predictive ability.
- To assess repeatability (i.e. calculate measurement noise) at the center point.
- One or more days might have elapsed between the runs, so these runs assess the robustness of the experimental system over time.
- To test for curvature: the average of the two centerpoints is \(0.5(85+88) = 86.5 \approx 87\), while the factorial model's intercept is \(b_0 = 0.25(119+93+154+89) \approx 114\). Since these differ substantially, as they do in this case, there is evidence of curvature.
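The curvature check above can be reproduced in a few lines of R; this is a sketch using only the values from the table:

```r
# Corner-point responses from the factorial portion (experiments 1-4)
corners <- c(119, 93, 154, 89)
# Centerpoint responses (experiments 5 and 6)
centers <- c(85, 88)

b0 <- mean(corners)            # factorial model's intercept estimate (113.75)
center_avg <- mean(centers)    # average at the center point (86.5)

# A large gap between the two suggests curvature in the response surface
curvature <- b0 - center_avg
```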
There is clear evidence of curvature. It is also plausible that the employee wanted to optimize the response variable; in that case, she would likely use response surface methods, climbing the path of steepest ascent.
In this model, the presence of curvature at the center point has already been shown. A quick calculation from the 4 corner points also shows a significant two-factor interaction.
Using response surface methods with only linear terms would therefore be misleading in this case. That is why the employee decided to add the extra experiments: runs 7 to 10 are the axial experiments needed to support quadratic terms.
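The quick interaction calculation mentioned above can be sketched from the four corner points alone, using their coded levels of M and H:

```r
# Corner-point responses in standard order: (M,H) = (-,-), (+,-), (-,+), (+,+)
y_corners <- c(119, 93, 154, 89)
xM <- c(-1, +1, -1, +1)
xH <- c(-1, -1, +1, +1)

# Interaction coefficient from the factorial points alone:
# (119 - 93 - 154 + 89)/4 = -9.75
b_MH <- sum(xM * xH * y_corners) / 4
```

Its magnitude (about 9.75) is large compared with the centerpoint spread of \(88 - 85 = 3\), a rough indication of the noise level, so the interaction appears significant.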
A central composite design, built on a full factorial experiment in two factors.
\(y = b_0 + b_M x_M + b_H x_H + b_{MH}x_M x_H + b_{MM} x_M^2 + b_{HH}x_H^2\)
The matrix \(\mathbf{X}\) has 10 rows and 6 columns to support the 6 terms in the above model.
\[\begin{split}\mathbf{y} &= \mathbf{X} \mathbf{b} + \mathbf{e}\\ \begin{bmatrix} 119 \\93 \\154 \\89 \\85 \\88 \\125 \\111 \\136 \\98 \end{bmatrix} &= \begin{bmatrix} 1 & -1 & -1 & +1 & +1 & +1\\ 1 & +1 & -1 & -1 & +1 & +1\\ 1 & -1 & +1 & -1 & +1 & +1\\ 1 & +1 & +1 & +1 & +1 & +1\\ 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0\\ 1 & -1.41&0 & 0 & 2 & 0\\ 1 & 1.41& 0& 0 & 2 & 0\\ 1 & 0 & 1.41& 0 & 0 & 2\\ 1 & 0 & -1.41& 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} b_0 \\ b_M \\ b_H \\ b_{MH} \\ b_{MM} \\ b_{HH} \end{bmatrix} + \mathbf{e}\\\end{split}\]
There are 4 degrees of freedom (10 observations minus 6 parameters).
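A sketch of how the parameter estimates could be computed in R from these matrices, assembling the coded \(\mathbf{X}\) matrix row by row with rbind (the variable names here are illustrative):

```r
y <- c(119, 93, 154, 89, 85, 88, 125, 111, 136, 98)
a <- sqrt(2)   # axial distance, approximately 1.41
X <- rbind(c(1, -1, -1, +1, 1, 1),
           c(1, +1, -1, -1, 1, 1),
           c(1, -1, +1, -1, 1, 1),
           c(1, +1, +1, +1, 1, 1),
           c(1,  0,  0,  0, 0, 0),
           c(1,  0,  0,  0, 0, 0),
           c(1, -a,  0,  0, 2, 0),
           c(1, +a,  0,  0, 2, 0),
           c(1,  0, +a,  0, 0, 2),
           c(1,  0, -a,  0, 0, 2))

# b = (X'X)^{-1} X'y, solved without forming the explicit inverse
b <- solve(t(X) %*% X, t(X) %*% y)
rownames(b) <- c("b0", "bM", "bH", "bMH", "bMM", "bHH")

# Degrees of freedom: 10 observations minus 6 parameters = 4
dof <- length(y) - ncol(X)
```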
See the code and output on the next two pages [credit: Ghassan Marjaba].
Question 2 [7]
The following diagram shows data from a central composite design. The factors were run at their standard levels, and there were 4 runs at the center point.
- Calculate the parameters for a suitable quadratic model in these factors. Show your matrices for \(\mathbf{X}\) and \(\mathbf{y}\).
- Draw a response surface plot of A vs B over a suitably wide range beyond the experimental region.
- Where would you move A and B if your objective is to increase the response value?
- Report your answer in coded units.
- Report your answer in real-world units, if the full factorial portion of the experiments was run at:
- A = stirrer speed, 200 rpm and 340 rpm
- B = stirring time, 30 minutes and 40 minutes
You might feel more comfortable setting up the problem in MATLAB. You can use the contour plot functions in MATLAB to visualize the results.
If you are using R, you can use the rbind(...) or cbind(...) functions to build up your \(\mathbf{X}\) matrix row-by-row or column-by-column. The equivalent of meshgrid in R is the expand.grid(...) function. See the R code on the course website that shows how to generate surface plots in R.
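As a small illustration of expand.grid (the levels below are arbitrary coded values, not data from the question):

```r
# expand.grid builds every combination of the supplied vectors,
# analogous to MATLAB's meshgrid
grd <- expand.grid(A = c(-1, 0, +1), B = c(-1, 0, +1))
nrow(grd)   # 9 combinations: 3 levels of A times 3 levels of B
```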
The matrices below [credit: Ghassan Marjaba] lead to the model of:
\[y = 261 - 4 x_A + 1.7 x_B - 15 x_A x_B - 6.6 x_A^2 - 8.8 x_B^2\]
The code below was used to generate the response surface:
y <- c(234, 252, 273, 231, 261, 258, 267, 258, 243, 243, 240, 249)
A <- c( -1,  +1,  -1,  +1,   0,   0,   0,   0,   0, sqrt(2),  0, -sqrt(2))
B <- c( -1,  -1,  +1,  +1,   0,   0,   0,   0, -sqrt(2), 0, sqrt(2),   0)
mod.ccd <- lm(y ~ A + B + A*B + I(A^2) + I(B^2))
summary(mod.ccd)

N <- 100     # resolution of surface (higher values give smoother plots)

# The lower and upper bounds, in coded units, over which we want
# to visualize the surface
bound <- 3
A_plot <- seq(-bound, bound, length=N)
B_plot <- seq(-bound, bound, length=N)
grd <- expand.grid(A=A_plot, B=B_plot)

# Predict directly from least squares model
grd$y <- predict(mod.ccd, grd)

library(lattice)
bitmap('central-composite-question-surface.png', type="png256",
       width=8, height=8, res=300, pointsize=14)
contourplot(y ~ A * B, data = grd,
            cuts = 10, region = TRUE,
            col.regions = terrain.colors,
            xlab = "A", ylab = "B",
            main = "Predicted response")
trellis.focus("panel", 1, 1, highlight=FALSE)
lpoints(A, B, pch="O", col="red", cex=3)
llines(c(-1, +1, +1, -1, -1), c(-1, -1, +1, +1, -1), col="red", lwd=3)
llines(c(-sqrt(2), +sqrt(2)), c(0, 0), col="red", lwd=3)
llines(c(0, 0), c(-sqrt(2), +sqrt(2)), col="red", lwd=3)
trellis.unfocus()
dev.off()
Several options are possible, but I would choose something not too far away, in the direction of the optimum:
- A = -2 and B = +2, perhaps, in coded units
- in real-world units this would correspond to:
- A = 270 + (-2)*(70) = 130 rpm
- B = 35 + (+2)*(5) = 45 minutes
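The coded-to-real conversion above follows from the center and half-range of each factor; as a check:

```r
# Factor A (stirrer speed): low = 200 rpm, high = 340 rpm
center_A <- (200 + 340) / 2   # 270 rpm
delta_A  <- (340 - 200) / 2   # 70 rpm per coded unit

# Factor B (stirring time): low = 30 min, high = 40 min
center_B <- (30 + 40) / 2     # 35 minutes
delta_B  <- (40 - 30) / 2     # 5 minutes per coded unit

A_real <- center_A + (-2) * delta_A   # coded A = -2
B_real <- center_B + (+2) * delta_B   # coded B = +2
```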
The predicted response at that location, \(x_A = -2\) and \(x_B = +2\), is:
\[\hat{y} = 261 - 4(-2) + 1.7(2) - 15 (-2)(2) -6.6 (-2)^2 -8.8 (2)^2 = 271\]
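This prediction can be verified numerically:

```r
xA <- -2
xB <- +2
# Evaluate the fitted quadratic model at the proposed point
y_hat <- 261 - 4*xA + 1.7*xB - 15*xA*xB - 6.6*xA^2 - 8.8*xB^2
round(y_hat)   # approximately 271
```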
Question 3 [30]
The next question is posted on the course website. It is due as a separate hand-in.