Answers in red or as R code.

Question 1: Simulating regression

We will look at regression and how it works with some simulated data. First we have to simulate some data:

alpha <- 5 # intercept
beta <- 5 # slope
sigma <- 1 # standard deviation
x <- runif(50, 0, 1) # random uniform distribution
mu <- alpha + beta*x
y <- rnorm(length(mu), mu, sigma)
plot(x, y)
lines(x, mu, col=2) # add mean line

We can fit the model. Remember, we are using a maximum likelihood approach, so the ‘best’ parameters are those which have the highest probability of having produced the data. We know the true parameter values, but the estimates will, of course, be slightly different:

mod <- lm(y ~ x)
coef(mod)
## (Intercept)           x 
##    4.897539    5.260016

How certain can we be about the estimates? We can answer this by calculating confidence intervals. Remember the definition of a confidence interval:

If we repeat our sample many times, 95% of the confidence intervals drawn would contain the true population value

So, we assume the MLEs are the true values. From these and the model, we know what would happen every time we repeated the sampling: we would get another data set, and from it we could calculate another slope estimate. If we do this lots of times, we get a distribution of slope estimates. We can do this virtually by simulation. First we write a function to simulate the data and calculate an estimate of the slope:

alpha.hat <- coef(mod)[1]
beta.hat <- coef(mod)[2]
sigma.hat <- summary(mod)$sigma # std deviation

# function to simulate the data, fit the model & return the slope estimate
SimData <- function(alpha, beta, sigma, X) {
  E.y <- alpha + beta*X # calculate the line
  y.sim <- rnorm(length(X), E.y, sigma) # simulate
  m <- lm(y.sim ~ X) # fit the model
  coef(m)[2] # return the estimated slope
}
# Run one simulation, to check the function works
SimData(alpha=alpha.hat, beta=beta.hat, sigma=sigma.hat, X=x)
##        X 
## 4.690532

You should read the function to check that you understand what each line does, and what the function produces.

With this function, we can simulate the data lots of times, and get a sampling distribution for the slope:

Rep.beta <- replicate(1e3, SimData(alpha=alpha.hat, beta=beta.hat, sigma=sigma.hat, X=x))
hist(Rep.beta)
# Calculate the 95% confidence interval
quantile(Rep.beta, c(0.025, 0.975))

So, you should run this and plot the histogram. This is the distribution of slope estimates that would arise if the data were repeatedly sampled. The confidence interval is a way of summarising the whole distribution: it puts limits on the likely values.
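If you want to see the interval on the histogram, you can mark the limits (a small optional sketch):

# mark the 2.5% and 97.5% quantiles on the histogram as vertical lines
abline(v=quantile(Rep.beta, c(0.025, 0.975)), col=2, lty=2)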

Your task: discuss the interpretation of the confidence interval. Explain exactly what it means.

If we repeat our sampling many times and each time make a confidence interval, 95% of the confidence intervals drawn would contain the true population value. This says nothing about how likely our current confidence interval is to contain the true population value. But it does give us a range of plausible values, and so helps us measure the uncertainty in our estimate. The wider the confidence interval, the higher the uncertainty.
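We can even check this coverage claim by simulation. The sketch below (with a hypothetical helper, CheckCoverage) repeatedly simulates data from the true parameters at the top of this document and asks how often the slope's confidence interval contains the true slope:

# simulate data from the true parameters, fit the model, and ask
# whether the 95% CI for the slope contains the true slope
CheckCoverage <- function(alpha, beta, sigma, X) {
  y.sim <- rnorm(length(X), alpha + beta*X, sigma)
  ci <- confint(lm(y.sim ~ X))["X", ]
  ci[1] < beta & beta < ci[2]
}
Covered <- replicate(1e3, CheckCoverage(alpha=5, beta=5, sigma=1, X=x))
mean(Covered) # should be close to 0.95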

The code to calculate the confidence intervals is below. You should find that they are very similar to those you got from simulation.

confint(mod)
##                2.5 %   97.5 %
## (Intercept) 4.326693 5.468384
## x           4.178829 6.341202

Now we can do some prediction. For some reason, we want to predict \(y\) when \(x=1\). We do this using the model:

\[ y_{pred} = \hat{\alpha} + \hat{\beta} x \] where \(\hat{\alpha}\) and \(\hat{\beta}\) are the MLEs, and \(x\) is the value we want to predict for. If we use the true values, we would have \(y_{pred} = 5 + 5 \times 1 = 10\). But the MLEs are slightly different

To predict, you need to use the equation above, replacing \(\hat{\alpha}\) and \(\hat{\beta}\) with the coefficients you estimated above (4.897539 and 5.260016), and \(x\) with 1, e.g. \(y_{pred} = 4.897539 + 5.260016 \times 1 = 10.16\).
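In R, the same calculation is one line (a sketch using the coefficients stored in mod):

# point prediction at x = 1: alpha.hat + beta.hat * 1
coef(mod)[1] + coef(mod)[2]*1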

We can summarise our uncertainty in the prediction with more confidence intervals.

The 95% prediction interval represents a range within which we would expect a future observation to fall 95% of the time. If we predict a new value and draw a 95% prediction interval many times, 95% of those intervals will contain the actual new observation for the specified response variable. It accounts for the uncertainty in the parameters and also in the data.

The function below is a modification of the function above: this one simulates the prediction.

# function to simulate the data, fit the model & return a prediction
SimPrediction <- function(alpha, beta, sigma, X, x.pred) {
  E.y <- alpha + beta*X
  y.sim <- rnorm(length(X), E.y, sigma)
  m <- lm(y.sim ~ X)
  pred.mu <- coef(m)[1] + coef(m)[2]*x.pred # mean prediction
  prediction <- rnorm(length(x.pred), pred.mu, sigma) # prediction
  prediction
}
# Run one simulation, to check the function works
SimPrediction(alpha=alpha.hat, beta=beta.hat, sigma=sigma.hat, X=x, x.pred=1)
## [1] 10.92794
Rep.pred <- replicate(1e3, SimPrediction(alpha=alpha.hat, beta=beta.hat, sigma=sigma.hat, X=x, x.pred=1))
hist(Rep.pred)
# Calculate the 95% prediction interval
quantile(Rep.pred, c(0.025, 0.975))

This is not just a prediction of the mean: we need the residual variance too, so that we simulate new observations rather than just new means.

See the code and output above.

R will calculate the prediction itself using the predict() function:

# create new data to predict for
PredictionData <- data.frame(x=1)
# get a point prediction
predict(mod, newdata = PredictionData)
# get a point prediction plus prediction interval
predict(mod, newdata = PredictionData, interval="prediction")

The code is above; the intervals should be very close to the simulated ones.

If you had used interval="confidence" rather than "prediction", you would get a confidence interval that accounts for the uncertainty in alpha and beta but not for the variance of the residuals. It would therefore underestimate the uncertainty of the prediction: it only shows the uncertainty in the estimate of the mean.
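To see the difference, ask predict() for both intervals and compare their widths (a quick sketch):

# confidence interval: uncertainty in the fitted mean only
predict(mod, newdata = PredictionData, interval="confidence")
# prediction interval: also includes the residual variance, so it is wider
predict(mod, newdata = PredictionData, interval="prediction")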

Question 2: How long will the Deathly Hallows be?

Here we have the lengths of the Harry Potter books plotted against their order in the series. This analysis was originally done in 2006, when the question was how long the next book would be.

# Make the data frame 
HPbooks <- data.frame(
  Book = c("Harry Potter and the Philosopher's Stone", "Harry Potter and the Chamber of Secrets",
           "Harry Potter and the Prisoner of Azkaban", "Harry Potter and the Goblet of Fire",
           "Harry Potter and the Order of the Phoenix", "Harry Potter and the Half-Blood Prince",
           "Harry Potter and the Deathly Hallows"),
  Length = c(223, 251, 317, 636, 766, 607, NA), # Deathly Hallows was unknown in 2006: it was 607 pages
  Date = c(1997, 1998, 1999, 2000, 2003, 2005, 2007)
)
HPbooks$BookOrder <- 1:nrow(HPbooks) # nrow() = count number of rows
# This will make the book titles easier to read
#  Feel free to ignore this: you don't need to know about gsub() for this course
HPbooks$BookLabels <- gsub("Harry Potter and the ", "", HPbooks$Book)

par(mar=c(6.1,4.1,1,1))
plot(HPbooks$BookOrder, HPbooks$Length, xlab="", ylab="Number of pages", xaxt="n")
axis(1, at=HPbooks$BookOrder, padj=0.8, cex.axis=0.8, labels = gsub(" ", "\n", HPbooks$BookLabels))
text(7, 600, "?", col=2, cex=5)

Your task is to look at the relationship between the length of a book and the order it is in the series. In particular, do later books tend to be longer than earlier ones? You should use lm() to regress the number of pages (Length) against where the book is in the series (BookOrder). Question 1 should help you with using the right code (and, yes, you can use the easy built in R functions).
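For reference, a sketch of the fit the answers below describe (re-using the name mod, so the predict() call further down runs on this model):

# fit the regression; the NA for Deathly Hallows is dropped automatically
mod <- lm(Length ~ BookOrder, data=HPbooks)
coef(mod)
confint(mod)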

Yes, the results of the lm() show an increase of 108 pages per book in the series. The confidence interval for this effect does not span 0, so we would be unlikely to see an effect this large if there were no trend.

An increase of 108 pages on average, with a 95% confidence interval of 25.68 to 190.54 pages.

predict(mod, newdata = data.frame(BookOrder = 7), interval="prediction"). The answer is 845 pages, with an interval of 373.94 to 1316.20.

It was good in that the actual value was within the interval, but the interval was so broad that it would include most modern fantasy novels, as well as War and Peace and all of the A Song of Ice and Fire books.

Question 3: The Climate Change of Crime

There are many effects of the changing climate, with species moving further north, and great tits (Parus major) becoming more murderous. But even humans are not immune. There have been suggestions that people also become more violent when the temperature gets hotter. We can explore some of the evidence about this, thanks to the Chicago Police Department. Our data comes from their online resources: these are the daily mean temperatures and numbers of crimes between January 1st 2001 and January 23rd 2019. We have removed data where the mean temperature was below freezing (the full data will appear later in the course). We can read in the data from the web (check that the link is correct!):

ChicagoCrime <- read.csv("https://www.math.ntnu.no/emner/ST2304/2019v/Week5/NoFreezeTempCrimeChicago.csv")
str(ChicagoCrime)
## 'data.frame':    57 obs. of  2 variables:
##  $ TempC: num  0.556 1.111 1.667 2.222 2.778 ...
##  $ Crime: num  888 969 950 995 891 ...

The data consist of the temperature in Celsius and, for each temperature, the average number of crimes reported (over the whole period).

A higher temperature means more crime.
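The comments below refer to a simple scatter plot, e.g. (a minimal sketch, using the variable names from str() above):

# plot crime against temperature
plot(ChicagoCrime$TempC, ChicagoCrime$Crime,
     xlab="Mean temperature (C)", ylab="Number of crimes")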

Looking at the plot, there are 3 points with more crime than the trend suggests: row numbers 25, 52 and 56. There might also be some curvature in the trend. And even though they are averages, the numbers of crimes are integers, so they have been rounded.

Yes it does: the relationship is fairly straight, and is continuous.

Now you should use lm() to fit a regression line.

All code below.
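For completeness, a minimal sketch of such a fit (crime.mod is a hypothetical name):

# regress number of crimes on temperature
crime.mod <- lm(Crime ~ TempC, data=ChicagoCrime)
coef(crime.mod)
confint(crime.mod)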