Question 1: Dragonfire

As part of our investigation of the re-introduction of dragons into Trondheim, we have been looking at fire precautions. We sent a student out to measure the lengths of 50 dragons, and (from a safe distance) recorded how far they could breathe fire after being poked with a very long stick. Each dragon was only tested once. Note that no major cities were destroyed during this experiment.

This is the data:

FileFlames <- "https://www.math.ntnu.no/emner/ST2304/2020v/Week05/DragonFire.csv"
DragonFire <- read.csv(file=FileFlames)

plot(DragonFire$Size, DragonFire$Distance, xlab="Length (m)", ylab="Distance (m)")
abline(lm(Distance~Size, data=DragonFire), col=2) # add the fitted regression line

We can fit the model. Remember, we are using a maximum likelihood approach, so the 'best' parameters are those under which the observed data have the highest probability: the maximum likelihood estimates (MLEs).

mod <- lm(Distance~Size, data=DragonFire)
coef(mod)
## (Intercept)        Size 
##   6.1453222   0.6359166

How certain can we be about the estimates? We can answer this by calculating confidence intervals. Remember the definition of a confidence interval:

If we repeat our sample many times, 95% of the confidence intervals drawn would contain the true population value

So, we assume the MLEs are the true values. The model then tells us what would happen if we repeated the sampling: each repeat would give a new data set, and from each data set we could estimate the slope. Doing this lots of times gives a distribution of slope estimates. We can do this virtually, by simulation. First we write a function to simulate one data set and calculate an estimate of the slope.

alpha.hat <- coef(mod)[1]
beta.hat <- coef(mod)[2]
sigma.hat <- summary(mod)$sigma # residual standard deviation

# function to simulate the data, fit the model & return the slope estimate
SimData <- function(alpha, beta, sigma, X) {
  E.y <- alpha + beta*X # calculate the line
  y.sim <- rnorm(length(X), E.y, sigma) # simulate the data
  m <- lm(y.sim ~ X) # fit the model
  coef(m)[2] # return the estimated slope
}
# Run one simulation, to check the function works
SimData(alpha=alpha.hat, beta=beta.hat, sigma=sigma.hat, X=DragonFire$Size)
##         X 
## 0.6144988

You should read the function to check that you understand what each line does, and what the function produces.
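
One practical point: SimData() draws random numbers, so your estimate will differ slightly from the one shown above each time you run it. If you want reproducible runs, you can set the random seed first (the seed value here is arbitrary):

set.seed(2304) # arbitrary seed, only for reproducibility
SimData(alpha=alpha.hat, beta=beta.hat, sigma=sigma.hat, X=DragonFire$Size)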

With this function, we can simulate the data lots of times, and get a sampling distribution for the slope:

Rep.beta <- replicate(1e3, SimData(alpha=alpha.hat, beta=beta.hat, sigma=sigma.hat, X=DragonFire$Size))
hist(Rep.beta)
# Calculate the 95% confidence interval
quantile(Rep.beta, c(0.025, 0.975))

This is the distribution of the estimates of the slope that we would get if the data were repeatedly sampled. The confidence intervals are a way of summarising the whole distribution: they put limits on the likely values.

R can calculate the confidence interval for us, with the confint() function:
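
confint(mod) # 95% intervals for the intercept and slope, by default
confint(mod, level=0.90) # other levels via the level argument

Compare the interval for Size with the simulation-based interval from quantile() above: they should be very similar.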

Now we can do some prediction. One released dragon has taken up residence in a small lake, and it will sometimes sit at the water's edge and breathe fire. We want to know what distance from the lake is safe, for example where trees should be felled so that they don't burn. We know this dragon is 12 m long, so we want to predict how far it can breathe fire. We do this using the model:

\[ y_{pred} = \hat{\alpha} + \hat{\beta} x \] where \(\hat{\alpha}\) and \(\hat{\beta}\) are the MLEs, and \(x\) is the value we want to predict for.
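
For our 12 m dragon we can compute this directly from the coefficients printed above:

alpha.hat + beta.hat*12 # point prediction by hand: about 13.8 m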

We can summarise our uncertainty in the prediction with more confidence intervals.

The function below is a modification of the function above: this one simulates a prediction. By 'simulate' we mean that it mimics what would happen if we repeated our sampling many, many times and made a prediction each time. In reality we don't do this; the mathematics behind maximum likelihood estimation gives the same answer analytically, and that is what R computes for us.

# function to simulate the data, fit the model & return a prediction
SimPrediction <- function(alpha, beta, sigma, X, x.pred) {
  E.y <- alpha + beta*X
  y.sim <- rnorm(length(X), E.y, sigma)
  m <- lm(y.sim ~ X)
  pred.mu <- coef(m)[1] + coef(m)[2]*x.pred # mean prediction
  prediction <- rnorm(length(x.pred), pred.mu, sigma) # simulate a new observation around the predicted mean
  prediction
}
# Run one simulation, to check the function works
SimPrediction(alpha=alpha.hat, beta=beta.hat, sigma=sigma.hat, X=DragonFire$Size, x.pred=1)
## [1] 4.907338
Rep.pred <- replicate(1e3, SimPrediction(alpha=alpha.hat, beta=beta.hat, sigma=sigma.hat, X=DragonFire$Size, x.pred=12))
hist(Rep.pred)
# Calculate the 95% prediction interval
quantile(Rep.pred, c(0.025, 0.975))

R can calculate the prediction for us, using the predict() function:

# create new data to predict for
PredictionData <- data.frame(Size=12)
# get a point prediction
predict(mod, newdata = PredictionData)
# get a point prediction plus prediction interval
predict(mod, newdata = PredictionData, interval="prediction")
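
Note that interval="prediction" gives an interval for a single new observation, so it includes the residual scatter around the line. If instead you want an interval for the mean distance at this size, predict() also accepts interval="confidence", which is narrower:

# confidence interval for the mean distance at this size (narrower)
predict(mod, newdata = PredictionData, interval="confidence")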

Question 2: How long will the Deathly Hallows be?

Here we have the lengths of the Harry Potter books plotted against their order in the series. This analysis was originally done in 2006, when the question was how long the next book would be. First, just run the following code to create the data and plot it.

# Make the data frame 
HPbooks <- data.frame(
  Book = c("Philosopher's Stone", "Chamber of Secrets", "Prisoner of Azkaban",
          "Goblet of Fire", "Order of the Phoenix", "Half-Blood Prince", 
          "Deathly Hallows"),
  Length = c(223, 251, 317, 636, 766, 607, NA), # 607
  Date = c(1997, 1998, 1999, 2000, 2003, 2005, 2007)
)
HPbooks$BookOrder <- 1:nrow(HPbooks) # nrow() = count number of rows
# This will make the book titles easier to read
#  Feel free to ignore this: you don't need to know about gsub() for this course
HPbooks$BookLabels <- gsub("Harry Potter and the ", "", HPbooks$Book)

par(mar=c(6.1,4.1,1,1))
plot(HPbooks$BookOrder, HPbooks$Length, xlab="", ylab="Number of pages", xaxt="n")
axis(1, at=HPbooks$BookOrder, padj=0.8, cex.axis=0.8, labels = gsub(" ", "\n", HPbooks$BookLabels))
text(7, 600, "?", col=2, cex=5)

Your task is to look at the relationship between the length of a book and its order in the series. In particular, do later books tend to be longer than earlier ones? You should use lm() to regress the number of pages (Length) against the book's position in the series (BookOrder). Question 1 should help you with the right code (and, yes, you can use the easy built-in R functions); a sketch is given below.
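
To get you started, here is a minimal sketch of the sort of code we have in mind (the name mod.HP is our own choice; note that lm() silently drops the NA for Deathly Hallows):

mod.HP <- lm(Length ~ BookOrder, data=HPbooks) # fit the regression
summary(mod.HP) # look at the estimated slope
confint(mod.HP) # how certain are we about it?
# predict the length of book 7, with a prediction interval
predict(mod.HP, newdata=data.frame(BookOrder=7), interval="prediction")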

Question 3: The Climate Change of Crime

There are many effects of the changing climate: species are moving further north, and great tits (Parus major) are becoming more murderous. But even humans are not immune. There have been suggestions that people become more violent when the temperature gets hotter. We can explore some of the evidence about this, thanks to the Chicago Police Department. Our data come from their online resources: these are the daily mean temperatures and numbers of crimes between January 1st 2001 and January 23rd 2019. We have removed data where the mean temperature was below freezing (the full data will appear later in the course). We can read in the data from the web (check that the link is correct!):

ChicagoCrime <- read.csv("https://www.math.ntnu.no/emner/ST2304/2019v/Week5/NoFreezeTempCrimeChicago.csv")
str(ChicagoCrime)
## 'data.frame':    57 obs. of  2 variables:
##  $ TempC: num  0.556 1.111 1.667 2.222 2.778 ...
##  $ Crime: num  888 969 950 995 891 ...

The data consist of the temperature in Celsius and, for each temperature, the average number of crimes reported (over the whole period).

Now you should use lm() to fit a regression line.
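
As in the previous questions, something like the following sketch should work (mod.crime is just an illustrative name):

mod.crime <- lm(Crime ~ TempC, data=ChicagoCrime) # fit the regression
summary(mod.crime) # slope = change in crimes per degree Celsius
confint(mod.crime) # uncertainty in the estimates
# plot the data and add the fitted line
plot(ChicagoCrime$TempC, ChicagoCrime$Crime, xlab="Temperature (C)", ylab="Number of crimes")
abline(mod.crime, col=2)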