Statistical Analysis: an Introduction using R/Chapter 3

Models


 * R topics: user functions & grouping using curly braces

The Role of Assumptions
Imagine that we have some observations, and we want to use them to conclude something about the world around us. Statistics can help us in the common case when the observations are composed of a systematic component combined with random chance effects. A classic "toy" example is that of dice. Let us say that we wish to analyse the following result, obtained by rolling a die five times: 2, 3, 6, 6, 6.

Although as a general rule, your first step should be to plot your data, there is little point in this instance. The dataset is so small that we can get a feel for it by just inspecting the numbers. The main striking feature is that we seem to have a preponderance of 6's, although this could, of course, be due to chance.

We could just treat the sequence {2,3,6,6,6} as a unique, arbitrary sequence of events. But this is rather pointless: data is usually analysed in order to seek general patterns and, by generalizing, to increase our understanding. In this case, we might wish to use the result to decide whether the die is loaded in favour of sixes, and hence whether it can be relied upon for playing a game.

Naively, we might hope to analyse this with a completely open mind; to approach the situation with no prior assumptions. A moment's thought should reveal that this is impossible. For example, imagine that the die was a fraudster's dream: a sophisticated miniature machine that could be pre-programmed to give a particular sequence of numbers each day. It could, for example, be programmed to give a 2, followed by a 3, then three 6's, and spend the rest of the day rolling 1's. In this case, the results tally exactly with what we have observed, but they do not tell us anything about the subsequent behaviour of the die. Although it explains the data perfectly, most people would (quite reasonably) adopt the prior assumption that the "miniature machine" explanation was highly unlikely.

This is, of course, an extreme example, but it illustrates the point. Whether or not we realise it, we always examine data with prior notions of what is a reasonable explanation and what is not. Statistical analysis is the process of formalising these explanations, then using the data to choose between them. A good way to do this is by describing the assumptions that we have made in each case. For example, the following two assumptions are common to nearly all explanations that we might want to test.


 * The Assumption of Honesty : We assume that the data has been collected and reported "honestly". This might not be the case if the data has been deliberately altered, or certain values have been "censored". In our toy example, this might happen if, say, the first four rolls were all 1's but the observer discarded these because they seemed an "unusual pattern". Although dishonesty might seriously alter our conclusions, all we can do is to assume it has not taken place.


 * The Assumption of Random Error : We assume that the data have been affected by a process of "random chance", causing the results to vary from one instance to the next in an unpredictable manner. In the case of the die, small changes in the way in which it is thrown and its subsequent tumbling and contact with the surface combine to produce one of six outcomes. Statisticians often (somewhat confusingly) refer to this process of chance as the source of "error" in the data. Describing the way in which chance works usually entails a whole set of other assumptions, which are the focus of this chapter.

Making the assumptions clear
The problem with making any assumptions is that they are just that: assumptions. They may or may not be true. When trying to convince others with our analysis, we are asking them to take our assumptions on trust. For this reason we should try to make widely accepted assumptions and, more importantly, ensure they are completely explicit. That way, others can decide for themselves whether the analysis is to be trusted. We can encapsulate this most easily and concisely by formulating a model of the underlying process.

Modelling reality
A common way of understanding the world around us is to describe it in terms of a model: a simplified description that captures the features we care about. The more closely a model captures the relevant features of reality, the more useful it is.

Many basic statistics books teach simple tests, such as the t-test or sign-test. Each of these is based on an underlying model, even if that model is not stated explicitly.

So what does an appropriate model look like? There are various ways in which we can describe one; the simplest is in words, as in the examples below.

Some verbal models
Consider testing a particular model, such as Model 1 below. We can easily disprove it with a single observation: any roll that is not a 6. However, we can never prove it, since no matter how many 6's we observe, the next roll could still differ. This turns out to be generally true: it is impossible to prove that a model is correct, because there could always be an observation yet to come that contradicts it.
 * Model 1 — a completely biased die: the die always gives a 6 when thrown. No extra assumptions are needed here, and in this extreme case, there is no error process.


 * Model 2 — a fair die: Here is a more complicated model, with the following assumptions:
 * The sample space: each roll has six possible outcomes, a number from 1 to 6. Because any number can occur again on later rolls, this is sampling "with replacement". This assumption defines the set of possibilities (the sample space).
 * The assumption of independence: one roll does not tell us anything extra about what will happen on a subsequent roll.
 * The assumption of homogeneity: the chance of any outcome is the same for each roll.
 * The assumption of fairness: each of the six numbers has an equal chance of being chosen on any roll.

If the model contains an element of chance, how can we know whether it is compatible with our observations?

Once we have our models, we could either
 * try to disprove one of them (although, because the fair-die model always allows some slight chance of a run of 6's, we can never disprove it completely), or
 * compare the models in order to find out which is better.

The simplest approach is simulation (a more formal alternative is based on the idea of likelihood).

Simulating a model
One of the major ways in which we can use models is simulation, and this is how models will mainly be explored in this book. To do so, we need to convert the various models described above into simulations. The "fair die" model above provides a good, simple example. We will convert this model to a simulation in R. This involves learning a little about how R deals with numbers, so you should check that you are comfortable with the idea of functions in R, as described previously.

Sampling with replacement: simulating the fair-die model amounts to random sampling, drawing values at random from the numbers 1 to 6 with replacement, so that the same number can be drawn again on later rolls.
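As a sketch of this idea, the fair-die model can be simulated with R's sample() function, which draws random values from a set, here with replacement. The function roll.die below is our own, not a standard R function; note the use of curly braces to group the function body.

```r
# Roll a fair die n times: sample from 1..6 with replacement, so each
# roll has the same six equally likely outcomes, regardless of what
# has gone before (independence, homogeneity, and fairness).
roll.die <- function(n) {
  sample(1:6, size = n, replace = TRUE)
}

roll.die(5)  # one simulated dataset of five rolls
```

Because replace = TRUE, each of the five draws is made from the full set 1 to 6, matching the assumptions of Model 2.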


 * Statistical_Analysis:_an_Introduction_using_R/R/Random sampling

Testing Models
We can either try to disprove a particular model, or select between different models using some informed judgement.

There are various ways to test a model. For example, we can compare results simulated from the model with the observed data.

We now have a simple method of simulating data produced by the model. How can we compare these simulations with our actual observations?

A single simulation is unlikely to reproduce exactly the sequence we observed, so comparing raw sequences is unhelpful. A classic solution is to summarise each dataset, real or simulated, by a sample statistic: here, the number of times the most frequent value occurs. Note that the choice of statistic reflects what we would find surprising: three 5's or three 1's would have surprised us just as much as three 6's. This links to the idea of a probability space.

The sample statistic

To calculate our sample statistic, we first need a way of counting the frequency of each number in a vector. R provides a function called tabulate which does exactly that. We can then use the function max to find how often the most frequent number occurred.
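For example, applying these two functions to our observed data: tabulate() counts the occurrences of 1, 2, 3, and so on in a vector, and max() picks out the largest count.

```r
rolls <- c(2, 3, 6, 6, 6)   # the observed data

counts <- tabulate(rolls)   # occurrences of 1, 2, ..., 6 in turn
counts                      # 0 1 1 0 0 3

max(counts)                 # 3: the most frequent number occurred 3 times
```

Here the statistic takes the value 3, because 6 occurred three times.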

Now we can apply this statistic to 1000 datasets simulated from our fair-die model.
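One way to do this (a sketch using base R's replicate(); again note the curly braces grouping several commands) is to simulate 1000 datasets of five fair rolls and record the sample statistic for each:

```r
# For each of 1000 simulated datasets of 5 fair rolls, record how
# often the most frequent number occurred in that dataset.
most.frequent <- replicate(1000, {
  rolls <- sample(1:6, size = 5, replace = TRUE)
  max(tabulate(rolls))
})

table(most.frequent)  # how often each value of the statistic arose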

You should be able to see that there is a smattering of 1's (simulations in which all five rolls differed), that 2 is by far the most common value of the statistic, and that values of 4 or 5 are rare. A value of 3, as in our observed data, is not particularly unusual.

This naturally raises the question of exactly how likely each value of the statistic is: a prelude to the concept of probability.