Probability Distributions

One of the most useful things one can do with a distribution is use it to determine the probabilities of occurrences. In particular, one axis of a frequency distribution histogram labels the possible experimental values, and the other gives the number of occurrences (frequency) of each value. We may rescale the frequency axis so that the total area under the distribution is 1, which then represents a probability of 1 that the result of any trial (experiment) will be some allowable value. That is,

P(some allowable value occurs) = 1.

Rescaling like this changes a frequency distribution into a probability distribution. This can be done both with discrete histograms and with continuous distributions, of which we will look at two (the normal and the extreme value distributions).
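As a small sketch of this rescaling (assuming Python with numpy is available; the sample data and bin count here are hypothetical, chosen only for illustration):

import numpy as np

# Hypothetical experimental results: many repeated measurements.
rng = np.random.default_rng(0)
samples = rng.normal(loc=10.0, scale=2.0, size=10_000)

# Frequency distribution: counts of values falling in each bin.
counts, edges = np.histogram(samples, bins=40)
widths = np.diff(edges)

# Rescale so that the total area (bar height times bar width, summed
# over all bins) equals 1; the result is a probability distribution.
density = counts / (counts.sum() * widths)

total_area = np.sum(density * widths)
print(f"total area under the rescaled histogram: {total_area:.6f}")  # ~1.0

(numpy's histogram function can also do this rescaling directly with density=True; the explicit division above just makes the change of scale visible.)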

Suppose the result of an experiment is a number X, and we want P(a<X<b), where a and b determine a range of values of interest, and P(a<X<b) is the probability that the experimental value X falls in that range. Given a probability distribution, we find

P(a<X<b) = area under the distribution over the interval (a,b).

(In particular, when dealing with continuous distributions it makes no sense to talk about the probability of a single exact value occurring: there are infinitely many possible values, and the area over any single point is zero.)
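A brief sketch of computing such a probability as an area (assuming Python with scipy; the normal distribution, its mean and standard deviation, and the interval (a,b) below are hypothetical choices for illustration):

from scipy.stats import norm

# Hypothetical distribution and interval of interest.
mu, sigma = 10.0, 2.0
a, b = 8.0, 12.0

# P(a < X < b) is the area under the density curve between a and b,
# which equals F(b) - F(a), where F is the cumulative distribution function.
p = norm.cdf(b, loc=mu, scale=sigma) - norm.cdf(a, loc=mu, scale=sigma)
print(f"P({a} < X < {b}) = {p:.4f}")  # about 0.6827 (within one standard deviation)

# The area over a single point is zero, so P(X = exactly 10.0) = 0.

The same idea works with the rescaled histogram from the earlier sketch: summing density * width over just the bins between a and b gives the corresponding area.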
To make this notion clearer I constructed the little tutorial below. Work through it, then let's carry on.