# Markov Inequality
Suppose we have a distribution of a random variable $X$ with a CDF $F_X(x)$. We are interested in finding the probability of the random variable holding a value greater than or equal to some threshold $a$, i.e. $P(X \geq a)$.
Let the distribution of $X$ be split into two groups.
- Group $A$, where $X \geq a$
- Group $B$, where $X < a$
Due to the linearity of expected value, we can say that

$$E[X] = E\big[X \cdot \mathbf{1}_A\big] + E\big[X \cdot \mathbf{1}_B\big]$$

where $\mathbf{1}_A$ and $\mathbf{1}_B$ are the indicators of the two groups. Since $X$ is non-negative, the second term is at least $0$, and within group $A$ we have $X \geq a$. Therefore,

$$E[X] \geq E\big[X \cdot \mathbf{1}_A\big] \geq a \cdot P(X \geq a)$$
For any non-negative random variable $X$ with a finite mean $E[X]$ and any $a > 0$,

$$P(X \geq a) \leq \frac{E[X]}{a}$$
Markov's Inequality gives only a loose upper bound on the probabilities, especially for the tails. At times the upper bound it provides can even be greater than 1.
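To see how loose the bound can be, here is a small Python sketch (an illustration added here, not part of the original derivation) that compares the Markov bound $E[X]/a$ against an empirical estimate of $P(X \geq a)$. The Exponential population with mean 2 and the thresholds are arbitrary choices for the example.

```python
import numpy as np

# Markov's inequality check: P(X >= a) <= E[X] / a for a non-negative X.
# The Exponential(mean = 2) population below is an arbitrary illustrative choice.
rng = np.random.default_rng(0)
samples = rng.exponential(scale=2.0, size=1_000_000)   # E[X] = 2
mean = samples.mean()

for a in [1, 2, 4, 8]:
    empirical = (samples >= a).mean()   # Monte Carlo estimate of P(X >= a)
    bound = mean / a                    # Markov upper bound E[X]/a
    print(f"a={a}: P(X>=a) ~ {empirical:.4f}, Markov bound = {bound:.4f}")
# For a = 1 the bound (~2.0) is greater than 1, i.e. completely uninformative.
```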
# Chebyshev Inequality
Consider some $\epsilon$-neighbourhood around the mean $\mu$ of some random variable $X$. For any value of $X$ to lie outside this neighbourhood, we can say that it should lie in $(-\infty, \mu - \epsilon] \cup [\mu + \epsilon, \infty)$, i.e. $|X - \mu| \geq \epsilon$.
Because $|X - \mu| \geq \epsilon$ holds if and only if $(X - \mu)^2 \geq \epsilon^2$, we can say that,

$$P\big(|X - \mu| \geq \epsilon\big) = P\big((X - \mu)^2 \geq \epsilon^2\big)$$
Because $(X - \mu)^2$ is a non-negative quantity, we can apply Markov's Inequality on it and state,

$$P\big((X - \mu)^2 \geq \epsilon^2\big) \leq \frac{E\big[(X - \mu)^2\big]}{\epsilon^2}$$
By definition of variance, $\operatorname{Var}(X) = E\big[(X - \mu)^2\big]$. Using the equality from above, we can say that,

$$P\big(|X - \mu| \geq \epsilon\big) \leq \frac{\operatorname{Var}(X)}{\epsilon^2}$$
For any real valued random variable $X$ with mean $\mu$ and variance $\sigma^2$, we say that

$$P\big(|X - \mu| \geq \epsilon\big) \leq \frac{\sigma^2}{\epsilon^2}$$
We can apply this inequality to a Normal Distribution and try to find the probability of some value lying more than $k\sigma$ away from the mean $\mu$. For such a case,

$$P\big(|X - \mu| \geq k\sigma\big) \leq \frac{\sigma^2}{(k\sigma)^2} = \frac{1}{k^2}$$
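The bound $1/k^2$ can be compared directly with the exact tail probability of a Normal distribution. The short sketch below is added for illustration; the standard normal and the values of $k$ are assumptions, not from the text.

```python
from scipy.stats import norm

# Chebyshev bound vs. the exact probability P(|X - mu| >= k*sigma)
# for a Normal distribution (standard normal used for illustration).
for k in [1, 2, 3, 4]:
    chebyshev_bound = 1.0 / k**2
    exact_tail = 2 * norm.sf(k)          # P(|Z| >= k) for Z ~ N(0, 1)
    print(f"k={k}: Chebyshev bound = {chebyshev_bound:.4f}, exact = {exact_tail:.6f}")
# Chebyshev is distribution-free, so it is far more conservative than the
# exact normal tail (e.g. 0.25 vs. ~0.0455 at k = 2).
```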
Note -
Both Markov's and Chebyshev's Inequalities are independent of the random variable's distribution. Markov's Inequality needs only the mean (and non-negativity of the random variable), while Chebyshev's needs the mean and the standard deviation.

# Central Limit Theorem

Some definitions -

1. **Population -** The entire set of observations of our interest.
2. **Sample -** A subset of the population.
3. **Population mean -** The mean of the population.
4. **Sample mean -** The mean of each individual sample.

If random samples of $n$ observations are drawn from a population with mean $\mu$ and standard deviation $\sigma$, then for a fairly large $n$ the sampling distribution of the sample mean $\bar{X}$ is approximately normally distributed with mean $\mu$ and standard deviation $\frac{\sigma}{\sqrt{n}}$. As $n$ tends to infinity, this standard deviation becomes very small and the distribution of the sample mean gets increasingly concentrated around $\mu$.
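A quick simulation can make this concrete. The sketch below is an added illustration; the Exponential population and the sample sizes are assumptions. It draws many samples of size $n$ from a skewed population and checks that the sample means have mean $\approx \mu$ and standard deviation $\approx \sigma/\sqrt{n}$.

```python
import numpy as np

# CLT illustration: sample means from a skewed Exponential(mean = 1) population
# (so mu = 1 and sigma = 1) concentrate around mu with std sigma/sqrt(n).
rng = np.random.default_rng(42)
mu, sigma = 1.0, 1.0
for n in [5, 30, 200]:
    sample_means = rng.exponential(scale=mu, size=(100_000, n)).mean(axis=1)
    print(f"n={n:>3}: mean of sample means = {sample_means.mean():.3f}, "
          f"std = {sample_means.std():.3f}, sigma/sqrt(n) = {sigma/np.sqrt(n):.3f}")
# A histogram of sample_means looks increasingly bell-shaped as n grows,
# even though the underlying population is heavily skewed.
```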
The sample mean and the sample variance converge to the population mean and variance, respectively, as the sample size increases.
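A small numerical check (added here for illustration; the Uniform(0, 1) population is an assumption) shows both quantities settling near the population values as the sample size grows.

```python
import numpy as np

# Sample mean and sample variance converging to the population values
# (Uniform(0, 1) population used for illustration: mu = 0.5, var = 1/12).
rng = np.random.default_rng(1)
for n in [10, 1_000, 100_000]:
    x = rng.uniform(0.0, 1.0, size=n)
    print(f"n={n:>6}: sample mean = {x.mean():.4f} (mu = 0.5), "
          f"sample variance = {x.var(ddof=1):.4f} (var = {1/12:.4f})")
```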
Consider repeated individual trials of a random experiment. The outcome of each trial is one observation of the random variable $X$.
Each outcome itself is a random variable, and we denote each outcome as $X_i$. Upon carrying out $n$ such trials, we have the sample values of the R.Vs $X_1, X_2, \ldots, X_n$.
Assume that each R.V $X_i$ is independent and identically distributed with

$$E[X_i] = \mu \quad \text{and} \quad \operatorname{Var}(X_i) = \sigma^2$$
Let the sample mean be

$$\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i$$
Then as $n \to \infty$, for the sample mean $\bar{X}_n$,

$$P\big(|\bar{X}_n - \mu| \geq \epsilon\big) \to 0 \quad \text{for every } \epsilon > 0$$

This is exactly what Chebyshev's Inequality gives us: since $E[\bar{X}_n] = \mu$ and $\operatorname{Var}(\bar{X}_n) = \frac{\sigma^2}{n}$,

$$P\big(|\bar{X}_n - \mu| \geq \epsilon\big) \leq \frac{\sigma^2}{n\epsilon^2} \to 0 \quad \text{as } n \to \infty$$
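This convergence is easy to observe numerically. The sketch below is added for illustration; the fair-die population, $\epsilon = 0.2$ and the sample sizes are assumptions. It estimates $P(|\bar{X}_n - \mu| \geq \epsilon)$ by simulation and compares it with the Chebyshev bound $\sigma^2/(n\epsilon^2)$.

```python
import numpy as np

# Weak law of large numbers, illustrated with a fair six-sided die:
# mu = 3.5, sigma^2 = 35/12. Estimate P(|X_bar_n - mu| >= eps) and
# compare it with the Chebyshev bound sigma^2 / (n * eps^2).
rng = np.random.default_rng(7)
mu, var, eps = 3.5, 35 / 12, 0.2
for n in [10, 100, 1_000, 10_000]:
    rolls = rng.integers(1, 7, size=(20_000, n))   # 20,000 experiments of n rolls each
    sample_means = rolls.mean(axis=1)
    deviation_prob = (np.abs(sample_means - mu) >= eps).mean()
    bound = min(var / (n * eps**2), 1.0)           # Chebyshev bound, capped at 1
    print(f"n={n:>6}: P(|X_bar - mu| >= eps) ~ {deviation_prob:.4f}, bound = {bound:.4f}")
```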