
Variance

Summary: The mean value locates the center of the probability mass distribution induced by X on the real line. In this unit, we examine how expectation may be used for further characterization of the distribution for X. In particular, we deal with the variance and its square root, the standard deviation. We identify some important properties of variance and introduce the concept of covariance. The variance is calculated for several distributions, comparing analytical and MATLAB results.


In the treatment of the mathematical expectation of a real random variable X, we note that the mean value locates the center of the probability mass distribution induced by X on the real line. In this unit, we examine how expectation may be used for further characterization of the distribution for X. In particular, we deal with the concept of variance and its square root, the standard deviation. In subsequent units, we show how expectation may be used to characterize the distribution for a pair {X, Y} considered jointly, with the concepts of covariance and linear regression.

Variance

Location of the center of mass for a distribution is important, but provides limited information. Two markedly different random variables may have the same mean value. It would be helpful to have a measure of the spread of the probability mass about the mean. Among the possibilities, the variance and its square root, the standard deviation, have been found particularly useful.
Definition. The variance of a random variable X is the mean square of its variation about the mean value:
Var[X] = σ_X^2 = E[(X - μ_X)^2], where μ_X = E[X]   (1)
The standard deviation for X is the positive square root σ_X of the variance.

Remarks

  • If X(ω) is the observed value of X, its variation from the mean is X(ω) - μ_X. The variance is the probability weighted average of the square of these variations.
  • The square of the error treats positive and negative variations alike, and it weights large variations more heavily than smaller ones.
  • As in the case of mean value, the variance is a property of the distribution, rather than of the random variable.
  • We show below that the standard deviation is a “natural” measure of the variation from the mean.
  • In the treatment of mathematical expectation, we show that
    E[(X - c)^2] is a minimum iff c = E[X], in which case E[(X - E[X])^2] = E[X^2] - E^2[X]   (2)
    This shows that the mean value is the constant which best approximates the random variable, in the mean square sense. (This minimizing property is illustrated numerically in the sketch below.)
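As a numerical illustration, here is a minimal sketch in plain MATLAB (no toolbox functions; the simple distribution is an arbitrary one chosen for the demonstration). It evaluates E[(X - c)^2] over a grid of constants c and confirms that the minimum occurs at c = E[X] and equals Var[X].
X = [1 3 4 7];  PX = [0.2 0.3 0.4 0.1];      % arbitrary simple distribution
EX = X*PX';                                   % mean value E[X]
c = 0:0.01:8;                                 % grid of candidate constants
msq = arrayfun(@(cc) ((X - cc).^2)*PX', c);   % E[(X - c)^2] for each c
[mn, k] = min(msq);
disp([c(k) EX])                               % minimizing c agrees with E[X]
disp([mn (X.^2)*PX' - EX^2])                  % minimum agrees with Var[X]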
Basic patterns for variance
Since variance is the expectation of a function of the random variable X, we utilize properties of expectation in computations. In addition, we find it expedient to identify several patterns for variance which are frequently useful in performing calculations. For one thing, while the variance is defined as E[(X - μ_X)^2], this is usually not the most convenient form for computation. The result quoted above gives an alternate expression.
  • (V1): Calculating formula. Var[X] = E[X^2] - E^2[X].
  • (V2): Shift property. Var[X + b] = Var[X]. Adding a constant b to X shifts the distribution (hence its center of mass) by that amount. The variation of the shifted distribution about the shifted center of mass is the same as the variation of the original, unshifted distribution about the original center of mass.
  • (V3): Change of scale. Var[aX] = a^2 Var[X]. Multiplication of X by constant a changes the scale by a factor |a|. The squares of the variations are multiplied by a^2. So also is the mean of the squares of the variations. (Patterns (V1)-(V3) are checked numerically in the sketch following the remarks below.)
  • (V4): Linear combinations
    1. Var[aX ± bY] = a^2 Var[X] + b^2 Var[Y] ± 2ab(E[XY] - E[X]E[Y])
    2. More generally,
       Var[∑_{k=1}^n a_k X_k] = ∑_{k=1}^n a_k^2 Var[X_k] + 2 ∑_{i<j} a_i a_j (E[X_i X_j] - E[X_i]E[X_j])   (3)
    The term c_ij = E[X_i X_j] - E[X_i]E[X_j] is the covariance of the pair {X_i, X_j}, whose role we study in the unit on that topic. If the c_ij are all zero, we say the class is uncorrelated.
Remarks
  • If the pair {X, Y} is independent, it is uncorrelated. The converse is not true, as examples in the next section show.
  • If the a_i = ±1 and all pairs are uncorrelated, then
    Var[∑_{k=1}^n a_k X_k] = ∑_{k=1}^n Var[X_k]   (4)
    The variances add even if the coefficients are negative.
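As a numerical check of patterns (V1), (V2), and (V3), the following minimal sketch uses plain MATLAB on an arbitrary simple distribution of our own choosing:
X = [-2 0 1 3];  PX = [0.1 0.3 0.4 0.2];   % arbitrary simple distribution
EX = X*PX';
VX = (X.^2)*PX' - EX^2                     % (V1) calculating formula
b = 5;  Xb = X + b;                        % shift values by b, same probabilities
VXb = (Xb.^2)*PX' - (Xb*PX')^2             % (V2) same as VX
a = 3;  Xa = a*X;                          % change of scale by a
VXa = (Xa.^2)*PX' - (Xa*PX')^2             % (V3) equals a^2*VX = 9*VX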
We calculate variances for some common distributions. Some details are omitted, usually details of algebraic manipulation or the straightforward evaluation of integrals. In some cases we use well known sums of infinite series or values of definite integrals. A number of pertinent facts are summarized in Appendix B, "Some Mathematical Aids." The results below are included in the table in Appendix C.
Variances of some discrete distributions
  1. Indicator function: X = I_E with P(E) = p, q = 1 - p; E[X] = p
     E[X^2] - E^2[X] = E[I_E^2] - p^2 = E[I_E] - p^2 = p - p^2 = p(1 - p) = pq   (5)
  2. Simple random variable: X = ∑_{i=1}^n t_i I_{A_i} (primitive form), with P(A_i) = p_i.
     Var[X] = ∑_{i=1}^n t_i^2 p_i q_i - 2 ∑_{i<j} t_i t_j p_i p_j, since E[I_{A_i} I_{A_j}] = 0 for i ≠ j   (6)
  3. Binomial (n, p): X = ∑_{i=1}^n I_{E_i} with {I_{E_i}: 1 ≤ i ≤ n} iid, P(E_i) = p
     Var[X] = ∑_{i=1}^n Var[I_{E_i}] = ∑_{i=1}^n pq = npq   (7)
  4. Geometric (p): P(X = k) = p q^k for all k ≥ 0; E[X] = q/p
     We use a trick: E[X^2] = E[X(X - 1)] + E[X]
     E[X^2] = p ∑_{k=0}^∞ k(k - 1) q^k + q/p = p q^2 ∑_{k=2}^∞ k(k - 1) q^{k-2} + q/p = p q^2 · 2/(1 - q)^3 + q/p = 2 q^2/p^2 + q/p   (8)
     Var[X] = 2 q^2/p^2 + q/p - (q/p)^2 = q/p^2   (9)
  5. Poisson (μ): P(X = k) = e^{-μ} μ^k / k! for all k ≥ 0
     Using E[X^2] = E[X(X - 1)] + E[X], we have
     E[X^2] = e^{-μ} ∑_{k=2}^∞ k(k - 1) μ^k/k! + μ = e^{-μ} μ^2 ∑_{k=2}^∞ μ^{k-2}/(k - 2)! + μ = μ^2 + μ   (10)
     Thus, Var[X] = μ^2 + μ - μ^2 = μ. Note that both the mean and the variance have the common value μ. (The binomial, geometric, and Poisson results are checked numerically in the sketch below.)
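As promised above, we may check the binomial, geometric, and Poisson results numerically from the probability mass functions. The sketch below uses plain MATLAB; the parameter values (n = 20, p = 0.3, μ = 5) are our own choices, and the infinite sums are truncated far out in the tails.
n = 20;  p = 0.3;  q = 1 - p;              % binomial parameters (our choice)
k = 0:n;
pb = arrayfun(@(j) nchoosek(n,j), k).*p.^k.*q.^(n-k);
disp([(k.^2)*pb' - (k*pb')^2  n*p*q])      % compare with npq
kg = 0:150;                                % geometric, truncated in the tail
pg = p*q.^kg;
disp([(kg.^2)*pg' - (kg*pg')^2  q/p^2])    % compare with q/p^2
mu = 5;  kp = 0:50;                        % Poisson, truncated in the tail
pp = exp(-mu)*mu.^kp./factorial(kp);
disp([(kp.^2)*pp' - (kp*pp')^2  mu])       % compare with mu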
Some absolutely continuous distributions
  1. Uniform on (a, b): f_X(t) = 1/(b - a) for a < t < b; E[X] = (a + b)/2
     E[X^2] = (1/(b - a)) ∫_a^b t^2 dt = (b^3 - a^3)/(3(b - a)), so Var[X] = (b^3 - a^3)/(3(b - a)) - (a + b)^2/4 = (b - a)^2/12   (11)
  2. Symmetric triangular (a, b): Because of the shift property (V2), we may center the distribution at the origin. Then the distribution is symmetric triangular (-c, c), where c = (b - a)/2. Because of the symmetry
     Var[X] = E[X^2] = ∫_{-c}^c t^2 f_X(t) dt = 2 ∫_0^c t^2 f_X(t) dt   (12)
     Now, in this case,
     f_X(t) = (c - t)/c^2 for 0 ≤ t ≤ c, so that E[X^2] = (2/c^2) ∫_0^c (c t^2 - t^3) dt = c^2/6 = (b - a)^2/24   (13)
  3. Exponential (λ): f_X(t) = λ e^{-λt}, t ≥ 0; E[X] = 1/λ
     E[X^2] = ∫_0^∞ λ t^2 e^{-λt} dt = 2/λ^2, so that Var[X] = 2/λ^2 - (1/λ)^2 = 1/λ^2   (14)
  4. Gamma (α, λ): f_X(t) = (1/Γ(α)) λ^α t^{α-1} e^{-λt}, t ≥ 0; E[X] = α/λ
     E[X^2] = (1/Γ(α)) ∫_0^∞ λ^α t^{α+1} e^{-λt} dt = Γ(α + 2)/(λ^2 Γ(α)) = α(α + 1)/λ^2   (15)
     Hence Var[X] = α(α + 1)/λ^2 - (α/λ)^2 = α/λ^2.
  5. Normal (μ, σ^2): E[X] = μ
     Consider Y ~ N(0, 1), with E[Y] = 0 and Var[Y] = (2/√(2π)) ∫_0^∞ t^2 e^{-t^2/2} dt = 1.
     Then X = σY + μ implies Var[X] = σ^2 Var[Y] = σ^2   (16)
     (The uniform and exponential results are checked by numerical integration in the sketch below.)
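As promised above, the uniform and exponential results may be checked by numerical integration, here with base MATLAB's integral function; λ = 0.3 and the endpoints a = 2, b = 7 are our own choices for the check.
lam = 0.3;                                 % exponential parameter (our choice)
f = @(t) lam*exp(-lam*t);
EX = integral(@(t) t.*f(t), 0, Inf);
EX2 = integral(@(t) t.^2.*f(t), 0, Inf);
disp([EX2 - EX^2  1/lam^2])                % exponential: Var[X] = 1/lambda^2
a = 2;  b = 7;                             % uniform endpoints (our choice)
g = @(t) ones(size(t))/(b - a);
EU = integral(@(t) t.*g(t), a, b);
EU2 = integral(@(t) t.^2.*g(t), a, b);
disp([EU2 - EU^2  (b - a)^2/12])           % uniform: Var[X] = (b-a)^2/12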
Extensions of some previous examples
In the unit on expectations, we calculate the mean for a variety of cases. We revisit some of those examples and calculate the variances.

EXAMPLE 1: Expected winnings (Example 8 from "Mathematical Expectation: Simple Random Variables")

A bettor places three bets at $2.00 each. The first pays $10.00 with probability 0.15, the second $8.00 with probability 0.20, and the third $20.00 with probability 0.10.
SOLUTION
The net gain may be expressed
X = 10 I_A + 8 I_B + 20 I_C - 6, with P(A) = 0.15, P(B) = 0.20, P(C) = 0.10   (17)
We may reasonably suppose the class {A, B, C} is independent (this assumption is not necessary in computing the mean). Then, since the constant -6 does not affect the variance (shift property (V2)),
Var[X] = 10^2 P(A)[1 - P(A)] + 8^2 P(B)[1 - P(B)] + 20^2 P(C)[1 - P(C)]   (18)
Calculation is straightforward. We may use MATLAB to perform the arithmetic.
c = [10 8 20];
p = 0.01*[15 20 10];
q = 1 - p;
VX = sum(c.^2.*p.*q)
VX =  58.9900

EXAMPLE 2: A function of X (Example 9 from "Mathematical Expectation: Simple Random Variables")

Suppose X in a primitive form is
X = -3 I_{C_1} - I_{C_2} + 2 I_{C_3} - 3 I_{C_4} + 4 I_{C_5} - I_{C_6} + I_{C_7} + 2 I_{C_8} + 3 I_{C_9} + 2 I_{C_10}   (19)
with probabilities P(C_i) = 0.08, 0.11, 0.06, 0.13, 0.05, 0.08, 0.12, 0.07, 0.14, 0.16.
Let g(t) = t^2 + 2t. Determine E[g(X)] and Var[g(X)].
c = [-3 -1 2 -3 4 -1 1 2 3 2];            % Original coefficients
pc = 0.01*[8 11 6 13 5 8 12 7 14 16];     % Probabilities for C_j
G = c.^2 + 2*c                            % g(c_j)
EG = G*pc'                                % Direct calculation E[g(X)]
EG =  6.4200
VG = (G.^2)*pc' - EG^2                  % Direct calculation Var[g(X)]
VG = 40.8036
[Z,PZ] = csort(G,pc);                   % Distribution for Z = g(X)
EZ = Z*PZ'                              % E[Z]
EZ =  6.4200
VZ = (Z.^2)*PZ' - EZ^2                  % Var[Z]
VZ = 40.8036

EXAMPLE 3: Z=g(X,Y) (Example 10 from "Mathematical Expectation: Simple Random Variables")

We use the same joint distribution as for Example 10 from "Mathematical Expectation: Simple Random Variables" and let g(t, u) = t^2 + 2tu - 3u. To set up for calculations, we use jcalc.
jdemo1                      % Call for data
jcalc                       % Set up
Enter JOINT PROBABILITIES (as on the plane)  P
Enter row matrix of VALUES of X  X
Enter row matrix of VALUES of Y  Y
 Use array operations on matrices X, Y, PX, PY, t, u, and P
G = t.^2 + 2*t.*u - 3*u;    % Calculation of matrix of [g(t_i, u_j)]
EG = total(G.*P)            % Direct calculation of E[g(X,Y)]
EG =   3.2529
VG = total(G.^2.*P) - EG^2  % Direct calculation of Var[g(X,Y)]
VG =  80.2133
[Z,PZ] = csort(G,P);        % Determination of distribution for Z
EZ = Z*PZ'                  % E[Z] from distribution
EZ =   3.2529
VZ = (Z.^2)*PZ' - EZ^2      % Var[Z] from distribution
VZ =  80.2133

EXAMPLE 4: A function with compound definition (Example 12 from "Mathematical Expectation; General Random Variables")

Suppose X exponential (0.3). Let
Z = X^2 for X ≤ 4 and Z = 16 for X > 4; that is,
Z = I_{[0,4]}(X) X^2 + I_{(4,∞)}(X) 16   (20)
Determine E[Z] and Var[Z].
ANALYTIC SOLUTION
E[g(X)] = ∫ g(t) f_X(t) dt = ∫_0^∞ I_{[0,4]}(t) t^2 0.3 e^{-0.3t} dt + 16 E[I_{(4,∞)}(X)]   (21)
= ∫_0^4 t^2 0.3 e^{-0.3t} dt + 16 P(X > 4) ≈ 7.4972 (by Maple)   (22)
Z^2 = I_{[0,4]}(X) X^4 + I_{(4,∞)}(X) 256   (23)
E[Z^2] = ∫_0^∞ I_{[0,4]}(t) t^4 0.3 e^{-0.3t} dt + 256 E[I_{(4,∞)}(X)] = ∫_0^4 t^4 0.3 e^{-0.3t} dt + 256 e^{-1.2} ≈ 100.0562   (24)
Var[Z] = E[Z^2] - E^2[Z] ≈ 43.8486 (by Maple)   (25)
APPROXIMATION
To obtain a simple approximation, we must approximate by a bounded random variable. Since P(X > 50) = e^{-15} ≈ 3·10^{-7}, we may safely truncate X at 50.
tappr
Enter matrix [a b] of x-range endpoints  [0 50]
Enter number of x approximation points  1000
Enter density as a function of t  0.3*exp(-0.3*t)
Use row matrices X and PX as in the simple case
M = X <= 4;
G = M.*X.^2 + 16*(1 - M);  % g(X)
EG = G*PX'                 % E[g(X)]
EG =  7.4972
VG = (G.^2)*PX' - EG^2     % Var[g(X)]
VG = 43.8472               % Theoretical = 43.8486
[Z,PZ] = csort(G,PX);      % Distribution for Z = g(X)
EZ = Z*PZ'                 % E[Z] from distribution
EZ =  7.4972
VZ = (Z.^2)*PZ' - EZ^2     % Var[Z]
VZ = 43.8472

EXAMPLE 5: Stocking for random demand (Example 13 from "Mathematical Expectation; General Random Variables")

The manager of a department store is planning for the holiday season. A certain item costs c dollars per unit and sells for p dollars per unit. If the demand exceeds the amount m ordered, additional units can be special ordered for s dollars per unit (s > c). If demand is less than the amount ordered, the remaining stock can be returned (or otherwise disposed of) at r dollars per unit (r < c). Demand D for the season is assumed to be a random variable with Poisson (μ) distribution. Suppose μ = 50, c = 30, p = 50, s = 40, r = 20. What amount m should the manager order to maximize the expected profit?
PROBLEM FORMULATION
Suppose D is the demand and X is the profit. Then
  • For D ≤ m, X = D(p - c) - (m - D)(c - r) = D(p - r) + m(r - c)
  • For D > m, X = m(p - c) + (D - m)(p - s) = D(p - s) + m(s - c)
It is convenient to write the expression for X in terms of I_M, where M = (-∞, m]. Thus
X = I_M(D)[D(p - r) + m(r - c)] + [1 - I_M(D)][D(p - s) + m(s - c)]   (26)
= D(p - s) + m(s - c) + I_M(D)[D(p - r) + m(r - c) - D(p - s) - m(s - c)]   (27)
= D(p - s) + m(s - c) + I_M(D)(s - r)[D - m]   (28)
Then
E[X] = (p - s)E[D] + m(s - c) + (s - r)E[I_M(D) D] - (s - r) m E[I_M(D)]   (29)
We use the discrete approximation.
APPROXIMATION
>> mu = 50;
>> n = 100;
>> t = 0:n;
>> pD = ipoisson(mu,t);         % Approximate distribution for D
>> c  = 30;
>> p  = 50;
>> s  = 40;
>> r  = 20;
>> m  = 45:55;
>> for i = 1:length(m)          % Step by step calculation for various m
    M = t<=m(i);
    G(i,:) = (p-s)*t + m(i)*(s-c) + (s-r)*M.*(t - m(i));
end
>> EG = G*pD';
>> VG = (G.^2)*pD' - EG.^2;
>> SG =sqrt(VG);
>> disp([EG';VG';SG']')         % Columns: E[X], Var[X], SD[X] for m = 45:55
   1.0e+04 *
    0.0931    1.1561    0.0108
    0.0936    1.3117    0.0115
    0.0939    1.4869    0.0122
    0.0942    1.6799    0.0130
    0.0943    1.8880    0.0137
    0.0944    2.1075    0.0145
    0.0943    2.3343    0.0153
    0.0941    2.5637    0.0160
    0.0938    2.7908    0.0167
    0.0934    3.0112    0.0174
    0.0929    3.2206    0.0179
The expected profit is greatest for the sixth value of m, that is, m = 50.

EXAMPLE 6: A jointly distributed pair (Example 14 from "Mathematical Expectation; General Random Variables")

Suppose the pair {X, Y} has joint density f_XY(t, u) = 3u on the triangular region bounded by u = 0, u = 1 + t, and u = 1 - t. Let Z = g(X, Y) = X^2 + 2XY.
Determine E[Z] and Var[Z].
ANALYTIC SOLUTION
E[Z] = ∫∫ (t^2 + 2tu) f_XY(t, u) du dt = 3 ∫_{-1}^0 ∫_0^{1+t} u(t^2 + 2tu) du dt + 3 ∫_0^1 ∫_0^{1-t} u(t^2 + 2tu) du dt = 1/10   (30)
E[Z^2] = 3 ∫_{-1}^0 ∫_0^{1+t} u(t^2 + 2tu)^2 du dt + 3 ∫_0^1 ∫_0^{1-t} u(t^2 + 2tu)^2 du dt = 3/35   (31)
Var[Z] = E[Z^2] - E^2[Z] = 3/35 - 1/100 = 53/700 ≈ 0.0757   (32)
APPROXIMATION
tuappr
Enter matrix [a b] of X-range endpoints  [-1 1]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  400
Enter number of Y approximation points  200
Enter expression for joint density  3*u.*(u<=min(1+t,1-t))
Use array operations on X, Y, PX, PY, t, u, and P
G = t.^2 + 2*t.*u;          % g(X,Y) = X^2 + 2XY
EG = total(G.*P)            % E[g(X,Y)]
EG =   0.1006               % Theoretical value = 1/10
VG = total(G.^2.*P) - EG^2
VG =   0.0765               % Theoretical value 53/700 = 0.0757
[Z,PZ] = csort(G,P);        % Distribution for Z
EZ = Z*PZ'                  % E[Z] from distribution
EZ =  0.1006
VZ = Z.^2*PZ' - EZ^2
VZ =  0.0765

EXAMPLE 7: A function with compound definition (Example 15 from "Mathematical Expectation; General Random Variables")

The pair {X, Y} has joint density f_XY(t, u) = 1/2 on the square region bounded by u = 1 + t, u = 1 - t, u = 3 - t, and u = t - 1.
W = X for max{X, Y} ≤ 1 and W = 2Y for max{X, Y} > 1; that is,
W = I_Q(X, Y) X + I_{Q^c}(X, Y) 2Y   (33)
where Q = {(t, u): max{t, u} ≤ 1} = {(t, u): t ≤ 1, u ≤ 1}.
Determine E[W] and Var[W].
ANALYTIC SOLUTION
The intersection of the region Q and the square is the set for which 0 ≤ t ≤ 1 and 1 - t ≤ u ≤ 1. Reference to Figure 11.3.2 shows three regions of integration.
E[W] = (1/2) ∫_0^1 ∫_{1-t}^1 t du dt + (1/2) ∫_0^1 ∫_1^{1+t} 2u du dt + (1/2) ∫_1^2 ∫_{t-1}^{3-t} 2u du dt = 11/6 ≈ 1.8333   (34)
E[W^2] = (1/2) ∫_0^1 ∫_{1-t}^1 t^2 du dt + (1/2) ∫_0^1 ∫_1^{1+t} 4u^2 du dt + (1/2) ∫_1^2 ∫_{t-1}^{3-t} 4u^2 du dt = 103/24   (35)
Var[W] = 103/24 - (11/6)^2 = 67/72 ≈ 0.9306   (36)
tuappr
Enter matrix [a b] of X-range endpoints  [0 2]
Enter matrix [c d] of Y-range endpoints  [0 2]
Enter number of X approximation points  200
Enter number of Y approximation points  200
Enter expression for joint density  ((u<=min(t+1,3-t))& ...
      (u>=max(1-t,t-1)))/2
Use array operations on X, Y, PX, PY, t, u, and P
M = max(t,u)<=1;
G = t.*M + 2*u.*(1 - M);     % Z = g(X,Y)
EG = total(G.*P)              % E[g(X,Y)]
EG =  1.8340                  % Theoretical 11/6 = 1.8333
VG = total(G.^2.*P) - EG^2
VG =  0.9368                  % Theoretical 67/72 = 0.9306
[Z,PZ] = csort(G,P);          % Distribution for Z
EZ = Z*PZ'                    % E[Z] from distribution
EZ =  1.8340
VZ = (Z.^2)*PZ' - EZ^2
VZ =  0.9368

EXAMPLE 8: A function with compound definition

f_XY(t, u) = 3 on 0 ≤ u ≤ t^2 ≤ 1   (37)
Z = I_Q(X, Y) X + I_{Q^c}(X, Y) for Q = {(t, u): u + t ≤ 1}   (38)
The value t_0 where the line u = 1 - t and the curve u = t^2 meet satisfies t_0^2 = 1 - t_0.
E[Z] = 3 ∫_0^{t_0} t ∫_0^{t^2} du dt + 3 ∫_{t_0}^1 t ∫_0^{1-t} du dt + 3 ∫_{t_0}^1 ∫_{1-t}^{t^2} du dt = (3/4)(5t_0 - 2)   (39)
For E[Z^2], replace t by t^2 in the first two integrands (the third integrand, 1, is unchanged) to get E[Z^2] = (25t_0 - 1)/20.
Using t_0 = (√5 - 1)/2 ≈ 0.6180, we get Var[Z] = (2125t_0 - 1309)/80 ≈ 0.0540.
APPROXIMATION
% Theoretical values
t0 = (sqrt(5) - 1)/2
t0 =  0.6180
EZ = (3/4)*(5*t0 -2)
EZ =  0.8176
EZ2 = (25*t0 - 1)/20
EZ2 = 0.7225
VZ = (2125*t0 - 1309)/80
VZ =  0.0540
tuappr
Enter matrix [a b] of X-range endpoints  [0 1]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  200
Enter number of Y approximation points  200
Enter expression for joint density  3*(u <= t.^2)
Use array operations on X, Y, t, u, and P
G = (t+u <= 1).*t + (t+u > 1);
EG = total(G.*P)
EG =  0.8169                   % Theoretical = 0.8176
VG = total(G.^2.*P) - EG^2
VG =  0.0540                   % Theoretical = 0.0540
[Z,PZ] = csort(G,P);
EZ = Z*PZ'
EZ =  0.8169
VZ = (Z.^2)*PZ' - EZ^2
VZ =  0.0540
Standard deviation and the Chebyshev inequality
In Example 5 from "Functions of a Random Variable," we show that if X ~ N(μ, σ^2) then Z = (X - μ)/σ ~ N(0, 1). Also, E[X] = μ and Var[X] = σ^2. Thus
P(|X - μ|/σ ≤ t) = P(|X - μ| ≤ tσ) = 2Φ(t) - 1   (40)
For the normal distribution, the standard deviation σ seems to be a natural measure of the variation away from the mean.
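Since Φ(t) = (1 + erf(t/√2))/2, we have 2Φ(t) - 1 = erf(t/√2), which may be computed in base MATLAB without any toolbox function; a minimal sketch:
t = 1:3;
disp([t; erf(t/sqrt(2))])   % P(|X - mu| <= t*sigma) for t = 1, 2, 3:
                            % approximately 0.6827, 0.9545, 0.9973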
For a general distribution with mean μ and variance σ^2, we have the Chebyshev inequality:
P(|X - μ|/σ ≥ a) ≤ 1/a^2, or equivalently P(|X - μ| ≥ aσ) ≤ 1/a^2   (41)
In this general case, the standard deviation appears as a measure of the variation from the mean value. This inequality is useful in many theoretical applications as well as some practical ones. However, since it must hold for any distribution which has a variance, the bound is not particularly tight. It may be instructive to compare the bound on the probability given by the Chebyshev inequality with the actual probability for the normal distribution.
t = 1:0.5:3;
p = 2*(1 - gaussian(0,1,t));    % actual P(|X - mu| >= t*sigma), normal case
c = ones(1,length(t))./(t.^2);  % Chebyshev bound 1/t^2
r = c./p;                       % ratio of bound to actual probability
h = ['       t     Chebyshev   Prob     Ratio'];
m = [t;c;p;r]';
disp(h)
       t     Chebyshev   Prob     Ratio
disp(m)
    1.0000    1.0000    0.3173    3.1515
    1.5000    0.4444    0.1336    3.3263
    2.0000    0.2500    0.0455    5.4945
    2.5000    0.1600    0.0124   12.8831
    3.0000    0.1111    0.0027   41.1554
— 
DERIVATION OF THE CHEBYSHEV INEQUALITY
Let A = {|X - μ| ≥ aσ} = {(X - μ)^2 ≥ a^2σ^2}. Then a^2σ^2 I_A ≤ (X - μ)^2.
Upon taking expectations of both sides and using monotonicity, we have
a^2σ^2 P(A) ≤ E[(X - μ)^2] = σ^2   (42)
from which the Chebyshev inequality follows immediately.
— 
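The inequality is also easy to check by simulation. The following minimal sketch (plain MATLAB; the exponential (1) distribution, for which μ = σ = 1, and the value a = 2 are our own choices) estimates P(|X - μ| ≥ aσ) from a large sample and compares it with the exact value and the Chebyshev bound:
N = 1e6;
X = -log(rand(1,N));                  % exponential(1) samples by inversion
mu = 1;  sig = 1;  a = 2;
pEst = mean(abs(X - mu) >= a*sig);    % estimated P(|X - mu| >= a*sigma)
disp([pEst  exp(-3)  1/a^2])          % estimate, exact value e^(-3), bound 1/4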
We consider three concepts which are useful in many situations.
Definition. A random variable X is centered iff E[X] = 0.
X' = X - μ is always centered.   (43)
Definition. A random variable X is standardized iff E[X] = 0 and Var[X] = 1.
X* = (X - μ)/σ = X'/σ is standardized.   (44)
Definition. A pair {X, Y} of random variables is uncorrelated iff
E[XY] - E[X]E[Y] = 0   (45)
It is always possible to derive an uncorrelated pair as a function of a pair {X, Y}, both of which have finite variances. Consider
U = X* + Y* and V = X* - Y*, where X* = (X - μ_X)/σ_X and Y* = (Y - μ_Y)/σ_Y   (46)
Now E[U] = E[V] = 0 and
E[UV] = E[(X* + Y*)(X* - Y*)] = E[(X*)^2] - E[(Y*)^2] = 1 - 1 = 0   (47)
so the pair is uncorrelated.

EXAMPLE 9: Determining an uncorrelated pair

We use the distribution of Example 10 from "Mathematical Expectation: Simple Random Variables" (and of Example 3 above), for which
E[XY] - E[X]E[Y] ≠ 0   (48)
jdemo1
jcalc
Enter JOINT PROBABILITIES (as on the plane)  P
Enter row matrix of VALUES of X  X
Enter row matrix of VALUES of Y  Y
 Use array operations on matrices X, Y, PX, PY, t, u, and P
EX = total(t.*P)
EX =   0.6420
EY = total(u.*P)
EY =   0.0783
EXY = total(t.*u.*P)
EXY = -0.1130
c = EXY - EX*EY
c  =  -0.1633                % {X,Y} not uncorrelated
VX = total(t.^2.*P) - EX^2
VX =  3.3016
VY = total(u.^2.*P) - EY^2
VY =  3.6566
SX = sqrt(VX)
SX =  1.8170
SY = sqrt(VY)
SY =  1.9122
x = (t - EX)/SX;            % Standardized random variables
y = (u - EY)/SY;
uu = x + y;                 % Uncorrelated random variables
vv = x - y;
EUV = total(uu.*vv.*P)      % Check for uncorrelated condition
EUV = 9.9755e-06            % Differs from zero because of roundoff
