
Functions of a Random Variable

Summary: Frequently, we observe a value of some random variable, but are really interested in a value derived from it by a function rule. If X is a random variable and g is a reasonable function (technically, a Borel function), then Z = g(X) is a new random variable which has the value g(t) for any ω such that X(ω) = t. Thus Z(ω) = g(X(ω)). Suppose we have the distribution for X. How can we determine P(Z ∈ M), the probability that Z takes a value in the set M? Mapping approach: simply find the amount of probability mass mapped into the set M by the random variable X. In the absolutely continuous case, integrate the density function for X over the set M; in the discrete case, select those possible values of X which are in the set M and add their probabilities. For a Borel function g and set M, determine the set N of all those t which are mapped into M, then determine the probability that X is in N as in the previous case.




Introduction
Frequently, we observe a value of some random variable, but are really interested in a value derived from it by a function rule. If X is a random variable and g is a reasonable function (technically, a Borel function), then Z = g(X) is a new random variable which has the value g(t) for any ω such that X(ω) = t. Thus Z(ω) = g(X(ω)).

The problem; an approach

We consider, first, functions of a single random variable. A wide variety of functions are utilized in practice.

EXAMPLE 1: A quality control problem

In a quality control check on a production line for ball bearings it may be easier to weigh the balls than to measure the diameters. If we can assume true spherical shape and w is the weight, then the diameter is kw^(1/3), where k is a factor depending upon the formula for the volume of a sphere, the units of measurement, and the density of the steel. Thus, if X is the weight of the sampled ball, the desired random variable is D = kX^(1/3).
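As an illustration only, a minimal MATLAB sketch of this function rule. The weights, units, and steel density are hypothetical, and k follows from w = ρ(π/6)d³, so that d = (6/(πρ))^(1/3) w^(1/3):

rho = 7.85;                   % assumed density of steel, g/cm^3 (hypothetical)
k = (6/(pi*rho))^(1/3);       % factor so that D = k*W^(1/3)
W = [28.1 29.4 30.0 30.8];    % hypothetical sampled weights, grams
D = k*W.^(1/3)                % corresponding diameters, cm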

EXAMPLE 2: Price breaks

The cultural committee of a student organization has arranged a special deal for tickets to a concert. The agreement is that the organization will purchase ten tickets at $20 each (regardless of the number of individual buyers). Additional tickets are available according to the following schedule:
  • 11-20, $18 each
  • 21-30, $16 each
  • 31-50, $15 each
  • 51-100, $13 each
If the number of purchasers is a random variable X, the total cost (in dollars) is a random quantity Z=g(X) described by
g(X) = 200 + 18 I_{M1}(X)(X − 10) + (16 − 18) I_{M2}(X)(X − 20)   (1)
           + (15 − 16) I_{M3}(X)(X − 30) + (13 − 15) I_{M4}(X)(X − 50)   (2)
where M1 = [10, ∞), M2 = [20, ∞), M3 = [30, ∞), M4 = [50, ∞)   (3)
The function rule is more complicated than in Example 1, but the essential problem is the same.
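A minimal MATLAB sketch of this function rule (the vector of purchaser counts is hypothetical); each indicator I_{Mi}(X) becomes a logical comparison:

X = [5 12 25 40 75];                               % hypothetical numbers of purchasers
G = 200 + 18*(X>=10).*(X-10) + (16-18)*(X>=20).*(X-20) ...
        + (15-16)*(X>=30).*(X-30) + (13-15)*(X>=50).*(X-50);
disp([X; G])                                       % total cost for each count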
The problem
If X is a random variable, then Z = g(X) is a new random variable. Suppose we have the distribution for X. How can we determine P(Z ∈ M), the probability that Z takes a value in the set M?
An approach to a solution
We consider two equivalent approaches
  1. To find P(X ∈ M).
    1. Mapping approach. Simply find the amount of probability mass mapped into the set M by the random variable X.
      • In the absolutely continuous case, calculate ∫_M f_X.
      • In the discrete case, identify those values t_i of X which are in the set M and add the associated probabilities.
    2. Discrete alternative. Consider each value t_i of X. Select those which meet the defining conditions for M and add the associated probabilities. This is the approach we use in the MATLAB calculations. Note that it is not necessary to describe geometrically the set M; merely use the defining conditions.
  2. To find P(g(X) ∈ M).
    1. Mapping approach. Determine the set N of all those t which are mapped into M by the function g. Now if X(ω) ∈ N, then g(X(ω)) ∈ M, and if g(X(ω)) ∈ M, then X(ω) ∈ N. Hence
      {ω : g(X(ω)) ∈ M} = {ω : X(ω) ∈ N}   (4)
      Since these are the same event, they must have the same probability. Once N is identified, determine P(X ∈ N) in the usual manner (see part a, above).
    2. Discrete alternative. For each possible value t_i of X, determine whether g(t_i) meets the defining condition for M. Select those t_i which do and add the associated probabilities.
— 
Remark. The set N in the mapping approach is called the inverse image N = g^(−1)(M).

EXAMPLE 3: A discrete example

Suppose X has values −2, 0, 1, 3, 6, with respective probabilities 0.2, 0.1, 0.2, 0.3, 0.2.
Consider Z = g(X) = (X + 1)(X − 4). Determine P(Z > 0).
SOLUTION
First solution. The mapping approach
g(t) = (t + 1)(t − 4). N = {t : g(t) > 0} is the set of points to the left of −1 or to the right of 4. The X-values −2 and 6 lie in this set. Hence
P(g(X) > 0) = P(X = −2) + P(X = 6) = 0.2 + 0.2 = 0.4   (5)
Second solution. The discrete alternative
TABLE 1
 
X        -2      0      1      3      6
P_X     0.2    0.1    0.2    0.3    0.2
Z         6     -4     -6     -4     14
Z > 0     1      0      0      0      1
Picking out and adding the indicated probabilities, we have
P(Z > 0) = 0.2 + 0.2 = 0.4   (6)
In this case (and often for “hand calculations”) the mapping approach requires less calculation. However, for MATLAB calculations (as we show below), the discrete alternative is more readily implemented.
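For comparison, a minimal MATLAB sketch of the discrete alternative for this example (plain array operations; no special m-functions are needed):

X = [-2 0 1 3 6];                % values of X
PX = [0.2 0.1 0.2 0.3 0.2];      % corresponding probabilities
G = (X + 1).*(X - 4);            % G = g(X) for each value of X
M = G > 0;                       % select values with g(t) > 0
PM = M*PX'                       % P(Z > 0) = 0.4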

EXAMPLE 4: An absolutely continuous example

Suppose X ∼ uniform on [−3, 7]. Then f_X(t) = 0.1, −3 ≤ t ≤ 7 (and zero elsewhere). Let
Z = g(X) = (X + 1)(X − 4)   (7)
Determine P(Z>0).
SOLUTION
First we determine N = {t : g(t) > 0}. As in Example 3, g(t) = (t + 1)(t − 4) > 0 for t < −1 or t > 4. Because of the uniform distribution, the integral of the density over any subinterval of [−3, 7] is 0.1 times the length of that subinterval. Thus, the desired probability is
P(g(X) > 0) = 0.1[(−1 − (−3)) + (7 − 4)] = 0.5   (8)
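A hedged numerical check in plain MATLAB: the mapping approach is approximated by placing the mass 0.1·Δt on midpoints of a fine grid over [−3, 7] (the grid size of 100000 is an arbitrary choice):

n = 100000;                         % number of grid subintervals (arbitrary)
t = -3 + 10*((1:n) - 0.5)/n;        % midpoints of equal subintervals of [-3, 7]
pt = 0.1*(10/n)*ones(1,n);          % probability mass f_X(t)*dt on each subinterval
G = (t + 1).*(t - 4);               % g evaluated on the grid
PM = (G > 0)*pt'                    % approximately 0.5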
We consider, next, some important examples.

EXAMPLE 5: The normal distribution and standardized normal distribution

To show that if X ∼ N(μ, σ²), then
Z = g(X) = (X − μ)/σ ∼ N(0, 1)   (9)
VERIFICATION
We wish to show the density function for Z is
φ(t) = (1/√(2π)) e^(−t²/2)   (10)
Now
g(t) = (t − μ)/σ ≤ v  iff  t ≤ σv + μ   (11)
Hence, for given M = (−∞, v] the inverse image is N = (−∞, σv + μ], so that
F_Z(v) = P(Z ≤ v) = P(Z ∈ M) = P(X ∈ N) = P(X ≤ σv + μ) = F_X(σv + μ)   (12)
Since the density is the derivative of the distribution function,
f_Z(v) = F_Z'(v) = F_X'(σv + μ)·σ = σ f_X(σv + μ)   (13)
Thus
f_Z(v) = (σ/(σ√(2π))) exp[−(1/2)((σv + μ − μ)/σ)²] = (1/√(2π)) e^(−v²/2) = φ(v)   (14)
We conclude that Z ∼ N(0, 1).
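A hedged numerical check of this relationship (assumes the Statistics Toolbox function normcdf is available; the values of μ, σ, and the test points are arbitrary):

mu = 3; sigma = 2;                       % arbitrary parameters
v = -2:0.5:2;                            % test points for Z
FZ = normcdf(sigma*v + mu, mu, sigma);   % F_Z(v) = F_X(sigma*v + mu)
disp([v; FZ; normcdf(v)])                % rows 2 and 3 agree: F_Z(v) = Phi(v)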

EXAMPLE 6: Affine functions

Suppose X has distribution function F_X. If it is absolutely continuous, the corresponding density is f_X. Consider Z = aX + b (a ≠ 0). Here g(t) = at + b, an affine function (linear plus a constant). Determine the distribution function for Z (and the density in the absolutely continuous case).
SOLUTION
F_Z(v) = P(Z ≤ v) = P(aX + b ≤ v)   (15)
There are two cases:
  • a > 0:
    F_Z(v) = P(X ≤ (v − b)/a) = F_X((v − b)/a)   (16)
  • a < 0:
    F_Z(v) = P(X ≥ (v − b)/a) = P(X > (v − b)/a) + P(X = (v − b)/a)   (17)
    so that
    F_Z(v) = 1 − F_X((v − b)/a) + P(X = (v − b)/a)   (18)
For the absolutely continuous case, P(X = (v − b)/a) = 0, and by differentiation
  • for a > 0: f_Z(v) = (1/a) f_X((v − b)/a)
  • for a < 0: f_Z(v) = −(1/a) f_X((v − b)/a)
Since for a < 0, −a = |a|, the two cases may be combined into one formula:
f_Z(v) = (1/|a|) f_X((v − b)/a)   (19)
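A hedged simulation check of formula (18) in the case a < 0 (the choices a = −2, b = 3, and X ∼ exponential(1) are arbitrary; base MATLAB only):

a = -2; b = 3;                        % arbitrary affine coefficients, a < 0
X = -log(rand(1,100000));             % simulated exponential(1) values of X
Z = a*X + b;                          % affine function of X
v = 1;                                % a test point
empirical = mean(Z <= v)              % estimate of F_Z(v) from the sample
theory = 1 - (1 - exp(-(v-b)/a))      % 1 - F_X((v-b)/a), since P(X = (v-b)/a) = 0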

EXAMPLE 7: Completion of normal and standardized normal relationship

Suppose Z ∼ N(0, 1). Show that X = σZ + μ (σ > 0) is N(μ, σ²).
VERIFICATION
Use of the result of Example 6 on affine functions shows that
f_X(t) = (1/σ) φ((t − μ)/σ) = (1/(σ√(2π))) exp[−(1/2)((t − μ)/σ)²]   (20)

EXAMPLE 8: Fractional power of a nonnegative random variable

Suppose X ≥ 0 and Z = g(X) = X^(1/a) for a > 1. Since t^(1/a) is increasing for t ≥ 0, we have 0 ≤ t^(1/a) ≤ v iff 0 ≤ t ≤ v^a. Thus
F_Z(v) = P(Z ≤ v) = P(X ≤ v^a) = F_X(v^a)   (21)
In the absolutely continuous case
f_Z(v) = F_Z'(v) = f_X(v^a) a v^(a−1)   (22)

EXAMPLE 9: Fractional power of an exponentially distributed random variable

Suppose X ∼ exponential(λ). Then Z = X^(1/a) ∼ Weibull(a, λ, 0).
According to the result of Example 8,
F_Z(t) = F_X(t^a) = 1 − e^(−λt^a)   (23)
which is the distribution function for Z ∼ Weibull(a, λ, 0).
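A hedged simulation check (the parameter choices λ = 2, a = 3, and the test point are arbitrary; base MATLAB only):

lambda = 2; a = 3;                     % arbitrary parameters
X = -log(rand(1,100000))/lambda;       % simulated exponential(lambda) values
Z = X.^(1/a);                          % Z = X^(1/a)
t = 0.7;                               % a test point
empirical = mean(Z <= t)               % estimate of F_Z(t) from the sample
theory = 1 - exp(-lambda*t^a)          % Weibull(a, lambda, 0) distribution function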

EXAMPLE 10: A simple approximation as a function of X

If X is a random variable, a simple function approximation may be constructed (see Distribution Approximations). We limit our discussion to the bounded case, in which the range of X is limited to a bounded interval I = [a, b]. Suppose I is partitioned into n subintervals by points t_i, 1 ≤ i ≤ n − 1, with a = t_0 and b = t_n. Let M_i = [t_(i−1), t_i) be the ith subinterval, 1 ≤ i ≤ n − 1, and M_n = [t_(n−1), t_n]. Let E_i = X^(−1)(M_i) be the set of points mapped into M_i by X. Then the E_i form a partition of the basic space Ω. For the given subdivision, we form a simple random variable X_s as follows. In each subinterval, pick a point s_i, t_(i−1) ≤ s_i < t_i. The simple random variable
X_s = Σ_{i=1}^{n} s_i I_{E_i}   (24)
approximates X to within the length of the largest subinterval M_i. Now I_{E_i} = I_{M_i}(X), since I_{E_i}(ω) = 1 iff X(ω) ∈ M_i iff I_{M_i}(X(ω)) = 1. We may thus write
X_s = Σ_{i=1}^{n} s_i I_{M_i}(X),  a function of X   (25)
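The m-procedure tappr (used in Example 14 below) automates this construction. A minimal hand-rolled sketch, assuming the density f_X(t) = (3t² + 2t)/2 on [0, 1] from Example 14, an arbitrary choice of 200 subintervals, and midpoints as the points s_i:

n = 200;                          % number of subintervals (arbitrary)
t = linspace(0, 1, n+1);          % partition points of I = [0, 1]
F = (t.^3 + t.^2)/2;              % F_X at the partition points
p = diff(F);                      % p_i = P(X in M_i)
s = (t(1:n) + t(2:n+1))/2;        % s_i = midpoint of M_i
PM = (s <= 0.5)*p'                % approximates P(X <= 0.5) = F_X(0.5) = 0.1875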

Use of MATLAB on simple random variables

For simple random variables, we use the discrete alternative approach, since this may be implemented easily with MATLAB. Suppose the distribution for X is expressed in the row vectors X and PX.
  • We perform array operations on vector X to obtain
    G = [g(t_1) g(t_2) ⋯ g(t_n)]   (26)
  • We use relational and logical operations on G to obtain a matrix M which has ones for those t_i (values of X) such that g(t_i) satisfies the desired condition (and zeros elsewhere).
  • The zero-one matrix M is used to select the corresponding p_i = P(X = t_i) and sum them by taking the dot product of M and PX.

EXAMPLE 11: Basic calculations for a function of a simple random variable

X = -5:10;                     % Values of X
PX = ibinom(15,0.6,0:15);      % Probabilities for X
G = (X + 6).*(X - 1).*(X - 8); % Array operations on X matrix to get G = g(X)
M = (G > - 100)&(G < 130);     % Relational and logical operations on G
PM = M*PX'                     % Sum of probabilities for selected values
PM =  0.4800
disp([X;G;M;PX]')              % Display of various matrices (as columns)
   -5.0000   78.0000    1.0000    0.0000
   -4.0000  120.0000    1.0000    0.0000
   -3.0000  132.0000         0    0.0003
   -2.0000  120.0000    1.0000    0.0016
   -1.0000   90.0000    1.0000    0.0074
         0   48.0000    1.0000    0.0245
    1.0000         0    1.0000    0.0612
    2.0000  -48.0000    1.0000    0.1181
    3.0000  -90.0000    1.0000    0.1771
    4.0000 -120.0000         0    0.2066
    5.0000 -132.0000         0    0.1859
    6.0000 -120.0000         0    0.1268
    7.0000  -78.0000    1.0000    0.0634
    8.0000         0    1.0000    0.0219
    9.0000  120.0000    1.0000    0.0047
   10.0000  288.0000         0    0.0005
[Z,PZ] = csort(G,PX);          % Sorting and consolidating to obtain
disp([Z;PZ]')                  % the distribution for Z = g(X)
 -132.0000    0.1859
 -120.0000    0.3334
  -90.0000    0.1771
  -78.0000    0.0634
  -48.0000    0.1181
         0    0.0832
   48.0000    0.0245
   78.0000    0.0000
   90.0000    0.0074
  120.0000    0.0064
  132.0000    0.0003
  288.0000    0.0005
P1 = (G<-120)*PX'            % Further calculation using G, PX
P1 =  0.1859
p1 = (Z<-120)*PZ'            % Alternate using Z, PZ
p1 =  0.1859

EXAMPLE 12

X = 10 I_A + 18 I_B + 10 I_C, with {A, B, C} independent and P = [0.6 0.3 0.5].
We calculate the distribution for X, then determine the distribution for
Z = X^(1/2) − X + 50   (27)
c = [10 18 10 0];
pm = minprob(0.1*[6 3 5]);
canonic
 Enter row vector of coefficients  c
 Enter row vector of minterm probabilities  pm
Use row matrices X and PX for calculations
Call for XDBN to view the distribution
disp(XDBN)
         0    0.1400
   10.0000    0.3500
   18.0000    0.0600
   20.0000    0.2100
   28.0000    0.1500
   38.0000    0.0900
G = sqrt(X) - X + 50;       % Formation of G matrix
[Z,PZ] = csort(G,PX);       % Sorts distinct values of g(X)
disp([Z;PZ]')               % consolidates probabilities
   18.1644    0.0900
   27.2915    0.1500
   34.4721    0.2100
   36.2426    0.0600
   43.1623    0.3500
   50.0000    0.1400
M = (Z < 20)|(Z >= 40)      % Direct use of Z distribution
M =    1     0     0     0     1     1
PZM = M*PZ'
PZM =  0.5800
Remark. Note that with the m-function csort, we may name the output as desired.

EXAMPLE 13: Continuation of Example 12, above.

H = 2*X.^2 - 3*X + 1;
[W,PW] = csort(H,PX)
W  =     1      171     595     741    1485    2775
PW =  0.1400  0.3500  0.0600  0.2100  0.1500  0.0900

EXAMPLE 14: A discrete approximation

Suppose X has density function f_X(t) = (1/2)(3t² + 2t) for 0 ≤ t ≤ 1. Then F_X(t) = (1/2)(t³ + t²). Let Z = X^(1/2). We may use the approximation m-procedure tappr to obtain an approximate discrete distribution. Then we work with the approximating random variable as a simple random variable. Suppose we want P(Z ≤ 0.8). Now Z ≤ 0.8 iff X ≤ 0.8² = 0.64. The desired probability may be calculated to be
P(Z ≤ 0.8) = F_X(0.64) = (0.64³ + 0.64²)/2 = 0.3359   (28)
Using the approximation procedure, we have
tappr
Enter matrix [a b] of x-range endpoints  [0 1]
Enter number of x approximation points  200
Enter density as a function of t  (3*t.^2 + 2*t)/2
Use row matrices X and PX as in the simple case
G = X.^(1/2);
M = G <= 0.8;
PM = M*PX'
PM =   0.3359       % Agrees quite closely with the theoretical value
