How to find cumulative distribution function from probability density function

Event History Analysis

Nancy Brandon Tuma, in Encyclopedia of Social Measurement, 2005

Cumulative Distribution Function

The cumulative distribution function (CDF) of T is the complement of S(t):

(2) F(t) ≡ Pr(T ≤ t) = 1 − S(t),

where F(t) is the probability that the event occurs at or before time t. The CDF and the survival probability give equivalent information, but traditionally the survival probability is reported more often than the CDF in event history analysis. Ordinarily, F(∞) = 1; eventually the event occurs. If the probability distribution of the event time is defective, there is a nonzero probability that the event never occurs, even after an infinite amount of time has elapsed. Then F(∞) < 1.
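
As a minimal numerical sketch of Eq. (2), assuming for illustration an exponential event-time distribution with rate 0.5 (the text does not specify any particular distribution):

```python
import math

def survival(t, rate=0.5):
    """Survival probability S(t) for an assumed exponential event time."""
    return math.exp(-rate * t)

def cdf(t, rate=0.5):
    """F(t) = Pr(T <= t) = 1 - S(t), the complement of the survival probability."""
    return 1.0 - survival(t, rate)

for t in (0.0, 1.0, 5.0, 50.0):
    print(f"t={t:5.1f}  S(t)={survival(t):.4f}  F(t)={cdf(t):.4f}")
# F(t) -> 1 as t grows: the event eventually occurs (a non-defective distribution).
```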


URL: https://www.sciencedirect.com/science/article/pii/B0123693985001584

Additional Topics on Optimum Design

Jasbir S. Arora, in Introduction to Optimum Design (Third Edition), 2012

Cumulative Distribution Function

The cumulative distribution function (CDF) FX(x) describes the probability that a random variable X with a given probability distribution will be found at a value less than or equal to x. This function is given as

(20.69) FX(x) = P[X ≤ x] = ∫_{−∞}^{x} fX(u) du

That is, for a given value x, FX(x) is the probability that the observed value of X is less than or equal to x. If fX is continuous at x, then the probability density function is the derivative of the cumulative distribution function:

(20.70) fX(x) = dFX(x)/dx

The CDF also has the following properties:

(20.71) lim_{x→−∞} F(x) = 0;  lim_{x→∞} F(x) = 1

The cumulative distribution function is illustrated in Figure 20.4(b). It shows that the probability of X being less than or equal to x_l is FX(x_l). This is a point on the FX(x) versus x curve in Figure 20.4(b), and it is the shaded area in Figure 20.4(a).
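
As a numerical illustration of Eq. (20.69), the sketch below integrates an assumed density (the standard normal, chosen only for illustration; the text does not fix a particular fX) from −∞ to x, relying on scipy's quad routine:

```python
import math
from scipy.integrate import quad

def pdf(u):
    """Assumed example density fX: the standard normal (not mandated by the text)."""
    return math.exp(-u**2 / 2) / math.sqrt(2 * math.pi)

def cdf(x):
    """Eq. (20.69): FX(x) = integral of fX(u) du from -infinity to x."""
    value, _err = quad(pdf, -math.inf, x)
    return value

print(cdf(0.0))   # ~0.5 by symmetry
print(cdf(1.96))  # ~0.975
```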


URL: https://www.sciencedirect.com/science/article/pii/B9780123813756000292

Probability and Random Variables

FABRIZIO GABBIANI, STEVEN J. COX, in Mathematics for Neuroscientists, 2010

11.5 CUMULATIVE DISTRIBUTION FUNCTIONS

The cumulative distribution function of a random variable X is defined by

F(x)≡P(X≤x).

The cumulative distribution function is monotone increasing, meaning that x1 ≤ x2 implies F(x1) ≤ F(x2). This follows simply from the fact that {X ≤ x2} = {X ≤ x1} ∪ {x1 < X ≤ x2} and the additivity of probabilities for disjoint events. Furthermore, if X takes values between −∞ and ∞, like the Gaussian random variable, then F(−∞) = 0 and F(∞) = 1. If the random variable X is continuous and possesses a density, p(x), like the Gaussian random variable does, it follows immediately from the definition of F, and since F(−∞) = 0, that

F(x) = ∫_{−∞}^{x} p(y) dy.

Conversely, according to the fundamental theorem of calculus, Eq. (1.7), p(x) = F′(x). Thus, the probability density is the derivative of the cumulative distribution function. This in turn implies that the probability density is always nonnegative, p(x) ≥ 0, because F is monotone increasing. The cumulative distribution function of the standard normal distribution is, up to constant factors, the error function,

erf(x) ≡ (2/√π) ∫_{0}^{x} exp(−y²) dy,

(Exercise 7). The error function is not an elementary function, meaning that it cannot be built explicitly in terms of simple functions like the exponential, the logarithm or nth roots by means of the four elementary operations (addition, subtraction, multiplication, and division).
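
Although erf is not elementary, it is readily evaluated numerically; for instance, Python's standard library exposes math.erf, and the standard normal CDF can be written via the standard identity Φ(x) = (1 + erf(x/√2))/2, consistent with the definition above:

```python
import math

def standard_normal_cdf(x):
    """Phi(x) expressed through the error function: (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# erf itself is not elementary, but math.erf evaluates it numerically.
print(standard_normal_cdf(0.0))   # 0.5
print(standard_normal_cdf(1.0))   # ~0.8413
```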


URL: https://www.sciencedirect.com/science/article/pii/B9780123748829000113

Random Variables

Oliver C. Ibe, in Fundamentals of Applied Probability and Random Processes (Second Edition), 2014

2.5 Discrete Random Variables

A discrete random variable is a random variable that can take on at most a countable number of possible values; that is, it can take on either a finite number of values or a countably infinite number of values. For a discrete random variable X, the probability mass function (PMF), pX(x), is defined as follows:

(2.2) pX(x) = P[X = x]

where ∑_{x=−∞}^{∞} pX(x) = 1. The PMF is nonzero for at most a countable number of values of x. In particular, if we assume that X can only assume one of the values x1, x2, …, xn, then

pX(xi) ≥ 0, i = 1, 2, …, n
pX(x) = 0, otherwise

The CDF of X can be expressed in terms of pX(x) as follows:

(2.3) FX(x) = ∑_{k≤x} pX(k)

The CDF of a discrete random variable is a staircase function. That is, if X takes on values at x1, x2, x3, …, where x1 < x2 < x3 < ⋯, then the value of FX(x) is constant in the interval between x_{i−1} and x_i and then takes a jump of size pX(xi) at xi, i = 1, 2, 3, …. Thus, in this case, FX(x) represents the sum of all the probability masses we have encountered as we move from −∞ to x.
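
A minimal sketch of Eq. (2.3), assuming the PMF is stored as a dictionary mapping support points to probabilities (the values used are those of Example 2.4 below):

```python
def cdf_from_pmf(pmf, x):
    """Eq. (2.3): FX(x) = sum of pX(k) over all support points k <= x."""
    return sum(p for k, p in pmf.items() if k <= x)

# PMF of Example 2.4 (assumed dict representation)
pmf = {0: 0.25, 1: 0.5, 2: 0.25}
for x in (-1, 0, 0.5, 1, 2, 3):
    print(x, cdf_from_pmf(pmf, x))  # 0, 0.25, 0.25, 0.75, 1.0, 1.0
```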

Example 2.4

Assume that X has the PMF given by

pX(x) = 1/4, x = 0
        1/2, x = 1
        1/4, x = 2
        0,   otherwise

The PMF of X is given in Figure 2.5(a), and its CDF is given by

FX(x) = 0,   x < 0
        1/4, 0 ≤ x < 1
        3/4, 1 ≤ x < 2
        1,   x ≥ 2


Figure 2.5. Graph of FX(x) for Example 2.4

Thus, the graph of the CDF of X is as shown in Figure 2.5(b).

Example 2.5

Let the random variable X denote the number of heads in three tosses of a fair coin. (a) What is the PMF of X? (b) Sketch the CDF of X.

Solution:

a.

The sample space of the experiment is

Ω = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}

The different events defined by the random variable X are as follows:

{X = 0} = {TTT}
{X = 1} = {HTT, THT, TTH}
{X = 2} = {HHT, HTH, THH}
{X = 3} = {HHH}

Since the eight sample points in Ω are equally likely, the PMF of X is as follows:

pX(x) = 1/8, x = 0
        3/8, x = 1
        3/8, x = 2
        1/8, x = 3
        0,   otherwise

The PMF is graphically illustrated in Figure 2.6(a).


Figure 2.6. Graphs of pX(x) and FX(x) for Example 2.5

b.

The CDF of X is given by

FX(x) = 0,   x < 0
        1/8, 0 ≤ x < 1
        1/2, 1 ≤ x < 2
        7/8, 2 ≤ x < 3
        1,   x ≥ 3

The graph of FX(x) is shown in Figure 2.6(b).
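
A short enumeration sketch confirming these values (assuming nothing beyond the fair-coin model of the example):

```python
from itertools import product
from fractions import Fraction

# Enumerate the sample space of three tosses of a fair coin (Example 2.5).
outcomes = list(product("HT", repeat=3))            # 8 equally likely points
pmf = {}
for outcome in outcomes:
    x = outcome.count("H")                          # X = number of heads
    pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, 8)

print(pmf)  # X=0: 1/8, X=1: 3/8, X=2: 3/8, X=3: 1/8
print({x: sum(p for k, p in pmf.items() if k <= x) for x in range(4)})
# CDF values at the jump points: 1/8, 1/2, 7/8, 1
```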

Example 2.6

Let the random variable X denote the sum obtained in rolling a pair of fair dice. Determine the PMF of X.

Solution:

Let the pair (a, b) denote the outcomes of the roll, where a is the outcome of one die and b is the outcome of the other. Thus, the sum of the outcomes is X = a + b. The different events defined by the random variable X are as follows:

{X = 2} = {(1,1)}
{X = 3} = {(1,2), (2,1)}
{X = 4} = {(1,3), (2,2), (3,1)}
{X = 5} = {(1,4), (2,3), (3,2), (4,1)}
{X = 6} = {(1,5), (2,4), (3,3), (4,2), (5,1)}
{X = 7} = {(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)}
{X = 8} = {(2,6), (3,5), (4,4), (5,3), (6,2)}
{X = 9} = {(3,6), (4,5), (5,4), (6,3)}
{X = 10} = {(4,6), (5,5), (6,4)}
{X = 11} = {(5,6), (6,5)}
{X = 12} = {(6,6)}

Since there are 36 equally likely sample points in the sample space, the PMF of X is given by:

pX(x) = 1/36, x = 2
        2/36, x = 3
        3/36, x = 4
        4/36, x = 5
        5/36, x = 6
        6/36, x = 7
        5/36, x = 8
        4/36, x = 9
        3/36, x = 10
        2/36, x = 11
        1/36, x = 12
        0,    otherwise
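
The same PMF can be obtained by brute-force enumeration; a minimal sketch under the fair-dice assumption of the example:

```python
from fractions import Fraction

# Enumerate all 36 equally likely outcomes of rolling a pair of fair dice.
pmf = {}
for a in range(1, 7):
    for b in range(1, 7):
        s = a + b                                    # X = a + b
        pmf[s] = pmf.get(s, Fraction(0)) + Fraction(1, 36)

for s in range(2, 13):
    print(s, pmf[s])   # 1/36, 2/36, ..., 6/36, ..., 2/36, 1/36
```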

Example 2.7

The PMF of the number of components K of a system that fail is defined by

pK(k) = C(4, k)(0.2)^k (0.8)^{4−k}, k = 0, 1, …, 4
        0, otherwise

where C(4, k) = 4!/((4 − k)! k!) is the binomial coefficient.

a.

What is the CDF of K?

b.

What is the probability that fewer than two components of the system fail?

Solution:

a.

The CDF of K is given by

FK(k) = P[K ≤ k] = ∑_{m≤k} pK(m) = ∑_{m=0}^{k} (4!/((4 − m)! m!)) (0.2)^m (0.8)^{4−m}

      = 0,                                                          k < 0
        (0.8)^4,                                                    0 ≤ k < 1
        (0.8)^4 + 4(0.2)(0.8)^3,                                    1 ≤ k < 2
        (0.8)^4 + 4(0.2)(0.8)^3 + 6(0.2)^2(0.8)^2,                  2 ≤ k < 3
        (0.8)^4 + 4(0.2)(0.8)^3 + 6(0.2)^2(0.8)^2 + 4(0.2)^3(0.8),  3 ≤ k < 4
        1,                                                          k ≥ 4

      = 0,      k < 0
        0.4096, 0 ≤ k < 1
        0.8192, 1 ≤ k < 2
        0.9728, 2 ≤ k < 3
        0.9984, 3 ≤ k < 4
        1.0,    k ≥ 4

b.

The probability that fewer than two components of the system fail is the probability that either no component fails or one component fails, which is given by

P[K < 2] = P[{K = 0} ∪ {K = 1}] = P[K = 0] + P[K = 1] = FK(1) = 0.8192

where the second equality is due to the fact that the two events are mutually exclusive.
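
A small sketch that reproduces these numbers (using Python's math.comb for the binomial coefficient; the parameters n = 4 and p = 0.2 are those of the example):

```python
from math import comb

def binomial_pmf(k, n=4, p=0.2):
    """pK(k) = C(n, k) p^k (1 - p)^(n - k), the PMF of Example 2.7."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binomial_cdf(k, n=4, p=0.2):
    """FK(k) = sum of the PMF over m = 0, ..., floor(k)."""
    return sum(binomial_pmf(m, n, p) for m in range(int(k) + 1))

print(binomial_cdf(0))   # 0.4096
print(binomial_cdf(1))   # 0.8192 = P[K < 2], matching part (b)
```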

Example 2.8

The PMF of the number N of customers that arrive at a local library within a one-hour interval is defined by

pN(n) = (5^n/n!) e^{−5}, n = 0, 1, …
        0, otherwise

What is the probability that at most two customers arrive at the library within one hour?

Solution:

The probability that at most two customers arrive at the library within one hour is the probability that 0 or 1 or 2 customers arrive at the library within one hour, which is

P[N ≤ 2] = P[{N = 0} ∪ {N = 1} ∪ {N = 2}] = P[N = 0] + P[N = 1] + P[N = 2] = pN(0) + pN(1) + pN(2) = e^{−5}(1 + 5 + 25/2) = 18.5e^{−5} = 0.1246

where the second equality on the first line is due to the fact that the three events are mutually exclusive.
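
A direct numerical check of this Poisson computation (with rate 5, as in the example):

```python
import math

def poisson_pmf(n, lam=5.0):
    """pN(n) = lam^n e^(-lam) / n! for the arrival count of Example 2.8."""
    return lam**n * math.exp(-lam) / math.factorial(n)

p_at_most_2 = sum(poisson_pmf(n) for n in range(3))
print(p_at_most_2)   # ~0.12465, i.e., 18.5 * e**-5
```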

2.5.1 Obtaining the PMF from the CDF

So far we have shown how to obtain the CDF from the PMF; namely, for a discrete random variable X with PMF pX(x), the CDF is given by

FX(x) = ∑_{k≤x} pX(k)

Sometimes we are given the CDF of a discrete random variable and are required to obtain its PMF. From Figures 2.5 and 2.6 we observe that the CDF of a discrete random variable has a staircase plot with jumps at those values of the random variable where the PMF is nonzero. The size of the jump at a given value is equal to the value of the PMF at that value.

Thus, given the plot of the CDF of a discrete random variable, we can obtain the PMF by noting that the random variable takes on values with nonzero probability only at the points where jumps occur; the probability of any other value is zero. More importantly, the probability that the random variable takes a value where a jump occurs is equal to the size of that jump.
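
A minimal sketch of this jump-extraction idea, assuming the CDF is represented as a sorted list of (jump point, CDF value) pairs; the values are those of Example 2.10 below:

```python
from fractions import Fraction

def pmf_from_cdf_jumps(cdf_points):
    """Recover the PMF from a discrete CDF given as (x, FX(x)) pairs sorted by x:
    the jump size at each x is the PMF value there."""
    pmf, previous = {}, Fraction(0)
    for x, F in cdf_points:
        jump = F - previous
        if jump > 0:
            pmf[x] = jump
        previous = F
    return pmf

# CDF of Example 2.10 (assumed representation as jump-point pairs)
cdf_points = [(0, Fraction(1, 6)), (2, Fraction(1, 2)),
              (4, Fraction(5, 8)), (6, Fraction(1))]
print(pmf_from_cdf_jumps(cdf_points))  # {0: 1/6, 2: 1/3, 4: 1/8, 6: 3/8}
```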

Example 2.9

The plot of the CDF of a discrete random variable X is shown in Figure 2.7. Find the PMF of X.


Figure 2.7. Graph of FX(x) for Example 2.9

Solution:

The random variable takes on values with nonzero probability at X = 1, X = 2, X = 4, and X = 6. The size of the jump at X = 1 is 1/3, the size of the jump at X = 2 is 1/2 − 1/3 = 1/6, the size of the jump at X = 4 is 3/4 − 1/2 = 1/4, and the size of the jump at X = 6 is 1 − 3/4 = 1/4. Thus, the PMF of X is given by

pX(x) = 1/3, x = 1
        1/6, x = 2
        1/4, x = 4
        1/4, x = 6
        0,   otherwise

Example 2.10

Find the PMF of a discrete random variable X whose CDF is given by:

FX(x) = 0,   x < 0
        1/6, 0 ≤ x < 2
        1/2, 2 ≤ x < 4
        5/8, 4 ≤ x < 6
        1,   x ≥ 6

Solution:

In this example, we do not need to plot the CDF. We observe that it changes value at X = 0, X = 2, X = 4, and X = 6, which means that these are the values of the random variable that have nonzero probabilities. The next task after isolating these values is to determine their probabilities. The first value is pX(0), which is 1/6. At X = 2 the size of the jump is 1/2 − 1/6 = 1/3 = pX(2). Similarly, at X = 4 the size of the jump is 5/8 − 1/2 = 1/8 = pX(4). Finally, at X = 6 the size of the jump is 1 − 5/8 = 3/8 = pX(6). Therefore, the PMF of X is given by

pX(x) = 1/6, x = 0
        1/3, x = 2
        1/8, x = 4
        3/8, x = 6
        0,   otherwise


URL: https://www.sciencedirect.com/science/article/pii/B978012800852200002X

Elements of Probability

Sheldon Ross, in Simulation (Fifth Edition), 2013

2.4 Random Variables

When an experiment is performed we are sometimes primarily concerned about the value of some numerical quantity determined by the result. These quantities of interest that are determined by the results of the experiment are known as random variables.

The cumulative distribution function, or more simply the distribution function, F of the random variable X is defined for any real number x by

F(x)=P{X⩽x}.

A random variable that can take either a finite or at most a countable number of possible values is said to be discrete. For a discrete random variable X we define its probability mass function p(x) by

p(x)=P{X=x}

If X is a discrete random variable that takes on one of the possible values x1, x2, …, then, since X must take on one of these values, we have

∑_{i=1}^{∞} p(xi) = 1.

Example 2a

Suppose that X takes on one of the values 1, 2, or 3. If

p(1) = 1/4,  p(2) = 1/3

then, since p(1) + p(2) + p(3) = 1, it follows that p(3) = 5/12.

Whereas a discrete random variable assumes at most a countable set of possible values, we often have to consider random variables whose set of possible values is an interval. We say that the random variable X is a continuous random variable if there is a nonnegative function f(x) defined for all real numbers x and having the property that for any set C of real numbers

(2.1) P{X ∈ C} = ∫_C f(x) dx

The function f is called the probability density function of the random variable X.

The relationship between the cumulative distribution F(·) and the probability density f(·) is expressed by

F(a) = P{X ∈ (−∞, a]} = ∫_{−∞}^{a} f(x) dx.

Differentiating both sides yields

ddaF(a)=f(a).

That is, the density is the derivative of the cumulative distribution function. A somewhat more intuitive interpretation of the density function may be obtained from Equation (2.1) as follows:

P{a − ϵ/2 ⩽ X ⩽ a + ϵ/2} = ∫_{a−ϵ/2}^{a+ϵ/2} f(x) dx ≈ ϵf(a)

when ϵ is small. In other words, the probability that X will be contained in an interval of length ϵ around the point a is approximately ϵf(a). From this, we see that f(a) is a measure of how likely it is that the random variable will be near a.
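
A quick numerical sanity check of this approximation (assuming, for illustration, the standard normal density, and using scipy's quad for the exact integral):

```python
import math
from scipy.integrate import quad

def f(x):
    """Assumed example density: standard normal (the text does not fix a density)."""
    return math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)

a, eps = 1.0, 0.01
exact, _err = quad(f, a - eps / 2, a + eps / 2)
print(exact)          # ~0.0024197
print(eps * f(a))     # ~0.0024197, so P{a - eps/2 <= X <= a + eps/2} ≈ eps * f(a)
```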

In many experiments we are interested not only in probability distribution functions of individual random variables, but also in the relationships between two or more of them. In order to specify the relationship between two random variables, we define the joint cumulative probability distribution function of X and Y by

F(x,y)=P{X⩽x,Y⩽y}

Thus, F(x,y) specifies the probability that X is less than or equal to x and simultaneously Y is less than or equal to y.

If X and Y are both discrete random variables, then we define the joint probability mass function of X and Y by

p(x,y)=P{X=x,Y=y}

Similarly, we say that X and Y are jointly continuous, with joint probability density function f(x,y), if for any sets of real numbers C and D

P{X ∈ C, Y ∈ D} = ∬_{x∈C, y∈D} f(x, y) dx dy

The random variables X and Y are said to be independent if for any two sets of real numbers C and D

P{X∈C,Y∈D}=P{X∈C}P{Y∈D}.

That is, X and Y are independent if for all sets C and D the events A={X∈C} and B={Y∈D} are independent. Loosely speaking, X and Y are independent if knowing the value of one of them does not affect the probability distribution of the other. Random variables that are not independent are said to be dependent.

Using the axioms of probability, we can show that the discrete random variables X and Y will be independent if and only if, for all x, y,

P{X=x,Y=y}=P{X=x}P{Y=y}

Similarly, if X and Y are jointly continuous with density function f(x,y), then they will be independent if and only if, for all x,y,

f(x, y) = fX(x)fY(y)

where fX(x) and fY(y) are the density functions of X and Y, respectively.


URL: https://www.sciencedirect.com/science/article/pii/B9780124158252000024

Probability Theory

P.K. Bhattacharya, Prabir Burman, in Theory and Methods of Statistics, 2016

Probability Integral Transform

Suppose X has cdf F which is continuous and strictly increasing. Then F−1 is uniquely defined as

F^{−1}(u) = x  iff  F(x) = u,  for 0 < u < 1.

Then the cdf of Y = F(X) at u ∈ (0, 1) is

FY(u) = P[F(X) ≤ u] = P[X ≤ F^{−1}(u)] = F(F^{−1}(u)) = u.

Thus fY(u) = 1 for 0 < u < 1 and fY(u) = 0 for u∉(0, 1),  because 0 < Y = F(X) < 1 with probability 1. In other words, if X has a continuous and strictly increasing cdf F,  then Y = F(X) is distributed with pdf

fY(u) = 1 if 0 < u < 1, and 0 otherwise.

A rv with this pdf is said to be a Uniform(0, 1) rv. Conversely, if U is Uniform(0, 1), then X = F^{−1}(U) has cdf F. This fact is useful in generating random samples (i.e., iid rv's) with cdf F by first generating random samples U1, U2, … from Uniform(0, 1), which is easy, and then transforming U1, U2, … to X1 = F^{−1}(U1), X2 = F^{−1}(U2), ….
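
A minimal sketch of this inverse-transform recipe, assuming an exponential target cdf F(x) = 1 − e^{−x} (chosen because its inverse has a closed form):

```python
import math
import random

def exponential_inverse_cdf(u, rate=1.0):
    """F^{-1}(u) for the assumed exponential cdf F(x) = 1 - exp(-rate * x)."""
    return -math.log(1.0 - u) / rate

random.seed(0)
# Generate Uniform(0, 1) samples and push them through F^{-1}.
samples = [exponential_inverse_cdf(random.random()) for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~1.0, the exponential mean, as expected
```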


URL: https://www.sciencedirect.com/science/article/pii/B9780128024409000011

Pairs of Random Variables

Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012

Section 5.1 Joint CDFs

5.1

Recall the joint CDF given in Example 5.1,

FX,Y(x, y) = 0,  x < 0 or y < 0
             x,  0 ≤ x ≤ 1, y > 1
             y,  x > 1, 0 ≤ y ≤ 1
             xy, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1
             1,  x > 1, y > 1

(a)

Find Pr(X < 3/4).

(b)

Find Pr(X > 1/2).

(c)

Find Pr(Y > 1/4).

(d)

Find Pr(1/4 < X < 1/2, 1/2 < Y <1).

5.2

A colleague of yours proposes that a certain pair of random variables be modeled with a joint CDF of the form

FX,Y(x, y) = [1 − ae^{−x} − be^{−y} + ce^{−(x+y)}]u(x)u(y).

(a)

Find any restrictions on the constants a, b, and c needed for this to be a valid joint CDF.

(b)

Find the marginal CDFs, FX(x) and FY(y), under the restrictions found in part (a).

5.3

Consider again the joint CDF given in Exercise 5.2.

(a)

For constants a and b, such that 0 < a < 1, 0 < b < 1 and a < b, find Pr(a < X < b).

(b)

For constants c and d, such that 0 < c < 1, 0 < d < 1 and c < d, find Pr(c < Y < d).

(c)

Find Pr(a < X < b | c < Y < d). Are the events {a < X < b} and {c < Y < d} statistically independent?

5.4

Suppose a random variable X has a CDF given by FX(x) and, similarly, a random variable Y has a CDF, FY(y). Prove that the function F(x, y) = FX(x)FY(y) satisfies all the properties required of joint CDFs and hence will always be a valid joint CDF.

5.5

For the joint CDF that is the product of two marginal CDFs, FX,Y(x, y) = FX(x)FY(y), as described in Exercise 5.4, show that the events {a < X < b} and {c < Y < d} are always independent for any constants a < b and c < d.


URL: https://www.sciencedirect.com/science/article/pii/B9780123869814500084

Random Variables

Sheldon M. Ross, in Introduction to Probability Models (Twelfth Edition), 2019

2.3.1 The Uniform Random Variable

A random variable is said to be uniformly distributed over the interval (0,1) if its probability density function is given by

f(x) = 1, 0 < x < 1
       0, otherwise

Note that the preceding is a density function since f(x)⩾0 and

∫_{−∞}^{∞} f(x) dx = ∫_{0}^{1} dx = 1

Since f(x) > 0 only when x ∈ (0,1), it follows that X must assume a value in (0,1). Also, since f(x) is constant for x ∈ (0,1), X is just as likely to be “near” any value in (0, 1) as any other value. To check this, note that, for any 0 < a < b < 1,

P{a ⩽ X ⩽ b} = ∫_{a}^{b} f(x) dx = b − a

In other words, the probability that X is in any particular subinterval of (0,1) equals the length of that subinterval.

In general, we say that X is a uniform random variable on the interval (α,β) if its probability density function is given by

(2.8) f(x) = 1/(β − α), if α < x < β
             0,          otherwise

Example 2.13

Calculate the cumulative distribution function of a random variable uniformly distributed over (α,β).

Solution: Since F(a) = ∫_{−∞}^{a} f(x) dx, we obtain from Eq. (2.8) that

F(a) = 0,               a ⩽ α
       (a − α)/(β − α), α < a < β
       1,               a ⩾ β ■

Example 2.14

If X is uniformly distributed over (0,10), calculate the probability that (a) X<3, (b) X>7, (c) 1<X<6.

Solution:

P{X < 3} = ∫_{0}^{3} (1/10) dx = 3/10,  P{X > 7} = ∫_{7}^{10} (1/10) dx = 3/10,  P{1 < X < 6} = ∫_{1}^{6} (1/10) dx = 1/2 ■
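
A small sketch implementing the CDF of Example 2.13 and using it to reproduce Example 2.14 (with α = 0 and β = 10, per the example):

```python
def uniform_cdf(a, alpha=0.0, beta=10.0):
    """CDF of a uniform random variable on (alpha, beta), per Example 2.13."""
    if a <= alpha:
        return 0.0
    if a >= beta:
        return 1.0
    return (a - alpha) / (beta - alpha)

# Example 2.14 via the CDF of Example 2.13:
print(uniform_cdf(3))                   # P{X < 3}  = 0.3
print(1 - uniform_cdf(7))               # P{X > 7}  = 0.3
print(uniform_cdf(6) - uniform_cdf(1))  # P{1 < X < 6} = 0.5
```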


URL: https://www.sciencedirect.com/science/article/pii/B978012814346900007X

Quality of Analytical Measurements: Statistical Methods for Internal Validation

M.C. Ortiz, ... A. Herrero, in Comprehensive Chemometrics, 2009

Appendix 1 Some Basic Elements of Statistics

A distribution function (cumulative distribution function (cdf)) in R is any function F, such that

1.

F is an application from R to the interval [0,1]

2.

lim_{x→−∞} F(x) = 0

3.

lim_{x→+∞} F(x) = 1

4.

F is a monotonically increasing function, that is, a ≤ b implies F(a) ≤ F(b).

5.

F is continuous on the left or on the right. For example, F is continuous on the left if lim_{x→a, x<a} F(x) = F(a) for each real number a.

Any probability defined in R corresponds to a distribution function and vice versa.

If p is the probability defined for intervals of real numbers, F(x) is defined as the probability that accumulates until x, that is, F(x) = p(–∞,x). It is easy to show that F(x) verifies the above definition of distribution function.

If F is a cdf continuous on the left, its associated probability p is defined by

p[a, b] = p{a ≤ x ≤ b} = F(b) − F(a)
p(a, b] = p{a < x ≤ b} = F(b) − lim_{x→a, x>a} F(x)
p[a, b) = p{a ≤ x < b} = F(b) − F(a)
p(a, b) = p{a < x < b} = F(b) − lim_{x→a, x>a} F(x)

If the distribution function is continuous, then the above limits coincide with the value of the function in the corresponding point. The probability density function f(x), abbreviated pdf, if it exists, is the derivative of the cdf.

Each random variable X is characterized by a distribution function FX(x).

When several random variables are handled, it is necessary to define the joint distribution function.

(A1) FX1,X2,…,Xk(a1, a2, …, ak) = Pr{X1 ≤ a1 and X2 ≤ a2 and … and Xk ≤ ak}

If the previous joint probability is equal to the product of the individual probabilities, it is said that the random variables are independent:

(A2) FX1,X2,…,Xk(a1, a2, …, ak) = Pr{X1 ≤ a1} × Pr{X2 ≤ a2} × ⋯ × Pr{Xk ≤ ak}

Equations (3) and (4) define the mean and variance of a random variable. Some basic properties are

(A3) E(aX + bY) = aE(X) + bE(Y) for any X and Y

(A4) V(aX) = a²V(X) for any random variable X

Given a random variable X, the standardized variable is obtained by subtracting the mean and dividing by the standard deviation, Y = (X − E(X))/√V(X). The standardized variable has E(Y) = 0 and V(Y) = 1.

For any two random variables, the variance is

(A5)V(X+Y)=V(X)+V(Y)+2Cov(X,Y)

and the covariance is defined as

(A6) Cov(X, Y) = ∬ (x − E(X))(y − E(Y)) fX,Y(x, y) dx dy

In the definition of the covariance (Equation (A6)), fX,Y(x,y) is the joint pdf of the random variables. In the case where they are independent, the joint pdf is equal to the product fX(x)fY(y) and the covariance is zero.

In general, E(XY)≠E(X)E(Y), except where the variables are independent, in which case the equality holds.

In applications in Analytical Chemistry, it is very common to use formulas to obtain the final measurement from other intermediate ones that have experimental variability. A strategy for calculating the uncertainty (variance) in the final result under two basic hypotheses has been developed: make a linear approximation to the formula and then identify the quadratic terms with the variances of the random variables involved (see, for example, the ‘Guide to the Expression of Uncertainty in Measurement’).2 This procedure, called in many texts the method of transmission of errors, can lead to unacceptable results. Hence, an improvement based on Monte Carlo simulation has been suggested for the calculation of the compound uncertainty (see Supplement 1 to the aforementioned guide).

A useful representation of the data is the so-called box and whisker plot (or simply box plot). To explain how it is constructed, we will use the 100 values of method A in Figure 2.

These data have the following characteristics (summary of statistics):

Minimum: 5.23

Maximum: 7.86

First or lower quartile, Q1 = 6.39. It is the value below which lie 25% of the data.

Second quartile (median), Q2 = 6.66. It is the value below which lie 50% of the data.

Third or upper quartile, Q3 = 6.98. It is the value below which lie 75% of the data.

Interquartile range, IR = Q3 – Q1 = 0.59 in our case.

With these quartiles, the central rectangle (the box) is drawn that contains 50% of the data around the median.

The lower and upper limits are computed as LL = Q1 − 1.5IR and UL = Q3 + 1.5IR. In the example, LL = 6.39 − 1.5 × 0.59 = 5.505 and UL = 6.98 + 1.5 × 0.59 = 7.865.

Then, the ‘whiskers’ are drawn by joining the lower side of the box to the smallest data value greater than or equal to LL, and the upper side of the box to the largest data value that is less than UL.

The three smallest values in our case are 5.396, 5.233, and 5.507; thus the whisker extends to 5.507 and the other two values are left ‘disconnected’. The other whisker reaches the maximum, 7.86, because it is less than UL. The box and whisker plot is the first one in Figure A1.


Figure A1. Box and whisker plots. (A) Data of method A in Figure 2. (B) Data of method A with an outlier.

The advantage of using box plots is that the quartiles are practically insensitive to outliers. For example, suppose that the value 7.86 is changed to 8.86; this change does not affect the median or the quartiles, and the box plot remains similar but with a datum outside the upper whisker, as can be seen in the second box plot in Figure A1.
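
A small sketch reproducing the limit computation from the quartiles quoted above (the raw 100 values are not listed in this excerpt, so only the summary statistics are used):

```python
# Quartiles reported in the text for the 100 values of method A (Figure 2).
q1, q2, q3 = 6.39, 6.66, 6.98
iqr = q3 - q1                 # interquartile range, 0.59
ll = q1 - 1.5 * iqr           # lower limit, 5.505
ul = q3 + 1.5 * iqr           # upper limit, 7.865
print(iqr, ll, ul)            # 0.59, 5.505, 7.865 (up to floating-point rounding)

# Any datum outside (ll, ul) is plotted individually: 5.233 and 5.396 fall
# below ll and are left "disconnected", while the maximum 7.86 < ul.
```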


URL: https://www.sciencedirect.com/science/article/pii/B9780444527011000909

Mark A. Pinsky, Samuel Karlin, in An Introduction to Stochastic Modeling (Fourth Edition), 2011

Exercises

8.3.1

Show that the cumulative distribution function for reflected Brownian motion is

Pr{R(t) < y | R(0) = x} = Φ((y − x)/√t) − Φ((−y − x)/√t)
                        = Φ((y − x)/√t) + Φ((y + x)/√t) − 1
                        = Φ((x + y)/√t) − Φ((x − y)/√t).

Evaluate this probability when x = 1, y = 3, and t = 4.
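
A hedged numerical sketch for the requested evaluation (assuming a standard Brownian motion, i.e., unit variance parameter):

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def reflected_bm_cdf(y, x, t):
    """Pr{R(t) < y | R(0) = x} = Phi((y - x)/sqrt(t)) - Phi((-y - x)/sqrt(t))."""
    return Phi((y - x) / sqrt(t)) - Phi((-y - x) / sqrt(t))

print(reflected_bm_cdf(y=3, x=1, t=4))  # Phi(1) - Phi(-2) ≈ 0.8186
```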

8.3.2

The price fluctuations of a share of stock of a certain company are well described by a Brownian motion process. Suppose that the company is bankrupt if ever the share price drops to zero. If the starting share price is A(0) = 5, what is the probability that the company is bankrupt at time t = 25? What is the probability that the share price is above 10 at time t = 25?

8.3.3

The net inflow to a reservoir is well described by a Brownian motion. Because a reservoir cannot contain a negative amount of water, we suppose that the water level R(t) at time t is a reflected Brownian motion. What is the probability that the reservoir contains more than 10 units of water at time t = 25? Assume that the reservoir has unlimited capacity and that R(0) = 5.

8.3.4

Suppose that the net inflows to a reservoir follow a Brownian motion. Suppose that the reservoir was known to be empty 25 time units ago but has never been empty since. Use a Brownian meander process to evaluate the probability that there is more than 10 units of water in the reservoir today.

8.3.5

Is reflected Brownian motion a Gaussian process? Is absorbed Brownian motion (cf. Section 8.1.4)?


URL: https://www.sciencedirect.com/science/article/pii/B9780123814166000083

How do you find the cumulative distribution function?

The cumulative distribution function (CDF) of a random variable X is defined as FX(x) = P(X ≤ x), for all x ∈ R.
To find P(2 < X ≤ 5), we can write P(2 < X ≤ 5) = FX(5) − FX(2) = 31/32 − 3/4 = 7/32.
To find P(X > 4), we can write P(X > 4) = 1 − P(X ≤ 4) = 1 − FX(4) = 1 − 15/16 = 1/16.

What is the relationship between PDF and CDF?

A PDF is simply the derivative of a CDF. Thus a PDF is also a function of a random variable, x, and its magnitude is some indication of the relative likelihood of measuring a particular value. As it is the slope of a CDF, a PDF must always be nonnegative; there are no negative odds for any event.
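
A numerical illustration of this relationship (a sketch assuming the standard normal distribution and numpy/scipy availability): sampling a CDF on a grid and differentiating it recovers the PDF.

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(-5, 5, 2001)
cdf = norm.cdf(x)                  # assumed example: standard normal CDF
pdf_numeric = np.gradient(cdf, x)  # the derivative of the CDF recovers the density

# The numerical derivative matches the analytic PDF up to discretization error.
print(np.max(np.abs(pdf_numeric - norm.pdf(x))))  # small
```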

Is PDF the derivative of CDF?

The probability density function f(x), abbreviated pdf, if it exists, is the derivative of the cdf. Each random variable X is characterized by a distribution function FX(x).

What is the difference between PDF and CDF?

Probability Density Function (PDF) vs Cumulative Distribution Function (CDF): The CDF is the probability that a random variable takes a value less than or equal to x, whereas the PDF describes the relative likelihood that the random variable takes a value near x (for a discrete random variable, the PMF gives the probability of a value exactly equal to x).