
Continuous Random Variables

The probability that a continuous random variable, X, has a value between a and b is computed by integrating its probability density function (p.d.f.) over the interval [a, b]:

$$P(a \le X \le b) = \int_a^b f_X(x)\,dx.$$

A p.d.f. must integrate to one:

$$\int_{-\infty}^{\infty} f_X(x)\,dx = 1.$$
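As a quick numerical illustration (not part of the original notes), here is a minimal Python sketch, assuming NumPy and SciPy are available; the standard Gaussian density, which appears later in this lecture, is used purely as an example p.d.f.:

```python
import numpy as np
from scipy.integrate import quad

# An example p.d.f.: the standard Gaussian density (illustrative choice).
def f_X(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# P(a <= X <= b) is the integral of the p.d.f. over [a, b].
a, b = -1.0, 1.0
p, _ = quad(f_X, a, b)
print(p)        # ~0.6827 for the standard Gaussian

# The p.d.f. must integrate to one over the whole real line.
total, _ = quad(f_X, -np.inf, np.inf)
print(total)    # ~1.0
```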

Continuous Random Variables (contd.)

The probability that the continuous random variable, X, has any exact value, a, is 0:

$$P(X = a) = \lim_{\Delta x \to 0} P(a \le X \le a + \Delta x) = \lim_{\Delta x \to 0} \int_a^{a+\Delta x} f_X(x)\,dx = 0.$$

In general,

$$P(X = a) \ne f_X(a).$$

Probability Density

The probability density at a, multiplied by ε, approximately equals the probability mass contained within an interval of width ε centered on a:

$$\varepsilon\, f_X(a) \approx \int_{a-\varepsilon/2}^{a+\varepsilon/2} f_X(x)\,dx = P(a - \varepsilon/2 \le X \le a + \varepsilon/2).$$
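A minimal sketch of this approximation, again assuming SciPy; the density and the point a are illustrative choices:

```python
import numpy as np
from scipy.integrate import quad

def f_X(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard Gaussian (illustrative)

a, eps = 0.5, 1e-3
mass, _ = quad(f_X, a - eps / 2, a + eps / 2)  # P(a - eps/2 <= X <= a + eps/2)
print(eps * f_X(a))  # density times interval width ...
print(mass)          # ... closely matches the probability mass for small eps
```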

Cumulative Distribution Function

A continuous random variable, X, can also be defined by its cumulative distribution function (c.d.f.):

$$F_X(a) = P(X \le a) = \int_{-\infty}^{a} f_X(x)\,dx.$$

For any c.d.f., $F_X(-\infty) = 0$ and $F_X(\infty) = 1$. The probability that a continuous random variable, X, has a value between a and b is easily computed using the c.d.f.:

$$P(a \le X \le b) = \int_a^b f_X(x)\,dx = \int_{-\infty}^{b} f_X(x)\,dx - \int_{-\infty}^{a} f_X(x)\,dx = F_X(b) - F_X(a).$$
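In code this is just a difference of two c.d.f. evaluations; a sketch using SciPy's standard normal as a stand-in for F_X:

```python
from scipy.integrate import quad
from scipy.stats import norm

a, b = -1.0, 1.0
# P(a <= X <= b) = F_X(b) - F_X(a), with norm.cdf standing in for F_X.
print(norm.cdf(b) - norm.cdf(a))  # ~0.6827
# Agrees with integrating the p.d.f. directly over [a, b].
print(quad(norm.pdf, a, b)[0])    # ~0.6827
```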

Cumulative Distribution Function (contd.)

The p.d.f., $f_X(x)$, can be derived from the c.d.f., $F_X(x)$:

$$f_X(x) = \frac{d}{dx} \int_{-\infty}^{x} f_X(s)\,ds = \frac{dF_X(x)}{dx}.$$
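This relationship is easy to check numerically; a sketch, again using SciPy's standard normal for illustration:

```python
from scipy.stats import norm

# f_X(x) = dF_X/dx: approximate the derivative of the c.d.f. with a
# central difference and compare it against the p.d.f.
x, h = 0.5, 1e-5
deriv = (norm.cdf(x + h) - norm.cdf(x - h)) / (2 * h)
print(deriv, norm.pdf(x))  # the two values agree closely
```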

Joint Probability Densities

Let X and Y be continuous random variables. The probability that a ≤ X ≤ b and c ≤ Y ≤ d is found by integrating the joint probability density function for X and Y over the interval [a, b] w.r.t. x and over the interval [c, d] w.r.t. y:

$$P(a \le X \le b,\; c \le Y \le d) = \int_a^b \int_c^d f_{XY}(x, y)\,dy\,dx.$$

Like a one-dimensional p.d.f., a two-dimensional joint p.d.f. must also integrate to one:

$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x, y)\,dx\,dy = 1.$$
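A sketch of both double integrals using SciPy, with a hypothetical joint density f_XY(x, y) = x + y on the unit square (an illustrative choice, not from the notes); note that scipy.integrate.dblquad expects the integrand as func(y, x):

```python
from scipy.integrate import dblquad

# Hypothetical joint density: f_XY(x, y) = x + y on [0, 1]^2, zero elsewhere.
f_XY = lambda x, y: x + y

# P(0 <= X <= 0.5, 0 <= Y <= 0.5): inner integral over y, outer over x.
p, _ = dblquad(lambda y, x: f_XY(x, y), 0.0, 0.5, 0.0, 0.5)
print(p)       # 0.125

# The joint p.d.f. integrates to one over its support.
total, _ = dblquad(lambda y, x: f_XY(x, y), 0.0, 1.0, 0.0, 1.0)
print(total)   # 1.0
```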

Marginal Probability Densities

$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dy, \qquad f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dx.$$

Conditional Probability Densities

$$f_{X|Y}(x \mid y) = \frac{f_{XY}(x, y)}{f_Y(y)} = \frac{f_{XY}(x, y)}{\int_{-\infty}^{\infty} f_{XY}(x, y)\,dx},$$

$$f_{XY}(x, y) = f_{X|Y}(x \mid y)\, f_Y(y).$$
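Continuing the hypothetical joint density from the earlier sketch, the marginal and conditional densities follow directly by numerical integration:

```python
from scipy.integrate import quad

f_XY = lambda x, y: x + y  # same hypothetical joint density on [0, 1]^2

# Marginals: integrate out the other variable.  Analytically, f_X(x) = x + 1/2.
f_X = lambda x: quad(lambda y: f_XY(x, y), 0, 1)[0]
f_Y = lambda y: quad(lambda x: f_XY(x, y), 0, 1)[0]

# Conditional density of X given Y = y.
f_X_given_Y = lambda x, y: f_XY(x, y) / f_Y(y)

print(f_X(0.3))               # 0.8
print(f_X_given_Y(0.3, 0.7))  # (0.3 + 0.7) / (0.7 + 0.5) = 0.8333...
```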

Exponential Density

A constant fraction of a radioactive sample decays per unit time:

$$\frac{d f(t)}{dt} = -\frac{1}{\tau} f(t).$$

What fraction of the radioactive sample will remain after time t?

$$\frac{d\left(e^{-t/\tau}\right)}{dt} = -\frac{1}{\tau} e^{-t/\tau}$$

Exponential Density (contd.)

The function, $f(t) = e^{-t/\tau}$, satisfies the differential equation, but it does not integrate to one:

$$\int_0^{\infty} e^{-t/\tau}\,dt = \left. -\tau e^{-t/\tau} \right|_0^{\infty} = \tau.$$

So that $\int_{-\infty}^{\infty} f_T(t)\,dt = 1$, we divide $f(t)$ by τ:

$$f_T(t) = \frac{1}{\tau} e^{-t/\tau}.$$
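A quick numerical confirmation, assuming SciPy; τ = 2 is an arbitrary illustrative value:

```python
import numpy as np
from scipy.integrate import quad

tau = 2.0  # arbitrary mean lifetime, for illustration

# e^{-t/tau} alone integrates to tau, not one ...
unnormalized, _ = quad(lambda t: np.exp(-t / tau), 0, np.inf)
print(unnormalized)  # ~2.0 == tau

# ... so dividing by tau yields a proper p.d.f.
normalized, _ = quad(lambda t: np.exp(-t / tau) / tau, 0, np.inf)
print(normalized)    # ~1.0
```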

Exponential Density (contd.)

The time, T, at which an atom of a radioactive element decays is a continuous random variable with the following p.d.f.:

$$f_T(t) = \frac{1}{\tau} e^{-t/\tau}.$$

The corresponding c.d.f. is:

$$F_T(a) = \int_0^a \frac{1}{\tau} e^{-t/\tau}\,dt = \left. -e^{-t/\tau} \right|_0^a = 1 - e^{-a/\tau}.$$

The c.d.f. gives the probability that an atom of a radioactive element has already decayed.

Example

The lifetime of a radioactive element is a continuous random variable with the following p.d.f.:

$$f_T(t) = \frac{1}{100} e^{-t/100}.$$

The probability that an atom of this element will decay within 50 years is:

$$P(0 \le T \le 50) = \int_0^{50} \frac{1}{100} e^{-t/100}\,dt = 1 - e^{-0.5} = 0.39.$$
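The same number in a couple of lines of Python:

```python
import math

tau = 100.0
# P(0 <= T <= 50) = 1 - exp(-50/tau)
print(1 - math.exp(-50 / tau))  # 0.3935..., i.e., about 0.39
```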

Exponential Density (contd.)

The half-life, λ, is defined as the time required for half of a radioactive sample to decay:

$$P(0 \le T \le \lambda) = 1/2.$$

Since

$$P(0 \le T \le \lambda) = \int_0^{\lambda} \frac{1}{100} e^{-t/100}\,dt = 1 - e^{-\lambda/100} = 1/2,$$

it follows that λ = 100 ln 2, or 69.31 years.
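Solving 1 − e^{−λ/τ} = 1/2 for λ gives λ = τ ln 2 in general; in Python:

```python
import math

tau = 100.0
lam = tau * math.log(2)  # half-life: solves 1 - exp(-lam/tau) = 1/2
print(lam)               # 69.3147... years
```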

Figure 1: Exponential p.d.f., $\frac{1}{10} e^{-t/10}$, and c.d.f., $1 - e^{-t/10}$, plotted over $0 \le t \le 100$.
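A sketch that reproduces a figure like this one, assuming NumPy and Matplotlib are available:

```python
import numpy as np
import matplotlib.pyplot as plt

tau = 10.0
t = np.linspace(0, 100, 500)
plt.plot(t, np.exp(-t / tau) / tau, label="p.d.f. (1/10)exp(-t/10)")
plt.plot(t, 1 - np.exp(-t / tau), label="c.d.f. 1 - exp(-t/10)")
plt.xlabel("t")
plt.legend()
plt.show()
```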

Memoryless Property of the Exponential

If X is an exponentially distributed random variable, then

$$P(X > s + t \mid X > t) = P(X > s).$$

Proof:

$$P(X > s + t \mid X > t) = \frac{P(X > s + t,\; X > t)}{P(X > t)} = \frac{P(X > t \mid X > s + t)\, P(X > s + t)}{P(X > t)} = \frac{P(X > s + t)}{P(X > t)},$$

since $P(X > t \mid X > s + t) = 1$. Because $P(X > t) = 1 - P(X \le t)$,

$$\frac{P(X > s + t)}{P(X > t)} = \frac{1 - \left(1 - e^{-(s+t)/\tau}\right)}{1 - \left(1 - e^{-t/\tau}\right)} = e^{-s/\tau} = P(X > s).$$
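The memoryless property is easy to check empirically by simulation; a minimal sketch with NumPy (the values of τ, s, and t are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
tau, s, t = 10.0, 5.0, 3.0
x = rng.exponential(scale=tau, size=1_000_000)

# Empirical check of P(X > s + t | X > t) = P(X > s).
lhs = np.mean(x[x > t] > s + t)  # conditional relative frequency
rhs = np.mean(x > s)
print(lhs, rhs)                  # both ~ exp(-s/tau) = 0.6065
```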

Memoryless Property of the Exponential (contd.)

In plain language: knowing how long we've already waited doesn't tell us anything about how much longer we are going to have to wait, e.g., for a bus.

Expected Value

Let X be a continuous random variable. The expected value of X is defined as follows:

$$\langle X \rangle = \mu = \int_{-\infty}^{\infty} x\, f_X(x)\,dx.$$

Variance

The variance of X is defined as the expected value of the squared difference of X and $\langle X \rangle$:

$$\left\langle [X - \langle X \rangle]^2 \right\rangle = \sigma^2 = \int_{-\infty}^{\infty} [x - \langle X \rangle]^2\, f_X(x)\,dx.$$
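A numerical sketch of both definitions, assuming SciPy, applied to the exponential density from earlier in the lecture (analytically the mean is τ and the variance is τ²):

```python
import numpy as np
from scipy.integrate import quad

tau = 10.0
f_T = lambda t: np.exp(-t / tau) / tau  # exponential p.d.f. from this lecture

mu, _ = quad(lambda t: t * f_T(t), 0, np.inf)
var, _ = quad(lambda t: (t - mu) ** 2 * f_T(t), 0, np.inf)
print(mu, var)  # ~10.0 and ~100.0, i.e., tau and tau^2
```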

Gaussian Density

A random variable X with p.d.f.

$$f_X(x) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-(x-\mu)^2 / (2\sigma^2)}$$

is called a Gaussian (or normal) random variable with expected value, µ, and variance, σ².

Expected Value for Gaussian Density

Let the p.d.f., $f_X(x)$, equal

$$\frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2/(2\sigma^2)}.$$

The expected value, $\langle X \rangle$, can be derived as follows:

$$\langle X \rangle = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} x\, e^{-(x-\mu)^2/(2\sigma^2)}\,dx.$$

Expected Value for Gaussian Density (contd.)

Writing x as (x − µ) + µ:

$$\langle X \rangle = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} (x - \mu)\, e^{-(x-\mu)^2/(2\sigma^2)}\,dx + \mu\, \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} e^{-(x-\mu)^2/(2\sigma^2)}\,dx.$$

The first term is zero, since (after substitution of u for x − µ) it is the integral of the product of an odd and an even function; that product is itself odd, and an odd function integrates to zero over a symmetric interval. The second term is µ, since

$$\frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} e^{-(x-\mu)^2/(2\sigma^2)}\,dx = \int_{-\infty}^{\infty} f_X(x)\,dx = 1.$$

Consequently, $\langle X \rangle = \mu$.
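Both terms of the derivation can be checked numerically; a sketch assuming SciPy, with illustrative values µ = 3 and σ = 2:

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 3.0, 2.0
f_X = lambda x: np.exp(-(x - mu) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# The (x - mu) term integrates to zero (odd integrand after substitution) ...
first, _ = quad(lambda x: (x - mu) * f_X(x), -np.inf, np.inf)
# ... and the mu term integrates to mu times one.
second, _ = quad(lambda x: mu * f_X(x), -np.inf, np.inf)
print(first, second)  # ~0.0 and ~3.0, so <X> = mu
```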

C.d.f. for Gaussian Density

Because the Gaussian integrates to one and (taking µ = 0, as in the remainder of this section) is symmetric about zero, its c.d.f., $F_X(a)$, can be written as follows:

$$\int_{-\infty}^{a} f_X(x)\,dx =
\begin{cases}
\frac{1}{2} - \int_a^0 f_X(x)\,dx & \text{if } a < 0 \\[4pt]
\frac{1}{2} + \int_0^a f_X(x)\,dx & \text{otherwise.}
\end{cases}$$

Equivalently, we can write:

$$\int_{-\infty}^{a} f_X(x)\,dx =
\begin{cases}
\frac{1}{2} - \int_0^{|a|} f_X(x)\,dx & \text{if } a < 0 \\[4pt]
\frac{1}{2} + \int_0^{|a|} f_X(x)\,dx & \text{otherwise.}
\end{cases}$$

C.d.f. for Gaussian Density (contd.)

To evaluate $\int_0^{|a|} f_X(x)\,dx$, recall that the Taylor series for $e^x$ is:

$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}.$$

The Taylor series for a Gaussian is therefore:

$$f_X(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} = \frac{1}{\sqrt{2\pi}} \sum_{n=0}^{\infty} \frac{\left(-x^2/2\right)^n}{n!} = \frac{1}{\sqrt{2\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{n!\, 2^n}.$$

C.d.f. for Gaussian Density (contd.)

Consequently:

$$\int_0^{|a|} f_X(x)\,dx = \frac{1}{\sqrt{2\pi}} \int_0^{|a|} \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{n!\, 2^n}\,dx = \frac{1}{\sqrt{2\pi}} \sum_{n=0}^{\infty} \left. \frac{(-1)^n x^{2n+1}}{n!\, 2^n (2n+1)} \right|_0^{|a|} = \frac{1}{\sqrt{2\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n |a|^{2n+1}}{n!\, 2^n (2n+1)}.$$
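A direct implementation of this truncated series; the comparison against the closed form via math.erf is an addition for checking, not part of the notes:

```python
import math

def gaussian_cdf_taylor(a, n_terms=40):
    """Standard normal c.d.f. via the truncated Taylor series above."""
    s = sum((-1) ** n * abs(a) ** (2 * n + 1) / (math.factorial(n) * 2**n * (2 * n + 1))
            for n in range(n_terms))
    half_integral = s / math.sqrt(2 * math.pi)  # integral of f_X from 0 to |a|
    return 0.5 - half_integral if a < 0 else 0.5 + half_integral

# Compare with the closed form based on the error function.
for a in (-2.0, -1.0, 0.0, 1.0, 2.0):
    exact = 0.5 * (1 + math.erf(a / math.sqrt(2)))
    print(a, gaussian_cdf_taylor(a), exact)  # the columns agree
```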

Figure 2: Gaussian p.d.f. and c.d.f., µ = 0 and σ² = 1, computed using Taylor series, plotted over −3 ≤ x ≤ 3.
