1.2 Power Spectral Density (PSD) and Energy Spectral Density (ESD)




The energy and power of this signal, s(t) = A e^{−αt} u(t) with α > 0 as given by (1.80), are calculated using (1.75) and (1.76) as follows:





E=



A2 e − 2αt dt =



0



A2





considered, which is infinite. Therefore, this

signal having infinite energy, is classified as

a power signal.



A2 1 − e − 2αT

0 1.2.2 Autocorrelation Function and

T

T ∞ 2α

T

0

Spectral Density

(1.81)

ESD and PSD are employed to describe the disHence, this is an energy signal. However, if tribution of, respectively, the signal energy and

we assume A = 1 and α = 0 in (1.80), then

the signal power over the frequency spectrum.

ESD/PSD is defined as the Fourier transform

E ∞, P = 1

u t = lim s t

of the autocorrelation function of an energy/

α 0

A 1

(1.82) power signal. [4][5][6][7]

T



1

∞T



A2 e − 2αt dt = lim



P = lim



which shows that a unit step function is a power

signal.
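The classification in (1.81)-(1.82) is easy to confirm numerically. The following minimal Python sketch (the values A = 2 and α = 0.5 are arbitrary choices, not from the text) approximates the integrals of (1.75) and (1.76) on a discrete grid; the energy converges to A²/(2α), while the power estimate decays toward zero as the averaging window T grows.

```python
# Numerical check of (1.81): s(t) = A exp(-alpha t), t >= 0.
# A and alpha are illustrative values only.
import numpy as np

A, alpha = 2.0, 0.5
dt = 1e-3
t = np.arange(0, 50, dt)                 # 50 s: the integrand has decayed to ~0
s = A * np.exp(-alpha * t)

E = np.sum(np.abs(s)**2) * dt            # energy, cf. (1.75)
print(E, A**2 / (2 * alpha))             # both ~4.0 -> finite energy

for T in (10.0, 100.0, 1000.0):          # power over a growing window, cf. (1.76)
    tT = np.arange(0, T, dt)
    P = np.sum(A**2 * np.exp(-2 * alpha * tT)) * dt / T
    print(T, P)                          # 0.4, 0.04, 0.004 -> tends to 0
```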

Now consider the following signal:

$$s(t) = \begin{cases} t^{-1/4}, & t \ge t_0 > 0\\ 0, & \text{elsewhere}\end{cases} \qquad (1.83)$$



Its energy and power are found to be

$$E = \int_{t_0}^{\infty} t^{-1/2}\,dt = 2\sqrt{t}\,\Big|_{t_0}^{\infty} \to \infty \qquad (1.84)$$

$$P = \lim_{T\to\infty}\frac{1}{T}\int_{t_0}^{t_0+T} t^{-1/2}\,dt = \lim_{T\to\infty} 2\,\frac{\sqrt{T+t_0}-\sqrt{t_0}}{T} = 0 \qquad (1.85)$$



This signal is neither a power signal nor an energy signal.

Finally we consider a cosine function with amplitude A:

$$s_{T_0}(t) = A\cos(w_0 t + \theta)$$

Since this function is periodic with a period T₀ = 1/f₀, its power can be determined by integration over one period:

$$P = \frac{1}{T_0}\int_{T_0} A^2\cos^2(w_0 t + \theta)\,dt = \frac{A^2}{2} \qquad (1.86)$$

The energy of this signal is given by the product of its power and the number of periods considered, which is infinite. Therefore, this signal, having infinite energy, is classified as a power signal.

1.2.2 Autocorrelation Function and Spectral Density

ESD and PSD are employed to describe the distribution of, respectively, the signal energy and the signal power over the frequency spectrum. The ESD/PSD is defined as the Fourier transform of the autocorrelation function of an energy/power signal. [4][5][6][7]

1.2.2.1 Energy Signals

Autocorrelation of a function s(t) provides a measure of the degree of similarity between the signal s(t) and a delayed replica of itself. The autocorrelation function of an energy signal s(t) is defined by

$$R_s(\tau) = \int_{-\infty}^{\infty} s^*(t)\,s(t+\tau)\,dt = \int_{-\infty}^{\infty} s(t)\,s^*(t-\tau)\,dt,\quad -\infty < \tau < \infty \qquad (1.87)$$

Note that τ, which varies in the interval (−∞, ∞), measures the time shift between s(t) and its delayed replica. The sign of the time shift τ shows whether the delayed version of s(t) leads or lags s(t). If the time shift τ becomes zero, then s(t) overlaps with itself and the resemblance between them is perfect. Consequently, the value of the autocorrelation function at the origin is equal to the energy of the signal:





$$E = R_s(0) = \int_{-\infty}^{\infty} \left|s(t)\right|^2 dt \qquad (1.88)$$



It is logical to expect the resemblance between a signal and its delayed replica to decrease with increasing values of τ. The properties of the autocorrelation function of a real-valued energy signal may be listed as follows:






a. Symmetry about τ = 0:

$$R_s(\tau) = R_s(-\tau) \qquad (1.89)$$

b. Maximum value occurs at τ = 0:

$$\left|R_s(\tau)\right| \le R_s(0) = E,\quad \forall \tau \qquad (1.90)$$

c. Autocorrelation and ESD are related to each other by the Fourier transform:

$$R_s(\tau) \leftrightarrow \Psi_s(f) \qquad (1.91)$$

where the ESD, defined in (1.14), describes the variation of the energy E of s(t) with frequency. One can prove the relationship given by (1.91) by taking the Fourier transform of the autocorrelation function of an energy signal, given by (1.87):

$$\begin{aligned}
\Psi_s(f) = \Im\{R_s(\tau)\} &= \int_{-\infty}^{\infty} R_s(\tau)\,e^{-jw\tau}\,d\tau = \int_{-\infty}^{\infty} s^*(t)\left[\int_{-\infty}^{\infty} s(t+\tau)\,e^{-jw\tau}\,d\tau\right] dt\\
&= \int_{-\infty}^{\infty} s^*(t)\,e^{jwt}\,dt \int_{-\infty}^{\infty} s(v)\,e^{-jwv}\,dv = S^*(f)\,S(f) = \left|S(f)\right|^2
\end{aligned} \qquad (1.92)$$

where the change of variable v = t + τ is used in the second line.
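The identity (1.92) can be checked numerically: sampling an energy signal, forming its autocorrelation, and taking the DFT reproduces |S(f)|². A minimal sketch follows; the Gaussian pulse is an arbitrary test signal, not one used in the text.

```python
# Verify F{R_s(tau)} = |S(f)|^2, cf. (1.92), on a sampled grid.
import numpy as np

dt = 0.01
t = np.arange(-5, 5, dt)
s = np.exp(-t**2)                          # arbitrary real energy signal

R = np.correlate(s, s, mode="full") * dt   # R_s(tau) sampled at lags k*dt
S = np.fft.fft(s, n=R.size) * dt           # zero-padded spectrum, same length
esd_direct = np.abs(S)**2                  # |S(f)|^2
esd_from_R = np.abs(np.fft.fft(R) * dt)    # F{R_s}; |.| removes the linear phase
                                           # caused by the lag-index offset
print(np.allclose(esd_from_R, esd_direct))  # True
```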

Example 1.13 Convolution Versus Correlation.
In order to discover the similarity between the correlation and convolution, we first define the cross-correlation of two energy signals r(t) and s(t) as follows:

$$R_{rs}(\tau) = \int_{-\infty}^{\infty} r(t)\,s(t-\tau)\,dt = \int_{-\infty}^{\infty} s(t)\,r(t+\tau)\,dt \qquad (1.93)$$

which reduces to (1.87) for R_s(τ) = R_ss(τ). Comparison of (1.93) with the expression for the convolution of r(t) and s(t), given by (1.47), shows that

$$R_{rs}(\tau) = r(t) \otimes s^*(-t),\qquad R_s(\tau) = R_{ss}(\tau) = s(t) \otimes s^*(-t) \qquad (1.94)$$

For real-valued functions with even symmetry, that is, for s(t) = s(−t), auto- and cross-correlation functions are the same as the convolution:

$$R_{rs}(\tau) = r(t) \otimes s(t),\qquad R_s(\tau) = s(t) \otimes s(t)\quad \text{for } s(-t) = s(t) \qquad (1.95)$$

1.2.2.2 Power Signals

The autocorrelation function of a power signal s(t) is defined by

$$R_s(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} s^*(t)\,s(t+\tau)\,dt \qquad (1.96)$$



If the power signal is periodic with a period

T0, then the autocorrelation can be computed by

integration over a single period:

$$R_s(\tau) = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} s^*(t)\,s(t+\tau)\,dt \qquad (1.97)$$



The value of the autocorrelation function at

the origin is equal to the power of s(t):

$$P = R_s(0) = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} \left|s(t)\right|^2 dt \qquad (1.98)$$
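For the cosine of (1.86), the one-period average (1.97) can be evaluated on a grid to confirm both R_s(0) = A²/2, per (1.98), and the cosine-shaped autocorrelation; A, f₀ and θ below are arbitrary illustrative values.

```python
# One-period autocorrelation of s(t) = A cos(w0 t + theta), cf. (1.97)-(1.98).
import numpy as np

A, f0, theta = 3.0, 5.0, 0.7
T0 = 1 / f0
t = np.arange(0, T0, T0 / 1000)

def R(tau):
    s1 = A * np.cos(2 * np.pi * f0 * t + theta)
    s2 = A * np.cos(2 * np.pi * f0 * (t + tau) + theta)
    return np.mean(s1 * s2)                # (1/T0) * integral over one period

print(R(0.0), A**2 / 2)                    # ~4.5 each: R_s(0) equals the power
tau = 0.3 * T0
print(R(tau), A**2 / 2 * np.cos(2 * np.pi * f0 * tau))  # (A^2/2) cos(w0 tau)
```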



The properties of the autocorrelation function of a real-valued power signal s(t) are given by

a. Symmetry about τ = 0:

$$R_s(\tau) = R_s(-\tau) \qquad (1.99)$$

b. Maximum value occurs at τ = 0:

$$\left|R_s(\tau)\right| \le R_s(0) = P,\quad \forall \tau \qquad (1.100)$$






c. Autocorrelation and power spectral density (PSD) are related to each other by the Fourier transform:

$$R_s(\tau) \leftrightarrow G_s(f) \qquad (1.101)$$



where the PSD, the variation of the power P of s(t) with frequency, is given by (1.66) and (1.68) for periodic and aperiodic power signals, respectively.

Based on (1.87), (1.92), (1.96) and (1.101), the PSD and ESD of a signal may be obtained either by taking the magnitude-square of its Fourier transform or the Fourier transform of the autocorrelation function (see Figure 1.11):

s(t) ⇔ S(f) → |S(f)|²
s(t) → R_s(τ) ⇔ |S(f)|²

Figure 1.11 Two Alternative Approaches for Obtaining the ESD/PSD of a Signal s(t).



Example 1.14 PSD and ESD of a Rectangular Pulse.
To get a better feel for the two alternative approaches shown in Figure 1.11 to determine the PSD/ESD, consider a rectangular pulse s(t) and its Fourier transform S(f) (see (1.15) and (1.16)):

$$s(t) = A\,\Pi(t/T) \quad \leftrightarrow \quad S(f) = AT\,\mathrm{sinc}(fT) \qquad (1.102)$$

which is an energy signal with energy A²T. The autocorrelation function and the ESD of s(t) are related to each other by (1.50), (1.52) and (1.95):

$$R_s(\tau) = A^2 T\,\Lambda(\tau/T) \quad \leftrightarrow \quad \Psi_s(f) = A^2 T^2\,\mathrm{sinc}^2(fT) \qquad (1.103)$$

Noting that R_s(0) = A²T denotes the energy of s(t), Ψ_s(f) = |S(f)|² clearly represents the ESD, since the area under it is equal to A²T.

The corresponding PSD is given by

$$G_s(f) = \frac{\Psi_s(f)}{T} = \frac{\left|S(f)\right|^2}{T} = A^2 T\,\mathrm{sinc}^2(fT) \qquad (1.104)$$

The integration of (1.104) over frequency gives A², which denotes the signal power over the finite pulse duration T.
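Both routes of Figure 1.11 can be followed numerically for the rectangular pulse of (1.102); the sketch below (grid sizes are arbitrary choices) shows that R_s(0) recovers the energy A²T and that |S(f)|² agrees with the closed form A²T² sinc²(fT) up to discretization error.

```python
# ESD of s(t) = A Pi(t/T) by both routes of Figure 1.11.
import numpy as np

A, T = 2.0, 1.0
dt = T / 1000
t = np.arange(-T, T, dt)
s = np.where(np.abs(t) <= T / 2, A, 0.0)

# route 1: magnitude-square of the Fourier transform
S = np.fft.fft(s, n=4096) * dt
f = np.fft.fftfreq(4096, dt)
esd = np.abs(S)**2

# route 2: autocorrelation first; its peak is the energy, cf. (1.88)
R = np.correlate(s, s, "full") * dt        # triangular: A^2 T Lambda(tau/T)
print(R.max(), A**2 * T)                   # ~4.0 each

# compare with the closed form (1.103); np.sinc(x) = sin(pi x)/(pi x)
esd_formula = (A * T * np.sinc(f * T))**2
print(np.max(np.abs(esd - esd_formula)))   # small (discretization error)
```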



1.3 Random Signals

In a digital communication system, one of the

M symbols is transmitted, in a finite symbol

duration, using a pre-defined M-ary alphabet.

These symbols are unknown to the receiver

when they are transmitted. At the receiver,

these signals bear a random character in

AWGN, fading or shadowing channels. Since

a random signal cannot be predicted before

reception, a receiver processes the received

random signal in order to estimate the transmitted symbol. In view of the above, the performance of telecommunication systems can be

evaluated by characterizing the random signals

statistically.

This section aims to provide a short introduction to random signals and processes encountered in telecommunication systems. The

reader may refer to Appendix F for further

details about random signals and their characterization. [3][9]



1.3.1 Random Variables

Given a sample space S and elements s ∈ S, we

define a function X(s) whose domain is S and

whose range is a set of numbers on the real line

(see Figure 1.12). The function X(s) is called a

random variable (rv). For example, if we consider coin flipping with outcomes head

(H) and tail (T), the sample space is defined






[Figure 1.12 Definition of a Random Variable: the function X(s) maps each outcome s in the sample space S (the domain) to a point on the real line (the range).]



by S = {H, T} and the rv may be assumed to be

X(s) = 1 for s = H and −1 for s = T. Note that a

rv, which is unknown and unpredictable

beforehand, is known completely once it

occurs. For example, one does not know the

outcome before flipping a coin. However, once

the coin is flipped, the outcome (head or tail)

is known.

A rv is characterized by its probability density function (pdf) or cumulative distribution function (cdf), which are interrelated. The cdf F_X(x) of a rv X is defined by

$$F_X(x) = P(X \le x),\quad -\infty < x < \infty \qquad (1.105)$$

which specifies the probability with which the rv X is less than or equal to a real number x. The pdf of a rv X is defined as the derivative of the cdf:

$$f_X(x) = \frac{d}{dx}F_X(x),\quad -\infty < x < \infty \qquad (1.106)$$

Conversely, the cdf is defined as the integral of the pdf:

$$F_X(x) = \int_{-\infty}^{x} f_X(u)\,du \qquad (1.107)$$

In view of (1.105)−(1.107), a rv is characterized by the following properties:

a. f_X(x) ≥ 0 and the area under f_X(x) is always equal to unity: $F_X(\infty) = \int_{-\infty}^{\infty} f_X(u)\,du = 1$.

b. 0 ≤ F_X(x) ≤ 1 since $F_X(\infty) = 1$ and $F_X(-\infty) = \int_{-\infty}^{-\infty} f_X(u)\,du = 0$.

c. The cdf of a continuous rv X is a non-decreasing and smooth function of x. Therefore, for x₂ ≥ x₁, $P(x_1 < X \le x_2) = \int_{x_1}^{x_2} f_X(u)\,du = F_X(x_2) - F_X(x_1) \ge 0$.

d. When a rv X is discrete or mixed, its cdf is still a non-decreasing function of x but contains discontinuities.
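Properties (a)-(c) are easy to see numerically for any smooth pdf; the sketch below uses a unit Gaussian as an arbitrary example and checks the unit area and the probability of an interval via F_X(x₂) − F_X(x₁).

```python
# Numerical illustration of properties (a)-(c) with a Gaussian pdf.
import numpy as np

dx = 1e-3
x = np.arange(-8, 8, dx)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

cdf = np.cumsum(pdf) * dx                  # running integral of the pdf, cf. (1.107)
print(cdf[-1])                             # ~1.0: area under the pdf is unity

x1, x2 = -0.5, 1.3
i1, i2 = np.searchsorted(x, [x1, x2])
print(cdf[i2] - cdf[i1],                   # F(x2) - F(x1)
      np.sum(pdf[i1:i2]) * dx)             # integral of the pdf over (x1, x2]
```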

A rv is characterized by its moments. The n-th moment of a rv X is defined as

$$E[X^n] = \int_{-\infty}^{\infty} x^n f_X(x)\,dx \qquad (1.108)$$



where E[·] denotes the expectation. The rv's encountered in telecommunication systems are mostly characterized by their first two moments, namely the mean (expected) value m_X and the variance σ²_X:

$$m_X = E[X] = \int_{-\infty}^{\infty} x\,f_X(x)\,dx,\qquad \mathrm{var}(X) = \sigma_X^2 = E\left[(X - m_X)^2\right] = E[X^2] - m_X^2 \qquad (1.109)$$

When X is discrete or of a mixed type, the pdf contains impulses at the points of discontinuity of F_X(x). In such cases, the discrete part of f_X(x) may be expressed as

$$f_X(x) = \sum_{i=1}^{N} P(X = x_i)\,\delta(x - x_i) \qquad (1.110)$$



where the rv X may assume one of the N values x₁, x₂, …, x_N at the discontinuities. For example, in the case of coin flipping, the two outcomes may be represented as x₁ for head and x₂ for tail. Then, P(X = x₁) = p ≤ 1 shows the probability of head, while P(X = x₂) = 1 − p denotes the probability of tail.

Mean value and variance of a discrete rv are found by inserting (1.110) into (1.109):

$$m_X = E[X] = \sum_{i=1}^{N} x_i\,P(X = x_i),\qquad \sigma_X^2 = \sum_{i=1}^{N} x_i^2\,P(X = x_i) - m_X^2 \qquad (1.111)$$






For the special case of equally-likely probabilities, that is, P(X = x_i) = 1/N, (1.111) simplifies to the well-known expressions

$$m_X = E[X] = \frac{1}{N}\sum_{i=1}^{N} x_i,\qquad \sigma_X^2 = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - m_X\right)^2 = \frac{1}{N}\sum_{i=1}^{N} x_i^2 - m_X^2 \qquad (1.112)$$



Now consider two rv's X₁ and X₂, each of which may be continuous, discrete or mixed. The joint cdf is defined as

$$F_{X_1 X_2}(x_1, x_2) = P(X_1 \le x_1,\, X_2 \le x_2) = \int_{-\infty}^{x_1}\int_{-\infty}^{x_2} f_{X_1 X_2}(u_1, u_2)\,du_2\,du_1,\qquad F_{X_1 X_2}(-\infty, -\infty) = 0,\quad F_{X_1 X_2}(\infty, \infty) = 1 \qquad (1.113)$$



and is related to the joint pdf as follows:

$$f_{X_1 X_2}(x_1, x_2) = \frac{\partial^2}{\partial x_1\,\partial x_2} F_{X_1 X_2}(x_1, x_2) \qquad (1.114)$$



The marginal pdf's are found as follows:

$$\int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\,dx_1 = f_{X_2}(x_2),\qquad \int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\,dx_2 = f_{X_1}(x_1) \qquad (1.115)$$

The conditional pdf f(x_i|x_j), which gives the pdf of x_i for a given deterministic value of the rv x_j, has the following properties:

$$f(x_i\,|\,x_j) = \frac{f_{X_i X_j}(x_i, x_j)}{f_{X_j}(x_j)},\qquad F(-\infty\,|\,x_j) = 0,\quad F(\infty\,|\,x_j) = 1,\qquad i \ne j,\; i, j = 1, 2 \qquad (1.116)$$

Generalization of (1.113)−(1.116) to more than two rv's is straightforward. Statistical independence of random events has the following implications: if the experiments result in mutually exclusive outcomes, then the probability of an outcome in one experiment is independent of an outcome in any other experiment. The joint pdf's (cdf's) may then be written as the product of the pdf's (cdf's) corresponding to each outcome.

Example 1.15 Averaging.
In order to model the wireless channel between base and mobile stations of a mobile radio system, the average signal level received by a mobile station is measured at points evenly distributed on a circle of radius d₀ = 100 meters from the base station. These measurement results are rescaled for arbitrary distances in order to develop a reliable channel path-loss model. Therefore, sufficiently many short-range measurements are required in order to take into account all potential propagation effects, for example, climate, topography of the terrain, and vegetation. Now consider for simplicity that ten measurements are conducted in dBm and (1.112) is used to determine the average signal level and its standard deviation at d₀ = 100 meters:

$$X = \{-41.1,\ -45,\ -39.6,\ -42,\ -40.3,\ -44,\ -48,\ -43,\ -43.3,\ -41.5\}\ \text{dBm}$$

$$m_X = \frac{1}{N}\sum_{i=1}^{N} x_i = -42.78\ \text{dBm},\qquad \sigma_X = \left(\frac{1}{N}\sum_{i=1}^{N} x_i^2 - m_X^2\right)^{1/2} = 2.35\ \text{dBm} \qquad (1.117)$$

Consequently, the average signal level at a distance of d₀ = 100 meters may be modeled by m_X = −42.78 dBm with a standard deviation σ_X = 2.35 dBm.
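The arithmetic of (1.117) is quickly reproduced from (1.112):

```python
# Reproducing (1.117) from the ten measurements of Example 1.15.
import numpy as np

X = np.array([-41.1, -45, -39.6, -42, -40.3, -44, -48, -43, -43.3, -41.5])  # dBm
m_X = X.mean()
sigma_X = np.sqrt(np.mean(X**2) - m_X**2)   # second form of (1.112)
print(m_X, sigma_X)                          # -42.78 and ~2.35 dBm
```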



1.3.2 Random Processes

Future values of a deterministic signal can be

predicted from its past values. For example,

Acos(wt + Φ) shows a deterministic signal as






long as A, w and Φ are deterministic and

known; its future values can be determined

exactly using its value at a certain time. However, it describes a random signal if any of A,

w or Φ is random. For example, a noise generator generates a noise with a random amplitude and phase at any instant of time and

these values change randomly with time.

Similarly, amplitude and phase of a multipath

fading signal vary randomly not only at a

given instant of time but also with time.

Hence, a random signal changes randomly

not only at an instant of time but also with

time. Therefore, future values of random

signals cannot be predicted by using their

observed past values. [4][6][7]

A rv may be used to describe a random signal

at a given instant of time. Random process

extends the concept of a rv to include also

the time dimension. Such a rv then becomes

a function of the possible outcomes s of a random event (experiment) and time t. In other

words, every outcome s will be associated with

a time (sample) function. The family (ensemble) of all such sample functions is called a random process and denoted as X(s, t). [3][4][9]

The deterministic argument of a random process does not need to be time. For example,

the terrain height between transmitter and

receiver may be a random process of the distance, since the location and the height of the

obstacles may change randomly. The random

process may be continuous or discrete either

in time or in the value of the rv.

For example, consider L Gaussian noise generators. The time variation of the noise voltage at the output of each of the noise generators is called a sample function, or a realization, of the process. Thus, each of the L sample functions, corresponding to a specific event s_j, changes randomly with time; noise voltages at any two instants are independent of each other (see Figure 1.13). The totality of sample functions is called an ensemble. On the other hand, at an instant of time t_k, the value of the rv X(s, t_k) depends on the event. For a specific event and time t_k, X(s_j, t_k) = X_j(t_k) is a real

[Figure 1.13 Random (Gaussian Noise) Process: sample functions X₁(t), X₂(t), …, X_L(t); ensemble averaging is taken across the sample functions at a fixed time t_k, time averaging along a single sample function.]



number. The properties of X(s, t) are summarized below: [3]

a. X(S, t) represents a family (ensemble) of sample functions, where S = {s₁, s₂, …, s_L}.
b. X(s_j, t) represents a sample function for the event s_j (an outcome of S or a realization of the process).
c. X(S, t_k) is a rv at time t_k.
d. X(s_j, t_k) is a non-random number.

For the sake of convenience, we will denote a random process by X(t), where the presence of the event is implicit.



1.3.2.1 Statistical Averages

Empirical determination of the pdf of a random process is neither practical nor easy since it may change with time. For example, the pdf of a Gaussian noise voltage may change with time as the temperature of the noise generator (a resistor) increases; then, the noise variance would be higher. Nevertheless, the mean and the autocorrelation function adequately describe the random processes encountered in




telecommunication systems. The mean of a random process X(t) at time t_k is defined as

$$E[X(t_k)] = m_X(t_k) = \int_{-\infty}^{\infty} x\,f_{X_k}(x)\,dx \qquad (1.118)$$

where the pdf of X(t_k) is defined over the ensemble of events at time t_k. Evidently, the mean would be time-invariant if the pdf is time-invariant. On the other hand, identical time variations of the mean and the variance of two random processes do not imply that these processes are equivalent. For example, one of these processes may be changing faster than the other, and hence have higher spectral components. Therefore, the autocorrelation functions (and corresponding PSDs) of these processes should also be compared with each other. The autocorrelation function of a random process X(t) provides a measure of the degree of similarity between the rv's X(t_j) and X(t_k):

$$R_X(t_j, t_k) = E\left[X(t_j)\,X^*(t_k)\right] \qquad (1.119)$$

A random process X(t) is said to be stationary in the strict sense if none of its statistics changes with time, that is, they depend only on the time difference t_k − t_j. This implies that the joint pdf satisfies the following:

$$f(x_1, x_2, \ldots, x_L;\, t_1, t_2, \ldots, t_L) = f(x_1, x_2, \ldots, x_L;\, t_1 + \tau, t_2 + \tau, \ldots, t_L + \tau) \qquad (1.120)$$

Here, (1.120) should be read as the joint pdf of x₁, x₂, …, x_L at times t₁, t₂, …, t_L. For example, a strict-sense stationary process satisfies the following for L = 1 and 2:

$$f(x;\, t) = f(x;\, t + \tau),\qquad f(x_1, x_2;\, t_1, t_2)\Big|_{t_2 = t_1 + \tau} = f(x_1, x_2;\, \tau) \qquad (1.121)$$

A random process X(t) is said to be wide-sense stationary (WSS) if only its mean and autocorrelation function are unaffected by a shift in the time origin. This implies that the mean is a constant and the autocorrelation function depends only on the time difference t_k − t_j:

$$E[X(t)] = m_X,\ \forall t,\qquad R_X(t_j, t_k) = R_X(\tau),\quad \tau \triangleq t_j - t_k \qquad (1.122)$$

Note that strict-sense stationarity implies wide-sense stationarity, but the reverse is not true. In most applications, the analysis based on the WSS assumption provides sufficiently accurate results, at least over some time intervals of interest. The autocorrelation function of a WSS random process, which is an even function of τ, provides a measure of the degree of correlation between the random values of the process with τ seconds time shift from each other.

The properties of the autocorrelation function of a real-valued WSS random process are listed below:

a. An even function of τ:

$$R_X(\tau) = R_X(-\tau) \qquad (1.123)$$

b. Maximum value occurs at τ = 0:

$$\left|R_X(\tau)\right| \le R_X(0),\quad \forall \tau \qquad (1.124)$$

c. The value of the autocorrelation function at the origin is equal to the average power of the signal:

$$R_X(0) = P = E\left[X^2(t)\right] \qquad (1.125)$$

d. Autocorrelation and PSD are related to each other by the Fourier transform:

$$R_X(\tau) \leftrightarrow G_X(f) \qquad (1.126)$$



Example 1.16 Poisson Process.
Poisson process is a discrete random process which describes the number of occurrences of an event as a function of time. The event may represent the number of customers arriving at a bank or a supermarket, the failure of some components in a system, the number of planes arriving at and departing from an airport, the number of goals scored in a football game, and so on. A single event of the process, which consists of counting the number of occurrences (arrivals) with time, is also random.

Let the rv k denote the number of occurrences (arrivals) during a time interval τ = t₂ − t₁, where t₁ and t₂ are arbitrary times with t₂ ≥ t₁. The average number of arrivals per unit time is defined by the arrival rate λ in arrivals/s. The probability of k arrivals during τ is given by

$$P_k(\tau) = e^{-\lambda\tau}\,\frac{(\lambda\tau)^k}{k!},\quad k = 0, 1, 2, \ldots \qquad (1.127)$$

For example, the probability that no customers arrive at a bank during τ = 1 minute is given by P₀(τ) = exp(−λτ) = 0.905 if one customer arrives on the average per 10 minutes, that is, λ = 0.1 arrivals/minute. The pdf of the number of arrivals during τ may be written as

$$f_k(x) = e^{-\lambda\tau}\sum_{k=0}^{\infty}\frac{(\lambda\tau)^k}{k!}\,\delta(x - k) \qquad (1.128)$$

Noting that $\sum_{k=0}^{\infty} P_k(\tau) = 1$ (see (D.1)), the area under the pdf given by (1.128) is unity. Based on (1.127) and (1.128), we now define a discrete random process X(t), accounting for the number of occurrences during (t_i, t_j), with discontinuities at random time instants t_i and t_j. As shown in Figure 1.14, this process may represent, for example, the number of uncoordinated customers entering a bank office with random arrival times t_i. Figure 1.14 shows that one customer arrives during (0, t₁), no customers arrive during (t₂, t₃) and two customers arrive during (t₃, t₄). Since the time intervals of any two events do not overlap, the corresponding rv's are independent. [3]

[Figure 1.14 Poisson Process: a staircase sample function X(t) stepping from 0 up to 7 at random arrival times t₁, t₂, …, t₈.]

The first two moments of the rv X(t₁) at a specific time t₁ are found, with the help of (D.1), as follows:

$$E[X(t_1)] = e^{-\lambda t_1}\sum_{k=0}^{\infty} k\,\frac{(\lambda t_1)^k}{k!} = \lambda t_1\,e^{-\lambda t_1}\sum_{k=1}^{\infty}\frac{(\lambda t_1)^{k-1}}{(k-1)!} = \lambda t_1$$

$$E[X^2(t_1)] = e^{-\lambda t_1}\sum_{k=0}^{\infty} k^2\,\frac{(\lambda t_1)^k}{k!} = \lambda t_1\,e^{-\lambda t_1}\sum_{k'=0}^{\infty}(k'+1)\,\frac{(\lambda t_1)^{k'}}{k'!} = \lambda t_1\left(1 + \lambda t_1\right) \qquad (1.129)$$




It is clear from (1.129) that the first two moments of the rv, defined as the number of occurrences during (0, t₁), are time-dependent. Since the intervals (0, t₁) and (t₁, t₂) do not overlap with each other, the rv's X(t₁) and X(t₂) − X(t₁) are independent and Poisson distributed with parameters λt₁ and λ(t₂ − t₁). Hence,

$$E\left[X(t_1)\left(X(t_2) - X(t_1)\right)\right] = \lambda t_1\,\lambda(t_2 - t_1),\quad t_1 \le t_2 \qquad (1.130)$$

The autocorrelation function for t₂ ≥ t₁ may then be found using (1.129) and (1.130) as follows:

$$R(t_1, t_2) = E[X(t_1)X(t_2)] = E\left[X(t_1)\left(X(t_2) - X(t_1)\right)\right] + E\left[X^2(t_1)\right] = \lambda t_1\,\lambda(t_2 - t_1) + \lambda t_1\left(1 + \lambda t_1\right) = \lambda^2 t_1 t_2 + \lambda t_1,\quad t_1 \le t_2 \qquad (1.131)$$

Since R(t₁, t₂) = R(t₂, t₁) for a real random process, the autocorrelation function is given by

$$R(t_1, t_2) = \begin{cases}\lambda t_1 + \lambda^2 t_1 t_2, & t_1 \le t_2\\ \lambda t_2 + \lambda^2 t_1 t_2, & t_1 \ge t_2\end{cases} \qquad (1.132)$$

Since the mean of the random process and the autocorrelation are time dependent, the Poisson process is not stationary.

1.3.2.2 Time Averaging and Ergodicity

The computation of the mean and the autocorrelation function of a random process by ensemble averaging is often not practical since it requires the use of all sample functions. For the so-called ergodic processes, ensemble averaging may be replaced by time averaging. This means that the mean and the autocorrelation function of an ergodic process may be determined by using a single sample function. [3][4][5][7][8]

A random process is said to be ergodic in the mean if its mean can be calculated using a sample function, that is, using time averaging instead of ensemble averaging (see (1.118)):

$$m_X = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} X(t)\,dt \qquad (1.133)$$

A random process is said to be ergodic in the autocorrelation function if R_X(τ) is correctly described through the use of a sample function, that is, time averaging instead of ensemble averaging (see (1.119)):

$$R_X(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} X^*(t)\,X(t+\tau)\,dt \qquad (1.134)$$

It is not easy to test whether a random process is ergodic or not. However, in most applications, it is reasonable to assume that time and ensemble averaging are interchangeable.

Example 1.17 Test for Stationarity.
Consider a random process defined by

$$X(t) = A\cos(wt + \Phi) \qquad (1.135)$$

where A and w are constants but Φ is uniformly distributed in [0, 2π]:



$$f_\Phi(\phi) = \frac{1}{2\pi},\quad 0 \le \phi \le 2\pi \qquad (1.136)$$



In order to determine whether this process is

WSS, we first determine its mean:

$$m_X = E[X(t)] = \int_{-A}^{A} x\,f_{X(t)}(x)\,dx = \int_{0}^{2\pi} A\cos(wt + \phi)\,f_\Phi(\phi)\,d\phi = 0 \qquad (1.137)$$



which is obviously independent of time. Note

that the evaluation of the integral on the first line

requires the knowledge of the pdf of X(t), while

the second integral uses the pdf of Φ. The pdf of






X(t) is obtained from the pdf of Φ with a variable transformation (see Example F.6):



$$f_{X(t)}(x) = f_\Phi(\phi)\left|\frac{dx}{d\phi}\right|^{-1}_{\phi=\phi_1} + f_\Phi(\phi)\left|\frac{dx}{d\phi}\right|^{-1}_{\phi=\phi_2} = \frac{1}{\pi\sqrt{A^2 - x^2}},\quad -A \le x \le A \qquad (1.138)$$

where ϕ₁ = ϕ₂ − π = π/2 − wt are the two roots of the multi-valued function (1.135). Note that the pdf is independent of w and time t, accounting for the time-invariance of the mean. Also note that the pdf peaks at x = ±A.

The autocorrelation of X(t) is given by

RX t1 ,t2 = E X t1 X t2

= A2 E cos wt1 + ϕ cos wt2 + ϕ

1

= A2 E cos w t1 + t2 + 2ϕ + cos w t1 − t2

2

1

= A2 cos wτ , τ = t1 − t2

2



Since the mean is constant and the autocorrelation is a function of the time difference, this

random process is WSS.
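A simulation confirms both conclusions of Example 1.17 and, incidentally, the ergodicity assumption behind (1.133)-(1.134): the ensemble mean is zero, the ensemble autocorrelation matches (A²/2)cos(wτ), and a long time average over a single sample function also yields the zero mean. A, w, t and τ are arbitrary illustrative values.

```python
# Ensemble vs. time averages for X(t) = A cos(wt + Phi), Phi ~ U[0, 2pi].
import numpy as np

rng = np.random.default_rng(1)
A, w = 2.0, 2 * np.pi * 3.0
t, tau = 0.4, 0.1
phi = rng.uniform(0, 2 * np.pi, size=100_000)      # ensemble of phases

x_t = A * np.cos(w * t + phi)
x_t_tau = A * np.cos(w * (t + tau) + phi)
print(x_t.mean())                                   # ~0, independent of t
print(np.mean(x_t * x_t_tau),
      A**2 / 2 * np.cos(w * tau))                   # both ~ -0.62, cf. (1.139)

ts = np.arange(0, 1000, 0.001)                      # one sample function
print(np.mean(A * np.cos(w * ts + phi[0])))         # ~0: time average agrees
```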

Example 1.18 Complex Random Process.
A received signal, described by the following complex random process

$$Z(t) = \sum_{\ell=1}^{L} \alpha_\ell\, e^{j(wt + \Phi_\ell)} \qquad (1.140)$$

represents the sum of L replicas of a complex carrier signal e^{jwt} scattered from L obstacles with random channel gains α_ℓ and phases Φ_ℓ. They are all assumed to be statistically independent of each other, and the pdf of Φ_ℓ is given by (1.136). In view of (1.137), the mean value of (1.140) is identically equal to zero. Its autocorrelation function is given by



$$R_Z(t, t+\tau) = E\left[Z^*(t)\,Z(t+\tau)\right] = E\left[\sum_{\ell=1}^{L}\alpha_\ell\,e^{-j(wt+\Phi_\ell)}\sum_{k=1}^{L}\alpha_k\,e^{j(w(t+\tau)+\Phi_k)}\right] = e^{jw\tau}\sum_{k=1}^{L}\sum_{\ell=1}^{L} E[\alpha_\ell \alpha_k]\,E\left[e^{j(\Phi_k - \Phi_\ell)}\right] = e^{jw\tau}\sum_{\ell=1}^{L} E\left[\alpha_\ell^2\right] = R_Z(\tau) \qquad (1.141)$$

since E[e^{j(Φ_k − Φ_ℓ)}] = δ(k − ℓ) for independent uniform phases. The random process Z(t) is evidently WSS. The value of the autocorrelation function at the origin, R_Z(0), shows that the received power is given by the sum of the powers of the independent channel gains. The pdf of the received power can be determined using the pdf's of the α_ℓ's (see for example (F.109)–(F.111) and Example F.14).
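The conclusion R_Z(0) = Σ E[α_ℓ²] can be verified by simulation. The sketch below assumes Rayleigh-distributed gains (an assumption for illustration only; (1.141) holds for any independent gains), for which E[α²] = 2σ² per path.

```python
# Monte-Carlo check of R_Z(0) in (1.141); Rayleigh gains are an assumed model.
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
sigma = np.array([0.5, 1.0, 1.5, 2.0])            # per-path scale (L = 4 paths)
alpha = rng.rayleigh(sigma, size=(N, len(sigma))) # independent random gains
Phi = rng.uniform(0, 2 * np.pi, size=alpha.shape) # independent uniform phases

Z0 = np.sum(alpha * np.exp(1j * Phi), axis=1)     # Z(t) at t = 0
print(np.mean(np.abs(Z0)**2))                     # ~15.0
print(np.sum(2 * sigma**2))                       # Rayleigh: E[alpha^2] = 2 sigma^2
```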



1.3.2.3 PSD of a Random Process

Random processes are generally classified as power signals with a PSD as given by (1.68) and with the following properties:

a. The PSD is a positive real-valued function of frequency:

$$G_X(f) \ge 0 \qquad (1.142)$$

b. The PSD has even symmetry with frequency:

$$G_X(f) = G_X(-f) \qquad (1.143)$$

c. The PSD and the autocorrelation function are Fourier transform pairs:

$$R_X(\tau) \leftrightarrow G_X(f) \qquad (1.144)$$

d. The average normalized power of a random process is given by the area under the PSD:

$$P_X = R_X(0) = \int_{-\infty}^{\infty} G_X(f)\,df \qquad (1.145)$$
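These four properties can be observed on a periodogram estimate of G_X(f); the filtered-noise process below is an arbitrary example.

```python
# Periodogram illustration of (1.142)-(1.145) for a filtered-noise process.
import numpy as np

rng = np.random.default_rng(3)
dt, N = 0.01, 2**16
x = rng.normal(size=N)
x = np.convolve(x, np.ones(8) / 8, mode="same")   # correlate samples: shaped PSD

X = np.fft.fft(x) * dt
G = np.abs(X)**2 / (N * dt)                       # periodogram estimate of G_X(f)

print(np.all(G >= 0))                             # (1.142): non-negative
print(np.allclose(G[1:], G[1:][::-1]))            # (1.143): even symmetry (real x)
print(np.sum(G) / (N * dt), np.mean(x**2))        # (1.145): area under PSD = power
```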






Example 1.19 Autocorrelation and PSD of a Random Binary Sequence.
A baseband signal consisting of a binary sequence can be expressed as follows:

$$s(t) = A\sum_{n=-\infty}^{\infty} b_n\,p(t - nT) \qquad (1.146)$$

where b_n ∈ {−1, +1} represents the bits 0 and 1 with equal likelihood, P(b_n = 1) = P(b_n = −1) = 1/2, p(t) denotes the pulse shape, T is the bit duration and A is the amplitude. Therefore, (1.146) represents a sequence of pulses with amplitudes ±1. The infinitely long bit sequence, which is a power signal, becomes perfectly random if the bits are independent of each other. The autocorrelation function of a random bit sequence is given by a delta function, since all dibit combinations are equally likely with probability 1/4:

$$R_b(k) = E[b_n\,b_{n+k}] = \begin{cases}\frac{1}{4}\left(1\times 1 + 1\times(-1) + (-1)\times 1 + (-1)\times(-1)\right) = 0, & k \ne 0\\ 1, & k = 0\end{cases} \;\Rightarrow\; R_b(k) = \delta(k) \;\leftrightarrow\; G_b(f) = 1 \qquad (1.147)$$

where the PSD of the infinitely long random bit sequence in the second line is flat and given by (1.23).

The autocorrelation function of s(t) is given by the expectation of s(t) with a delayed replica of itself:

$$\begin{aligned}
R_s(\tau) = E\left[s^*(t)\,s(t+\tau)\right] &= A^2\sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty} E[b_n^* b_m]\,E\left[p^*(t-nT)\,p(t+\tau-mT)\right]\\
&= A^2\sum_{n=-\infty}^{\infty} E\left[p^*(t-nT)\,p(t+\tau-nT)\right] = A^2\sum_{n=-\infty}^{\infty}\int_{nT}^{(n+1)T} p^*(t-nT)\,p(t+\tau-nT)\,dt\\
&= A^2\int_{-\infty}^{\infty} p^*(u)\,p(u+\tau)\,du = A^2 R_p(\tau)
\end{aligned} \qquad (1.148)$$

where E[b_n^* b_m] = δ(n − m) is used in the second line. It is clear from (1.148) that the autocorrelation of a random binary sequence is identical to that of the pulse p(t). For example, if we assume p(t) = Π(t/T), which is an even function of t, then, in view of (1.50), (1.95) and (1.104), the autocorrelation function is triangular and the PSD is sinc-squared:

$$R_s(\tau) = A^2 R_p(\tau) = A^2 T\,\Lambda(\tau/T) \;\leftrightarrow\; G_s(f) = A^2 T\,\mathrm{sinc}^2(fT) \qquad (1.149)$$

An alternative approach for determining the PSD of s(t) is based on the observation that (1.146) is the convolution of the random bit sequence and the rectangular pulse Π(t/T). Then, in view of the convolution property of the Fourier transform, given by (1.46), the PSD of s(t) may be written as the product of the PSDs of Π(t/T) and the infinite bit sequence. The result is evidently identical to (1.149). Noting that the area under T sinc²(fT) in (1.149) is equal to unity, the integration of G_s(f) gives the total signal power A², as expected.
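The result (1.149) can also be observed by simulation: the periodogram of a long random ±1 pulse train fluctuates around A²T sinc²(fT). The bit count and sampling rate below are arbitrary choices.

```python
# Simulated PSD of the random binary waveform (1.146) with p(t) = Pi(t/T).
import numpy as np

rng = np.random.default_rng(4)
A, T, sps, Nbits = 1.0, 1.0, 16, 4096    # amplitude, bit duration, samples/bit
bits = rng.choice([-1.0, 1.0], size=Nbits)
s = A * np.repeat(bits, sps)             # rectangular pulses of width T

dt = T / sps
S = np.fft.fft(s) * dt
f = np.fft.fftfreq(s.size, dt)
G_est = np.abs(S)**2 / (s.size * dt)     # periodogram over the Nbits*T window

G_theory = A**2 * T * np.sinc(f * T)**2  # (1.149)
m = np.abs(f) < 3                        # main lobe and first side lobes
print(np.mean(G_est[m]), np.mean(G_theory[m]))   # close on average
```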



1.3.2.4 Noise

The thermal noise voltage n(t) that we encounter in telecommunication systems is a random process with zero mean and an autocorrelation function described by the Dirac delta function:

$$E[n(t)] = 0,\qquad R_n(\tau) = E\left[n(t)\,n(t+\tau)\right] = \frac{N_0}{2}\,\delta(\tau) \qquad (1.150)$$



The formula (1.150) implies that any two different noise samples are uncorrelated with each

other, no matter how close the time difference

between the samples is. Using the Fourier

transform relationship between the autocorrelation function and the PSD, the two-sided noise

PSD is given by

$$G_n(f) = \Im_\tau\left\{R_n(\tau)\right\} = \frac{N_0}{2}\quad \text{(W/Hz)} \qquad (1.151)$$
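A discrete-time sketch makes (1.150)-(1.151) concrete: white Gaussian samples drawn with variance N₀/(2Δt) are mutually uncorrelated and show a flat PSD at the level N₀/2. N₀ and Δt below are assumed values.

```python
# White Gaussian noise: delta-like autocorrelation, flat PSD, cf. (1.150)-(1.151).
import numpy as np

rng = np.random.default_rng(5)
N0, dt, N = 2.0, 1e-3, 200_000
n = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=N)   # variance N0/(2 dt)

R = [np.mean(n * n)] + [np.mean(n[:-k] * n[k:]) for k in (1, 2, 3)]
print(np.array(R) * dt)         # ~[N0/2, 0, 0, 0]: samples are uncorrelated

G = np.mean(np.abs(np.fft.fft(n) * dt)**2) / (N * dt)  # average PSD level
print(G, N0 / 2)                                       # ~1.0 each
```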


