# 3.10 Estimation theory: problems  (Page 3/3)


In this section, we questioned the existence of an efficient estimator for signal parameters. We found in the succeeding example that an unbiased efficient estimator exists for the signal amplitude. Can a nonlinearly represented parameter, such as time delay, have an efficient estimator?

Simplify the condition for the existence of an efficient estimator by assuming it to be unbiased. Note carefully the dimensions of the matrices involved.

Show that the only solution in this case occurs when the signal depends "linearly" on the parameter vector.

In Poisson problems, the number of events $n$ occurring in the interval $\left[0, T\right)$ is governed by the probability distribution (see The Poisson Process ) $P(n)=\frac{(\lambda T)^{n}}{n!}e^{-\lambda T}$ where $\lambda$ is the average rate at which events occur.

What is the maximum likelihood estimate of average rate?

Does this estimate satisfy the Cramér-Rao bound?
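Setting the derivative of the log likelihood $\ln P(n) = n\ln(\lambda T) - \ln n! - \lambda T$ to zero yields the standard answer $\hat{\lambda} = n/T$. The following sketch checks this estimate by simulation, generating a Poisson process from its exponential interarrival times; the rate, interval length, and seed are illustrative assumptions, not values from the text.

```python
import random

def simulate_poisson_count(rate, T, rng):
    """Count events of a rate-`rate` Poisson process on [0, T)
    by accumulating exponential interarrival times."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t >= T:
            return n
        n += 1

rng = random.Random(0)
rate, T = 4.0, 1000.0
n = simulate_poisson_count(rate, T, rng)
rate_hat = n / T   # maximum likelihood estimate of the average rate
print(rate_hat)    # close to the true rate of 4.0
```

Because $\hat{\lambda}$ has variance $\lambda/T$, a long interval makes the estimate tight; the Cramér-Rao comparison in the problem asks whether this variance meets the bound.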

In the "classic" radar problem, not only is the time of arrival of the radar pulse unknown but also the amplitude. In this problem, we seek methods of simultaneously estimating these parameters. The received signal $r(l)$ is of the form $r(l)=\theta_{1}s(l-\theta_{2})+n(l)$ where $\theta_{1}$ is Gaussian with zero mean and variance $\sigma_{1}^{2}$ and $\theta_{2}$ is uniformly distributed over the observation interval. Find the receiver that computes the maximum a posteriori estimates of $\theta_{1}$ and $\theta_{2}$ jointly. Draw a block diagram of this receiver and interpret its structure.

We state without derivation the Cramér-Rao bound for estimates of signal delay (see this equation ).

The parameter $\tau$ is the delay of the signal $s(\cdot)$ observed in additive, white Gaussian noise: $r(l)=s(l-\tau)+n(l)$ , $l\in \{0, \ldots, L-1\}$ . Derive the Cramér-Rao bound for this problem.

In Time-delay Estimation , this bound is claimed to be given by $\frac{\sigma_{n}^{2}}{E\beta^{2}}$ , where $\beta^{2}$ is the mean-squared bandwidth. Derive this result from your general formula. Does the bound make sense for all values of signal-to-noise ratio $\frac{E}{\sigma_{n}^{2}}$ ?
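A minimal numeric sketch of the discrete-time version: for a known pulse in white noise, the Fisher information for the delay reduces to $\frac{1}{\sigma_n^2}\sum_l \dot{s}(l)^2$, and factoring the derivative energy as $E\beta^2$ recovers the bound $\frac{\sigma_n^2}{E\beta^2}$. The Gaussian pulse, finite-difference derivative, and normalized (radian²) bandwidth units below are illustrative assumptions, not part of the text.

```python
import math

# Sampled Gaussian pulse s(l); the delay enters as s(l - tau).
L, sigma_n2 = 256, 0.1
s = [math.exp(-0.5 * ((l - L / 2) / 8.0) ** 2) for l in range(L)]

E = sum(v * v for v in s)                     # signal energy
sdot = [(s[l + 1] - s[l - 1]) / 2.0           # central-difference ds/dl
        for l in range(1, L - 1)]
fisher = sum(v * v for v in sdot) / sigma_n2  # Fisher information for tau
crb = 1.0 / fisher                            # lower bound on var(tau_hat)
beta2 = sum(v * v for v in sdot) / E          # mean-squared bandwidth
print(crb, sigma_n2 / (E * beta2))            # the two forms agree
```

The question about small signal-to-noise ratio remains: this local bound ignores the large ambiguity errors that dominate when $E/\sigma_n^2$ is small.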

Using optimal detection theory, derive the expression (see Time-Delay Estimation ) for the probability of error incurred when trying to distinguish between a delay of $\tau$ and a delay of $\tau+\Delta$ . Consistent with the problem posed for the Cramér-Rao bound, assume the delayed signals are observed in additive, white Gaussian noise.

In formulating detection problems, the signal as well as the noise are sometimes modeled as Gaussian processes. Let's explore what differences arise in the Cramér-Rao bound derived when the signal is deterministic. Assume that the signal contains unknown parameters $\theta$ , that it is statistically independent of the noise, and that the noise covariance matrix is known.

What forms do the conditional densities of the observations take under the two assumptions? What are the two covariance matrices?

Assuming the stochastic signal model, show that each element of the Fisher information matrix has the form $F_{i, j}=\frac{1}{2}\mathrm{tr}\left(K^{-1}\frac{\partial K}{\partial \theta_{i}}K^{-1}\frac{\partial K}{\partial \theta_{j}}\right)$ where $K$ denotes the covariance matrix of the observations. Make this expression more explicit by assuming the noise component has no unknown parameters.

Compare the stochastic and deterministic bounds, the latter given by this equation , when the unknown signal parameters are amplitude and delay. Assume the noise covariance matrix equals $\sigma_{n}^{2}I$ . Do these bounds have similar dependence on signal-to-noise ratio?

The histogram probability density estimator is a special case of a more general class of estimators known as kernel estimators . $\hat{p}_{r}(x)=\frac{1}{L}\sum_{l=0}^{L-1} k(x-r(l))$ Here, the kernel $k(\cdot)$ is usually taken to be a density itself.

What is the kernel for the histogram estimator?

Interpret the kernel estimator in signal processing terminology. Predict what the most time-consuming computation of this estimate might be. Why?

Show that the sample average equals the expected value of a random variable having the density $\hat{p}_{r}(x)$ regardless of the choice of kernel.
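A quick numeric check of the last claim, sketched with a zero-mean Gaussian kernel (the kernel choice, bandwidth, and sample values are illustrative assumptions): integrating $x\,\hat{p}_r(x)$ over a wide grid reproduces the sample average.

```python
import math

def kernel_estimate(x, samples, h=0.5):
    """Kernel density estimate with a zero-mean Gaussian kernel of width h."""
    c = 1.0 / (h * math.sqrt(2 * math.pi))
    return sum(c * math.exp(-0.5 * ((x - r) / h) ** 2)
               for r in samples) / len(samples)

samples = [-1.2, 0.3, 0.7, 2.1, -0.4]
sample_mean = sum(samples) / len(samples)

# Numerically integrate x * p_hat(x) over a grid covering the support.
dx = 0.01
xs = [k * dx for k in range(-1000, 1001)]
est_mean = sum(x * kernel_estimate(x, samples) * dx for x in xs)
print(sample_mean, est_mean)   # agree to within the grid error
```

The agreement follows because $\hat{p}_r$ is a mixture of kernels centered at the samples; when the kernel itself has zero mean, the mixture mean is exactly the sample average.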

Random variables can be generated quite easily if the probability distribution function is "nice." Let $X$ be a random variable having distribution function $P_{X}(x)$ .

Show that the random variable $U=P_{X}(X)$ is uniformly distributed over $\left(0, 1\right)$ .

Based on this result, how would you generate a random variable having a specific density with a uniform random variable generator, which is commonly supplied with most computer and calculator systems?

How would you generate random variables having the hyperbolic secant density $p_{X}(x)=\frac{1}{2}\mathrm{sech\,}\left(\frac{\pi x}{2}\right)$ ?

Why is the Gaussian not in the class of "nice" probability distribution functions? Despite this fact, the Gaussian and other similarly unfriendly random variables can be generated using tabulated rather than analytic forms for the distribution function.
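The inverse-transform recipe these parts develop can be sketched for the hyperbolic secant case, whose distribution function does have a closed form, $P_X(x) = \frac{2}{\pi}\arctan\left(e^{\pi x/2}\right)$, with inverse $x = \frac{2}{\pi}\ln\tan\left(\frac{\pi u}{2}\right)$. The sample count and seed below are illustrative assumptions.

```python
import math
import random

def sech_sample(rng):
    """Inverse-transform sample from p(x) = (1/2) sech(pi x / 2).
    CDF: P(x) = (2/pi) * atan(exp(pi x / 2)); solve P(x) = u for x."""
    u = rng.random()   # U uniform on (0, 1)
    return (2.0 / math.pi) * math.log(math.tan(math.pi * u / 2.0))

rng = random.Random(1)
xs = [sech_sample(rng) for _ in range(20000)]
below_zero = sum(x < 0 for x in xs) / len(xs)
print(below_zero)   # near 0.5: the density is symmetric about zero
```

The Gaussian resists this treatment precisely because its distribution function has no closed-form inverse, which is why tabulated (or specialized, e.g. Box-Muller-style) methods are used instead.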
