Expanding the dot product, $r^T{s}_{i}=\sum_{l=0}^{L-1} r(l){s}_{i}(l)$, another signal processing interpretation emerges. The dot product now describes a finite impulse response (FIR) filtering operation evaluated at a specific index. To demonstrate this interpretation, let $h(l)$ be the unit-sample response of a linear, shift-invariant filter, where $h(l)=0$ for $l< 0$ and $l\ge L$. Letting $r(l)$ be the filter's input sequence, the convolution sum expresses the output. $$r(k)\ast h(k)=\sum_{l=k-(L-1)}^{k} r(l)h(k-l)$$ Letting $k=L-1$, the index at which the unit-sample response's last value overlaps the input's value at the origin, we have $$\left.r(k)\ast h(k)\right|_{k=L-1}=\sum_{l=0}^{L-1} r(l)h(L-1-l)$$ If we set the unit-sample response equal to the index-reversed, then delayed, signal, $h(l)={s}_{i}(L-1-l)$, we have $$\left.r(k)\ast {s}_{i}(L-1-k)\right|_{k=L-1}=\sum_{l=0}^{L-1} r(l){s}_{i}(l)$$ which equals the observation-dependent component of the optimal detector's sufficient statistic. The accompanying figure depicts these computations graphically.
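This equivalence is easy to verify numerically: convolving the observations with the index-reversed signal and sampling the output at $k=L-1$ recovers the dot product. A minimal NumPy sketch (the particular signal and observation values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8
s_i = rng.standard_normal(L)   # an arbitrary length-L signal s_i(l)
r = rng.standard_normal(L)     # the observation sequence r(l)

# Direct dot product: sum_{l=0}^{L-1} r(l) s_i(l)
dot = np.dot(r, s_i)

# Matched filter: unit-sample response is the index-reversed signal,
# h(l) = s_i(L-1-l).
h = s_i[::-1]
filtered = np.convolve(r, h)   # full convolution, length 2L-1

# Sampling the filter output at k = L-1 yields the dot product.
assert np.isclose(filtered[L - 1], dot)
```

The index $k=L-1$ is the first output sample at which the entire observation sequence lies within the filter's memory, which is exactly when the convolution sum reduces to $\sum_{l=0}^{L-1} r(l)s_i(l)$.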
The sufficient statistic for the ${i}^{\mathrm{th}}$ signal is thus expressed in signal processing notation as $\left.r(k)\ast {s}_{i}(L-1-k)\right|_{k=L-1}-\frac{{E}_{i}}{2}$. The filtering term is called a matched filter because the observations are passed through a filter whose unit-sample response "matches" that of the signal being sought. We sample the matched filter's output at the precise moment when all of the observations fall within the filter's memory and then adjust this value by half the signal energy. The adjusted values for the two assumed signals are subtracted and compared to a threshold.
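The complete detector described above can be sketched in a few lines of NumPy. The function name, the sinusoidal example signals, and the noise level below are illustrative choices, not part of the original development:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 16
s0 = np.sin(2 * np.pi * np.arange(L) / L)   # example signal under hypothesis 0
s1 = -s0                                    # example signal under hypothesis 1
sigma = 0.5                                 # noise standard deviation

# A noisy observation; here signal 1 is actually transmitted.
r = s1 + sigma * rng.standard_normal(L)

def sufficient_statistic(r, s):
    """Matched-filter output sampled at k = L-1, adjusted by half the
    signal energy E = sum s(l)^2."""
    L = len(s)
    matched = np.convolve(r, s[::-1])[L - 1]   # equals sum_l r(l) s(l)
    return matched - np.sum(s**2) / 2

# Compute the adjusted value for each assumed signal and compare:
# deciding "1" when the statistic for s1 exceeds that for s0 amounts to
# subtracting the two and comparing the difference to a zero threshold.
decision = int(sufficient_statistic(r, s1) > sufficient_statistic(r, s0))
```

Because both statistics use the same observations, their difference depends only on the difference signal $s_1 - s_0$ and the energy correction, which is what makes the energy adjustment essential for signals of unequal energy.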
To compute the performance probabilities, the expressions should be simplified in the ways discussed in the hypothesis testing sections. As the energy terms are known a priori, they can be incorporated into the threshold.
Note that the only signal-related quantity affecting this performance probability (and all of the others) is the ratio of energy in the difference signal to the noise variance. The larger this ratio, the better (smaller) the performance probabilities become. Note that the details of the signal waveforms do not greatly affect the energy of the difference signal. For example, consider the case where the two signal energies are equal (${E}_{0}={E}_{1}=E$); the energy of the difference signal is given by $2E-2\sum_l {s}_{0}(l){s}_{1}(l)$. The largest value of this energy occurs when the signals are negatives of each other, with the difference-signal energy equaling $4E$. Thus, equal-energy but opposite-signed signals such as sine waves, square waves, Bessel functions, etc., all yield exactly the same performance levels. The essential signal properties that do yield good performance values are elucidated by an alternate interpretation. The term $\sum_l ({s}_{1}(l)-{s}_{0}(l))^{2}$ equals $\|{s}_{1}-{s}_{0}\|^{2}$, the squared $L^{2}$ norm of the difference signal. Geometrically, the difference-signal energy is the same quantity as the square of the Euclidean distance between the two signals. In these terms, a larger distance between the two signals means better performance.
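The waveform-independence of this result can be checked directly: for any equal-energy antipodal pair, the difference-signal energy comes out to $4E$ regardless of the waveform's shape. A short sketch (the sine and square waveforms are assumed examples, normalized to a common energy $E$):

```python
import numpy as np

def diff_energy(s0, s1):
    """Squared Euclidean distance ||s1 - s0||^2, i.e. the energy of the
    difference signal."""
    return np.sum((s1 - s0) ** 2)

def normalize(s, E):
    """Scale a waveform so that its energy sum s(l)^2 equals E."""
    return s * np.sqrt(E / np.sum(s**2))

L = 32
n = np.arange(L)
E = 1.0

sine = normalize(np.sin(2 * np.pi * 3 * n / L), E)
square = normalize(np.where(n < L // 2, 1.0, -1.0), E)

# For antipodal (opposite-signed) pairs, the difference-signal energy is
# 2E - 2 sum s0(l) s1(l) = 2E + 2E = 4E, whatever the waveform shape.
assert np.isclose(diff_energy(sine, -sine), 4 * E)
assert np.isclose(diff_energy(square, -square), 4 * E)
```

Both pairs achieve the maximal squared distance $4E$, illustrating why the detailed waveform shape is irrelevant once the energy and the sign relationship are fixed.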