
$$p(y \mid \varphi) = \frac{1}{\pi^N \det(C)}\,\exp\left(-\,y^H C^{-1} y\right)$$

Active sonar likelihood function

We will find that the natural logarithm of the measurement likelihood ratio simplifies the detection expression considerably:

$$\log\frac{L(y \mid h)}{L(y \mid \varphi)} = \log\big(\det(C)\big) - \log\big(\det(C + C_r)\big) + y^H\left(C^{-1} - (C_r + C)^{-1}\right)y$$

The determinant of $C$ is

$$\det(C) = \det\!\big(N_0 I_{L - D_{\min}}\big)\det(R) = N_0^{\,L - D_{\min}}\det(R)$$

It is convenient to partition $C + C_r$ into sub-matrices compatible with the echo template $w$.

$$C + C_r = \begin{bmatrix} R & 0 & 0 & 0 \\ 0 & N_0 I_{D(h) - D_{\min}} & 0 & 0 \\ 0 & 0 & N_0 I_N + \sigma_{A(h)}^2\, w w^H & 0 \\ 0 & 0 & 0 & N_0 I_{L - D(h) - N} \end{bmatrix}$$

The determinant of $C + C_r$ is

$$\det(C + C_r) = \det(R)\,\det\!\big(N_0 I_{D(h) - D_{\min}}\big)\,\det\!\big(N_0 I_N + \sigma_{A(h)}^2\, w w^H\big)\,\det\!\big(N_0 I_{L - D(h) - N}\big)$$

Which becomes

$$\det(C + C_r) = N_0^{\,L - D_{\min} - N}\,\det\!\big(N_0 I_N + \sigma_{A(h)}^2\, w w^H\big)\,\det(R)$$

Using Sylvester’s determinant theorem:

$$\det(I + AB) = \det(I + BA)$$

We obtain:

$$\det(C + C_r) = N_0^{\,L - D_{\min}}\left(1 + \frac{\sigma_{A(h)}^2\, w^H w}{N_0}\right)\det(R)$$

Since the template is unit norm ($w^H w = 1$), this equals

$$\det(C + C_r) = N_0^{\,L - D_{\min}}\left(1 + \frac{\sigma_{A(h)}^2}{N_0}\right)\det(R)$$
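As a quick numeric sanity check of this determinant step (a minimal numpy sketch; the dimensions and parameter values are arbitrary assumptions), Sylvester's theorem predicts $\det(N_0 I_N + \sigma_{A(h)}^2 w w^H) = N_0^N (1 + \sigma_{A(h)}^2/N_0)$ for a unit-norm $w$:

```python
import numpy as np

# Check: det(N0*I_N + sigma2 * w w^H) = N0**N * (1 + sigma2/N0) when w^H w = 1,
# which is Sylvester's determinant theorem applied to the rank-one update.
rng = np.random.default_rng(0)
N, N0, sigma2 = 8, 0.5, 3.0
w = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w /= np.linalg.norm(w)  # unit-norm echo template

lhs = np.linalg.det(N0 * np.eye(N) + sigma2 * np.outer(w, w.conj()))
rhs = N0**N * (1.0 + sigma2 / N0)
assert np.isclose(lhs, rhs)
```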

Now we partition the observed ping history $y$ into

$$y = \begin{bmatrix} y_R \\ y_{N1} \\ y_h \\ y_{N2} \end{bmatrix}$$

so that:

$$C^{-1} - (C_r + C)^{-1} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{N_0} I_N - \big(N_0 I_N + \sigma_{A(h)}^2\, w w^H\big)^{-1} & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

The Woodbury matrix identity states:

$$(A + UCV)^{-1} = A^{-1} - A^{-1} U \left(C^{-1} + V A^{-1} U\right)^{-1} V A^{-1}$$

So that

$$\big(N_0 I_N + \sigma_{A(h)}^2\, w w^H\big)^{-1} = \frac{1}{N_0} I_N - \frac{1}{N_0^2}\,\frac{w w^H}{\frac{1}{N_0} + \frac{1}{\sigma_{A(h)}^2}}$$

And hence

$$y^H\left(C^{-1} - (C_r + C)^{-1}\right)y = \frac{1}{N_0}\,\frac{\sigma_{A(h)}^2 / N_0}{1 + \sigma_{A(h)}^2 / N_0}\,\left| w^H y_h \right|^2$$
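The rank-one inverse used in this step is easy to confirm numerically (a minimal sketch under the same unit-norm assumption on $w$; the parameter values are arbitrary):

```python
import numpy as np

# Verify the Woodbury rank-one inverse:
# (N0*I + sigma2 * w w^H)^(-1) = I/N0 - (w w^H / N0^2) / (1/N0 + 1/sigma2)
rng = np.random.default_rng(1)
N, N0, sigma2 = 8, 0.5, 3.0
w = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w /= np.linalg.norm(w)  # unit-norm echo template

M = N0 * np.eye(N) + sigma2 * np.outer(w, w.conj())
direct = np.linalg.inv(M)
woodbury = np.eye(N) / N0 - np.outer(w, w.conj()) / (N0**2 * (1 / N0 + 1 / sigma2))
assert np.allclose(direct, woodbury)
```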

So that the log-likelihood ratio becomes

$$\log\frac{L(y \mid h)}{L(y \mid \varphi)} = -\log\left(1 + \frac{\sigma_{A(h)}^2}{N_0}\right) + \frac{1}{N_0}\,\frac{\sigma_{A(h)}^2 / N_0}{1 + \sigma_{A(h)}^2 / N_0}\,\left| w^H y_h \right|^2$$

Absorbing the noise variance into the observation vector yields, for the log-likelihood ratio,

$$\log\frac{L(y \mid h)}{L(y \mid \varphi)} = -\log\left(1 + \frac{\sigma_{A(h)}^2}{N_0}\right) + \frac{\sigma_{A(h)}^2 / N_0}{1 + \sigma_{A(h)}^2 / N_0}\,\left| w^H \frac{y_h}{\sqrt{N_0}} \right|^2$$

The log-likelihood ratio shows that the assumed echo shape $w$, the assumed signal energy $\sigma_{A(h)}^2$, the ambient noise density $N_0$, and the expected location in time of the echo (needed to select $y_h$) are the parameters required to evaluate the log-likelihood function. The assumed echo shape and energy can vary by hypothesis $h$, but the noise properties $N_0$ have been assumed to be the same for every decision hypothesis $h$.
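A minimal sketch of how this expression might be evaluated for a single hypothesis (the function name and arguments are hypothetical; a unit-norm template $w$, a hypothesized echo start sample $D(h)$, and known $\sigma_{A(h)}^2$ and $N_0$ are assumed):

```python
import numpy as np

def log_likelihood_ratio(y, w, delay, sigma2, N0):
    """Evaluate log L(y|h)/L(y|phi) for one hypothesis h.

    y      : complex ping history (length L)
    w      : unit-norm complex echo template (length N)
    delay  : hypothesized echo start sample D(h)
    sigma2 : assumed echo energy sigma_A(h)^2
    N0     : ambient noise density
    """
    y_h = y[delay:delay + len(w)]      # echo-bearing segment of the ping history
    enr = sigma2 / N0                  # Energy to Noise Density Ratio
    mf = np.abs(np.vdot(w, y_h / np.sqrt(N0))) ** 2  # pre-whitened matched filter
    return -np.log1p(enr) + (enr / (1.0 + enr)) * mf
```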

Signal to noise ratio of the detector

The magnitude-squared term, $\left| w^H y_h / \sqrt{N_0} \right|^2$, is a matched filter: the observations are cross-correlated with the signal template. The observations are normalized by the noise variance before cross-correlation, which is a form of pre-whitening. The term $\sigma_{A(h)}^2 / N_0$ is the Energy to Noise Density Ratio (ENR) of the detection problem. Note that the signal-to-noise ratio of the matched filter output can be written as:

$$\frac{E\left\{\left| w^H \frac{A w}{\sqrt{N_0}} \right|^2\right\}}{E\left\{\left| w^H \frac{q}{\sqrt{N_0}} \right|^2\right\}} = \frac{\sigma_{A(h)}^2 / N_0}{E\left\{\frac{w^H q q^H w}{N_0}\right\}} = \frac{\sigma_{A(h)}^2 / N_0}{w^H I w} = \frac{\sigma_{A(h)}^2}{N_0} = \mathrm{ENR}$$

This is a general result for matched filters: a matched filter’s SNR equals the Energy to Noise Density Ratio of the problem. The energy of the signal being detected is determined by its average amplitude and its duration. The SNR at the matched filter output is independent of the details of the waveform being detected; only the signal energy and the noise spectral density determine the matched filter response.

The likelihood function depends only on the ENR as well.
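A small Monte Carlo experiment illustrates the claim (a sketch with arbitrary assumed parameters): the ratio of the matched filter's mean output power with signal present to its mean output power under noise alone approaches the ENR.

```python
import numpy as np

rng = np.random.default_rng(2)
N, N0, sigma2, trials = 64, 1.0, 4.0, 20_000

w = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w /= np.linalg.norm(w)  # unit-norm template, so w^H (A w) = A

# Complex Gaussian amplitudes A with E{|A|^2} = sigma2; noise q with E{q q^H} = N0*I.
A = np.sqrt(sigma2 / 2) * (rng.standard_normal(trials) + 1j * rng.standard_normal(trials))
q = np.sqrt(N0 / 2) * (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N)))

signal_power = np.mean(np.abs(A) ** 2 / N0)            # E{|w^H (A w)|^2} / N0
noise_power = np.mean(np.abs(q @ w.conj()) ** 2 / N0)  # E{|w^H q|^2} / N0

print(signal_power / noise_power, sigma2 / N0)  # both approximate the ENR
```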

A-priori assumptions

Before the ping history is received, we assess the probability of each hypothesis, $p(h)$ and $p(\varphi)$. The a-priori information may have come from previous pings, or may be probabilities assigned by the sonar system to begin a target search.

Using the logarithm of probability density function ratios simplifies the expressions:

$$\log\frac{p(h \mid y)}{p(\varphi \mid y)} = \log\frac{L(y \mid h)}{L(y \mid \varphi)} + \log\frac{p^{-}(h)}{p^{-}(\varphi)}$$

Using the likelihood ratio notation,

$$\log\Lambda(h \mid y) = \log\frac{L(y \mid h)}{L(y \mid \varphi)} + \log\Lambda^{-}(h)$$

Once we compute $\log\Lambda(h \mid y)$, we can declare that a target is present with confidence $p_T$ by computing:

Target Present if:

$$\int_{h \in H^{\tau}} \Lambda(h \mid y)\, dh > \frac{1 - p_T}{p_T}$$

Because the target hypothesis space contains many hypotheses, this detection problem can be considered a composite hypothesis test.
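A sketch of the Bayes test over a discrete grid of delay hypotheses (replacing the integral with a sum; the prior array and the hypothetical log_likelihood_ratio helper from the earlier sketch are assumptions):

```python
import numpy as np

def bayes_target_present(y, w, delays, priors, sigma2, N0, p_T):
    """Declare a target present with confidence p_T by summing the posterior
    likelihood ratios Lambda(h|y) over a discrete set of delay hypotheses.

    priors : array of prior probabilities p(h), one per delay; the remaining
             probability mass 1 - sum(priors) is assigned to p(phi).
    """
    log_prior_ratio = np.log(priors) - np.log(1.0 - priors.sum())  # log Lambda^-(h)
    log_post = np.array([log_likelihood_ratio(y, w, d, sigma2, N0)
                         for d in delays]) + log_prior_ratio
    return np.exp(log_post).sum() > (1.0 - p_T) / p_T
```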

An alternative approach to detecting a target at an unknown range is to use the target range hypothesis with the greatest measurement likelihood as the detection statistic [Kay]. This is referred to as the Generalized Likelihood Ratio Test (GLRT).

Target Present if:

$$\max_{h}\; \log\frac{L(y \mid h)}{L(y \mid \varphi)} > \gamma$$

The GLRT approach is often easier to implement than the Bayes detection approach, because one avoids the integration/summation over a-priori probabilities.
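A corresponding GLRT sketch (again leaning on the hypothetical log_likelihood_ratio helper; in practice the threshold gamma would be set from a desired false alarm probability):

```python
def glrt_target_present(y, w, delays, sigma2, N0, gamma):
    """GLRT: take the maximum log-likelihood ratio over the range (delay)
    hypotheses and compare it to the threshold gamma."""
    return max(log_likelihood_ratio(y, w, d, sigma2, N0) for d in delays) > gamma
```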





Source:  OpenStax, Signal and information processing for sonar. OpenStax CNX. Dec 04, 2007 Download for free at http://cnx.org/content/col10422/1.5
