This module discusses secondary and experimental detection methods used in a real-time laugh track removal system. It is part of a larger series on the implementation of that system.


This module discusses some of the more exotic detection methods considered for our laugh track removal system, in particular a method that uses polynomial curve fitting in conjunction with support vector machines.

Support vector machine (SVM)/polynomial curve fitting

As previously discussed in the Anatomy of a Laugh Track module, laugh tracks have a characteristic shape in the time domain. One detection method we considered takes advantage of this fact by combining support vector machines with polynomial curve fitting. Given the distinctive shape of a laugh track, one ought to be able to fit the same polynomial curve consistently onto a signal representing a laugh. That curve can then be completely characterized by its coefficients. By building a database of positive and negative examples of polynomial coefficients and training a support vector machine on this data, we can produce a fast detection scheme for laughs.

What are support vector machines?

Support vector machines are a relatively new technique for partitioning high-dimensional datasets into classes. At the simplest level, an SVM divides a high-dimensional dataset using hyperplanes. These divisions can then be completely characterized in terms of the data points, called support vectors, that are closest to the dividing hyperplane. In this way an enormously complex dataset can be partitioned, and, more importantly, the partitions are easy to describe. Once a training dataset is partitioned, new points can be classified by checking which partition each point falls into. Further flexibility can be introduced by switching from hyperplanes to other surfaces for partitioning the dataset; common kernels include polynomials, sigmoids, and radial basis functions. More information about support vector machines can be found at www.kernel-machines.org .
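The idea above can be illustrated with a minimal sketch using scikit-learn's SVM implementation (the toy two-cluster data and parameter choices here are illustrative assumptions, not part of the original system):

```python
import numpy as np
from sklearn.svm import SVC

# Two toy classes in 2-D: points clustered near (0, 0) and near (3, 3).
rng = np.random.default_rng(0)
class_a = rng.normal(loc=0.0, scale=0.3, size=(20, 2))
class_b = rng.normal(loc=3.0, scale=0.3, size=(20, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 20 + [1] * 20)

# A linear SVM separates the classes with a maximum-margin hyperplane.
clf = SVC(kernel="linear")
clf.fit(X, y)

# Only the support vectors (points nearest the hyperplane) define the
# boundary, so the partition is described by far fewer than 40 points.
print(len(clf.support_vectors_))
print(clf.predict([[0.1, -0.2], [2.9, 3.1]]))  # -> [0 1]
```

New points are classified simply by checking which side of the hyperplane they fall on, which is what makes SVM classification fast once training is done.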

Polynomial curve fitting

Not to be confused with the polynomial kernel an SVM may use, polynomial curve fitting is how we characterized the shape of a laugh track. After experimenting with a handful of candidate curves, we chose a degree-9 polynomial as the best curve for fitting, based on its low error, quick fitting, and high dimension. The high dimension is important because the power of SVMs becomes most readily apparent in high-dimensional space. In this case, our data consisted of 11-dimensional vectors: 9 dimensions from the coefficients of the fitted polynomial, plus the duration of the audio segment in question and the error of the fitted curve.
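A feature vector of this kind can be sketched with NumPy's polynomial fitting. The function name, the smoothed-envelope input, and the RMS error measure are assumptions for illustration (note that `np.polyfit` with degree 9 returns 10 coefficients, so this sketch yields a 12-dimensional vector rather than the 11 described above):

```python
import numpy as np

def laugh_features(envelope, duration, degree=9):
    """Fit a degree-`degree` polynomial to an audio envelope and return
    [coefficients..., duration, fit error] as one feature vector.

    `envelope` is a 1-D array (e.g. a smoothed amplitude envelope).
    """
    t = np.linspace(0.0, 1.0, len(envelope))   # normalized time axis
    coeffs = np.polyfit(t, envelope, degree)   # highest power first
    fitted = np.polyval(coeffs, t)
    error = float(np.sqrt(np.mean((envelope - fitted) ** 2)))  # RMS residual
    return np.concatenate([coeffs, [duration, error]])

# Example: a hump-shaped envelope roughly like a laugh's rise and decay.
t = np.linspace(0.0, 1.0, 200)
envelope = np.sin(np.pi * t) ** 2
features = laugh_features(envelope, duration=1.2)
print(features.shape)  # (12,): 10 coefficients + duration + error
```

Because every segment is reduced to the same short, fixed-length vector regardless of its sample count, segments of different lengths become directly comparable inputs for the SVM.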


In order to detect laugh tracks, we first divided the audio stream into smaller segments, each of which we approximated with a polynomial curve and stored in a database for use in SVM training. Observation revealed that the shortest laugh track is approximately 1 second long. To generate our dataset, we therefore considered audio segments of at least 1 second in duration, spaced slightly apart (0.25 to 0.33 seconds). Having fit a curve to the 1-second segment, we then examined a slightly longer segment. If the longer segment's approximation error was lower, we considered an even longer segment, continuing until an optimal segment was found. The coefficients of the fitted polynomial, the duration, the error, and the classification (laugh, not laugh) were then stored in the dataset.
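The segment-growing search described above can be sketched as follows. The growth step, the 5-second cap, and the simple "stop when error stops improving" rule are assumptions for illustration, not the authors' code:

```python
import numpy as np

def best_segment(envelope, start, sample_rate, min_dur=1.0, step=0.1,
                 max_dur=5.0, degree=9):
    """Grow a candidate segment from `start` (in samples) while the
    polynomial fit error keeps improving; return (duration, coeffs, error)."""
    def fit(length):
        seg = envelope[start:start + length]
        t = np.linspace(0.0, 1.0, len(seg))
        coeffs = np.polyfit(t, seg, degree)
        err = float(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        return coeffs, err

    length = int(min_dur * sample_rate)          # begin at 1 second
    coeffs, err = fit(length)
    limit = min(len(envelope) - start, int(max_dur * sample_rate))
    while length + int(step * sample_rate) <= limit:
        longer = length + int(step * sample_rate)
        c2, e2 = fit(longer)
        if e2 >= err:                            # error stopped improving
            break
        length, coeffs, err = longer, c2, e2
    return length / sample_rate, coeffs, err

# Example: an envelope with a hump over the first 1.5 seconds.
rate = 100
t = np.arange(0, 3.0, 1.0 / rate)
env = np.where(t < 1.5, np.sin(np.pi * t / 1.5) ** 2, 0.02)
dur, coeffs, err = best_segment(env, start=0, sample_rate=rate)
```

In the real system this search would be repeated at candidate start points spaced 0.25 to 0.33 seconds apart, with each resulting (coefficients, duration, error) triple labeled and stored.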

Having built a sufficiently large training dataset, we then trained a support vector machine (LIBSVM). Experimentation with parameters showed that the radial basis kernel produced the best results. New audio segments were then classified according to the partitions generated.
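A minimal sketch of this training step, using scikit-learn's `SVC` (which wraps LIBSVM) with the radial basis kernel. The synthetic 12-dimensional clusters stand in for the hand-labeled feature vectors; all data and parameter values here are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for the hand-labeled dataset: feature vectors of
# polynomial coefficients plus duration and error, labeled laugh (1)
# or not-laugh (0), drawn as two well-separated clusters.
rng = np.random.default_rng(1)
laughs = rng.normal(1.0, 0.2, size=(30, 12))
others = rng.normal(-1.0, 0.2, size=(30, 12))
X = np.vstack([laughs, others])
y = np.array([1] * 30 + [0] * 30)

# Train with the radial basis function kernel, as in the text.
clf = SVC(kernel="rbf", gamma="scale", C=1.0)
clf.fit(X, y)

# New audio segments are classified by the partitions generated.
queries = np.array([np.full(12, 1.0), np.full(12, -1.0)])
print(clf.predict(queries))  # -> [1 0]
```

In practice the kernel choice and parameters (`C`, `gamma`) would be tuned by experimentation, as the text notes the authors did.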

Results and analysis

Overall, the results of this detection method were reasonable, although not ideal: a 70% detection rate was achieved. Further analysis revealed several factors that limited the method's effectiveness. First, the large ratio of negative to positive examples heavily skewed the dataset; experiments showed that, as a result, performance was exceedingly sensitive to parameter selection. Second, a large amount of human error existed in the dataset. The training dataset had to be constructed by hand, and variability in what did and did not qualify as a laugh introduced error. Finally, it is questionable how suitable polynomial coefficients are for classifying shape. Coefficient values do not directly encode shape characteristics such as duration, slope, and amplitude, so classification along these lines may be difficult. In sum, this detection method was reasonable, but it produced worse results than other, simpler methods.

Source:  OpenStax, Elec 301 projects fall 2007. OpenStax CNX. Dec 22, 2007 Download for free at http://cnx.org/content/col10503/1.1