# 0.1 Optimizing logistic regression for a particle physics application  (Page 4/4)

The solution was to use the DaVinci cluster at Rice to generate the backgrounds, running up to 200 Pythia simulation runs at once. This process generated enough background events to accompany 12 t-tbar events, and this served as our test set. Unfortunately, the signal-to-noise ratio in the test set was so low that the logistic regression algorithm in WEKA could not be trained on it: the resulting model failed to detect any t-tbar events at all. Training logistic regression on the full test set also strained the memory capacity of the Java virtual machine it was run on, leading to frequent crashes.

To get around this, the author decided to compromise and experiment with training the logistic regression classifier on much smaller training sets with differing ratios of t-tbar events to background events. The performance of the resulting models was then tested on the cross-validation set containing 10,000 events of each type. The model generated from a training set with a ratio of t-tbar to background events of about 50 performed best. (See Table TODO.)
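The ratio sweep described above can be sketched as follows. This is a minimal illustration, not the author's actual scripts: the event counts, the ratios tried, and the `make_training_set` helper are all assumptions for the sake of the example.

```python
import random

def make_training_set(signal, background, ratio, n_signal=200, seed=0):
    """Build a training set with roughly `ratio` signal events per background event.

    `signal` and `background` are lists of feature vectors. The counts and
    ratios here are illustrative, not the values used in the project.
    """
    rng = random.Random(seed)
    n_background = max(1, round(n_signal / ratio))
    sampled = rng.sample(signal, n_signal) + rng.sample(background, n_background)
    labels = [1] * n_signal + [0] * n_background  # 1 = t-tbar, 0 = background
    return sampled, labels

# Stand-in feature vectors; a real run would load the Pythia-generated events.
signal = [[i] for i in range(1000)]
background = [[i] for i in range(5000)]

# Sweep over several signal-to-background ratios, as in the experiment above.
for ratio in (1, 10, 50, 100):
    X, y = make_training_set(signal, background, ratio)
    print(ratio, len(X), sum(y))
```

Each resulting training set would then be handed to the WEKA logistic regression classifier and scored against the cross-validation set.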

## Results

As a result of performing the above optimizations, the efficiency of the logistic regression model at analyzing the very large test set improved by more than a factor of 30 while only halving the true positive rate, as shown in Table TODO. It is important to note that the false positive rate is still much too high for top quarks to be discoverable with a data set of this size. Since we are dealing with counting statistics of independent events, the uncertainty in the background count is approximately the square root of the background count, corresponding to a standard deviation of about 13. Since our signal is only 6 t-tbar events, this means we have a signal significance of about $0.4\sigma$. Because the signal grows linearly with the amount of data while the background fluctuations grow only as its square root, significance scales as the square root of the data volume; reaching the conventional $5\sigma$ discovery threshold would therefore require around $(5/0.4)^2 \approx 150$ times as much raw data.
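The significance arithmetic above can be checked directly. The background count of 169 is inferred here from the quoted standard deviation of about 13 and is an assumption of this sketch:

```python
import math

background = 169   # inferred: sigma_B = sqrt(169) = 13, as quoted above
signal = 6         # t-tbar events in the test set

sigma_b = math.sqrt(background)
significance = signal / sigma_b      # ≈ 0.46, i.e. "about 0.4 sigma"
# Signal grows like N and background fluctuations like sqrt(N), so
# significance grows like sqrt(N); the data needed to reach the
# conventional 5-sigma discovery threshold is therefore:
needed = (5 / 0.4) ** 2              # ≈ 156, i.e. "around 150 times" the data
print(f"{significance:.2f} sigma now; need ~{needed:.0f}x the data")
```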

## Conclusion

We have demonstrated that we can optimize our use of logistic regression to exploit the characteristics of a particular particle physics data set. Existing tools like WEKA make applying machine learning to this task relatively straightforward, with no need to reinvent the wheel.

It should be noted that this project has neglected the most difficult and computationally intensive part of identifying new physics with particle detectors: modeling the performance of the particle detectors themselves. Modern particle detectors are incredibly complicated pieces of machinery and modeling their capabilities (which change often as components are upgraded) requires a measurable fraction of the planet's computing resources. (ref Grid Computing)

## Future work

Dr. Subramanian also suggested that classifier performance could be improved by combining several integer features, namely how many of each type of lepton were found in each event, into a single categorical feature, namely which lepton type was found. This makes sense because the high-level trigger eliminates all events that do not contain exactly one lepton. A simple script should be able to transform the existing data accordingly.
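A minimal sketch of that transformation, assuming each event record carries per-type lepton counts. The field names (`n_electrons`, `n_muons`) are hypothetical, not the actual column names in the data files:

```python
def lepton_type(event):
    """Collapse per-type lepton counts into one categorical feature.

    Relies on the trigger's guarantee that each event contains exactly
    one lepton, so exactly one of the counts should be 1.
    The field names here are hypothetical.
    """
    counts = {"electron": event["n_electrons"], "muon": event["n_muons"]}
    present = [name for name, n in counts.items() if n == 1]
    if len(present) != 1:
        raise ValueError("event does not contain exactly one lepton")
    return present[0]

event = {"n_electrons": 1, "n_muons": 0}
print(lepton_type(event))  # electron
```

The same pass over the data files would replace the integer count columns with this single nominal attribute before handing the set back to WEKA.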

## References

- WEKA
- Pythia
- Particle physics book
- http://www.readwriteweb.com/archives/cern_officially_unveils_its_gr.php

## Acknowledgments

The author would like to thank Dr. Paul Padley and Dr. Devika Subramanian for providing advice and training for this project, as well as Dr. Andrew Ng for his excellent and fun on-line machine learning class.

## Directions for using code

Install Pythia 8 and WEKA on your UNIX machine. The included scripts and Makefile assume that the WEKA classes are in /usr/share/java/weka.jar and that the directory containing the code and data files is located in the pythia directory. See the included README file for more details.
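As a quick sanity check of the setup, the WEKA logistic regression classifier can be invoked from a script. The snippet below only assembles and prints the command line; the ARFF file names are placeholders, and the class path matches the assumption stated above:

```python
import subprocess  # run the command with subprocess.run(cmd) once the files exist

# WEKA's command-line interface: -t is the training set, -T the test set.
cmd = [
    "java", "-cp", "/usr/share/java/weka.jar",
    "weka.classifiers.functions.Logistic",
    "-t", "train.arff",   # placeholder training file
    "-T", "test.arff",    # placeholder test file
]
print(" ".join(cmd))
```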
