Introduction to algorithms
The logic component expresses the axioms that may be used in the computation, and the control component determines the way in which deduction is applied to those axioms. This is the basis of the logic programming paradigm. In pure logic programming languages the control component is fixed, and an algorithm is specified by supplying only the logic component. The appeal of this approach is its elegant semantics: a change in the axioms produces a well-defined change in the algorithm.
- Serial, parallel or distributed: Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Such computers are sometimes called serial computers, and an algorithm designed for this environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms take advantage of computer architectures in which several processors can work on a problem at the same time, whereas distributed algorithms use multiple machines connected by a network. Parallel and distributed algorithms divide the problem into (symmetrical or asymmetrical) subproblems and collect the results back together. Their resource consumption includes not only processor cycles on each processor but also the communication overhead between processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable. Some problems have no efficient parallel algorithms and are called inherently serial problems.
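The divide-and-collect pattern described above can be sketched in a few lines. This is an illustrative sketch only: it uses Python's thread pool, which demonstrates the pattern of splitting work and combining partial results, though true CPU parallelism in Python would typically use processes rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Split the data into chunks, sum each chunk concurrently,
    then combine the partial sums (the 'collect' step)."""
    chunk = (len(data) + workers - 1) // workers
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, pieces)   # each worker sums one chunk
    return sum(partials)                   # communication/combination step

print(parallel_sum(list(range(1, 101))))   # → 5050
```

The combination step (`sum(partials)`) is exactly the communication overhead the text mentions: cheap here, but it can dominate for algorithms that exchange large intermediate results.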
- Exact or approximate: While many algorithms compute an exact solution, approximation algorithms seek a solution that is close to the true one. The approximation may use either a deterministic or a random strategy. Such algorithms have practical value for many hard problems.
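As a concrete illustration of a deterministic approximation strategy, here is a sketch of the classic 2-approximation for vertex cover (not mentioned in the text, but a standard example): repeatedly take an uncovered edge and add both endpoints. The result is guaranteed to be a valid cover of at most twice the optimal size.

```python
def approx_vertex_cover(edges):
    """2-approximation: pick an uncovered edge, add both endpoints
    to the cover, repeat until every edge is covered."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Path graph 1-2-3-4: the optimal cover {2, 3} has size 2;
# the approximation returns {1, 2, 3, 4}, within the factor-2 bound.
print(approx_vertex_cover([(1, 2), (2, 3), (3, 4)]))
```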
Classification by design paradigm
Another way of classifying algorithms is by their design methodology or paradigm. There are a number of distinct paradigms, and each category includes many different types of algorithms. Some commonly found paradigms include:
- Divide and conquer. A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively), until the instances are small enough to solve easily. Merge sort is one example: the data is divided into segments, each segment is sorted, and the sorted segments are merged in the conquer phase to sort the entire data set. A simpler variant, called decrease and conquer, reduces the problem to a single smaller subproblem and uses that subproblem's solution to solve the larger problem. Because divide and conquer produces multiple subproblems, its conquer stage is more complex than that of decrease and conquer. An example of a decrease and conquer algorithm is the binary search algorithm.
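Both variants named above can be sketched side by side: merge sort (divide and conquer, two subproblems plus a merge) and binary search (decrease and conquer, one subproblem and no combination step).

```python
def merge_sort(a):
    """Divide and conquer: split, sort each half recursively, merge."""
    if len(a) <= 1:                       # small enough to solve directly
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0               # conquer phase: merge sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def binary_search(a, target):
    """Decrease and conquer: discard half of the range at each step."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                             # not found

print(merge_sort([5, 2, 9, 1]))           # → [1, 2, 5, 9]
print(binary_search([1, 2, 5, 9], 5))     # → 2
```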
- Dynamic programming. When a problem shows optimal substructure, meaning the optimal solution can be constructed from optimal solutions to subproblems, and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions that have already been computed. For example, the shortest path to a goal from a vertex in a weighted graph can be found using the shortest paths to the goal from all adjacent vertices. Dynamic programming and memoization go together. The main difference from divide and conquer is that subproblems are more or less independent in divide and conquer, whereas they overlap in dynamic programming; the difference from straightforward recursion is the caching, or memoization, of recursive calls. When subproblems are independent and do not repeat, memoization does not help, so dynamic programming is not a solution for every problem. By using memoization or maintaining a table of subproblems already solved, dynamic programming reduces the exponential running time of many problems to polynomial complexity.
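The exponential-to-polynomial effect described above is easiest to see on the Fibonacci recurrence (a standard textbook example, chosen here for brevity): naive recursion revisits the same subproblems exponentially often, while both DP styles — memoized recursion and a bottom-up table — solve each subproblem once.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Memoized recursion: each fib(k) is computed once and cached,
    turning the naive exponential recursion into linear time."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def fib_table(n):
    """Bottom-up table: same recurrence, filled iteratively."""
    if n < 2:
        return n
    table = [0, 1] + [0] * (n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(30), fib_table(30))  # → 832040 832040
```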
- The greedy method. A greedy algorithm is similar to a dynamic programming algorithm, but the difference is that solutions to the subproblems do not have to be known at each stage; instead, a "greedy" choice is made of whatever looks best at the moment. The greedy method extends the partial solution with the best available decision (not all feasible decisions) at each stage, based on the current local optimum and the best decisions made in previous stages. It is not exhaustive and does not give an optimal answer to many problems, but when it works it is the fastest method. The best-known greedy algorithm is Kruskal's algorithm for finding a minimum spanning tree.
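Kruskal's algorithm, named above, makes the greedy choice explicit: at each stage take the cheapest remaining edge that does not create a cycle. The sketch below uses a simple union-find structure for the cycle check.

```python
def kruskal(n, edges):
    """Kruskal's greedy MST on vertices 0..n-1.
    edges: list of (weight, u, v) tuples."""
    parent = list(range(n))

    def find(x):                          # root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, mst = 0, []
    for w, u, v in sorted(edges):         # greedy: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                      # edge joins two trees: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst

# Square 0-1-2-3-0 plus a diagonal 0-2.
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]
print(kruskal(4, edges))                  # → (6, [(0, 1, 1), (1, 2, 2), (2, 3, 3)])
```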
- Linear programming. When solving a problem using linear programming, specific inequalities involving the inputs are found, and then an attempt is made to maximize (or minimize) some linear function of the inputs. Many problems (such as the maximum flow in directed graphs) can be stated as linear programs and then solved by a 'generic' algorithm such as the simplex algorithm. A more complex variant of linear programming is integer programming, where the solution space is restricted to the integers.
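A tiny illustration of the principle behind such solvers: for a bounded feasible region, a linear objective attains its maximum at a vertex, so a two-variable LP can be solved by enumerating the feasible intersection points of the constraint lines. This is a teaching sketch, not the simplex algorithm itself, and the example problem (maximize 3x + 2y) is made up for illustration.

```python
from itertools import combinations

# Maximize 3x + 2y subject to: x + y <= 4, x <= 2, x >= 0, y >= 0.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Point where both constraints hold with equality (2x2 system)."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                       # parallel constraint lines
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

candidates = (intersect(c1, c2) for c1, c2 in combinations(constraints, 2))
feasible = [p for p in candidates
            if p and all(a * p[0] + b * p[1] <= c + 1e-9
                         for a, b, c in constraints)]
best = max(feasible, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])    # → (2.0, 2.0) 10.0
```

Vertex enumeration is exponential in general; the simplex algorithm walks between vertices intelligently instead of checking them all.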
- Reduction. This technique involves solving a difficult problem by transforming it into a better-known problem for which we (hopefully) have asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by that of the resulting reduced algorithm. For example, one selection algorithm for finding the median in an unsorted list first sorts the list (the expensive portion) and then pulls out the middle element of the sorted list (the cheap portion). This technique is also known as transform and conquer.
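The median example above can be sketched directly: the reduction to sorting costs O(n log n) and dominates, while extracting the middle element afterwards is constant time.

```python
def median_by_sorting(xs):
    """Median by reduction to sorting: sort (the expensive, well-studied
    part), then read off the middle element(s) (the cheap part)."""
    s = sorted(xs)
    mid = len(s) // 2
    if len(s) % 2:
        return s[mid]                     # odd length: single middle element
    return (s[mid - 1] + s[mid]) / 2      # even length: mean of the two middles

print(median_by_sorting([7, 1, 5, 3, 9]))  # → 5
```

Note that specialized selection algorithms (e.g. quickselect) find the median in expected linear time; the reduction trades some efficiency for simplicity.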
- Search and enumeration. Many problems (such as playing chess) can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration, and backtracking.
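Breadth-first search is one of the simplest graph exploration algorithms of the kind described above: it visits vertices in order of distance from the start, maintaining a queue of the current frontier.

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first exploration: visit vertices in order of distance
    from the start. graph maps each vertex to its neighbors."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph.get(v, []):
            if w not in visited:          # enqueue each vertex only once
                visited.add(w)
                queue.append(w)
    return order

graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs(graph, 0))                      # → [0, 1, 2, 3]
```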
- The probabilistic and heuristic paradigm. Algorithms belonging to this class fit the definition of an algorithm more loosely.
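A classic member of this class (a standard example, not taken from the text) is Monte Carlo estimation: random sampling yields an answer that is only probably close to correct, with accuracy improving as more samples are drawn.

```python
import random

def monte_carlo_pi(samples, seed=0):
    """Estimate pi by random sampling: the fraction of uniform points in
    the unit square that fall inside the quarter circle approaches pi/4."""
    rng = random.Random(seed)             # fixed seed for reproducibility
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

print(monte_carlo_pi(100_000))            # roughly 3.14
```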
Source:
OpenStax, Data structures and algorithms. OpenStax CNX. Jul 29, 2009 Download for free at http://cnx.org/content/col10765/1.1