List
the four fundamental techniques used to design an algorithm
efficiently, with a brief explanation of each.
Ans
Four General Design Techniques
We will start with a discussion of a well-known design approach that has been
missing from the tables of content of textbooks organized around algorithm
design techniques: brute force. It can be defined as a straightforward
approach to solving a problem, usually directly based on the problem's
statement and definitions of the concepts involved. Though very rarely a source
of efficient algorithms, the brute-force approach should not be overlooked as
an important algorithm design technique in view of the following. First, unlike
some of the others, this approach is applicable to a very wide variety of
problems. (In fact, it seems to be the only general approach for which it is
difficult to point out problems it cannot tackle.) In
particular, it is brute force that is used for many elementary but important
algorithmic tasks such as computing the sum of n numbers, finding
the largest element in a list, adding two matrices, etc. Second, for some
important problems (e.g., sorting, searching, matrix multiplication, string
matching), the brute-force approach yields reasonable algorithms of at least
some practical value with no limitations on instance sizes. Third, even if too
inefficient in general, a brute-force algorithm can still be a useful (and
economically sound!) choice for solving small-size instances of a problem.
Fourth, a brute-force algorithm can serve an important theoretical or
educational purpose, e.g., as the only deterministic algorithm for an NP-hard
problem or as a yardstick for more efficient alternatives for solving a
problem. Finally, no taxonomy of algorithm design techniques would be complete
without it; moreover, as we are going to see below, it happens to be one of
only four design approaches classified as most general. Such general design
techniques are of interest because:
- They provide templates suited to solving a broad range of diverse problems.
- They can be translated into the common control and data structures provided by most high-level languages.
- The temporal and spatial requirements of the resulting algorithms can be precisely analyzed.
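As a concrete illustration, the brute-force approach to string matching mentioned above can be sketched in a few lines of Python (the function name is ours):

```python
def brute_force_match(text, pattern):
    """Slide the pattern along the text one position at a time,
    comparing it against each substring (O(n*m) in the worst case)."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:
            return i  # index of the first occurrence
    return -1  # pattern does not occur in the text

print(brute_force_match("abracadabra", "cad"))  # 4
```

The algorithm follows directly from the problem statement, with no preprocessing or clever data structures, which is exactly what makes it brute force.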
Divide-and-conquer is probably the best known general algorithm
design technique. It is based on partitioning a problem into a number of
smaller subproblems, usually of the same kind and ideally of about the same
size. The sub-problems are then solved (usually recursively or, if they are
small enough, by a simpler algorithm) and their solutions combined to get a
solution to the original problem. Standard examples include mergesort,
quicksort, multiplication of large integers, and Strassen's matrix
multiplication; several other interesting applications are discussed by Bentley
[3]. Though most applications of divide-and-conquer partition a problem into
two subproblems, other situations do arise: e.g., the multiway mergesort [9]
and Pan's algorithm for matrix multiplication [14]. As to the case of a single
subproblem, it is difficult to disagree with Brassard and Bratley [5] that for
such applications, "... it is hard to justify calling the technique divide-and-conquer."
Hence, though binary search is often cited as a quintessential
divide-and-conquer algorithm, it fits better in a separate category we are
about to discuss.
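A minimal Python sketch of mergesort, the standard example above, shows the divide-solve-combine pattern (this list-slicing version favors clarity over in-place efficiency):

```python
def mergesort(a):
    """Divide the list into two halves, sort each half recursively,
    and merge the two sorted halves into one sorted list."""
    if len(a) <= 1:
        return a  # a list of zero or one elements is already sorted
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    # combine step: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(mergesort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```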
Solving a problem by reducing its
instance to a smaller one, solving the latter (recursively or otherwise), and
then extending the obtained solution to get a solution to the original instance
is, of course, a well-known design approach in its own right. For obvious
reasons, we will call it decrease-and-conquer. (Brassard and
Bratley use the term
"simplification" which we are going to use below for a different
design technique.) This approach has several important special cases. The
first, and most frequently encountered, decreases the size of an instance by a constant.
The canonical example here is insertion sort; other examples are provided by
Manber who has investigated an intimate
relationship between this approach and mathematical induction. Though the
size-reduction constant is equal to one for most algorithms of this type, other
situations may also arise: e.g., recursive algorithms that have to distinguish
between even and odd sizes of their inputs.
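Insertion sort, the canonical example just named, makes the decrease-by-one structure visible: assume the first i elements are already sorted, then extend that solution by one element. A short Python sketch:

```python
def insertion_sort(a):
    """Decrease-by-one: the prefix a[0..i-1] is a solved smaller
    instance; inserting a[i] into it extends the solution by one."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # shift larger sorted elements right to open a slot for key
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```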
The second special case of the
decrease-and-conquer technique covers the size reduction by a constant factor.
The examples include binary search and multiplication à la russe. Though most
natural examples involve a size reduction by the factor of two, other
situations do happen: e.g., the Fibonacci search for locating the extremum of a
unimodal function and the
"divide-into-three" algorithm for solving the problem of identifying
a lighter false coin with a balance scale.
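Binary search illustrates reduction by a constant factor: each comparison with the middle element discards half of the remaining range. A minimal iterative sketch in Python:

```python
def binary_search(a, target):
    """Decrease by a constant factor: every iteration halves the
    portion of the sorted list a that can still contain target."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid          # found: return its index
        if a[mid] < target:
            lo = mid + 1        # discard the left half
        else:
            hi = mid - 1        # discard the right half
    return -1                   # target is not in the list

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Note that only one of the two halves is ever processed, which is why the algorithm fits decrease-and-conquer better than divide-and-conquer.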
Finally, the third important special
case of the approach covers more sophisticated situations of the variable-size
reduction. Examples include Euclid's algorithm, interpolation search, and the
quicksort-like algorithm for the selection problem.
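Euclid's algorithm shows the variable-size reduction: the step from (m, n) to (n, m mod n) shrinks the instance by an amount that differs from iteration to iteration. In Python:

```python
def gcd(m, n):
    """Variable-size decrease: (m, n) is reduced to (n, m mod n);
    how much smaller the new instance is varies from step to step."""
    while n:
        m, n = n, m % n
    return m

print(gcd(60, 24))  # 12
```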
Though the decrease-and-conquer
approach is well known, most authors consider it either a special case of
divide-and-conquer or vice versa. In our
opinion, it is more appropriate, from theoretical, practical and especially
educational points of view, to consider divide-and-conquer and
decrease-and-conquer as two distinct design techniques.
The last technique to be considered
here is based on the idea of transformation and will be called transform-and-conquer.
One can identify several flavors of this approach. The first one --- we will
call it simplification --- solves a problem by first transforming
its instance to another instance of the same problem (and of the same size)
with some special property which makes the problem easier to solve. Good
examples include presorting (e.g., for finding equal elements of a list),
Gaussian elimination, and heapsort (if the heap is interpreted as an array with
the special properties required from a heap).
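The presorting example can be made concrete: to find whether a list contains equal elements, first transform the instance into a sorted one, after which any duplicates must be adjacent (the function name below is ours):

```python
def has_duplicates(a):
    """Simplification by presorting: sorting costs O(n log n), but
    makes equal elements adjacent, so one linear scan finds them,
    versus O(n^2) for comparing all pairs of the unsorted list."""
    b = sorted(a)  # transform the instance into an easier one
    return any(b[i] == b[i + 1] for i in range(len(b) - 1))

print(has_duplicates([3, 1, 4, 1, 5]))  # True
```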
The second --- to be called representation
change--- is based on a transformation of a problem's input to a different
representation, which is more conducive to an efficient algorithmic solution.
Examples include search trees, hashing, and heapsort if the heap is interpreted
as a binary tree.
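To contrast with presorting, the same duplicate-finding problem can be attacked by representation change: storing the elements seen so far in a hash set makes each membership test constant time on average (again, the function name is ours):

```python
def has_duplicates_hashed(a):
    """Representation change: the input is re-represented as a hash
    set, whose O(1) average-case membership test gives an O(n)
    average-case algorithm for finding equal elements."""
    seen = set()
    for x in a:
        if x in seen:
            return True   # x was stored earlier, so it is repeated
        seen.add(x)
    return False
```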
Preconditioning (or Preprocessing) can be
considered as yet another variety of the transformation strategy. The idea is
to process a part of the input or the entire input to get some auxiliary
information which speeds up solving the problem.
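A standard illustration of preconditioning (not named in the text above, but in the same spirit) is computing prefix sums: one O(n) preprocessing pass lets every subsequent range-sum query be answered in O(1):

```python
def build_prefix_sums(a):
    """Preconditioning: spend O(n) once to store running totals,
    where prefix[k] is the sum of the first k elements of a."""
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    return prefix

def range_sum(prefix, i, j):
    """Answer the sum of a[i..j] in O(1) from the auxiliary totals."""
    return prefix[j + 1] - prefix[i]

p = build_prefix_sums([3, 1, 4, 1, 5])
print(range_sum(p, 1, 3))  # 6, i.e. 1 + 4 + 1
```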
Algorithm Design Paradigms: general approaches to the construction of efficient
solutions to problems. Although more than one technique may be applicable to a
specific problem, it is often the case that an algorithm constructed by one
approach is clearly superior to equivalent solutions built using alternative
techniques.