CS 2251 DESIGN AND ANALYSIS OF ALGORITHMS    3 1 0 4
UNIT I    9
Algorithm Analysis – Time Space Tradeoff – Asymptotic
Notations – Conditional asymptotic notation – Removing condition from the
conditional asymptotic notation - Properties of big-Oh notation – Recurrence
equations – Solving recurrence equations – Analysis of linear search.
UNIT II 9
Divide and Conquer: General Method – Binary Search –
Finding Maximum and Minimum – Merge Sort – Greedy Algorithms: General Method –
Container Loading – Knapsack Problem.
UNIT III 9
Dynamic Programming: General Method – Multistage
Graphs – All-Pair shortest paths – Optimal binary search trees – 0/1 Knapsack –
Travelling salesperson problem.
UNIT IV 9
Backtracking: General Method – 8 Queens
problem – sum of subsets – graph coloring – Hamiltonian problem – knapsack
problem.
UNIT V 9
Graph Traversals – Connected Components – Spanning
Trees – Biconnected components – Branch and Bound: General Methods (FIFO &
LC) – 0/1 Knapsack problem – Introduction to NP-Hard and NP-Completeness.
TUTORIAL = 15 Total = 60
TEXT BOOKS:
1. Ellis Horowitz, Sartaj Sahni and Sanguthevar Rajasekaran, Computer Algorithms/C++, Second Edition, Universities Press, 2007. (For Units II to V)
2. K.S. Easwarakumar, Object Oriented Data Structures using C++, Vikas Publishing House Pvt. Ltd., 2000. (For Unit I)
REFERENCES:
1. T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein, "Introduction to Algorithms", Second Edition, Prentice Hall of India Pvt. Ltd., 2003.
2. Alfred V. Aho, John E. Hopcroft and Jeffrey D. Ullman, "The Design and Analysis of Computer Algorithms", Pearson Education, 1999.
CS2251 - DESIGN AND ANALYSIS OF ALGORITHMS
UNIT I
1.
What is an Algorithm? May/June 2006, Nov/Dec 2008
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.
2.
State Euclid's algorithm for finding the GCD of two given numbers.
ALGORITHM Euclid(m, n)
// Computes gcd(m, n) by Euclid's algorithm
// Input: Two nonnegative, not-both-zero integers m and n
// Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
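A minimal C++ rendering of the same pseudocode (the function name gcd and the sample values in main are our own, purely illustrative):

#include <iostream>

// Computes gcd(m, n) by Euclid's algorithm (iterative form of the
// pseudocode above). Assumes m and n are nonnegative and not both zero.
unsigned gcd(unsigned m, unsigned n) {
    while (n != 0) {
        unsigned r = m % n;   // r <- m mod n
        m = n;
        n = r;
    }
    return m;
}

int main() {
    std::cout << gcd(60, 24) << '\n';   // prints 12
}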
3.
What are Sequential Algorithms?
The
central assumption of the RAM model is that instructions are executed one after
another, one operation at a time. Accordingly, algorithms designed to be
executed on such machines are called Sequential algorithms.
4.
What are Parallel Algorithms?
The central assumption of the RAM model does not hold for some newer computers that can execute operations concurrently, i.e., in parallel. Algorithms that take advantage of this capability are called parallel algorithms.
5.
What is Exact and Approximation algorithm?
An algorithm that solves the problem exactly is called an exact algorithm.
An algorithm that solves the problem approximately is called an approximation algorithm.
6.
What is Algorithm Design Technique? Nov/Dec 2005
An algorithm design technique is a
general approach to solving problems algorithmically that is applicable to a
variety of problems from different areas of computing.
7.
Define Pseudo code.
A pseudocode is a mixture of a natural language and programming-language-like constructs. A pseudocode is usually more precise than a natural language, and its usage often yields more succinct algorithm descriptions.
8.
Define Flowchart.
A
method of expressing an algorithm by a collection of connected geometric shapes
containing descriptions of the algorithm’s steps.
9.
Explain Algorithm’s Correctness
To
prove that the algorithm yields a required result for every legitimate input in
a finite amount of time.
Example: Correctness of Euclid's algorithm for computing the greatest common divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n).
10.
What is Efficiency of algorithm?
Efficiency of an
algorithm can be precisely defined and investigated with mathematical rigor.
There are two kinds of algorithm efficiency
1)
Time Efficiency – Indicates how fast the algorithm runs
2)
Space Efficiency – Indicates how much extra memory the
algorithm needs.
11.
What is generality of an algorithm?
It is a desirable characteristic of an algorithm. Generality refers both to the range of inputs the algorithm accepts and to the generality of the problem it solves; it is sometimes easier to design an algorithm for a problem posed in more general terms.
12.
What is algorithm’s Optimality?
Optimality is concerned with the complexity of the problem that the algorithm solves: the minimum amount of effort any algorithm will need to exert to solve the problem in question.
13.
What do you mean by the "sorting" problem?
The sorting problem asks us to rearrange the items of a given list in ascending (or descending) order.
14.
What do you mean by the "searching" problem?
The searching problem deals with finding a given value, called a search key, in a given set.
15.
What do you mean by the "worst-case efficiency" of an algorithm?
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size.
Example: sorting a list of numbers in ascending order when the numbers are given in descending order; in this case the running time will be the longest.
16.
What do you mean by the "best-case efficiency" of an algorithm?
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size.
Example: sorting a list of numbers in ascending order when the numbers are already given in ascending order; in this case the running time will be the smallest.
17.
Define the "average-case efficiency" of an algorithm.
The average-case efficiency of an algorithm is its efficiency for a typical or random input of size n; it characterizes the algorithm's behaviour averaged over all possible inputs of that size and lies between the best-case and worst-case efficiencies.
18.
What do you mean by “Amortized efficiency”?
The amortized efficiency applies not to a single run of an algorithm but rather to a sequence of operations performed on the same data structure. It turns out that in some situations a single operation can be expensive, but the total time for an entire sequence of n such operations is always significantly better than the worst-case efficiency of that single operation multiplied by n. This is known as amortized efficiency.
19.
How to measure the algorithm’s efficiency?
It is logical to investigate the algorithm’s
efficiency as a function of some parameter n indicating the algorithm’s input
size.
Example: It will be the size of the list for problems of sorting,
searching, finding the list’s smallest element, and most other problems dealing
with lists.
20.
What is called the basic operation of an
algorithm?
The operation contributing the most to the total running time of the algorithm is called its basic operation; it is usually the most important operation of the algorithm.
21.
How to measure an algorithm’s running time?
Let c_op be the execution time of an algorithm's basic operation on a particular computer, and let C(n) be the number of times this operation needs to be executed for this algorithm. Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula
T(n) ≈ c_op C(n)
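For example (the numbers here are illustrative assumptions, not taken from the syllabus): if the basic operation executes in about c_op = 10^-9 seconds on some machine and C(n) = (1/2)n(n-1), then for n = 10^4 the estimate is
T(n) ≈ 10^-9 · (1/2) · 10^4 · (10^4 - 1) ≈ 0.05 seconds.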
22.
Define order of growth.
The efficiency
analysis framework concentrates on the order of growth of an algorithm’s basic
operation count as the principal indicator of the algorithm’s efficiency. To compare and rank such orders of growth we
use three notations
1)
O (Big oh) notation
2)
Ω (Big Omega) notation &
3)
Θ (Big Theta) notation
23.
Define Big oh notation May/June 2006, April/May
2008
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≤ c g(n) for all n ≥ n0
24.
Prove that 100n + 5 ∈ O(n²).
Clearly 100n + 5 ≤ 100n + n = 101n ≤ 101n² for all n ≥ 5.
By choosing n0 = 5 and c = 101 we find that 100n + 5 ∈ O(n²).
25.
Define Ω
notation
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≥ c g(n) for all n ≥ n0
26.
Prove that n³ ∈ Ω(n²).
Clearly n³ ≥ n² for all n ≥ 0, i.e., we can select c = 1 and n0 = 0.
27.
Define Θ - notation
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that
c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0
28.
Prove that ½ n(n-1) ∈ Θ(n²).
Upper bound: ½ n(n-1) = ½ n² - ½ n ≤ ½ n² for all n ≥ 0.
Lower bound: ½ n(n-1) = ½ n² - ½ n ≥ ½ n² - ½ n · ½ n (for all n ≥ 2) = ¼ n².
Hence we can select c2 = ¼, c1 = ½ and n0 = 2.
29.
What
is the use of Asymptotic Notations?
The notations O, Ω and Θ are used to indicate and compare the asymptotic orders of growth of functions expressing algorithm efficiencies.
UNIT II
1) Explain divide and conquer algorithms
Divide and conquer is probably the best-known general algorithm design technique. It works according to the following general plan:
i) A problem's instance is divided into several smaller instances of the same problem, ideally of about the same size.
ii) The smaller instances are solved
iii) If necessary, the solutions obtained for the
smaller instances are combined to get a solution to the original problem
2) Define Merge Sort
Merge sort is a perfect example of a successful application of the divide and conquer technique. It sorts a given array A[0..n-1] by dividing it into two halves A[0..⌊n/2⌋-1] and A[⌊n/2⌋..n-1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one.
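A compact C++ sketch of this plan (the helper name mergeHalves is our own choice, not from the textbook):

#include <vector>
using std::vector;

// Merge the two sorted halves a[lo..mid] and a[mid+1..hi] into one.
void mergeHalves(vector<int>& a, int lo, int mid, int hi) {
    vector<int> tmp;
    tmp.reserve(hi - lo + 1);
    int i = lo, j = mid + 1;
    while (i <= mid && j <= hi)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= mid) tmp.push_back(a[i++]);
    while (j <= hi)  tmp.push_back(a[j++]);
    for (int k = lo; k <= hi; ++k) a[k] = tmp[k - lo];
}

void mergeSort(vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;              // zero or one element: already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);             // sort the left half
    mergeSort(a, mid + 1, hi);         // sort the right half
    mergeHalves(a, lo, mid, hi);       // merge the two sorted halves
}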
3) Define Binary Search
Binary search is a remarkably efficient algorithm for searching in a sorted array. It works by comparing a search key K with the array's middle element A[m]. If they match, the algorithm stops; otherwise, the same operation is repeated recursively for the first half of the array if K < A[m] and for the second half if K > A[m].
A[0] ... A[m-1]   A[m]   A[m+1] ... A[n-1]
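A minimal iterative C++ sketch of this idea (the function name binarySearch is our own):

// Binary search in the sorted array a[0..n-1], as described above.
// Returns the index of key K, or -1 if K is not present.
int binarySearch(const int a[], int n, int K) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int m = lo + (hi - lo) / 2;   // middle element A[m]
        if (a[m] == K) return m;      // match: stop
        if (K < a[m]) hi = m - 1;     // continue in the first half
        else          lo = m + 1;     // continue in the second half
    }
    return -1;                        // not found
}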
4) What can we say about the average case efficiency of
binary search?
A sophisticated analysis shows that the average number of key comparisons made by binary search is only slightly smaller than that in the worst case:
Cavg(n) ≈ log2 n
5) Define Binary Tree
A binary tree T is defined as a finite set of nodes that is either empty or consists of a root and two disjoint binary trees TL and TR, called, respectively, the left and right subtrees of the root.
6) How
divide and conquer technique can be applied to binary trees?
Since the binary tree definition itself divides a binary tree into two smaller structures of the same type, the left subtree and the right subtree, many problems about binary trees can be solved by applying the divide-and-conquer technique.
7) Explain Internal and External Nodes
A binary tree can be extended by replacing its empty subtrees with special nodes. The extra nodes, shown by little squares, are called external; the original nodes, shown by little circles, are called internal.
8) Define Preorder, inorder and postorder Traversal
PREORDER - The root is visited before the left and right subtrees are visited (in that order).
INORDER - The root is visited after visiting its left subtree but before visiting the right subtree.
POSTORDER - The root is visited after visiting the left and right subtrees (in that order).
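A small C++ sketch of the three traversals on a hand-rolled node type (the struct name Node is our own, purely illustrative):

#include <iostream>

struct Node {
    int   data;
    Node* left;
    Node* right;
};

void preorder(const Node* t) {     // root, left, right
    if (!t) return;
    std::cout << t->data << ' ';
    preorder(t->left);
    preorder(t->right);
}

void inorder(const Node* t) {      // left, root, right
    if (!t) return;
    inorder(t->left);
    std::cout << t->data << ' ';
    inorder(t->right);
}

void postorder(const Node* t) {    // left, right, root
    if (!t) return;
    postorder(t->left);
    postorder(t->right);
    std::cout << t->data << ' ';
}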
9) Define the Internal Path Length
The Internal Path Length I of an extended binary tree
is defined as the sum of the lengths of the paths - taken over all internal
nodes- from the root to each internal node.
10) Define the External Path Length
The External Path Length E of an extended binary tree
is defined as the sum of the lengths of the paths - taken over all external
nodes- from the root to each external node.
11) Explain about greedy technique
The greedy technique suggests constructing a solution
to an optimization problem through a sequence of steps, each expanding a
partially constructed solution obtained so far, until a complete solution to
the problem is reached. On each step, the choice made must be feasible,
locally optimal and irrevocable.
12) Define Spanning Tree
A Spanning Tree of a connected graph is its connected
acyclic subgraph(i.e., a tree) that contains all the vertices of the graph.
13) Define Minimum Spanning Tree
A minimum spanning tree of a weighted connected graph is its spanning tree of the smallest weight, where the weight of a tree is defined as the sum of the weights on all its edges.
14) Define min-heap
A min-heap is a complete binary tree in which every
element is less than or equal to its children. All the principal properties of
heaps remain valid for min-heaps, with some obvious modifications.
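As a quick C++ illustration (not from the textbook), a min-heap can be obtained from the standard library's priority_queue by supplying std::greater as the comparator:

#include <iostream>
#include <queue>
#include <vector>
#include <functional>

int main() {
    // std::priority_queue is a max-heap by default; std::greater turns it
    // into a min-heap, so the smallest element is always at the top.
    std::priority_queue<int, std::vector<int>, std::greater<int>> minHeap;
    for (int x : {5, 1, 9, 3}) minHeap.push(x);
    std::cout << minHeap.top() << '\n';   // prints 1: every parent <= its children
}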
15) Define Kruskal's Algorithm
Kruskal's algorithm looks at a minimum spanning tree
for a weighted connected graph G =(V,E) as an acyclic subgraph with | V| - 1
edges for which the sum of the edge weights is the smallest.
16) Define Prim's Algorithm
Prim's algorithm is a greedy algorithm for constructing a minimum spanning tree of a weighted connected graph. It works by attaching to a previously constructed subtree the vertex closest to the vertices already in the tree.
17) Explain Dijkstra's Algorithm
Dijkstra's algorithm solves the single-source shortest-paths problem of finding shortest paths from a given vertex (the source) to all the other vertices of a weighted graph or digraph. It works like Prim's algorithm but compares path lengths rather than edge lengths. Dijkstra's algorithm always yields a correct solution for a graph with nonnegative weights.
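A C++ sketch of Dijkstra's algorithm using a min-heap of (distance, vertex) pairs; the adjacency-list layout and the names below are our own assumptions, not the textbook's code:

#include <vector>
#include <queue>
#include <limits>
using namespace std;

// adj[u] holds pairs (v, w) for every edge u -> v of nonnegative weight w.
// Returns the shortest distance from source to every vertex.
vector<long long> dijkstra(const vector<vector<pair<int,int>>>& adj, int source) {
    const long long INF = numeric_limits<long long>::max();
    vector<long long> dist(adj.size(), INF);
    priority_queue<pair<long long,int>,
                   vector<pair<long long,int>>,
                   greater<pair<long long,int>>> pq;   // min-heap keyed on distance
    dist[source] = 0;
    pq.push({0, source});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;               // stale entry, skip it
        for (auto [v, w] : adj[u])               // relax every edge (u, v)
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}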
UNIT III
1)Define Dynamic Programming
Dynamic programming is a technique for solving problems with overlapping subproblems. Typically, these subproblems arise from a recurrence relating a solution to a given problem with solutions to its smaller subproblems of the same type. Rather than solving overlapping subproblems again and again, dynamic programming suggests solving each of the smaller subproblems only once and recording the results in a table from which we can then obtain a solution to the original problem.
2) Define Binomial Coefficient
The binomial coefficient, denoted C(n, k), is the number of combinations (subsets) of k elements from an n-element set (0 ≤ k ≤ n). The name "binomial coefficients" comes from the participation of these numbers in the so-called binomial formula:
(a + b)^n = C(n,0)a^n + ... + C(n,i)a^(n-i)b^i + ... + C(n,n)b^n
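A short dynamic programming sketch in C++ based on the recurrence C(n, k) = C(n-1, k-1) + C(n-1, k) with C(n, 0) = C(n, n) = 1 (the function name binomial is our own):

#include <vector>
#include <algorithm>

long long binomial(int n, int k) {
    // C[i][j] holds C(i, j) for 0 <= j <= min(i, k).
    std::vector<std::vector<long long>> C(n + 1, std::vector<long long>(k + 1, 0));
    for (int i = 0; i <= n; ++i)
        for (int j = 0; j <= std::min(i, k); ++j)
            C[i][j] = (j == 0 || j == i) ? 1 : C[i-1][j-1] + C[i-1][j];
    return C[n][k];
}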
3) Define Transitive closure
The transitive closure of a directed graph with n vertices can be defined as the n-by-n Boolean matrix T = {tij} in which the element in the ith row (1 ≤ i ≤ n) and the jth column (1 ≤ j ≤ n) is 1 if there exists a nontrivial directed path from the ith vertex to the jth vertex; otherwise, tij is 0.
4) Explain Warshall's algorithm
Warshall's algorithm constructs the transitive closure of a given digraph with n vertices through a series of n-by-n Boolean matrices R(0), ..., R(k-1), R(k), ..., R(n). Each of these matrices provides certain information about directed paths in the digraph.
5) Explain the all-pairs shortest-paths problem
Given a weighted connected graph (undirected or directed), the all-pairs shortest-paths problem asks us to find the distances (the lengths of the shortest paths) from each vertex to all other vertices.
6) Explain Floyd's algorithm
It is convenient to record the lengths of shortest paths in an n-by-n matrix D called the distance matrix: the element dij in the ith row and the jth column of this matrix indicates the length of the shortest path from the ith vertex to the jth vertex. We can generate the distance matrix with an algorithm that is very similar to Warshall's algorithm; it is called Floyd's algorithm.
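A C++ sketch of Floyd's algorithm working on the weight matrix D in place (the sentinel INF and the in-place update are implementation choices of this sketch):

#include <vector>
using std::vector;

const int INF = 1000000000;   // a large value standing in for "no edge"

// D starts as the weight matrix (0 on the diagonal, INF where there is
// no edge) and ends as the distance matrix.
void floyd(vector<vector<int>>& D) {
    int n = D.size();
    for (int k = 0; k < n; ++k)            // allow vertex k as an intermediate
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (D[i][k] < INF && D[k][j] < INF && D[i][k] + D[k][j] < D[i][j])
                    D[i][j] = D[i][k] + D[k][j];
}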
7) What does Floyd's algorithm do?
It is used to find the distances (the lengths of the
shortest paths) from each vertex to all other vertices of a weighted connected
graph.
8) Explain the principle of optimality
It says that an optimal solution to any instance of
an optimization problem is composed of optimal solutions to its subinstances.
9) Explain optimal binary search trees
One of the principal applications of a binary search tree is to implement the operation of searching. If the probabilities of searching for elements of a set are known, it is natural to pose a question about an optimal binary search tree for which the average number of comparisons in a search is the smallest possible.
10) Explain the knapsack problem
Given n items of known weights w1, w2, ..., wn and values v1, v2, ..., vn and a knapsack of capacity W, find the most valuable subset of the items that fit into the knapsack. (All the weights and the knapsack's capacity are assumed to be positive integers; the item values do not have to be integers.)
11) Explain the memory function technique
The memory function technique seeks to combine the strengths of the top-down and bottom-up approaches to solving problems with overlapping subproblems. It does this by solving, in the top-down fashion but only once, just the necessary subproblems of a given problem and recording their solutions in a table.
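A memory-function sketch of the 0/1 knapsack problem in C++ (the class layout and names are our own, purely illustrative):

#include <vector>
using std::vector;

// F[i][j] is the best value using the first i items with capacity j;
// -1 marks entries that have not been computed yet.
struct Knapsack {
    vector<int> w, v;                 // weights and values of the n items
    vector<vector<int>> F;            // memo table

    Knapsack(const vector<int>& weights, const vector<int>& values, int W)
        : w(weights), v(values),
          F(weights.size() + 1, vector<int>(W + 1, -1)) {}

    int best(int i, int j) {          // best value for items 1..i, capacity j
        if (i == 0 || j == 0) return 0;
        if (F[i][j] < 0) {            // compute the entry only once
            int without = best(i - 1, j);
            int with    = (w[i - 1] <= j)
                        ? v[i - 1] + best(i - 1, j - w[i - 1])
                        : without;
            F[i][j] = (with > without) ? with : without;
        }
        return F[i][j];
    }
};

For a made-up instance with weights {2, 3, 4}, values {12, 10, 20} and W = 5, Knapsack k({2, 3, 4}, {12, 10, 20}, 5); k.best(3, 5); returns 22 (items 1 and 2).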
12) Explain the "traveling salesman problem".
A salesman has to travel n cities starting from any one of the cities
and visit the remaining cities exactly once and come back to the city where he
started his journey in such a manner that either the distance is minimum or
cost is minimum. This is known as traveling salesman problem.
UNIT IV
1) Explain Backtracking
The principal idea is to construct solutions one
component at a time and evaluate such partially constructed candidates as
follows.
> If a partially constructed solution can be developed further without violating the problem's constraints, it is done by taking the first remaining legitimate option for the next component.
> If there is no legitimate option for the next component, no alternatives for any remaining component need to be considered. In this case, the algorithm backtracks to replace the last component of the partially constructed solution with its next option.
2) Explain State Space Tree
It is convenient to implement backtracking by constructing a tree of choices being made; this tree is called a state-space tree. Its root represents an initial state before the search for a solution begins.
3) Explain promising and nonpromising node
A node in a state-space tree is said to be promising if it corresponds to a partially constructed solution that may still lead to a complete solution; otherwise it is called nonpromising.
4) Explain n-Queens problem
The problem is to place n queens on an n by n
chessboard so that no two queens attack each other by being in the same row or
same column or on the same diagonal.
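A backtracking sketch for this problem in C++ that counts the complete placements (the names safe and placeQueens are our own):

#include <vector>
#include <cstdlib>

// col[r] is the column of the queen placed in row r (rows 0..r-1 filled).
bool safe(const std::vector<int>& col, int r, int c) {
    for (int i = 0; i < r; ++i)
        if (col[i] == c || std::abs(col[i] - c) == r - i)   // same column or diagonal
            return false;
    return true;
}

int placeQueens(std::vector<int>& col, int r, int n) {
    if (r == n) return 1;                      // all n queens placed
    int count = 0;
    for (int c = 0; c < n; ++c)
        if (safe(col, r, c)) {                 // promising: extend the solution
            col[r] = c;
            count += placeQueens(col, r + 1, n);
        }                                      // otherwise backtrack to the next option
    return count;
}

For instance, with std::vector<int> col(8), placeQueens(col, 0, 8) returns 92, the number of solutions to the 8-queens instance.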
5) Explain Subset-Sum Problem
We consider the subset-sum problem: find a subset of a given set S = {s1, s2, ..., sn} of n positive integers whose sum is equal to a given positive integer d.
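A backtracking sketch for this problem in C++ (the function name subsetSum is our own; the only pruning used here is the "running sum exceeds d" test):

#include <vector>
using std::vector;

// include[i] records whether s[i] is taken; returns true if some subset
// of the remaining elements brings the running sum up to exactly d.
bool subsetSum(const vector<int>& s, int i, int sum, int d, vector<bool>& include) {
    if (sum == d) return true;                           // required sum reached
    if (i == (int)s.size() || sum > d) return false;     // dead end: backtrack
    include[i] = true;                                   // try taking s[i]
    if (subsetSum(s, i + 1, sum + s[i], d, include)) return true;
    include[i] = false;                                  // try leaving s[i] out
    return subsetSum(s, i + 1, sum, d, include);
}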
6) Explain Branch and Bound Technique
Branch and bound deals with optimization problems, ones that seek to minimize or maximize an objective function, usually subject to some constraints. Compared to backtracking, it additionally requires, for every node of the state-space tree, a bound on the best value of the objective function on any solution obtainable by extending that node's partial solution, together with the value of the best solution seen so far.
7) Define Feasible Solution
A feasible
solution is a point in the problem's search space that satisfies all the
problem's constraints.
Ex: a Hamiltonian circuit in the traveling salesman problem; a subset of items whose total weight does not exceed the knapsack's capacity.
8) Define Optimal solution
An optimal solution is a feasible solution with the best value of the objective function.
E.g.: the shortest Hamiltonian circuit; the most valuable subset of items that fit the knapsack.
9) Mention two reasons to terminate a search path at the current node in a state-space tree of a branch and bound algorithm.
The value of the node's bound is not better than the
value of the best solution seen so far. The node represents no feasible
solutions because the constraints of the problem are already violated.
10) Explain the "graph coloring" problem.
The graph
coloring problem asks us to assign the smallest number of colors to vertices of
a graph so that no two adjacent vertices are the same color.
11) Explain Knapsack Problem
Find the most valuable subset of n items of given
positive integer weights and values that fit into a knapsack of a given
positive integer capacity.
UNIT V
1)
Define tractable and intractable problems
Problems that can be solved in polynomial time are called tractable problems; problems that cannot be solved in polynomial time are called intractable problems.
2) Explain the theory of computational complexity
A problem's intractability remains the same for all principal models of computation and all reasonable input-encoding schemes for the problem under consideration.
3) Explain class P problems
Class P is a class of decision problems that can be solved in polynomial time by (deterministic) algorithms. This class of problems is called polynomial.
4) Explain undecidable problems
Some decision problems cannot be solved at all by any algorithm, no matter how much time is allowed; such problems are called undecidable.
5) Explain the halting problem
Given a computer program and an input to it, determine whether the program will halt on that input or continue working indefinitely on it.
6) Explain class NP problems
Class NP is the class of decision problems that can be solved by nondeterministic polynomial algorithms. Most decision problems are in NP. First of all, this class includes all the problems in P. This class of problems is called nondeterministic polynomial.
7) Explain NP-complete problems
A decision problem D is said to be NP-complete if
1) it belongs to class NP, and
2) every problem in NP is polynomially reducible to D.
8) When is a decision problem said to be polynomially reducible?
A decision problem D1 is said to be polynomially reducible to a decision problem D2 if there exists a function t that transforms instances of D1 to instances of D2 such that
i) t maps all yes instances of D1 to yes instances of D2 and all no instances of D1 to no instances of D2
ii) t is computable by a polynomial-time algorithm.
9) Define a Heuristic
A heuristic is a common-sense rule drawn from
experience rather than from a mathematically proved assertion.
Ex: Going to the nearest unvisited city in the traveling salesman problem is a good illustration of a heuristic.
10) Explain NP-Hard problems
The notion of an NP-hard problem can be defined more formally by extending the notion of polynomial reducibility to problems that are not necessarily in class NP, including optimization problems.
11) Define traversals.
When the search necessarily involves the examination of every vertex in the object being searched, it is called a traversal.
12) List out the techniques for traversals in a graph.
Breadth first search
Depth first search
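Minimal C++ sketches of the two techniques on an adjacency-list graph (the names adj, dfs and bfs are our own):

#include <vector>
#include <queue>
using std::vector;

// Depth-first search: go as deep as possible before backtracking.
void dfs(const vector<vector<int>>& adj, int u, vector<bool>& visited) {
    visited[u] = true;                          // examine u
    for (int v : adj[u])
        if (!visited[v]) dfs(adj, v, visited);
}

// Breadth-first search: examine vertices level by level using a queue.
void bfs(const vector<vector<int>>& adj, int start, vector<bool>& visited) {
    std::queue<int> q;
    visited[start] = true;
    q.push(start);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u])
            if (!visited[v]) { visited[v] = true; q.push(v); }
    }
}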
13) What is an articulation point?
A vertex v in a connected graph G is an articulation point if and only if the deletion of vertex v together with all edges incident to v disconnects the graph into two or more nonempty components.
PART-B
I-UNIT
1.
(a) Describe the steps in analyzing & coding an algorithm. (10)
(b) Explain some of the problem types used in the design of algorithms. (6)
2. (a) Discuss the fundamentals of the analysis framework. (10)
(b) Explain the various asymptotic notations
used in algorithm design. (6)
3.
(a) Explain the general framework for analyzing the efficiency of algorithm.
(8)
(b) Explain the various Asymptotic
efficiencies of an algorithm. (8)
4.
(a) Explain the basic efficiency classes. (10)
(b) Explain briefly the concept of
algorithmic strategies. (6)
5.
Describe briefly the notions of complexity of an algorithm. (16)
6. (a) What is pseudocode? Explain with an example. (8)
(b) Find the complexity C(n) of the algorithm for the worst case, best case and average case. (Evaluate the average-case complexity for n = 3, where n is the number of inputs.) (8)
7. Set up and solve a recurrence relation for the number of key comparisons made by the above pseudocode. (4)
II-UNIT
1) Write a pseudocode for the divide-and-conquer algorithm for merging two sorted arrays into a single sorted one. Explain with an example. (12)
2)
Construct a minimum spanning tree using Kruskal’s algorithm with your own
example. (10)
3)
Explain about Knapsack Problem with example
4) Explain Dijkstra's algorithm. (8)
5) Define spanning tree. Discuss the design steps in Prim's algorithm to construct a minimum spanning tree with an example. (16)
6)Explain
Kruskal’s algorithm. (8)
7)
Explain about binary search with example.
III-UNIT
1) Solve the all-pairs shortest-paths problem for the digraph with the weight matrix given below: (16)
        a    b    c    d
   a    0    ∞    3    ∞
   b    2    0    ∞    ∞
   c    ∞    7    0    1
   d    6    ∞    ∞    0
2) Explain
Warshall’s & Floyd’s Algorithm. (16)
3)
Explain about Multistage graphs with example.
4)
Define optimal binary search trees with example.
5)
Explain 0/1 knapsack problem with example.
6)
Discuss the solution for Travelling salesman problem using branch & bound
technique. (16)
IV-UNIT
1.
Explain the 8-Queen’s problem & discuss the possible solutions. (16)
2.
Solve the following instance of the knapsack problem by the branch & bound
algorithm. (16)
3.
Apply backtracking technique to solve the following instance of subset sum
problem :
S={1,3,4,5} and d=11 (16)
5.
Explain subset sum problem & discuss the possible solution strategies using
backtracking.(16)
6.
Explain Graph coloring with example.
7. Explain the knapsack problem using backtracking with an example.
V-UNIT
1. Give a suitable example
& explain the Breadth first search & Depth first search. (16)
2. Explain
about biconnected components with example.
3. Briefly
explain NP-Hard and NP-Completeness with examples.
4.
Explain about 0/1 Knapsack Problem using branch and bound with example.