
A Basic Study on Algorithm Analysis

Chapter 2. Getting Started

Hanyang University, Information Security & Algorithms Lab

January 2, 2008, Jaejun Lee

Advisor: Prof. Heejin Park


Table of Contents

1. Algorithmic Paradigms
2. Analysis of Computer Algorithms
3. Analyzing the Insertion Sort Algorithm
4. Analyzing the Merge Sort Algorithm
5. Comparing Insertion Sort and Merge Sort
6. Next Step for Algorithm Analysis
7. References
8. Question & Answer

1. Algorithmic Paradigms

• Design and analysis of computer algorithms: critical thinking and problem solving.

1) Greedy.
2) Divide and conquer.
3) Dynamic programming.
4) Network flow.
5) Randomized algorithms.
6) Intractability.
7) Coping with intractability.

1. Algorithmic Paradigms in Chapter 2

Chapter contents:

Chapter 3 - Theta [Θ] notation; formally interpreting equations containing Θ-notation
Chapter 4 - How to solve recurrence relations; the master theorem
Chapter 5 - Probabilistic analysis of randomized algorithms

2. Analysis of computer algorithms

• Loop Invariant
- Definition: a statement that remains true each time the program enters, executes, and exits the loop.
- Understanding loop invariants helps us analyze programs, check for errors, and derive programs from specifications.

• Asymptotic Complexity
- Big-Oh [O], Omega [Ω], Theta [Θ]

2. Analysis of computer algorithms

• Using a loop invariant to prove correctness

- Initialization: the invariant is true prior to the first iteration of the loop.
- Maintenance: if it is true before an iteration of the loop, it remains true before the next iteration.
- Termination: when the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.

2. Analysis of computer algorithms

• Asymptotic complexity (Theta [Θ])

- f(n) = Θ(g(n)) if there exist positive constants c1, c2, and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
- In other words, if g(n) is both an upper and a lower bound on f(n) (up to constant factors), then f(n) = Θ(g(n)).

[Figure: Big-Oh gives an upper bound, Omega a lower bound, and Theta bounds f(n) from both sides]

The Theta notation is more precise than both Big-Oh and Omega.

3. Analyzing Insertion Sort algorithm

[Figure: insertion sort trace for j = 2, 3, 4, 5, 6, …, n+1]

3. Analyzing Insertion Sort algorithm

• Loop Invariant for Insertion Sort

- Initialization: when j = 2, the subarray A[1..j-1] = A[1] is sorted.
- Maintenance: the body of the outer for loop moves elements of A[1..j-1] right until the position for A[j] is found, leaving A[1..j] sorted.
- Termination: when j = n+1, the for loop ends and A[1..j-1] = A[1..n] is sorted.
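The invariant above can be checked directly in code. A minimal Python sketch (my own rendering, using 0-based indices rather than the slides' 1-based A[1..n]):

```python
def insertion_sort(a):
    """In-place insertion sort, asserting the loop invariant at each iteration."""
    for j in range(1, len(a)):
        # invariant: a[0..j-1] (the slides' A[1..j-1]) is sorted
        assert a[:j] == sorted(a[:j])
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:   # shift larger elements one slot right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                 # insert key into its correct position
    return a
```

At termination j = len(a), so the invariant says the whole array is sorted.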

3. Analyzing Insertion Sort algorithm

• Asymptotic complexity for Insertion Sort

- Best case: the array is already sorted (tj = 1; T(n) is a linear function of n).
- Worst case: the array is in reverse sorted order (tj = j; T(n) is a quadratic function of n).
- Average case: the running time is approximately half of the worst-case running time (tj ≈ j/2); it is still a quadratic function of n.
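The tj terms can be observed empirically. A small sketch (the helper name is mine) counts element shifts for already-sorted and reverse-sorted input:

```python
def insertion_sort_shifts(data):
    """Run insertion sort and return the total number of element shifts
    (the sum over the inner-loop tj terms)."""
    a = list(data)
    shifts = 0
    for j in range(1, len(a)):
        key, i = a[j], j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
            shifts += 1
        a[i + 1] = key
    return shifts

n = 100
best = insertion_sort_shifts(range(n))          # already sorted: 0 shifts, linear total work
worst = insertion_sort_shifts(range(n, 0, -1))  # reverse sorted: n(n-1)/2 shifts, quadratic
```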

3. Analyzing Insertion Sort algorithm

• Asymptotic complexity for Insertion Sort

We can express T(n) as an² + bn + c for constants a, b, c (which again depend on the statement costs) ⇒ T(n) is a quadratic function of n.

4. Analyzing Merge Sort algorithm

• Divide and Conquer algorithm

- Divide the problem into a number of subproblems.
- Conquer the subproblems by solving them recursively. Base case: if the subproblems are small enough, just solve them by brute force.
- Combine the subproblem solutions to give a solution to the original problem.
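The three steps can be sketched as a recursive function. A minimal Python version (my own rendering, not the slides' pseudocode):

```python
def merge_sort(a):
    """Sort a list using the divide-conquer-combine pattern described above."""
    if len(a) <= 1:                   # base case: trivially sorted
        return a
    mid = len(a) // 2                 # divide: split the array in half
    left = merge_sort(a[:mid])        # conquer: recursively sort each half
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0           # combine: merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```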

4. Analyzing Merge Sort algorithm

[Figure: merge sort recursion tree, dividing the array, conquering the halves, and combining them into the solution]

4. Analyzing Merge Sort algorithm


4. Analyzing Merge Sort algorithm

• Example: the call MERGE(A, 9, 12, 16)

[Figure: the subarrays A[9..12] and A[13..16], each in sorted order before merging]

4. Analyzing Merge Sort algorithm

• Example: the call MERGE(A, 9, 12, 16)

[Figure: A[9..16] after the merge completes]

4. Analyzing Merge Sort algorithm

• Loop Invariant for Merging

- Initialization: when k = p, A[p..k-1] is empty and nothing has been copied back to A yet.
- Maintenance: if L[i] ≤ R[j] then L[i] is copied into A[k]; otherwise R[j] is copied into A[k].
- Termination: when k = r+1 the loop ends, and A[p..r] is sorted.
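The merging loop and its invariant can be sketched as follows (a Python rendering with CLRS-style inclusive indices and infinity sentinels; the details are my assumptions, not the slides' exact code):

```python
import math

def merge(A, p, q, r):
    """Merge the sorted subarrays A[p..q] and A[q+1..r] (inclusive indices) in place."""
    L = A[p:q + 1] + [math.inf]       # sentinel: never chosen until L is exhausted
    R = A[q + 1:r + 1] + [math.inf]
    i = j = 0
    for k in range(p, r + 1):
        # invariant: A[p..k-1] holds the k - p smallest elements of L and R, in order
        if L[i] <= R[j]:
            A[k] = L[i]; i += 1
        else:
            A[k] = R[j]; j += 1

A = [2, 4, 5, 7, 1, 2, 3, 6]
merge(A, 0, 3, 7)                     # A becomes [1, 2, 2, 3, 4, 5, 6, 7]
```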

4. Analyzing Merge Sort algorithm • Asymptotic complexity for Merging

- Each of the n = n1 + n2 elements is handled exactly once, so merging takes Θ(n1 + n2) = Θ(n) time.

4. Analyzing Merge Sort algorithm

If we assume that n is a power of 2, each divide step yields two subproblems, both of size exactly n/2.

The base case occurs when n = 1. When n ≥ 2, the time for the merge sort steps is:

- Divide: just compute q as the average of p and r ⇒ D(n) = Θ(1).
- Conquer: recursively solve 2 subproblems, each of size n/2 ⇒ 2T(n/2).
- Combine: MERGE on an n-element subarray takes Θ(n) time ⇒ C(n) = Θ(n).

Here c is a constant that describes the running time for the base case and is also the time per array element for the divide and conquer steps.
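Putting these pieces together gives the recurrence T(n) = 2T(n/2) + cn with T(1) = c. A small Python sketch (the function name is mine) evaluates the recurrence and checks it against the closed form cn(lg n + 1) read off the recursion tree:

```python
import math

def T(n, c=1):
    """Evaluate the merge sort recurrence T(n) = 2T(n/2) + cn, with T(1) = c.
    Assumes n is a power of 2, as on the slides."""
    if n == 1:
        return c
    return 2 * T(n // 2, c) + c * n

n = 1024
# closed form from the recursion tree: c * n * (lg n + 1)
assert T(n) == n * (math.log2(n) + 1)
```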

4. Analyzing Merge Sort algorithm

• Rewrite the recurrence

• Draw a recurrence tree


4. Analyzing Merge Sort algorithm • Continue expanding until the problem sizes get down to 1:

- Number of levels: lg n + 1
- Each level costs cn, so the total cost is cn(lg n + 1) ⇒ Θ(n lg n)

4. Analyzing Merge Sort algorithm

• T(n) = 2^i · T(n/2^i) + i·c·n

The expansion stops when n/2^i = 1 ⇒ i = lg n
⇒ T(n) = 2^(lg n) · T(1) + (lg n)·c·n
⇒ T(n) = n + c·n·lg n

Ignoring the low-order term n and the constant coefficient c ⇒ T(n) = Θ(n lg n).

5. Comparing Insertion Sort and Merge Sort

• Compared to insertion sort, merge sort is asymptotically faster.
• On small inputs, insertion sort may be faster; but for large enough inputs, merge sort will always be faster.

[Figure: running-time curves for insertion sort and merge sort]
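This crossover can be observed empirically. A rough timing sketch (the input sizes are arbitrary choices of mine, and absolute times depend on the machine):

```python
import random
import time

def insertion_sort(a):
    """In-place insertion sort, Θ(n²) in the worst case."""
    for j in range(1, len(a)):
        key, i = a[j], j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

def merge_sort(a):
    """Recursive merge sort, Θ(n lg n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

for n in (100, 1000, 4000):
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter(); insertion_sort(list(data)); t_ins = time.perf_counter() - t0
    t0 = time.perf_counter(); merge_sort(list(data)); t_mrg = time.perf_counter() - t0
    print(f"n={n}: insertion {t_ins:.4f}s, merge {t_mrg:.4f}s")
```

For small n the constant factors dominate and insertion sort can win; as n grows, the quadratic term takes over.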

6. Next Step for Algorithm Analysis

Chapter contents:

Chapter 3 - Theta [Θ] notation; formally interpreting equations containing Θ-notation
Chapter 4 - How to solve recurrence relations; the master theorem
Chapter 5 - Probabilistic analysis of randomized algorithms

Reinforcement of this study.

7. References

▣ Introduction to Algorithms | Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein | The MIT Press | pp. 18-39
▣ Fundamentals of Data Structures in C | Horowitz, Sahni, Anderson-Freed | Computer Science Press | pp. 31-49
▣ Wikipedia, the free encyclopedia | http://www.wikipedia.org | definitions of keywords
▣ Algorithm Design | Jon Kleinberg & Eva Tardos | Pearson International Edition, Addison-Wesley

8. Q & A

Thank you, and Happy New Year!
