Growth of Functions

Introduction

One of the important criteria in evaluating algorithms is the time it takes to complete a job. To allow a meaningful comparison of algorithms, the estimate of computation time must be independent of the programming language, compiler, and computer used; must reflect the size of the problem being solved; and must not depend on specific instances of the problem being solved. The quantities most often used for the estimate are the worst case and average execution times of an algorithm, and they are represented by the number of times some key operation is executed to perform the required computation.

As an example for an estimate of computation time, let us consider the sequential search algorithm.

Example: Algorithm for Sequential Search

Algorithm   SeqSearch(L, n, x)
L is an array with n entries indexed 1, .., n, and x is the key to be searched for in L.
Output: if x is in L , then output its index, else output 0.

   index := 1;
   while ( index ≤ n  and  L[ index ] ≠ x )
         index := index + 1 ;
   if ( index > n ) , then index := 0 ;
   return index .

The worst case time of this algorithm can be estimated as follows: First, the key operation is the comparison of keys, that is, comparing L[ index ] with x . Most search algorithms (if not all) need such comparisons of keys. The largest number of executions of this comparison is n , which occurs when x is not in L or is at L[n] , in which case the while loop is executed n times. This quantity n is then used as an estimate of the worst case time of this sequential search algorithm.

Note that in the while loop two comparisons and one addition are performed, so one could just as well use 3n as an estimate. Also note that the very first line and the last two lines are not counted. The reasons are, first, that differences in implementation details such as languages, commands, compilers and machines make differences in constant factors meaningless, and second, that for large values of n the highest-degree term in n dominates the estimate. Since we are mostly interested in the behavior of algorithms for large values of n , lower-order terms can be ignored compared with the highest term. The quantity that addresses these issues is called big-oh, and that is what we are going to study here.
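As a sketch of the same algorithm in Python (rather than the pseudocode above), with a counter added to make the comparison-count estimate concrete:

```python
def seq_search(L, x):
    """Sequential search: return (1-based index of x in L, comparisons made),
    or (0, comparisons) if x is not in L.

    The comparison count illustrates the worst-case estimate discussed above.
    """
    comparisons = 0
    for index, item in enumerate(L, start=1):
        comparisons += 1          # one key comparison: L[index] vs. x
        if item == x:
            return index, comparisons
    return 0, comparisons

print(seq_search([2, 7, 5, 9], 5))   # (3, 3): found at index 3
print(seq_search([2, 7, 5, 9], 4))   # (0, 4): worst case, all n = 4 keys compared
```

Searching for an absent key forces all n key comparisons, matching the worst-case estimate of n.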


Big-Oh

Definition (big-oh): Let f and g be functions from the set of integers (or the set of real numbers) to the set of real numbers. Then f(x) is said to be O( g(x) ) , which is read as f(x) is big-oh of g(x) , if and only if there are constants C and n0 such that
     | f(x) |  ≤  C | g(x) |
whenever x > n0 .

The relationship between f and g when f is big-oh of g can be pictured as follows: for all x to the right of n0 , the graph of C | g(x) | stays on or above the graph of | f(x) | .

For example, 5x + 10 is big-oh of x^2 , because 5x + 10 < 5x^2 + 10x^2 = 15x^2 for x > 1 .
Hence for C = 15 and n0 = 1 ,   | 5x + 10 |  ≤  C | x^2 | .

Similarly it can be seen that 3x^2 + 2x + 4 < 9x^2 for x > 1 . Hence 3x^2 + 2x + 4 is O( x^2 ) .
In general, we have the following theorem:

Theorem 1: a_n x^n + ... + a_1 x + a_0   is   O( x^n )   for any real numbers a_n , ..., a_0 and any nonnegative integer n .
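A quick numeric illustration (a sanity check on a sample range, not a proof) that the witness constants C = 9 and n0 = 1 from the second example really do bound the polynomial:

```python
# Sanity check (not a proof): for p(x) = 3x^2 + 2x + 4, the witness
# constants C = 9 and n0 = 1 from the text satisfy |p(x)| <= C|x^2|
# on a sampled range of x > n0.
def p(x):
    return 3 * x**2 + 2 * x + 4

C, n0 = 9, 1
assert all(abs(p(x)) <= C * abs(x**2) for x in range(n0 + 1, 1000))
print("bound holds on the sampled range")
```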


Growth of Combinations of Functions

Big-oh has some useful properties. Some of them are listed as theorems here.
Let us start with the definition of the max function.
Definition(max function): Let f1(x) and f2(x) be functions from a set A to a set of real numbers B. Then max( f1(x) , f2(x) ) is the function from A to B that takes as its value at each point x the larger of f1(x) and f2(x) .



Theorem 2: If  f1(x) is O( g1(x) ) , and   f2(x) is O( g2(x) ) , then  (f1 + f2)( x )  is   O( max( g1(x) , g2(x) ) ) .

From this theorem it follows that if  f1(x)  and  f2(x) are O( g(x) ) , then  (f1 + f2)( x )  is O( g(x) ) , and
(f1 + f2)( x )  is  O( max( f1(x) , f2(x) ) ) .

Theorem 3: If  f1(x) is O( g1(x) ) , and   f2(x) is O( g2(x) ) , then  (f1 * f2)( x )  is   O( g1(x) * g2(x) ) .
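Theorems 2 and 3 can be illustrated numerically with specific witness constants. The constants below are assumptions chosen for this illustration: 5x + 10 ≤ 15x and 3x^2 + 2x + 4 ≤ 9x^2 for x > 1, and the standard proofs of the theorems then yield C1 + C2 for the sum and C1 * C2 for the product. This is a sketch, not a proof:

```python
# Illustrating Theorems 2 and 3 with witness constants (a sketch, not a proof).
# Assumed bounds: f1(x) = 5x + 10 is O(x) with C1 = 15, and
# f2(x) = 3x^2 + 2x + 4 is O(x^2) with C2 = 9, both for x > 1.
# Then f1 + f2 should be O(max(x, x^2)) = O(x^2) with C = C1 + C2,
# and f1 * f2 should be O(x * x^2) = O(x^3) with C = C1 * C2.
f1 = lambda x: 5 * x + 10
f2 = lambda x: 3 * x**2 + 2 * x + 4

for x in range(2, 1000):
    assert abs(f1(x) + f2(x)) <= (15 + 9) * x**2
    assert abs(f1(x) * f2(x)) <= (15 * 9) * x**3
print("witness constants work on the sampled range")
```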


Big-Omega and Big-Theta

Big-oh expresses a "less than or equal to" relation between functions for large values of the variable. It is also possible to consider "greater than or equal to" and "equal to" relations in a similar way: big-omega is used for the former and big-theta for the latter.

Definition (big-omega): Let f and g be functions from the set of integers (or the set of real numbers) to the set of real numbers. Then f(x) is said to be Ω( g(x) ) , which is read as f(x) is big-omega of g(x) , if there are constants C > 0 and n0 such that
     | f(x) |  ≥  C | g(x) |
whenever x > n0 .

Definition (big-theta): Let f and g be functions from the set of integers (or the set of real numbers) to the set of real numbers. Then f(x) is said to be Θ( g(x) ) , which is read as f(x) is big-theta of g(x) , if f(x) is O( g(x) ) and Ω( g(x) ) . We also say that f(x) is of order g(x) .

For example,   3x^2 - 3x - 5   is Ω( x^2 ) , because   3x^2 - 3x - 5  ≥  x^2   for integers x > 2   ( C = 1 , n0 = 2 ) .
Since it is also O( x^2 ) by Theorem 1, it is Θ( x^2 ) .
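The Θ( x^2 ) claim can be checked numerically on a sample range. The upper constant C = 3 below is an assumption chosen for this illustration (it works because subtracting 3x + 5 only decreases the value); this is a sanity check, not a proof:

```python
# Checking the Theta(x^2) claim for f(x) = 3x^2 - 3x - 5 on a sampled range.
f = lambda x: 3 * x**2 - 3 * x - 5

# Big-Omega witness from the text: C = 1, n0 = 2.
assert all(abs(f(x)) >= 1 * x**2 for x in range(3, 1000))
# Big-Oh witness (assumed for this illustration): C = 3, n0 = 2.
assert all(abs(f(x)) <= 3 * x**2 for x in range(3, 1000))
print("f is sandwiched between x^2 and 3x^2 on the sampled range")
```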
In general, we have the following theorem:

Theorem 4: a_n x^n + ... + a_1 x + a_0   is   Θ( x^n )   for any real numbers a_n , ..., a_0 with a_n ≠ 0 and any nonnegative integer n .


Little-Oh and Little-Omega

If f(x) is O( g(x) ) but not Θ( g(x) ) , then f(x) is said to be o( g(x) ) , which is read as f(x) is little-oh of g(x) . Little-omega ( ω ) is defined similarly with respect to Ω .
For example,   x is o( x^2 ) ,   x^2 is o( 2^x ) ,   2^x is o( x! ) , etc.







Calculation of Big-Oh

Basic knowledge of limits and derivatives of functions from calculus is necessary here.

Big-oh relationships between functions can be tested using limits of functions, as follows:

Let f(x) and g(x) be functions from a set of real numbers to a set of real numbers, and let lim denote the limit as x goes to infinity. Then

1. If lim f(x)/g(x) = 0 , then   f(x) is o( g(x) ) .
2. If lim f(x)/g(x) = ∞ , then   g(x) is o( f(x) ) .
3. If   0 < lim f(x)/g(x) < ∞ , then   f(x) is Θ( g(x) ) .
4. If lim f(x)/g(x) < ∞ , then   f(x) is O( g(x) ) .

For example, lim ( 4x^3 + 3x^2 + 5 )/( x^4 - 3x^3 - 5x - 4 )
= lim ( 4/x + 3/x^2 + 5/x^4 )/( 1 - 3/x - 5/x^3 - 4/x^4 ) = 0 .
Hence   ( 4x^3 + 3x^2 + 5 )   is   o( x^4 - 3x^3 - 5x - 4 ) ,   or equivalently,   ( x^4 - 3x^3 - 5x - 4 )   is   ω( 4x^3 + 3x^2 + 5 ) .
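A numeric illustration of this limit test: printing the ratio at growing values of x shows it shrinking toward 0, consistent with the numerator being little-oh of the denominator. This only samples the ratio; it does not establish the limit:

```python
# Numeric illustration of the limit test above: the ratio
# (4x^3 + 3x^2 + 5) / (x^4 - 3x^3 - 5x - 4) shrinks toward 0 as x grows.
def ratio(x):
    return (4 * x**3 + 3 * x**2 + 5) / (x**4 - 3 * x**3 - 5 * x - 4)

for x in (10, 100, 1000, 10000):
    print(x, ratio(x))
```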

More on Calculation --- L'Hospital (L'Hôpital)'s Rule

lim f(x)/g(x) is not always easy to calculate. Consider x^2/3^x , for example, where both x^2 and 3^x go to ∞ as x goes to ∞ and there is no apparent factor common to both. One tool we may be able to use in such cases is L'Hospital's Rule, which is given as a theorem below.

Theorem 5 ( L'Hospital ): If lim f(x) = ∞ and lim g(x) = ∞ , and  f(x)  and  g(x)  have the first derivatives,   f '(x)   and   g '(x) ,  respectively,   then lim f(x)/g(x)  = lim f '(x)/g '(x) .

This also holds when lim f(x) = 0 and lim g(x) = 0 ,   instead of lim f(x) = ∞ and lim g(x) = ∞ .

For example, lim x/e^x   = lim 1/e^x   = 0 ,   because ( e^x )' = e^x ,   where e is the base of the natural logarithm.

Similarly lim ln x/x   = lim ( 1/x )/1   = lim 1/x   = 0 .

Note that this rule can be applied repeatedly as long as the conditions are satisfied. So, for example, lim x^2/e^x = lim 2x/e^x = lim 2/e^x = 0 . You can find an article about L'Hospital and exercise questions on L'Hospital's rule for example at http://gauss.hawcc.hawaii.edu/w3calc/m205c/hospital.html.
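The repeated-application example can be checked numerically: sampling x^2/e^x at growing x shows the ratio collapsing toward 0, as the limit computation predicts. Again, this samples the function rather than proving the limit:

```python
import math

# Numeric check of the repeated L'Hospital computation above:
# x^2 / e^x tends to 0 as x grows.
for x in (1, 10, 20, 40):
    print(x, x**2 / math.exp(x))
```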

