269202 ALGORITHMS FOR ISNE
DR. KENNETH COSH
WEEK 1

COURSE DESCRIPTION
Data structures such as sparse matrices, B-trees, tries and graphs; advanced sorting algorithms; advanced searching algorithms; information processing algorithms; algorithms for networking; efficient implementation of algorithms.

THIS WEEK
Review
Introduction to Data Structures and Algorithms
Complexity Analysis

HOW DO WE FIND THE MOST EFFICIENT ALGORITHM?
To compare the efficiency of algorithms, computational complexity can be used. Computational complexity is a measure of how much effort is needed to apply an algorithm, or how much it costs. An algorithm's cost can be considered in different ways, but for our purposes time and space are the critical ones, with time being the most significant.

COMPUTATIONAL COMPLEXITY CONSIDERATIONS
Raw running time is both platform/system and language dependent: an algorithm will run faster on my PC at home than on the PCs in the lab, and a precompiled program written in C++ is likely to be much faster than the same program written in Basic. Therefore, to compare algorithms by raw timing, all would have to be run on the same machine.

COMPUTATIONAL COMPLEXITY CONSIDERATIONS II
When comparing algorithm efficiencies, real-time units such as nanoseconds need not be used. Instead, we use logical units expressing the relationship between 'n', the size of the data, and 't', the time taken to process it.

TIME / SIZE RELATIONSHIPS
Linear: if t = cn, then an increase in the size of the data increases the execution time by the same factor.
Logarithmic: if t = log2 n, then doubling the size 'n' increases 't' by only one time unit.

ASYMPTOTIC COMPLEXITY
Functions relating 'n' and 't' are normally much more complex, but calculating such a function precisely is only important when considering large bodies of data, i.e. large 'n'. Therefore, any terms which don't significantly affect the outcome of the function can be eliminated, producing a function which approximates the algorithm's efficiency. This is called asymptotic complexity.
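To make the time/size relationships above concrete, here is a small Python sketch (my own illustration, not from the slides) counting "logical units" of work. The linear scan does n units of work, so doubling n doubles the count; the binary-search-style halving loop does roughly log2 n units, so doubling n adds only one.

```python
def linear_steps(n):
    """Linear relationship t = cn: one unit of work per element."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def halving_steps(n):
    """Logarithmic relationship t = log2 n: one unit per halving,
    simulating the worst case of a binary search over n items."""
    lo, hi, steps = 0, n, 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        lo = mid + 1  # always continue in the upper half (worst case)
    return steps

# Doubling n doubles the linear count but adds only one halving step.
print(linear_steps(1024), linear_steps(2048))    # 1024 vs 2048
print(halving_steps(1024), halving_steps(2048))  # 10 vs 11
```

Doubling n from 1024 to 2048 doubles the linear count but increases the halving count by exactly one unit, matching the slide's claim.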
EXAMPLE I
Consider this example:
F(n) = n^2 + 100n + log10 n + 1000
For small values of n, the final term is the most significant. However, as n grows, the first term becomes the most significant. Hence, for large 'n', it isn't worth considering the final term. How about the penultimate term?

EXAMPLE II
n        F(n)            n^2 value       %     100n value   %      log10 n  %       1000 value  %
1        1,101           1               0.1   100          9.1    0        0.0     1,000       90.8
10       2,101           100             4.76  1,000        47.6   1        0.05    1,000       47.6
100      21,002          10,000          47.6  10,000       47.6   2        0.01    1,000       4.76
1,000    1,101,003       1,000,000       90.8  100,000      9.1    3        0.0003  1,000       0.09
10,000   101,001,004     100,000,000     99.0  1,000,000    0.99   4        0.0     1,000       0.001
100,000  10,010,001,005  10,000,000,000  99.9  10,000,000   0.099  5        0.0     1,000       0.00

REMEMBER?
Big O? Big Ω? Big Θ? They simplify complexity equations!

DATA STRUCTURES
So, throughout this course we will discuss some algorithms and related data structures... What data structures have we talked about already?
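The percentages in the Example II table can be reproduced with a short Python check (my own illustration, not from the slides): compute each term of F(n) and its share of the total, and watch the n^2 term come to dominate.

```python
import math

def terms(n):
    """The four terms of F(n) = n^2 + 100n + log10(n) + 1000."""
    return [n ** 2, 100 * n, math.log10(n), 1000]

def percentages(n):
    """Each term's share of F(n), as a percentage."""
    t = terms(n)
    total = sum(t)
    return [100 * x / total for x in t]

for n in [1, 10, 100, 1000, 10000, 100000]:
    f_n = sum(terms(n))
    print(n, f_n, [round(p, 3) for p in percentages(n)])
```

At n = 1 the constant 1000 contributes over 90% of F(n), but by n = 100,000 the n^2 term contributes 99.9%, which is why asymptotic analysis keeps only the dominant term and writes F(n) = O(n^2).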