
Complexity analysis

Complexity analysis is the study of the resources required to solve a problem as the input size grows. It helps in understanding the efficiency of algorithms and data structures. The goal is to design algorithms that can solve problems in a reasonable amount of time and space.

Written by Perlego with AI assistance

6 Key excerpts on "Complexity analysis"

  • Algorithmic Graph Theory and Perfect Graphs
    • Martin Charles Golumbic(Author)
    • 2004(Publication Date)
    • North Holland
      (Publisher)
    Chapter 2

    The Design of Efficient Algorithms

    Martin Charles Golumbic

    Publisher Summary

    With the advent of the high-speed electronic computer, new branches of applied mathematics have sprouted forth. One area that has enjoyed a most rapid growth in the past decade is the complexity analysis of computer algorithms. At one level, the relative efficiencies of procedures that solve the same problem are compared. At a second level, one can ask whether one problem is intrinsically harder to solve than another problem. It may even turn out that a task is too hard for a computer to solve within a reasonable amount of time. Measuring the costs to be incurred by implementing various algorithms is a vital necessity in computer science, but it can be a formidable challenge. The chapter discusses the differences between computability and computational complexity. Computational complexity deals precisely with the quantitative aspects of problem solving. It addresses the issue of what can be computed within a practical or reasonable amount of time and space by measuring the resource requirements exactly or by obtaining upper and lower bounds. Complexity is actually determined on three levels: the problem, the algorithm, and the implementation.

    1 The Complexity of Computer Algorithms

    With the advent of the high-speed electronic computer, new branches of applied mathematics have sprouted forth. One area that has enjoyed a most rapid growth in the past decade is the complexity analysis of computer algorithms. At one level, we may wish to compare the relative efficiencies of procedures that solve the same problem. At a second level, we can ask whether one problem is intrinsically harder to solve than another problem. It may even turn out that a task is too hard for a computer to solve within a reasonable amount of time. Measuring the costs to be incurred by implementing various algorithms is a vital necessity in computer science, but it can be a formidable challenge.
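    To make the first level concrete, here is a minimal sketch (our own illustration, not drawn from the excerpt; the function names are ours) of two procedures that solve the same problem, summing the first n positive integers, with very different costs:
    ```python
    # Two procedures for the same problem: the sum of the first n positive integers.
    # Counting elementary operations makes their relative efficiency concrete.

    def sum_by_loop(n):
        """O(n): performs one addition per integer."""
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_by_formula(n):
        """O(1): a fixed number of arithmetic operations, regardless of n."""
        return n * (n + 1) // 2

    if __name__ == "__main__":
        for n in (10, 1_000, 1_000_000):
            assert sum_by_loop(n) == sum_by_formula(n)
            print(f"n={n}: loop does about {n} additions; the formula does a constant amount of work")
    ```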
  • Algorithm and Design Complexity
    • Anli Sherine, Mary Jasmine, Geno Peter, S. Albert Alexander(Authors)
    • 2023(Publication Date)
    • CRC Press
      (Publisher)
    1
    Algorithm Analysis
    DOI: 10.1201/9781003355403-1

    1.1 ALGORITHM ANALYSIS

    Why do you need to study algorithms? If you are going to be a computer professional, there are both practical and theoretical reasons to study algorithms. From a practical standpoint, you have to know a standard set of important algorithms from different areas of computing; in addition, you should be able to design new algorithms and analyze their efficiency. From a theoretical standpoint, the study of algorithms, sometimes called algorithmics, has come to be recognized as the cornerstone of computer science.
    Another reason for studying algorithms is their usefulness in developing analytical skills. After all, algorithms can be seen as special kinds of solutions to problems—not just answers but precisely defined procedures for getting answers. Consequently, specific algorithm design techniques can be interpreted as problem-solving strategies that can be useful regardless of whether a computer is involved. Of course, the precision inherently imposed by algorithmic thinking limits the kinds of problems that can be solved with an algorithm.
    There are many algorithms that can solve a given problem, and they will have different characteristics that determine how efficiently each operates. When we analyze an algorithm, we first have to show that the algorithm does properly solve the problem, because if it doesn’t, its efficiency is not important. Analyzing an algorithm determines the amount of ‘time’ that an algorithm takes to execute. This is not really a number of seconds or any other clock measurement but rather an approximation of the number of operations that an algorithm performs. The number of operations is related to the execution time, so we will sometimes use the word time to refer to this operation count.
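    As a rough illustration of counting operations instead of clock time (our own sketch, not taken from the excerpt), the two searches below report how many comparisons they perform on the same input:
    ```python
    # Instead of timing with a clock, count the operations an algorithm performs;
    # here the counted operation is a comparison against the target value.

    def linear_search(items, target):
        """Return (index or -1, number of comparisons performed)."""
        comparisons = 0
        for i, value in enumerate(items):
            comparisons += 1
            if value == target:
                return i, comparisons
        return -1, comparisons

    def binary_search(sorted_items, target):
        """Return (index or -1, number of comparisons performed). Assumes sorted input."""
        comparisons = 0
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            comparisons += 1
            if sorted_items[mid] == target:
                return mid, comparisons
            if sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1, comparisons

    if __name__ == "__main__":
        data = list(range(1_000_000))
        for search in (linear_search, binary_search):
            _, ops = search(data, 999_999)
            print(f"{search.__name__}: {ops} comparisons")  # roughly 1,000,000 vs roughly 20
    ```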
  • Consumer Optimization Problem Solving
    • Alfred L Norman(Author)
    • 2014(Publication Date)
    • WSPC
      (Publisher)
    Analyzing human algorithms requires a very different approach than analyzing computer algorithms. Computer algorithms are specified so that they can be mathematically analyzed. Human algorithms must be inferred from human behavior. The fact that computational complexity definitions are based on a fixed cost of elementary operations does not create a problem in analyzing human procedures. With learning, the cost of performing an elementary operation is likely to decline. This does not affect the computational complexity definitions as long as there is a lower bound representing the learning limit. The definitions strip constants in determining to which equivalence class an algorithm belongs.
    To determine the difficulty of solving a problem with a computer algorithm, the researcher constructs an algorithm and shows that a more efficient algorithm does not exist, thus establishing the computational complexity of the problem. The approach is to use mathematical analysis of the specified algorithm, for example [Norman and Jung (1977)]. A traditional computational complexity approach can be used to study economic theory, such as providing an explanation for money [Norman (1987)] and a Bayesian explanation of F. Knight’s uncertainty versus risk [Norman and Shimer (1994)]. For a discussion of the relationship between computability, complexity and economics, see [Velupillai (2010)] and [Velupillai (2005)].
    In studying procedures used by humans, the procedure must be inferred from the subjects’ behavior. Analysis of human procedures can be done by watching the hand motions of subjects handling the test objects or by observing which buttons the subjects click on a computer screen. Also, asking the subjects what they are doing can be illuminating. Algorithms that are members of different equivalence classes can be identified using statistical techniques such as regression analysis. We use these approaches in determining what procedures humans use in ordering alternatives in Chapter 3.
  • Designing with Multi-Agent Systems

    A Computational Methodology for Form-Finding Using Behaviors

    • Evangelos Pantazis(Author)
    • 2024(Publication Date)
    • De Gruyter
      (Publisher)
    If we consider a computer with particular hardware and software specifications, then algorithmic complexity is defined as the length of the shortest program that describes all the necessary steps for performing a process, that is, printing a string.
    Algorithmic complexity, in many cases, fails to meet our intuitive sense of what is complex and what is not. For instance, if we compare Aristotle’s works to an equally long passage written by the proverbial monkeys, the latter is likely to be more random and therefore have much greater complexity. Bennett introduced logical depth as a way to extend algorithmic complexity, averaging the number of steps over the relevant programs using a natural weighting procedure that heavily favors shorter programs [83, 110]. Suppose you want to perform a task of trivial algorithmic complexity, such as printing a message consisting only of 0s; then the depth is very small. But if we consider the example above with the random passage from the monkeys, the algorithmic complexity is very high, yet the depth is low, since the shortest program is Print followed by the message string. In the field of mathematical problem solving, computational complexity is defined as the difficulty of executing a task in terms of computational resources [111].
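    Kolmogorov-style algorithmic complexity is uncomputable in general, but the intuition in this passage can be illustrated by using compressed length as a crude stand-in for the length of the shortest description. The sketch below is our own illustration under that assumption, not Bennett’s construction:
    ```python
    # Illustration only: compressed length is an upper-bound proxy for the length
    # of the "shortest description" of a string, not the formal definition.
    import random
    import zlib

    n = 100_000
    regular = b"0" * n                                        # trivially describable: "print n zeros"
    random.seed(42)
    monkeys = bytes(random.randrange(256) for _ in range(n))  # essentially incompressible

    for name, s in (("all zeros", regular), ("random bytes", monkeys)):
        print(f"{name}: raw {len(s)} bytes, compressed {len(zlib.compress(s))} bytes")
    ```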
    In computer science, computational complexity is the amount of computational effort that goes into solving a decision problem starting from a given problem formulation [112]. Within this classification, non-deterministic polynomial time (NP) complexity is one of the most fundamental complexity classes and is defined as the set of all decision problems for which the instances where the answer is yes have efficiently verifiable proofs that the answer is indeed “yes” [113, 114]. In other words, computational complexity describes how the time required to solve a problem using a currently known algorithm grows with the size of the problem. Depending on that relationship, problems are classified as polynomial time (P), NP, NP-complete or NP-hard, which describes whether a problem can be solved and how quickly. For NP-complete problems, for instance, although a solution can be verified as correct, there is no known way to solve the problem efficiently [115
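    The asymmetry behind NP, that a proposed solution can be checked quickly even when no efficient way to find one is known, can be sketched with SUBSET-SUM. The example below is our own illustration, not taken from the cited sources:
    ```python
    # Sketch of "easy to verify, hard to find", using SUBSET-SUM as the decision problem.
    from itertools import combinations

    def verify(numbers, target, certificate):
        """Polynomial-time check: does the proposed subset really sum to the target?"""
        return all(x in numbers for x in certificate) and sum(certificate) == target

    def find_certificate(numbers, target):
        """Exhaustive search: tries up to 2^n subsets in the worst case."""
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return list(subset)
        return None

    if __name__ == "__main__":
        numbers = [3, 34, 4, 12, 5, 2]
        target = 9
        cert = find_certificate(numbers, target)    # slow in general (exponential search)
        print(cert, verify(numbers, target, cert))  # checking the certificate is fast
    ```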
  • Theory of Computation Simplified

    Simulate Real-world Computing Machines and Problems with Strong Principles of Computation (English Edition)

    • Dr. Varsha H. Patil, Dr. Vaishali S. Pawar, Dr. Swati A. Bhavsar, Dr. Aboli H. Patil(Authors)
    • 2022(Publication Date)
    • BPB Publications
      (Publisher)
    Algorithmics is a field of computer science, defined as the study of algorithms. The overall goal of algorithmics is an understanding of the complexity of algorithms. The study includes the design and analysis of algorithms.
    Although computer scientists have studied algorithms and algorithm efficiency extensively, the field has not been given an official name. Brassard and Bratley coined the term algorithmics, which they defined as the systematic study of the fundamental techniques used to design and analyze efficient algorithms.
    When we want to solve a problem, there may be a choice of algorithms available. In such a case, it is important to decide which one to use. Depending on our priorities and the limits of the equipment available to us, we may want to choose an algorithm that takes the least time or uses the least storage or is the easiest to program, and so on. The answer can depend on many factors, such as the numbers involved, the way the problem is presented, or the speed and storage capacity of the available computing equipment.
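    As a hedged illustration of these trade-offs (our own sketch, not from the text), the same problem, computing Fibonacci numbers, can be solved with different balances of running time, storage, and ease of programming:
    ```python
    # Three algorithms for the same problem with different time/space/effort profiles.
    from functools import lru_cache

    def fib_naive(n):
        """Easiest to write: exponential time, constant extra storage."""
        return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

    @lru_cache(maxsize=None)
    def fib_memo(n):
        """Linear time, but keeps O(n) previously computed values in memory."""
        return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

    def fib_iter(n):
        """Linear time and constant extra storage, slightly more care to program."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    if __name__ == "__main__":
        assert fib_naive(20) == fib_memo(20) == fib_iter(20) == 6765
    ```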
    It may be possible that none of the available algorithms is entirely suitable, and therefore, we have to design a new algorithm of our own. Algorithmics is the science that lets us evaluate the effect of these various external factors on the available algorithms, so that we can choose the one that best suits our particular circumstances; it is also the science that tells us how to design a new algorithm.
    Algorithmics includes the following:
    • How to devise algorithms
  • Ant Colony Optimization and Constraint Programming
    • Christine Solnon(Author)
    • 2013(Publication Date)
    • Wiley-ISTE
      (Publisher)
    Chapter 2

    Computational Complexity

    A problem is said to be combinatorial if it can be solved by reviewing a finite set of combinations. Most often, this kind of solving process runs into an explosion of the number of combinations to review. This is the case, for example, when a timetable has to be designed. If there are only a few courses to schedule, the number of combinations is rather small and the problem is quickly solved. However, adding a few more courses may result in such an increase in the number of combinations that it is no longer possible to find a solution within a reasonable amount of time.
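    A back-of-the-envelope sketch of this explosion (our own illustration, with made-up numbers of courses and time slots): if each course is assigned independently to one of the available time slots, the number of candidate timetables to review is slots raised to the power of the number of courses.
    ```python
    # Rough illustration: the candidate count grows exponentially with the number of courses.
    SLOTS = 10
    for courses in (3, 6, 9, 12, 15):
        candidates = SLOTS ** courses
        print(f"{courses:2d} courses, {SLOTS} slots: {candidates:,} candidate timetables")
    ```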
    This kind of combinatorial explosion is formally characterized by the theory of computational complexity, which classifies problems with respect to the difficulty of solving them. We introduce algorithm complexity in section 2.1, which allows us to evaluate the amount of resources needed to run an algorithm. In section 2.2, we introduce the main complexity classes and describe the problems we are interested in within this classification. In section 2.3, we show that some instances of a problem may be more difficult to solve than others or, in other words, that the input data may change the difficulty involved in finding a solution in practice. We introduce the concepts of phase transition and search landscape, which may be used to characterize instance hardness. Finally, in section 2.4, we provide an overview of the main approaches that may be used to solve combinatorial problems.

    2.1. Complexity of an algorithm

    Algorithmic complexity characterizes how an algorithm scales in terms of the computational resources it consumes. In particular, the time complexity of an algorithm gives an order of magnitude of the number of elementary instructions that are executed at run time. It is used to compare different algorithms independently of a given computer or programming language.
    Time complexity usually depends on the size of the input data of the algorithm. Indeed, given a problem, we usually want to solve different instances of this problem, where each instance corresponds to different input data.
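    A small sketch of this idea (ours, not from the chapter): expressing the instruction count as a function of the instance size n makes the order of magnitude visible, since doubling n roughly doubles the count for a linear algorithm and roughly quadruples it for a quadratic one.
    ```python
    # Doubling experiment on operation counts, independent of machine or language.

    def count_linear_ops(n):
        ops = 0
        for _ in range(n):          # e.g., one pass over the input
            ops += 1
        return ops

    def count_quadratic_ops(n):
        ops = 0
        for i in range(n):          # e.g., comparing every pair of elements
            for j in range(n):
                ops += 1
        return ops

    for n in (500, 1_000, 2_000):
        print(f"n={n}: linear {count_linear_ops(n):,} ops, quadratic {count_quadratic_ops(n):,} ops")
    ```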