Business

Decision Trees

Decision trees are a visual representation of decision-making processes, often used in business to analyze and model potential outcomes. They consist of nodes representing decisions or chance events, branches representing the possible choices or outcomes at each node, and leaves representing final outcomes. Decision trees help businesses make informed choices by mapping out decision paths and their associated probabilities.

Written by Perlego with AI-assistance

6 Key excerpts on "Decision Trees"

Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), and each adds context and meaning to key research topics.
  • The Project Manager's Guide to Handling Risk
    • Alan Webb (Author)
    • 2017 (Publication Date)
    • Routledge (Publisher)

    ...If the value to the decision maker of adopting each course of action can be established, it is a relatively simple matter to choose whichever course offers the best value. However, the identified courses of action often lead either to further decisions or to uncertain outcomes. When this happens, further branches can be drawn from the outcome of each of the original branches and the whole diagram develops a tree-like appearance. If all the branches have potential outcomes that can be stated with a high degree of certainty, then it is a straightforward matter to evaluate the overall outcome at the end of each branch, choose the most favourable one, then work backwards through the tree to find the series of decisions that leads to the chosen result. Where outcomes cannot be stated with certainty, there is a risk that whatever decision is made could turn out to be the wrong one. Decision trees can reflect this situation, but in this case some of the branches will represent the decision options (the 'decision-maker's tree') and other branches will represent the outcomes determined by fate or the state of nature ('nature's tree'). In this situation, probability theory can be applied to determine the best decision. Decision trees are a structured way of breaking down any decision problem and then evaluating the outcomes so that the best course of action can be chosen...
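The "work backwards through the tree" procedure this excerpt describes is often called rollback, or folding back. A minimal sketch, in which the node encoding and all payoff figures are invented for illustration:

```python
# Hypothetical rollback ("folding back") of a small decision tree.
# Decision nodes take the best-valued branch; chance ("nature") nodes
# take the probability-weighted average of their branches.

def rollback(node):
    """Return the value of a tree node.

    A node is either a terminal payoff (a number), a decision node
    ("decision", [branches]), or a chance node ("chance", [(p, branch), ...]).
    """
    if isinstance(node, (int, float)):
        return node
    kind, branches = node
    if kind == "decision":
        return max(rollback(b) for b in branches)
    if kind == "chance":
        return sum(p * rollback(b) for p, b in branches)
    raise ValueError(f"unknown node kind: {kind}")

# Invented example: launch a product now (uncertain market) vs. a safe pilot.
tree = ("decision", [
    ("chance", [(0.6, 100), (0.4, -50)]),  # launch: 60% success, 40% failure
    30,                                     # pilot: certain modest payoff
])
print(rollback(tree))  # → 40.0 (launch: 0.6*100 + 0.4*(-50) = 40 > 30)
```

Because the chance branches carry probabilities, the evaluation at each chance node is an expected value rather than a certain outcome, exactly the situation the excerpt calls "nature's tree".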

  • Data Classification: Algorithms and Applications

    ...Chapter 4, Decision Trees: Theory and Algorithms, by Victor E. Lee (John Carroll University, University Heights, OH; [email protected]), Lin Liu (Kent State University, Kent, OH; [email protected]), and Ruoming Jin (Kent State University, Kent, OH; [email protected]). 4.1 Introduction. One of the most intuitive tools for data classification is the decision tree. It hierarchically partitions the input space until it reaches a subspace associated with a class label. Decision trees are appreciated for being easy to interpret and easy to use. They are enthusiastically used in a range of business, scientific, and health care applications [12, 15, 71] because they provide an intuitive means of solving complex decision-making tasks. For example, in business, decision trees are used for everything from codifying how employees should deal with customer needs to making high-value investments. In medicine, decision trees are used for diagnosing illnesses and making treatment decisions for individuals or for communities. A decision tree is a rooted, directed tree akin to a flowchart. Each internal node corresponds to a partitioning decision, and each leaf node is mapped to a class label prediction. To classify a data item, we imagine the data item to be traversing the tree, beginning at the root. Each internal node is programmed with a splitting rule, which partitions the domain of one (or more) of the data’s attributes. Based on the splitting rule, the data item is sent forward to one of the node’s children. This testing and forwarding is repeated until the data item reaches a leaf node. Decision trees are nonparametric in the statistical sense: they are not modeled on a probability distribution for which parameters must be learned...
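The test-and-forward traversal described in this excerpt can be sketched as follows; the `Node` class, the attribute names, and the loan-approval thresholds are all invented for illustration:

```python
# Minimal sketch of decision-tree classification: each internal node holds
# a splitting rule on one attribute; each leaf holds a class label.

class Node:
    def __init__(self, attribute=None, threshold=None,
                 left=None, right=None, label=None):
        self.attribute = attribute  # attribute tested at this node
        self.threshold = threshold  # splitting rule: go left if value < threshold
        self.left, self.right = left, right
        self.label = label          # class label (leaf nodes only)

def classify(node, item):
    """Route a data item from the root to a leaf and return the leaf's label."""
    while node.label is None:                    # stop when we reach a leaf
        if item[node.attribute] < node.threshold:
            node = node.left
        else:
            node = node.right
    return node.label

# Toy tree: approve if income >= 50, or if income < 50 but debt < 10.
tree = Node("income", 50,
            left=Node("debt", 10,
                      left=Node(label="approve"),
                      right=Node(label="reject")),
            right=Node(label="approve"))

print(classify(tree, {"income": 30, "debt": 5}))   # → approve
print(classify(tree, {"income": 30, "debt": 20}))  # → reject
```

Note that nothing here is probabilistic, which reflects the excerpt's point that such trees are nonparametric: the splitting rules are learned from data, not from an assumed distribution.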

  • The Wiley Blackwell Handbook of Judgment and Decision Making
    • Gideon Keren, George Wu (Authors)
    • 2015 (Publication Date)
    • Wiley-Blackwell (Publisher)

    ...Tversky and Shafir (1992; also Shafir & Tversky, 1992) showed that experiment participants violated Savage’s (1954, p. 21) sure-thing principle by preferring the same vacation option under both possible future events (having passed an exam or not) but not preferring it before the exam outcome is known. For relatively more complicated decision trees, Gabaix and Laibson (2000) presented and experimentally verified the descriptive validity of a decision algorithm which simplifies trees by removing branches with low probabilities when a person has scarce cognitive resources for decision making. They called their decision algorithm boundedly rational, since it uses a simpler and quicker process than a fully rational analysis to find the normatively optimal decision. Using a decision tree can help a person (a) avoid violating Savage’s sure-thing principle and/or (b) make more consistent choices. Decision trees are used to aid in analysis and in visualizing the decision-making process in various domains. Lippman and McCardle (2004) analyze a high-profile, high-stakes, high-risk lawsuit involving an heir-claimant to the estate of the deceased Larry Hillblom using decision trees with utility functions. Brandao, Dyer, and Hahn (2005) use a decision tree to calculate real-option valuation problems with complex payoffs and uncertainty associated with the changes in the value of a project over time. Bakir (2008) compared terrorism security measures for cargo-truck border crossings. Pharmaceutical firms use decision trees (and their corresponding influence diagrams; see Howard & Matheson, 2005) to visualize and analyze their pipeline of new drugs (Stonebraker, 2002). Assume that the decision maker is risk neutral and thus chooses the decision alternative that maximizes expected monetary value. (A subsequent section covers how to value outcomes with utility functions for decisions under risk and value functions for decisions under certainty...
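The risk-neutral rule stated at the end of this excerpt, choose the alternative that maximizes expected monetary value (EMV), can be sketched as below. The lawsuit framing loosely echoes the litigation example mentioned above, but every probability and payoff here is invented, not taken from any of the cited studies:

```python
# A risk-neutral decision maker picks the alternative with the highest
# expected monetary value (EMV). All figures below are invented.

def emv(outcomes):
    """Expected monetary value of a list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

alternatives = {
    "settle lawsuit": [(1.0, -200_000)],            # certain settlement cost
    "go to trial":    [(0.3, 0), (0.7, -400_000)],  # 30% win, 70% lose
}
best = max(alternatives, key=lambda a: emv(alternatives[a]))
print(best)  # → settle lawsuit (EMV -200,000 beats -280,000)
```

A risk-averse decision maker would instead apply a utility function to each payoff before averaging, which is the refinement the excerpt's final sentence points toward.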

  • Machine Learning: An Algorithmic Perspective, Second Edition

    ...Chapter 12, Learning with Trees. We are now going to consider a rather different approach to machine learning, starting with one of the most common and powerful data structures in the whole of computer science: the binary tree. The computational cost of making the tree is fairly low, but the cost of using it is even lower: 𝒪(log N), where N is the number of datapoints. This is important for machine learning, since querying the trained algorithm should be as fast as possible, as it happens more often, and the result is often wanted immediately. This is sufficient to make trees seem attractive for machine learning. However, they do have other benefits, such as the fact that they are easy to understand (following a tree to get a classification answer is transparent, which makes people trust it more than getting an answer from a ‘black box’ neural network). For these reasons, classification by decision trees has grown in popularity over recent years. You are very likely to have been subjected to decision trees if you’ve ever phoned a helpline, for example for computer faults. The phone operators are guided through the decision tree by your answers to their questions. The idea of a decision tree is that we break classification down into a set of choices about each feature in turn, starting at the root (base) of the tree and progressing down to the leaves, where we receive the classification decision. The trees are very easy to understand, and can even be turned into a set of if-then rules, suitable for use in a rule induction system. In terms of optimisation and search, decision trees use a greedy heuristic to perform search, evaluating the possible options at the current stage of learning and choosing the one that seems optimal at that point. This works well a surprisingly large amount of the time. 12.1 Using Decision Trees. As a student it can be difficult to decide what to do in the evening...
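The greedy heuristic this excerpt describes needs a score for "the option that seems optimal at that point"; a common choice is information gain based on Shannon entropy. A minimal sketch, in which the evening-activity labels and the candidate split are invented for illustration:

```python
# Sketch of the greedy split evaluation: score every candidate split of
# the training labels and keep the one with the highest information gain.

from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Entropy reduction achieved by splitting `labels` into `groups`."""
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - remainder

labels = ["go out", "go out", "stay in", "stay in"]
# Candidate split (e.g. by "have money?") that separates the classes perfectly:
gain = information_gain(labels, [["go out", "go out"], ["stay in", "stay in"]])
print(gain)  # → 1.0 (a full bit of uncertainty removed)
```

A greedy learner would compute this gain for every available feature, split on the winner, and recurse into each resulting group, never revisiting an earlier choice, which is exactly why it is a heuristic rather than an exhaustive search.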

  • A User's Guide to Business Analytics

    ...Chapter 10, Decision Trees. In the previous chapter we discussed fairly sophisticated regression models with different types of responses, continuous and binary, and different types of predictors: continuous, binary, nominal, and ordinal. We also touched upon the fact that regression models may not always be linear. Polynomial regression and interactions among the predictors may make the model very complicated. If there are hundreds of predictors, then variable selection and incorporation of interactions may not always work out well, primarily due to volume and complexity. In such situations, decision-rule-based models are used to split the sample into homogeneous groups and thereby come up with a prediction. Tree-based models are also classification models, but the main difference between these and ordinary classification models is that here the sample is split successively according to answers to questions like whether X1 ≥ a. All observations with X1 ≥ a are classified into one group and the rest are classified into a second group. Typically, the split is binary, but there are procedures where the split may be multiple. Tree-based models are easily interpretable and they are able to take into account many variables automatically, using only those which are most important. Tree-based models are also useful to identify interactions, which may later be used in regression models. In medical diagnostics, tree-based methods have many applications. A common goal of many clinical research studies is to develop a reliable set of rules to classify new patients into hierarchical risk levels. Based on data already in the database, and on the clinical symptoms and ultimate outcomes (death or survival) of previous patients, a new patient may be classified into high-risk or low-risk categories. Decision trees have very high applicability in industrial recommender systems. A recommender system recommends items or products to prospective buyers based on their purchase patterns...
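The successive binary split on a question like "X1 ≥ a?" can be sketched as below; the toy patient records, the feature names, and the threshold are invented, echoing the medical risk-stratification example above:

```python
# One step of the successive binary splitting described above: answers
# to "row[feature] >= a?" send each observation to one of two groups.
# Each group can then be split again on another question.

def split(rows, feature, a):
    """Partition rows by the answer to the question `row[feature] >= a`."""
    yes = [r for r in rows if r[feature] >= a]
    no = [r for r in rows if r[feature] < a]
    return yes, no

patients = [
    {"age": 70, "bp": 160, "risk": "high"},
    {"age": 45, "bp": 120, "risk": "low"},
    {"age": 62, "bp": 150, "risk": "high"},
    {"age": 30, "bp": 110, "risk": "low"},
]
older, younger = split(patients, "age", 60)
print([p["risk"] for p in older])    # → ['high', 'high']
print([p["risk"] for p in younger])  # → ['low', 'low']
```

In this toy data a single question already yields homogeneous groups; in practice the tree keeps asking questions (on age, blood pressure, and so on) until each group is homogeneous enough to serve as a risk level.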

  • The Project Manager's Guide to Making Successful Decisions

    ...Appendix C, An Introduction to Decision Trees. In this appendix, Robert Dees and Ken Gilliam briefly describe some of the fundamental concepts of decision analysis and move to a discussion about one of the favorite structural models of decision analysts: the decision tree. Decisions and Outcomes. To think about decision analysis, we must begin with the decision. A decision, as already stated, is an irrevocable allocation of resources. In the project manager’s world, this means that the decision is not made until contracts are signed or money changes hands. We know that we have made a decision when we cannot go back, or at least that we cannot go back without incurring a penalty. Going back would be another decision, and in this case we have at least decided to commit resources up to the amount of the penalty. This penalty could be time, money, public opinion, or any other resource. On the flip side, when we do nothing or wait to decide, we are choosing to allocate resources in a particular way right now, and we might incur an opportunity cost. We must decide not only what to do but also when to do it. This definition of a decision also implies that a decision is more than just thought; a decision is an action. We would like our decision to be characterized by logical thought, but it isn’t a decision until action takes place. Consider a person who says that he or she is on a diet but routinely visits the pantry for junk food; the dieter hasn’t truly decided on the diet until the actions performed reflect his or her spoken words. Along the same lines, a project manager hasn’t decided on a project management plan, or anything else, until it is implemented. The most important, and still most commonly misunderstood, distinction in decision analysis is that between a decision and an outcome (Howard 2007)...