Business

Decision Tree Method

The decision tree method is a predictive modeling tool used to make decisions by mapping out possible outcomes and their associated costs, benefits, and probabilities. It visually represents decisions and their potential consequences, making it a valuable tool for businesses to analyze and optimize decision-making processes. This method is particularly useful for identifying the most effective strategies for achieving business objectives.

Written by Perlego with AI-assistance

7 Key excerpts on "Decision Tree Method"

Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.
  • The Engineering Design Primer
    • K. L. Richards (Author)
    • 2020 (Publication Date)
    • CRC Press
      (Publisher)

    ...It is preferable to sleep on it before announcing it. Once a decision has been announced publicly, it is difficult to retract. For very important decisions, it is worth keeping a record of the steps undertaken to reach that decision. This way, if you are ever criticised for making what turns out to be a bad decision, you can justify your thoughts based on the information and processes available to you at the time. 8.8 Having Made the Decision … Finally, and perhaps most importantly, once a decision has been made, don’t waste any time revisiting it with ‘what ifs’. If something does go wrong and the decision needs to be revised, then do so, but otherwise, accept it and move on. 8.9 Introduction to Constructing Decision Trees 8.9.1 What Is a Decision Tree? A decision tree is a map of the possible outcomes of a series of related choices. It allows an individual or organisation to weigh possible actions against one another based on their costs, probabilities or benefits. Decision trees can also be used to drive informal or formal discussions, or to map out an algorithm that predicts the best choice mathematically. A decision tree generally starts with a single point that branches out into possible outcomes. Each of these outcomes subsequently leads to additional nodes, which in turn branch into other possibilities, giving a treelike structure. There are three different types of nodes – chance nodes, decision nodes and endpoint nodes – which can be represented by flowchart symbols. A chance node, represented by a circle, shows the probabilities of certain results. A decision node, represented by a square, shows that a decision is to be made. An endpoint node, represented by a triangle, shows the final outcome of a decision path. 8.9.2 Decision Tree Symbols Table 8.1 shows a selection of symbols used in the construction of a decision tree. TABLE 8.1 Decision Tree Symbols 8.9.3 How to Draw a Decision Tree Drawing a decision tree is straightforward...
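The three node types described in the excerpt can be sketched in a few lines of Python. This is a minimal illustration with a hypothetical product-launch example, not code from the book: decision nodes pick the best branch, chance nodes weight outcomes by probability, and endpoint nodes hold a payoff.

```python
# Sketch of the three node types: endpoint (triangle), chance (circle),
# decision (square). Example data is hypothetical.

def endpoint(value):
    return ("end", value)

def chance(*branches):
    # branches: (probability, node) pairs; probabilities should sum to 1
    return ("chance", branches)

def decision(**options):
    # options: name -> node; the decision-maker picks the best branch
    return ("decision", options)

def expected_value(node):
    kind, data = node
    if kind == "end":
        return data
    if kind == "chance":
        return sum(p * expected_value(child) for p, child in data)
    # decision node: choose the branch with the highest expected value
    return max(expected_value(child) for child in data.values())

# Example: launch a product with uncertain demand, or hold off.
tree = decision(
    launch=chance((0.6, endpoint(100_000)), (0.4, endpoint(-40_000))),
    hold=endpoint(0),
)
print(expected_value(tree))  # 0.6*100000 + 0.4*(-40000) = 44000.0
```

Rolling expected values back from the endpoints toward the root in this way is the standard "fold-back" evaluation of a drawn decision tree.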

  • Data Classification

    Algorithms and Applications

    ...Chapter 4 Decision Trees: Theory and Algorithms Victor E. Lee John Carroll University University Heights, OH [email protected] Lin Liu Kent State University Kent, OH [email protected] Ruoming Jin Kent State University Kent, OH [email protected] 4.1 Introduction One of the most intuitive tools for data classification is the decision tree. It hierarchically partitions the input space until it reaches a subspace associated with a class label. Decision trees are appreciated for being easy to interpret and easy to use. They are enthusiastically used in a range of business, scientific, and health care applications [ 12, 15, 71 ] because they provide an intuitive means of solving complex decision-making tasks. For example, in business, decision trees are used for everything from codifying how employees should deal with customer needs to making high-value investments. In medicine, decision trees are used for diagnosing illnesses and making treatment decisions for individuals or for communities. A decision tree is a rooted, directed tree akin to a flowchart. Each internal node corresponds to a partitioning decision, and each leaf node is mapped to a class label prediction. To classify a data item, we imagine the data item to be traversing the tree, beginning at the root. Each internal node is programmed with a splitting rule, which partitions the domain of one (or more) of the data’s attributes. Based on the splitting rule, the data item is sent forward to one of the node’s children. This testing and forwarding is repeated until the data item reaches a leaf node. Decision trees are nonparametric in the statistical sense: they are not modeled on a probability distribution for which parameters must be learned...
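The traversal the chapter describes can be sketched directly: each internal node applies its splitting rule to one attribute and forwards the item to a child, until a leaf returns its class label. The tree and attribute names below are illustrative, not the chapter's own example.

```python
# Sketch of classification by tree traversal, as described above.

class Leaf:
    def __init__(self, label):
        self.label = label

class Node:
    def __init__(self, attribute, threshold, left, right):
        self.attribute = attribute   # attribute tested by the splitting rule
        self.threshold = threshold   # items with value <= threshold go left
        self.left, self.right = left, right

def classify(node, item):
    # Begin at the root; test and forward until a leaf node is reached.
    while isinstance(node, Node):
        node = node.left if item[node.attribute] <= node.threshold else node.right
    return node.label

# Toy medical-risk tree (hypothetical thresholds).
tree = Node("age", 50,
            Leaf("low-risk"),
            Node("bp", 140, Leaf("low-risk"), Leaf("high-risk")))

print(classify(tree, {"age": 62, "bp": 150}))  # high-risk
print(classify(tree, {"age": 35, "bp": 150}))  # low-risk
```

Note that classification needs no probability distribution or learned parameters, which is what the excerpt means by decision trees being nonparametric.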

  • A User's Guide to Business Analytics

    ...Based on data already in the database, and based on the clinical symptoms and ultimate outcome (death or survival) of the patients, a new patient may be classified into high-risk and low-risk categories. Decision trees have very high applicability in industrial recommender systems. A recommender system recommends items or products to prospective buyers based on their purchase pattern. In many such cases, especially if the store has no information on the customer other than their previous purchases or, in the case of online stores, prospective buyers’ choices and click-stream responses, regression or other model-building procedures will not be applicable. Decision trees are applicable in marketing to identify homogeneous groups for target marketing. Decision trees segment the predictor space into several similar regions and use the mean of the continuous response, or the mode of the categorical response, in each separate region for prediction. 10.1 Algorithm for Tree-Based Methods Classification and regression tree (CART) is a recursive binary splitting algorithm introduced by Breiman et al. (1984). Even though there have been many improvements on this method, the core algorithm is still identical to that of CART. Consider the set of all predictors, continuous and categorical, together. CART does not make any differentiation between categorical and continuous predictors. CART is based on nodes and splits. The full sample is known as the root node. At each split, a variable and one of its levels are selected so that purity at each child node is the highest possible at that level. The basic idea of tree growing is to choose, among all possible splits at each node, the split whose resultant child nodes are the purest. At each split the sample space is partitioned according to one predictor. Only univariate splits are considered, so that the partitions are parallel to the axes...
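One CART split step can be sketched as follows. This is a minimal illustration of "choose the split so the child nodes are purest" using Gini impurity as the purity measure (a common choice for CART, though the excerpt does not name one) on a single predictor with made-up data.

```python
# Sketch of one univariate CART split: try every threshold on one predictor
# and keep the one minimising the size-weighted Gini impurity of the children.

from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # a split must produce two non-empty children
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

xs = [1, 2, 3, 10, 11, 12]          # one continuous predictor
ys = ["a", "a", "a", "b", "b", "b"] # categorical response
print(best_split(xs, ys))  # (3, 0.0): splitting at 3 yields two pure children
```

The full algorithm applies this step recursively to each child node, which is what makes CART a recursive binary splitting procedure.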

  • Artificial Intelligence Marketing and Predicting Consumer Choice
    • Steven Struhl (Author)
    • 2017 (Publication Date)
    • Kogan Page
      (Publisher)

    ...06 Predictive Models: Via Classifications That Grow on Trees This chapter describes the classification tree methods, a remarkable set of approaches that uncover complex relationships in data. Here our focus broadens from predicting shares and understanding variables’ relative importances to developing models that boost the odds of reaching a desired outcome. Several illustrations will take the mystery out of these methods and show how they apply. We also discuss several useful extensions of basic classification trees. Classification trees: understanding an amazing analytical method Now that we have absorbed a great many facts about predicting what happens when you change the features of a product (or service or message), it is time for something completely different. As a reminder, through Chapters 4 and 5 (and bonus online Chapter 1) we have seen highly powerful means of understanding variables’ effects, and other methods that diagnose variables’ importances. Discrete choice and conjoint provide remarkable power in answering what-if-type questions, and the MaxDiff and Q-Sort/Case 5 methods clearly delineate relative importances. Here we shift from focusing on these areas, expanding to methods that also can work to improve the odds of an outcome. Market share and odds are closely related concerns. Improving the odds of an outcome is something different. The goals of doing that include reducing uncertainty and waste. Also, as we will see, these investigations can provide insights that lead to further analyses, and finally propel changes. Classification trees, the subject of this chapter, are the first set of methods that can be turned towards increasing the likelihood of an outcome. Other methods that are pressed into service in this way include the Bayesian networks explained in Chapter 7, and the ensembles and neural networks in Bonus online Chapter 2. Many other methods exist. Classification trees started as more of a promise than a solution...

  • Machine Learning

    An Algorithmic Perspective, Second Edition

    ...CHAPTER 12 Learning with Trees We are now going to consider a rather different approach to machine learning, starting with one of the most common and powerful data structures in the whole of computer science: the binary tree. The computational cost of making the tree is fairly low, but the cost of using it is even lower: 𝒪(log N), where N is the number of datapoints. This is important for machine learning, because querying the trained algorithm should be as fast as possible, since it happens more often and the result is often wanted immediately. This is sufficient to make trees seem attractive for machine learning. However, they do have other benefits, such as the fact that they are easy to understand (following a tree to get a classification answer is transparent, which makes people trust it more than getting an answer from a ‘black box’ neural network). For these reasons, classification by decision trees has grown in popularity over recent years. You are very likely to have been subjected to decision trees if you’ve ever phoned a helpline, for example for computer faults. The phone operators are guided through the decision tree by your answers to their questions. The idea of a decision tree is that we break classification down into a set of choices about each feature in turn, starting at the root (base) of the tree and progressing down to the leaves, where we receive the classification decision. The trees are very easy to understand, and can even be turned into a set of if-then rules, suitable for use in a rule induction system. In terms of optimisation and search, decision trees use a greedy heuristic to perform search, evaluating the possible options at the current stage of learning and taking the one that seems optimal at that point. This works well a surprisingly large amount of the time. 12.1 Using Decision Trees As a student it can be difficult to decide what to do in the evening...
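The excerpt's point that a tree "can even be turned into a set of if-then rules" is easy to demonstrate: each root-to-leaf path becomes one rule. The toy tree and attribute names below are hypothetical, not the book's example.

```python
# Sketch: convert every root-to-leaf path of a tree into an if-then rule.
# A leaf is a plain label; an internal node is (attribute, threshold, left, right).

def rules(node, conditions=()):
    if not isinstance(node, tuple):
        yield (conditions, node)  # leaf: conditions so far imply this label
        return
    attr, thr, left, right = node
    yield from rules(left, conditions + (f"{attr} <= {thr}",))
    yield from rules(right, conditions + (f"{attr} > {thr}",))

tree = ("outlook", 0.5,
        ("wind", 0.5, "play", "stay in"),
        "stay in")

for conds, label in rules(tree):
    print("IF " + " AND ".join(conds) + f" THEN {label}")
# IF outlook <= 0.5 AND wind <= 0.5 THEN play
# IF outlook <= 0.5 AND wind > 0.5 THEN stay in
# IF outlook > 0.5 THEN stay in
```

A balanced tree over N datapoints has depth on the order of log N, which is where the 𝒪(log N) query cost mentioned above comes from.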

  • Creating Theoretical Research Frameworks using Multiple Methods
    • Sergey V. Samoilenko, Kweku-Muata Osei-Bryson (Authors)
    • 2017 (Publication Date)

    ...5 Overview on Decision Tree Induction The chapter provides an overview of decision tree induction. Its main purpose is to introduce the reader to the major concepts underlying this data mining technique, particularly those that are relevant to the chapters that involve the use of this technique. Introduction Decision tree (DT) induction is a popular data mining modeling technique that is increasingly used in information systems research (e.g., Wang & Chuang 2016; Müller et al. 2016; Mansingh et al. 2015a,b; Takieddine & Andoh-Baidoo 2014; Andoh-Baidoo et al. 2012; Osei-Bryson & Ngwenyama 2011; Lee 2010; Samoilenko 2008; Samoilenko & Osei-Bryson 2008; Zhou et al. 2004). A DT is an inverted tree structure representation of a given decision problem (e.g., Figure 5.1). There are two main types of DTs: classification trees (CTs) and regression trees (RTs). For a CT, the target variable takes its values from a discrete domain, while for an RT, the target variable takes its values from an interval or continuous domain (e.g., Osei-Bryson 2004; Ko & Osei-Bryson 2002; Torgo 1999; Kim & Koehler 1995; Breiman et al. 1984). Figure 5.1 Classification tree—example 1. Classification Tree A DT can also be described as a model of a decision problem in the form of interpretable and actionable rules (see Figure 5.1). Associated with each leaf node of the DT is an if–then rule. For a given rule, the condition component of the rule [i.e., the potential predictor variable(s) and their values] is described by the set of relevant internal nodes and branches from the root node to the given leaf node; the action part of the rule is described by the relevant leaf node, which provides the relative frequencies for each class of the target variable. To generate a DT from a given data set, a single variable must be identified as the target (or dependent) variable, and the potential predictor variables must be identified as the input variables...
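The CT/RT distinction above comes down to what a leaf predicts. A small sketch with hypothetical data: a classification tree's leaf predicts the most frequent class among the training cases that reached it, while a regression tree's leaf predicts their mean.

```python
# Sketch of leaf prediction for the two DT types described above.

from statistics import mean, mode

def leaf_prediction(responses, kind):
    if kind == "classification":
        return mode(responses)  # modal class of the discrete target
    return mean(responses)      # mean of the continuous target

# Discrete target (CT leaf) vs. continuous target (RT leaf); data is made up.
print(leaf_prediction(["adopt", "adopt", "reject"], "classification"))  # adopt
print(leaf_prediction([10.0, 12.0, 14.0], "regression"))                # 12.0
```

In practice a CT leaf often stores the full relative-frequency distribution over classes, as the excerpt notes, with the modal class used when a single prediction is required.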

  • The Project Manager's Guide to Making Successful Decisions

    ...APPENDIX C An Introduction to Decision Trees In this appendix, Robert Dees and Ken Gilliam briefly describe some of the fundamental concepts of decision analysis and move to a discussion about one of the favorite structural models of decision analysts: the decision tree. Decisions and Outcomes To think about decision analysis, we must begin with the decision. A decision, as already stated, is an irrevocable allocation of resources. In the project manager’s world, this means that the decision is not made until contracts are signed or money changes hands. We know that we have made a decision when we cannot go back, or at least that we cannot go back without incurring a penalty. Going back would be another decision, and in this case we have at least decided to commit resources up to the amount of the penalty. This penalty could be time, money, public opinion, or any other resource. On the flip side, when we do nothing or wait to decide, we are choosing to allocate resources in a particular way right now, and we might incur an opportunity cost. We must decide about not only what to do but also when to do it. This definition of a decision also implies that a decision is more than just thought; a decision is an action. We would like our decision to be characterized by logical thought, but it isn’t a decision until action takes place. Consider a person who says that he or she is on a diet but routinely visits the pantry for junk food; the dieter hasn’t truly decided on the diet until the actions performed reflect his or her spoken words. Along the same lines, a project manager hasn’t decided on a project management plan, or anything else, until it is implemented. The most important, and still most commonly misunderstood, distinction in decision analysis is that between a decision and an outcome (Howard 2007)...