A discussion of decision trees is best grounded in an analogy from our daily lives. Think about how often we are put in situations where we make choices based on certain conditions, and how each choice leads to a specific result or consequence.
Decision Trees in Our Lives
Decision trees are essentially diagrammatic approaches to problem-solving. As an example, let’s say, while driving a car, you reach an intersection, and you’re required to decide whether to take either a left turn or right turn. You’ll make this decision based on where you’re going.
If we consider other examples, like organizing a closet or buying a car, the same logical step-by-step approach is used to arrive at the final stage. When buying a car, we look at different models and finally choose one based on specific attributes, such as cost, performance, and mileage, the type of fuel it uses, appearance, etc.
The examples above can serve as our use cases. What we essentially do is apply a logical approach to break down a complicated situation or data set, and this same style of logical decision making is applied in decision trees.
The Approach to Decision Trees
If we’re given a problem to solve, we can use a graphical approach to analyze and explain the concept of decision making based on conditions; the diagram will look like an inverted tree with the root at the top and branches spreading underneath.
Why is this so? The root represents the starting position, where we have a set of data or options that we analyze with the help of certain attributes before choosing an action. In an inverted tree diagram, the starting point is called the root node, the internal nodes where conditions are tested are decision nodes, the branches represent the outcomes of each decision, and the terminal nodes, where no further splitting occurs, are called the leaf nodes.
The diagrammatic approach makes it easier to explain the conditions, probabilities, and outcomes visually to others. In plain English or pseudocode (in a programmatic approach), the same logic would be written as nested 'IF... ELSE IF' statements, with the number of levels depending on the number of conditions; loops may also be needed to handle the many iterations required to traverse complex data.
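The nested IF... ELSE logic described above can be sketched directly in code. Below is a minimal illustration in Python; the attribute names and thresholds (cost, mileage) are hypothetical, chosen only to show how each level of a tree maps to one condition:

```python
# A tiny decision tree written out as nested IF...ELSE statements.
# Each IF corresponds to an internal (decision) node; each return is a leaf.
# The attributes and thresholds here are made up for illustration.

def choose_car(cost, mileage):
    """Walk the 'tree' from the root down to a leaf."""
    if cost <= 20000:            # root node: split on cost
        if mileage >= 30:        # second level: split on mileage
            return "buy"         # leaf node
        else:
            return "keep looking"
    else:
        return "too expensive"

print(choose_car(18000, 35))  # -> buy
```

Each added condition would add another level of nesting, which is exactly why deep decision logic is easier to read as a tree diagram than as code.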
Classification, Segregation, Regression
In machine learning, decision trees are used for classification (segregating data into categories) as well as for regression, where we arrive at a numerical output.
In an automated process, a set of algorithms and tools does the actual work of decision making and branching based on the attributes of the data. The data, which is initially unsorted for our purposes, must be analyzed against a variety of attributes in multiple steps and segregated so as to reduce randomness, that is, to achieve lower entropy.
While completing this segregation (given that the same attribute may appear more than once), the algorithm needs to consider the probability of a repeat occurrence of an attribute; for this reason, a decision tree can also be viewed as a type of probability tree. The data at the root node is quite random, and this degree of randomness or messiness is called entropy. As we break down and sort the data, we arrive at more accurately sorted subsets, and the reduction in entropy at each split is called "information gain."
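Entropy and information gain can be made concrete with a short sketch. The two functions below implement the standard Shannon definitions; the "yes"/"no" labels are invented purely for illustration:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(parent, children):
    """Entropy of the parent node minus the weighted entropy of its splits."""
    total = len(parent)
    weighted = sum(len(c) / total * entropy(c) for c in children)
    return entropy(parent) - weighted

# A perfectly mixed node has entropy 1 bit; a split into pure
# subsets recovers all of it as information gain.
parent = ["yes", "yes", "no", "no"]
print(entropy(parent))                                            # -> 1.0
print(information_gain(parent, [["yes", "yes"], ["no", "no"]]))   # -> 1.0
```

A split that leaves the subsets just as mixed as the parent would score an information gain of zero, which is why the algorithm prefers attributes that separate the classes cleanly.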
Decision Tree Algorithms
The most common decision-tree algorithm that works by reducing entropy in this way is ID3, and RStudio is the environment most commonly used to run it in R. The look and feel of the interface is simple: there is a pane for source text (such as command scripts), a pane for command execution (the console), and panes for displaying the output and the environment setup.
A Quick Overview of the RStudio Panes:
- Under the "Packages" tab, users can access installed packages and libraries.
- Under the "Files" tab, there may be folders where the source data is located in Excel or CSV form; this data is imported into an R data frame for analysis.
- The columns of data from CSV files are the attributes or parameters, and users need to specify the following:
- Which parameters are needed
- The condition for the split or segregation
- What percent of data should be sorted
- Whether the outcome from the split process will be a numeric value
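The core of the split decision the user is configuring above is that, at every node, ID3 picks the attribute with the highest information gain. The following is a minimal sketch of that selection step, written in Python rather than R for brevity; the toy data set and attribute names are hypothetical:

```python
from collections import Counter, defaultdict
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, attributes, target):
    """Return the attribute whose split gives the highest information gain,
    which is the choice an ID3-style algorithm makes at every node."""
    base = entropy([r[target] for r in rows])

    def gain(attr):
        groups = defaultdict(list)
        for r in rows:
            groups[r[attr]].append(r[target])  # partition labels by attribute value
        return base - sum(len(g) / len(rows) * entropy(g) for g in groups.values())

    return max(attributes, key=gain)

# Hypothetical toy data: deciding whether to buy based on two attributes.
rows = [
    {"cost": "low",  "looks": "good", "buy": "yes"},
    {"cost": "low",  "looks": "bad",  "buy": "yes"},
    {"cost": "high", "looks": "good", "buy": "no"},
    {"cost": "high", "looks": "bad",  "buy": "no"},
]
print(best_attribute(rows, ["cost", "looks"], "buy"))  # -> cost
```

Here "cost" separates the classes perfectly while "looks" separates them not at all, so the root split lands on cost; the full algorithm simply repeats this choice recursively on each subset.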
Further Analysis with Decision Trees
The objective of this article is not to provide an in-depth look at RStudio syntax, but rather to familiarize you with decision trees, whose approach and mechanism make this kind of analysis much more efficient. Once we have the outcome, we can perform further analysis and compare different sets of data and predictions.
The console will also display various key statistics on the accuracy of the analysis. RStudio also gives the option to generate a diagrammatic representation of the decision tree, displaying the various levels of splits, or to create grids and matrix graphs showing the data distribution.
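At its simplest, the accuracy statistic reported after such an analysis is just the fraction of predictions that match the actual labels. A minimal sketch, with made-up prediction and label lists:

```python
def accuracy(predicted, actual):
    """Fraction of predictions that match the true labels."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical predictions from a tree vs. the held-out true labels.
print(accuracy(["yes", "no", "yes", "no"],
               ["yes", "no", "no",  "no"]))  # -> 0.75
```

Real tools break this number down further (for example, per-class counts in a confusion matrix), but comparing predictions against held-out labels is the underlying idea.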
In summary, the intuitive nature of everyday decision making is reflected in the concept of the decision tree, and tools such as RStudio let the user slice and dice data at any level with a high degree of accuracy. This, in turn, supports a high degree of confidence in the resulting predictions.
Unlock the Power of Decision Trees and Machine Learning
- Master the concepts of supervised, unsupervised, and reinforcement learning and their modeling.
- Gain practical mastery over principles, algorithms, and applications of Machine Learning through a hands-on approach, which includes working on 28 projects and one capstone project.
- Acquire thorough knowledge of the mathematical and heuristic aspects of Machine Learning.
- Understand the concepts and operation of support vector machines, kernel SVM, naive Bayes, decision tree classifier, random forest classifier, logistic regression, K-nearest neighbors, K-means clustering, and more.
- Be able to model a wide variety of robust Machine Learning algorithms, including deep learning, clustering, and recommendation systems.