The table below contains 15 examples described by three attributes (Color, Class, and Degree). You will use this data to conduct a paper-and-pencil walkthrough of two different learning techniques. NOTE: The ID #s are there to help you keep track of where you are in the process. They are NOT part of the classification.
Use the "current best hypothesis" technique (discussed in session 33) to develop a model for the data as each training example is added to your knowledge domain. At each step in the process, your model should be consistent with that training example as well as with all prior examples. When confronted with a choice among several new conjunctions or disjunctions of the same simplicity level, always pick the "leftmost" option in the table.
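The generalize/specialize steps you will perform by hand can be sketched in code. The version below is a much-simplified, non-backtracking sketch for purely conjunctive hypotheses over discrete attributes (the full current-best-hypothesis algorithm backtracks when it paints itself into a corner); the attribute names and values in the example run are made up for illustration.

```python
def learn_cbh(examples, attributes):
    """Simplified current-best-hypothesis learner for conjunctive concepts.

    examples:   list of (attribute-dict, boolean label) pairs
    attributes: attribute names in table order (used for the
                "leftmost" tie-breaking convention)

    No backtracking, so unlike the full algorithm it can fail on some
    example orderings; for hand-worked exercises it mirrors the basic
    generalize/specialize moves.
    """
    hypothesis = None   # no model until the first positive example
    seed = None         # first positive example, used when specializing
    for attrs, label in examples:
        if hypothesis is None:
            if label:   # start with the most specific hypothesis
                hypothesis = dict(attrs)
                seed = attrs
            continue
        predicted = all(attrs.get(a) == v for a, v in hypothesis.items())
        if predicted == label:
            continue    # consistent: keep the current model
        if label:       # false negative: generalize by dropping
                        # the conditions this example fails
            hypothesis = {a: v for a, v in hypothesis.items()
                          if attrs.get(a) == v}
        else:           # false positive: specialize by adding the
                        # leftmost condition that excludes this example
            for a in attributes:
                if a not in hypothesis and seed[a] != attrs[a]:
                    hypothesis[a] = seed[a]
                    break
    return hypothesis

# Hypothetical run (values invented, not from the assignment's table):
examples = [
    ({"Color": "Red",  "Degree": "High"}, True),
    ({"Color": "Red",  "Degree": "Low"},  True),   # forces a generalization
    ({"Color": "Blue", "Degree": "High"}, False),  # already excluded
]
print(learn_cbh(examples, ["Color", "Degree"]))  # {'Color': 'Red'}
```

The second example is a false negative for the initial hypothesis, so the Degree condition is dropped; the third example is correctly rejected, so no further change is needed.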
| Training example # | Is this example consistent with the current model? | If not, false positive or false negative? | Current consistent hypothesis (model) |
|---|---|---|---|
Calculate the initial entropy of this problem.
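The entropy of a class distribution is H = -Σ p_i log2(p_i). A small helper for checking your hand calculation (the 9/6 split below is hypothetical, not the actual distribution in the assignment's table):

```python
import math

def entropy(counts):
    """Shannon entropy (in bits) of a class distribution given as counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Hypothetical example: if 9 of the 15 examples were in one class and 6
# in the other, the initial entropy would be about 0.971 bits:
print(entropy([9, 6]))
```

A perfectly even split gives 1.0 bit, and a perfectly pure set gives 0 bits, which is why pure branches become leaves later in the exercise.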
We want to use this data, the ID3 algorithm, and the concept of change in entropy (information gain) to construct an accurate yet compact decision tree for this domain. To determine the optimal first attribute, calculate the entropy remaining after independently dividing the data using each of the three attributes as the first choice. Complete the table below.
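Information gain for a split is the parent's entropy minus the size-weighted average entropy of the children. A sketch for checking each row of the table (the rows below are made-up stand-ins for the assignment's data, and the assumption that "Class" is the target column is mine):

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attribute, label_key="Class"):
    """Gain = H(parent) - weighted average H(children) after the split."""
    parent = entropy([e[label_key] for e in examples])
    groups = defaultdict(list)
    for e in examples:
        groups[e[attribute]].append(e[label_key])
    remainder = sum(len(g) / len(examples) * entropy(g)
                    for g in groups.values())
    return parent - remainder

# Hypothetical rows (values invented for illustration):
data = [
    {"Color": "Red",  "Degree": "High", "Class": "Yes"},
    {"Color": "Red",  "Degree": "Low",  "Class": "No"},
    {"Color": "Blue", "Degree": "High", "Class": "Yes"},
    {"Color": "Blue", "Degree": "High", "Class": "Yes"},
]
print(information_gain(data, "Color"))   # ≈ 0.311
print(information_gain(data, "Degree"))  # ≈ 0.811 (better split here)
```

In this toy run, splitting on Degree leaves both children pure (remainder 0), so its gain equals the parent entropy; that is exactly the comparison the table asks you to make for each attribute.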
| Attribute | Information gain if splitting on the attribute |
|---|---|
As we know from our study of entropy and the ID3 algorithm, the attribute whose split leaves the lowest remaining entropy provides the greatest information gain. Thus, using your results from the table above, split the fifteen training examples on the best attribute and begin to construct the resulting tree. Notice that some of the resulting branches will be perfectly classified and thus become leaves of the decision tree. Label each such leaf with the correct classification. Label each node that is not yet a leaf with the number of training examples in each classification.
For each node that is not yet a leaf, independently calculate which of the two remaining attributes makes the better second choice by computing the entropy of that portion of the data under each attribute split. Continue these calculations recursively until the tree is complete.
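The whole recursive procedure you just performed by hand can be summarized as a compact ID3 sketch. As before, the data rows and the "Class" target column are my own illustrative assumptions, not the assignment's actual table:

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def id3(examples, attributes, label_key):
    """Return a leaf label, or (attribute, {value: subtree}) for a split."""
    labels = [e[label_key] for e in examples]
    if len(set(labels)) == 1:           # perfectly classified -> leaf
        return labels[0]
    if not attributes:                  # no attributes left -> majority leaf
        return Counter(labels).most_common(1)[0][0]

    # Choose the attribute whose split leaves the lowest remaining entropy
    # (equivalently, the highest information gain). On a tie, min() keeps
    # the earlier ("leftmost") attribute, matching the tie-break convention.
    def remainder(attr):
        groups = defaultdict(list)
        for e in examples:
            groups[e[attr]].append(e[label_key])
        return sum(len(g) / len(examples) * entropy(g)
                   for g in groups.values())

    best = min(attributes, key=remainder)
    branches = defaultdict(list)
    for e in examples:
        branches[e[best]].append(e)
    rest = [a for a in attributes if a != best]
    return (best, {v: id3(subset, rest, label_key)
                   for v, subset in branches.items()})

# Hypothetical rows (values invented for illustration):
data = [
    {"Color": "Red",  "Degree": "High", "Class": "Yes"},
    {"Color": "Red",  "Degree": "Low",  "Class": "No"},
    {"Color": "Blue", "Degree": "High", "Class": "No"},
    {"Color": "Blue", "Degree": "Low",  "Class": "No"},
]
tree = id3(data, ["Color", "Degree"], "Class")
print(tree)
```

Here the Blue branch is pure after the first split and becomes a leaf immediately, while the Red branch needs a second split on Degree, mirroring the leaf-versus-recurse distinction in the steps above.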