Lab 8: Decision Trees and Learning Theory

Part I: Learning

  1. Our general algorithm for inferring decision trees can’t handle the case where an attribute value is missing. In real life, this happens all the time: the datum was not collected, the datum contained an error and was removed, or any of many other real-world reasons. Taking the general tree-inference algorithm as a baseline, let’s modify it to handle this case.
    1. We need to be able to classify training examples that are missing attribute values. Suppose an example x has a missing value for attribute A, and the decision tree tests A at a node that x reaches. One way to handle this case is to pretend that the example has all possible values for the attribute, but to weight each value according to its frequency among the examples that reach that node. The classification algorithm should follow all branches at any node whose tested attribute is missing, multiplying the weights along each path. Write a modified classification algorithm (in pseudocode) for decision trees that has this behavior; a sketch of one possible approach appears after this list.
    2. Modify the information-gain calculation so that, in any collection of examples C at a given node during learning, examples with missing values for any of the remaining attributes are given default values corresponding to the frequencies of those values in C. (A sketch of this weighting also appears after the list.)
  2. (Warmup for tomorrow) Consider the problem of separating N data points, labeled positive and negative, in d dimensions using a linear separator of dimension d-1. Clearly, with N=2 distinct points on a line (d=1), a single point separates them regardless of where they lie or how they are labeled.
    1. Show that it can always be done for N=3 points on a plane of dimension d=2, unless they are collinear.
    2. Show that it cannot always be done for N=4 points on a plane of dimension d=2. (A hint appears after this list.)
    3. Show that it can always be done for N=4 points in a space of dimension d=3, unless they are coplanar.
    4. Show that it cannot always be done for N=5 points in a space of dimension d=3.
    5. (Super Challenge) Show that N points in general position, but not N+1 points, are linearly separable in a space of dimension N-1, i.e., by an (N-2)-dimensional hyperplane.
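
For 1.1, here is a minimal sketch in Java of the weighted-classification idea. It assumes each internal node stores the frequency of each attribute value among the training examples that reached it; all names here (Node, valueFreq, and so on) are illustrative, not from any provided starter code.

    import java.util.HashMap;
    import java.util.Map;

    class Node {
        boolean isLeaf;
        String label;                  // class label (used only at a leaf)
        String attribute;              // attribute tested (used only at an internal node)
        Map<String, Node> children;    // one branch per attribute value
        Map<String, Double> valueFreq; // frequency of each value among the training
                                       // examples that reached this node
    }

    class WeightedClassifier {
        // Returns the total weight assigned to each class label for this example.
        static Map<String, Double> classify(Node node, Map<String, String> example, double weight) {
            Map<String, Double> totals = new HashMap<>();
            if (node.isLeaf) {
                totals.put(node.label, weight);
                return totals;
            }
            String value = example.get(node.attribute);
            if (value != null) {
                // Known value: follow the single matching branch as usual.
                return classify(node.children.get(value), example, weight);
            }
            // Missing value: follow every branch, multiplying the running weight
            // by the frequency of that branch's value at this node.
            for (Map.Entry<String, Node> branch : node.children.entrySet()) {
                double freq = node.valueFreq.getOrDefault(branch.getKey(), 0.0);
                Map<String, Double> sub = classify(branch.getValue(), example, weight * freq);
                for (Map.Entry<String, Double> r : sub.entrySet())
                    totals.merge(r.getKey(), r.getValue(), Double::sum);
            }
            return totals;
        }
    }

Calling classify(root, example, 1.0) and taking the label with the largest total weight gives the prediction.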
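
For 1.2, the same frequency table drives the weighting. A minimal sketch, assuming each example is a map from attribute name to value (null when missing): instead of assigning hard default values, each example missing the attribute contributes fractionally to every value bucket in proportion to the value frequencies in C, and the usual information-gain formula then runs on these fractional counts.

    import java.util.List;
    import java.util.Map;

    class MissingValueCounts {
        // Fractional count of examples per attribute value: a known value counts
        // as 1 for its bucket; each example missing the attribute is spread across
        // all buckets in proportion to the frequencies observed in this collection C.
        static double[] valueCounts(List<Map<String, String>> examples,
                                    String attribute, List<String> values) {
            double[] counts = new double[values.size()];
            int known = 0;
            for (Map<String, String> ex : examples) {
                String v = ex.get(attribute);
                if (v != null) {
                    counts[values.indexOf(v)] += 1.0;
                    known++;
                }
            }
            int missing = examples.size() - known;
            if (known > 0 && missing > 0) {
                for (int i = 0; i < counts.length; i++)
                    counts[i] += missing * (counts[i] / known);
            }
            return counts;
        }
    }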
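
A hint for 2.2: the standard counterexample is the XOR labeling of the unit square, with positives at (0,0) and (1,1) and negatives at (0,1) and (1,0). If a line w1*x + w2*y = b put the positives strictly above and the negatives strictly below, then 0 > b and w1 + w2 > b, while w2 < b and w1 < b. Summing the first pair gives w1 + w2 > 2b; summing the second gives w1 + w2 < 2b, a contradiction.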

Part II: FaceThingy, continued

For the next part of FaceThingy, implement the most basic decision tree learner in the following steps.

  1. Feature selection. If you are to train with Haar-like features, you will need to make some of these Haar-like features! Go to the original paper that presented Viola-Jones (http://research.microsoft.com/en-us/um/people/viola/Pubs/Detect/violaJones_CVPR2001.pdf), and refer to figure 3. Create these four features, and any others that you think would be good predictors of faces; create at least ten features in total. (A sketch of one such feature appears after this list.)
  2. Write a method that, given a set of labeled images, uses the information-gain function to choose the single decision node that best classifies the data. You may use the DecisionTree class, or make your own. (A sketch appears after this list.)
  3. You should now have a method that finds the best single node. Write a recursive method that uses it to build the entire tree; your algorithm should reflect the general algorithm presented in class. (A sketch appears after this list.)
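
For step 1, here is a minimal sketch of a two-rectangle Haar-like feature, assuming grayscale input images; the class and method names are illustrative, not from any provided code. For brevity it sums pixels directly (using the getRGB fix from vincent’s comment below); the Viola-Jones paper precomputes an integral image so that each rectangle sum takes constant time.

    import java.awt.Color;
    import java.awt.image.BufferedImage;

    class HaarFeature {
        // Sum of grayscale values over the rectangle [x, x+w) x [y, y+h).
        static long sumRegion(BufferedImage img, int x, int y, int w, int h) {
            long sum = 0;
            for (int j = y; j < y + h; j++)
                for (int i = x; i < x + w; i++)
                    sum += new Color(img.getRGB(i, j)).getRed();
            return sum;
        }

        // Two-rectangle feature: intensity of the left half minus the right half.
        // A large magnitude suggests a vertical edge inside the window.
        static long twoRectHorizontal(BufferedImage img, int x, int y, int w, int h) {
            return sumRegion(img, x, y, w / 2, h) - sumRegion(img, x + w / 2, y, w / 2, h);
        }
    }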
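
For step 2, a minimal sketch of entropy, information gain, and best-node selection, assuming boolean features and face/non-face labels. Feature and LabeledImage are hypothetical stand-ins for your own classes or the provided DecisionTree machinery.

    import java.util.ArrayList;
    import java.util.List;

    interface Feature { boolean test(LabeledImage img); }

    class LabeledImage {
        boolean isFace;
        // pixel data, etc.
    }

    class NodeChooser {
        // Entropy (in bits) of the face/non-face labels in data.
        static double entropy(List<LabeledImage> data) {
            if (data.isEmpty()) return 0.0;
            double p = 0;
            for (LabeledImage d : data) if (d.isFace) p++;
            p /= data.size();
            if (p == 0.0 || p == 1.0) return 0.0;
            return -(p * Math.log(p) + (1 - p) * Math.log(1 - p)) / Math.log(2);
        }

        // Information gain from splitting data on feature f.
        static double gain(List<LabeledImage> data, Feature f) {
            List<LabeledImage> yes = new ArrayList<>(), no = new ArrayList<>();
            for (LabeledImage d : data) (f.test(d) ? yes : no).add(d);
            double n = data.size();
            return entropy(data) - (yes.size() / n) * entropy(yes) - (no.size() / n) * entropy(no);
        }

        // The single feature with the highest information gain on this data.
        static Feature bestFeature(List<LabeledImage> data, List<Feature> features) {
            Feature best = null;
            double bestGain = Double.NEGATIVE_INFINITY;
            for (Feature f : features) {
                double g = gain(data, f);
                if (g > bestGain) { bestGain = g; best = f; }
            }
            return best;
        }
    }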
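
For step 3, a minimal recursive builder on top of bestFeature above; TreeNode is again a hypothetical stand-in for the provided DecisionTree class.

    import java.util.ArrayList;
    import java.util.List;

    class TreeNode {
        Feature feature;   // null at a leaf
        TreeNode yes, no;  // subtrees for feature true / feature false
        boolean leafLabel; // prediction at a leaf
    }

    class TreeBuilder {
        static TreeNode build(List<LabeledImage> data, List<Feature> features) {
            TreeNode node = new TreeNode();
            // Base case: labels are pure or no features remain; predict the majority.
            if (NodeChooser.entropy(data) == 0.0 || features.isEmpty()) {
                node.leafLabel = majorityLabel(data);
                return node;
            }
            Feature f = NodeChooser.bestFeature(data, features);
            List<LabeledImage> yes = new ArrayList<>(), no = new ArrayList<>();
            for (LabeledImage d : data) (f.test(d) ? yes : no).add(d);
            List<Feature> rest = new ArrayList<>(features);
            rest.remove(f); // each feature is used at most once per path
            node.feature = f;
            node.yes = build(yes, rest);
            node.no = build(no, rest);
            return node;
        }

        static boolean majorityLabel(List<LabeledImage> data) {
            int faces = 0;
            for (LabeledImage d : data) if (d.isFace) faces++;
            return 2 * faces >= data.size(); // ties (and empty branches) default to "face"
        }
    }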

3 responses to “Lab 8: Decision Trees and Learning Theory”

  1. vincent

    Something that I don’t recall hearing in class: BufferedImage.getRGB(int, int) does NOT return a value from 0 to 255, even for a grayscale image. Instead, it returns a packed ARGB integer (usually a large negative number) that encodes the alpha, red, green, and blue components.

    To get the 0-to-255 grayscale value for BufferedImage img at location (x,y):

    Color c = new Color(img.getRGB(x,y)); // java.awt.Color unpacks the packed int
    int grayscaleRGB = c.getRed();        // getGreen() and getBlue() also work, since R = G = B for grayscale

    Hope this saves all of you a ton of time debugging!

  2. vincent

    Another error I found, this time in DetectionWindow.
    At the very bottom, in method next(), replace

    if (rect.x + rect.width >= imageDims.width)
    with
    if (rect.x + rect.width + multiplier >= imageDims.width)

    and

    if (rect.y + rect.height >= imageDims.height)
    with
    if (rect.y + rect.height + multiplier >= imageDims.height)

    Otherwise, it will go out of bounds almost every single time.
