
July 2 - July 6

The Prediction Challenge qualifying round was this Monday. There were no major problems, but the results surprised me a little. We had expected to do poorly in the computer prices category, since we were using the mathematical programming approach, which was still worse than the sample predictor (just the average of yesterday's min and max order prices for that SKU). And indeed, we did pretty badly in that category. However, we did really well in the component prices category just using the model from Botticelli that I implemented. In fact, we beat Deep Maize in the current component prices category. I thought Deep Maize had better price predictions than we did!
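For reference, the sample predictor's baseline amounts to something like this (the class and method names are mine, not from the actual challenge code):

```java
public class SamplePredictor {
    // Baseline described above: predict today's order price for a SKU as the
    // average of yesterday's minimum and maximum order prices for that SKU.
    public static double predict(double yesterdayMin, double yesterdayMax) {
        return (yesterdayMin + yesterdayMax) / 2.0;
    }

    public static void main(String[] args) {
        System.out.println(predict(1500.0, 2100.0)); // prints 1800.0
    }
}
```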

I fixed the bug (in fact, several of them) in the component prices predictor that just solves a system of linear equations, and found that none of the systems are solvable when I use all past and current supplier offers in the system. Disappointed, I went on to implement something similar but with slack variables, using CPLEX to solve the resulting quadratic programs (quadratic because the objective function contains the squares of variables). I didn't want to just use Aysun's code, since things were a little different on the supplier side and I thought I could make things more efficient. However, I had to give up on the efficiency idea in the end because CPLEX apparently doesn't allow one to just set the objective function rather than add one.
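Concretely, the slack-variable idea (in my notation here, not necessarily the exact formulation in the code) relaxes the unsolvable linear system by letting each supplier-offer equation miss by some slack amount, and penalizes the squared slacks:

```latex
% Let A x = b collect the supplier-offer equations, with x the unknown prices.
% When no exact solution exists, introduce a slack s_i for each equation:
\begin{aligned}
\min_{x,\,s} \quad & \textstyle\sum_i s_i^2 \\
\text{s.t.} \quad  & A x + s = b
\end{aligned}
% The squared slacks in the objective are what make this a quadratic program;
% it is equivalent to the least-squares problem \min_x \| A x - b \|^2.
```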

Amy had some ideas for improving upon the quadratic programming approach on the customer side, and Aysun started implementing them. Then she left Tuesday evening for a five-day break since Wednesday was the 4th of July. In the meantime, I thought things were getting a little cluttered on the supplier side. I had to copy a lot of code from existing predictors every time I wanted to make a new predictor, and I realized that I should just make a more general component prices predictor class that contains all the shared code and make every other predictor inherit from it. I made an abstract class for this purpose, since an abstract class may have instance variables and implement some but not all of its methods, which is exactly what I needed. I refactored the existing predictors and made sure that everything still worked.
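A minimal sketch of what this refactoring looks like (class, field, and method names here are illustrative, not the actual ones):

```java
import java.util.HashMap;
import java.util.Map;

// Abstract base class: holds the shared state and bookkeeping that every
// component prices predictor needs, and leaves only the strategy-specific
// prediction logic to subclasses.
abstract class ComponentPricePredictor {
    // Shared instance state (an abstract class may have instance variables).
    protected final Map<Integer, Double> lastOfferPrices = new HashMap<>();

    // Shared bookkeeping, implemented once here instead of copied around.
    public void recordOffer(int componentId, double price) {
        lastOfferPrices.put(componentId, price);
    }

    // Each concrete predictor supplies only its own prediction strategy.
    public abstract double predictPrice(int componentId);
}

// A trivial concrete strategy: predict the most recently observed offer price.
class LastOfferPredictor extends ComponentPricePredictor {
    @Override
    public double predictPrice(int componentId) {
        return lastOfferPrices.getOrDefault(componentId, 0.0);
    }
}
```

Each new predictor then only overrides `predictPrice`, inheriting all the information-gathering and interfacing code from the base class.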

I'm very happy with this new design. Not only does it save me the trouble of copying code every time, it is also more elegant and intuitive, and takes advantage of Java inheritance. It lets the programmer focus on the unique characteristics of a particular predictor's strategy rather than worry about the details of gathering the information needed to make predictions and of interfacing with the other parts of the prediction software. It's nice when something that makes sense also works, these days!

Amy had an idea for making future predictions. She thought we could solve for a transformation matrix that, when applied to the weight vector on a certain day, produces the weight vector 20 days later. Because of the nature of the data, I am dubious about how effective this will be. I wanted to learn about more prediction techniques, so I looked at the Wikipedia page on predictive analytics and found a nice, though probably not comprehensive, list of prediction techniques that other people have used. I briefly read about classification and regression trees and found the idea interesting, though probably not applicable to our problem. It seems that we've already implemented most of the machine learning techniques listed on the Wikipedia page, or at least simple versions of them: a neural network (single-layer perceptron), k-nearest neighbors (1-NN), and, as I'm beginning to suspect after having read more about support vector regression, an SVM (i.e. the quadratic programs).
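As I understand the transformation-matrix idea (my notation, not Amy's), we'd look for a matrix that maps each day's weight vector to the one 20 days later; with more day pairs than matrix entries this becomes a least-squares fit:

```latex
% w_d: weight vector on day d. Find T with T w_d \approx w_{d+20}:
\min_{T} \; \sum_{d} \left\| T w_d - w_{d+20} \right\|^2
% Stacking the w_d as columns of a matrix W and the w_{d+20} as columns
% of W', a least-squares solution is T = W' W^{+},
% where W^{+} is the Moore-Penrose pseudoinverse of W.
```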

