
July 9 - July 13

After confirming that what we've been doing with the quadratic programs is very similar to support vector regression, I tried making the supplier-side quadratic programs match those in the paper. This involved not squaring the slack variables in the objective function, instead constraining them to be nonnegative, and using twice as many slack variables, half added to and half subtracted from the constraints. However, to my surprise, this worsened the predictor's performance, perhaps because of the extra slack variables.
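For reference, here is the standard epsilon-insensitive SVR program I was trying to match (assuming the paper follows the usual formulation), with the paired slack variables entering the objective linearly and constrained to be nonnegative:

```latex
\begin{aligned}
\min_{w,\, b,\, \xi,\, \xi^*} \quad
    & \tfrac{1}{2}\lVert w \rVert^2 + C \sum_{i=1}^{n} \left( \xi_i + \xi_i^* \right) \\
\text{s.t.} \quad
    & y_i - \langle w, x_i \rangle - b \le \varepsilon + \xi_i \\
    & \langle w, x_i \rangle + b - y_i \le \varepsilon + \xi_i^* \\
    & \xi_i \ge 0, \quad \xi_i^* \ge 0, \qquad i = 1, \dots, n
\end{aligned}
```

Our earlier version instead penalized the sum of squared slacks and dropped the sign constraints, so the change amounts to unsquaring the slacks and doubling their number.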

I also tried adding more features, such as the mean production capacity for each supplier product line over the last 20-day period, which is included in a market report the SCM agent receives every 20 days. This is an indicator of the supplier's "supply." Adding it improved performance slightly, so I wanted to try adding something about the supplier's "demand." Ideally, this would be the amount ordered from the supplier by the other SCM agents, but since we don't have access to this information, I thought we could use customer demand as a proxy. Deep Maize has released their source code for modeling customer demand, a Bayesian model that updates with actual customer demand when it is known. I learned how to use the model and added some features for the current customer demand, but this didn't seem to yield much improvement. I need to do more testing and try adding past/future customer demand.
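As a rough sketch of what a training example looks like with the new indicators (all class and field names here are hypothetical, not our actual agent code):

```java
// Hypothetical sketch of the expanded feature vector; none of these class or
// field names come from our actual agent code.
class FeatureBuilder {
    static class RFQ {
        int quantity;
        int dueDate;
        int productLine;
    }

    int currentDay;
    double[] meanCapacity;   // per product line, from the 20-day market report
    double estimatedDemand;  // from the (Deep Maize-style) customer demand model

    double[] build(RFQ rfq) {
        return new double[] {
            rfq.quantity,
            rfq.dueDate - currentDay,       // requested lead time
            meanCapacity[rfq.productLine],  // supplier "supply" indicator
            estimatedDemand                 // customer demand as a proxy for supplier "demand"
        };
    }
}
```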

Mauricio, another student working on the SCM agent who has focused on computer price predictions, found some papers on TacTex's customer-side model. TacTex is a TAC team from the University of Texas at Austin that is also participating in the Prediction Challenge, so it would be useful to read how they make their predictions. Mauricio wanted to implement in our SCM agent a simple heuristic they tried, while I was interested in learning about their more sophisticated methods. Apparently, in 2004 they implemented some machine learning methods, including a neural network, variations of regression trees, and variations of BoosTexter. They found a version of regression trees and a version of BoosTexter to be the best predictors (so I was wrong that regression trees aren't useful). However, they actually found an SVM method and k-nearest neighbors to be poor predictors! I'm not sure I trust that claim, since Deep Maize uses k-nearest neighbors and their predictions are pretty good. So maybe there's still hope for our SVM method.

One thing I did gain from the papers is that it might be helpful to learn from past games. The SCM agent we are making predictions for in the Prediction Challenge plays 16 games against the same competitors and then switches competitors, so we might be able to generalize something from those games. Data from past games wasn't too useful when I first tried it with the simple supplier-side model I imported from Botticelli, but perhaps it's useful on the customer side and/or with the more sophisticated method we're using now. That method doesn't just predict the offered prices; it tries to learn a general rule for predicting prices from knowledge about the RFQ and market conditions, and such a rule should be independent of the particular game.

In 2006, TacTex used a particle filter in their customer-side model, each particle representing a different possible price-probability model. The particles are updated with each day's price report (the lowest and highest selling price per SKU) and the results of the agent's bids. They also tried to learn a consistent way in which prices change over a fixed number of days within a game, which lends support to Amy's idea of solving for a transformation matrix to map a current weights vector into a future one, although they used a completely different method.
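I haven't looked at their code, but my understanding of the update step is roughly this minimal sketch; the PriceModel interface and its likelihood function are placeholders for whatever price-probability models TacTex actually uses:

```java
import java.util.Arrays;
import java.util.Random;

// Minimal sketch of a particle-filter update over candidate price-probability
// models; PriceModel and likelihood() are placeholders, not TacTex's actual code.
class PriceFilter {
    interface PriceModel {
        // How likely the day's observed low/high selling prices are under this model.
        double likelihood(double lowPrice, double highPrice);
    }

    PriceModel[] particles;
    double[] weights;
    Random rng = new Random();

    PriceFilter(PriceModel[] particles) {
        this.particles = particles;
        weights = new double[particles.length];
        Arrays.fill(weights, 1.0 / particles.length);
    }

    // Reweight each particle by the likelihood of the day's price report,
    // normalize, then resample so good models survive and bad ones die off.
    void update(double lowPrice, double highPrice) {
        double total = 0.0;
        for (int i = 0; i < particles.length; i++) {
            weights[i] *= particles[i].likelihood(lowPrice, highPrice);
            total += weights[i];
        }
        for (int i = 0; i < weights.length; i++) {
            weights[i] /= total;
        }
        resample();
    }

    // Simple multinomial resampling.
    void resample() {
        PriceModel[] next = new PriceModel[particles.length];
        for (int i = 0; i < next.length; i++) {
            double u = rng.nextDouble(), acc = 0.0;
            next[i] = particles[particles.length - 1]; // fallback for rounding error
            for (int j = 0; j < particles.length; j++) {
                acc += weights[j];
                if (u <= acc) { next[i] = particles[j]; break; }
            }
        }
        particles = next;
        Arrays.fill(weights, 1.0 / particles.length);
    }
}
```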

Finally, I implemented Amy's idea of predicting future weights, using a nice Java linear algebra package called JAMA. I didn't even have to worry about taking the pseudoinverse of a matrix using singular value decomposition! JAMA returns a least-squares solution for x in A x = b when A is not square, which is exactly what I needed. I implemented two methods, one assuming that weights change in a consistent way from any day to 20 days later within the same game, and another assuming that weights change in a consistent way from the same day to 20 days later across different games. Neither seems to work very well right now.
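Concretely, the setup stacks observed weight vectors as rows: each row of A is a weights vector from some day d, and the matching row of B is the weights vector from day d + 20. Here's a minimal sketch of the JAMA call (the numbers are made up for illustration):

```java
import Jama.Matrix;

// Sketch of solving for the weight-transformation matrix with JAMA;
// the data here are made up for illustration.
public class WeightMapper {
    public static void main(String[] args) {
        // Each row of A is a weights vector observed on some day d;
        // the matching row of B is the weights vector from day d + 20.
        double[][] current = { {0.20, 0.50, 0.10}, {0.10, 0.60, 0.30},
                               {0.30, 0.40, 0.20}, {0.25, 0.45, 0.15} };
        double[][] future  = { {0.30, 0.40, 0.30}, {0.20, 0.50, 0.30},
                               {0.35, 0.35, 0.30}, {0.30, 0.40, 0.30} };

        Matrix A = new Matrix(current);   // 4 x 3, more rows than columns
        Matrix B = new Matrix(future);

        // For non-square A, solve() returns the least-squares solution
        // (via QR decomposition), so no explicit pseudoinverse is needed.
        Matrix T = A.solve(B);            // 3 x 3 transformation

        // Predict a future weights vector from today's (a 1 x 3 row).
        Matrix today = new Matrix(new double[][] { {0.20, 0.50, 0.30} });
        Matrix predicted = today.times(T);
        predicted.print(8, 4);
    }
}
```

One caveat: JAMA's least-squares solve throws an exception when A is rank deficient, so the training days have to supply enough linearly independent weight vectors.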

