Journal


Week 1:
  I've mostly spent this week getting acclimated to the area and the project. I'm really glad I brought comfortable walking shoes with me; it's almost 2 miles from my apartment to the campus! I feel like I'm fitting in pretty easily with the research team; everyone has been really welcoming. The project I'm working on is fascinating. Indri - the grad student I'm working with - has developed a method of quantifying and localizing Plagiocephaly by manipulating 2D histograms built from the surface normal vectors of the 3D head meshes. To summarize the main point: histogram bins with high values represent flatter regions of the head, because the surface normal vectors that span a flatter surface all point in generally the same direction. Using this knowledge, she discovered a way of manipulating the histogram data to accurately quantify Plagiocephaly. My task is to do the same, but for Brachycephaly.
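  To make the idea concrete, here is a minimal sketch of how such a histogram might be built in MATLAB - the variable names, bin counts, and the use of histcounts2 are all guesses on my part, not Indri's actual method:

      % normals: k-by-3 matrix of unit surface normal vectors from a 3D head mesh
      [az, el] = cart2sph(normals(:,1), normals(:,2), normals(:,3));  % angles in radians
      azEdges = linspace(-pi, pi, 37);      % 36 azimuth bins (10 degrees each)
      elEdges = linspace(-pi/2, pi/2, 19);  % 18 elevation bins
      H = histcounts2(az, el, azEdges, elEdges);  % flat regions pile up in few bins
      imagesc(H');                          % hot (high-valued) bins mark flatter surface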
  Indri doesn't want to overwhelm me, so she hasn't shown me any code she's written yet, but I've been looking through all the 3D meshes and histograms and just visually trying to come up with factors that indicate Brachycephaly. I've been comparing cases that have just Brachycephaly to cases that have just Plagiocephaly, and also cases that have both. I'm trying to isolate which features just indicate Brachy. Specifically, I've been looking at the symmetry of the forehead, the ratio of the length of the head to the width of the head, and the ratio of the width of the back of the head to the width of the front of the head. On the side, I'm also cleaning all the 3D meshes from a new collection of data. I have weekly meetings with the entire research team as well as with just Indri.

Week 2:
  The numerical data sets behind the histograms are stored in 2D arrays and then manipulated in MATLAB, so Indri and I decided it would be beneficial for me to learn MATLAB. I've spent a good deal of time this week with some trial-and-error experiments, as well as reading the help documentation that comes with MATLAB - which I must say is actually quite useful. I don't have full confidence in the program yet, but I can correctly manipulate the histograms. I have also successfully written a MATLAB script, which reads in the histogram data from the files, manipulates it, and writes the results to an Excel spreadsheet - an accomplishment I'm quite proud of.
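  For my own future reference, the basic shape of the script is something like the following - the file locations and the 'manipulation' step here are placeholders, not the real ones:

      files = dir('histograms/*.csv');          % hypothetical folder of histogram files
      results = cell(numel(files), 2);
      for i = 1:numel(files)
          H = csvread(fullfile('histograms', files(i).name));  % 2D array of bin values
          score = sum(H(:));                    % placeholder for the real manipulation
          results(i,:) = {files(i).name, score};
      end
      xlswrite('results.xls', results);         % one row per case in the spreadsheet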
  I am slowly developing a hypothesis on a criterion that could indicate Brachy; once I am sure of it I will develop it further towards quantification. I am looking at the circumference of the infants' skulls at -15 to 15 degrees elevation, vertically adding all the histogram bins in this range so that I end up with one circumference (horizontal) vector. Then I am looking at the maximum values along this vector, treating it as a continuous loop around the head. There are a few exceptions which I have not accounted for yet, but for the most part it looks like in Brachy cases there will be at least 3 areas along this circumference where the values reach a maximum (that is, a flat spot), whereas in Plagio cases there will only be 2. My initial ideas for turning this into something that can quantify Brachy include evaluating the amount by which a maximum differs from its surrounding values (i.e. a maximum that is a large 'jump' from its surroundings would possibly indicate more severe Brachy) and evaluating whether the values surrounding a maximum are very similar to each other (i.e. the symmetry of the flatness, where more or less symmetry would indicate more or less severe Brachy). It has been a productive week, and I am reasonably satisfied with my progress.
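  A rough sketch of the computation, under my assumptions about the histogram layout (rows as elevation bins, columns as azimuth bins):

      % H: 2D histogram; elCenters: elevation angle in degrees of each row (assumed layout)
      band = (elCenters >= -15) & (elCenters <= 15);   % bins in the -15 to 15 degree band
      v = sum(H(band, :), 1);                          % one circumference (horizontal) vector
      % local maxima, comparing each bin to its circular neighbors around the loop
      isMax = v > circshift(v, [0 1]) & v >= circshift(v, [0 -1]);
      nFlatSpots = nnz(isMax);                         % 3+ suggests Brachy, 2 suggests Plagio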

Week 3:
  The method I was developing turned out to have too many exceptions to be worthwhile. I've been a little frustrated, but as Indri has pointed out to me, in research even bad results are results that you can learn from. After my discussion with Indri, I am refining my approach to the problem. First: thus far, I have been evaluating almost the entire data set and then manually sifting through it all to compare the cases that are the most relevant to diagnosing Brachy. Now I have an idea of how to narrow the complete set down to a smaller one that contains the Brachy cases and the controls necessary to conduct a meaningful investigation. It's cliché, but consistency is key: the original doctor's judgement on whether a case was abnormal had to agree with the experts' judgement, and the scores from the two experts had to agree with each other. The dataset created from such cases is much more consistent and thus will give stronger results.
  The second way in which I am refining my approach is to divide all the cases into 3 categories: Brachy with symmetric flattening, Brachy with asymmetric or acute-angle flattening, and normal heads. The only problem with this method is that, at this point in the project, there is some ambiguity in judging whether the flattening on a head is symmetric or not. Basically, Indri and I each went through the Brachy dataset and labeled each case with our judgement on whether we thought the flattening was symmetric. Later on, we may be able to come up with an automated method to determine the symmetry of the flattening within a certain range so that there is less ambiguity, but I believe that will only make sense after we determine whether there is a benefit to separating the symmetric Brachy cases from the rest. Next week, I plan to compare our results and see how much we agreed, and from there do some analysis to see if there is an easy way to identify Brachy on those symmetric heads. At this point, I am envisioning that there will eventually be a system in the form of a decision tree that first classifies whether or not there is Plagio, then whether or not there is Brachy.

Week 4:
  I took the union of the set of Brachy cases that I labeled as symmetrical and the set that Indri labeled as symmetrical; there were just a few that we disagreed on. This union gave me 14 heads to work with. My thought was that the Asymmetry Score (AS) that Indri used to quantify Plagio would have no problem identifying the symmetrical Brachy cases: these cases would have right and left bin values that are roughly equal, so the AS (the difference between the right and left histogram bins) would be very close to 0. This did not turn out to be true. I tried modifying the AS to use only the histogram bins from the very center of the back of the head, but this did not yield better results. I had assumed that symmetric Brachy cases would have scores so close to zero that they would be significantly lower than even the control cases, but this was not the case.
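  For reference, a minimal sketch of the AS as I understand it - the exact bin layout in Indri's code is my own assumption:

      % H: 2D histogram; assume columns left/right of the midline correspond to the
      % left/right sides of the head
      nCols = size(H, 2);
      left  = H(:, 1:nCols/2);
      right = fliplr(H(:, nCols/2+1:end));   % mirror so bins line up across the midline
      AS = sum(abs(left(:) - right(:)));     % near 0 for a perfectly symmetric head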
  During the research meeting Indri and I opened the problem up to the rest of the research team, to see if they had any ideas. My mentor, Linda, suggested we look at equally spaced points on the head and the distances of these points to each other, creating a matrix of these distances. I've spent most of the week looking into this idea and trying to come up with a way to compute this data.
  In the process, I discovered that the method Indri had been using to compute the circumference of the heads was not producing the true circumference; it was merely a count of the number of coordinate points that made up the contour of the head. So my first hurdle was to fix this problem, which turned out to be fairly difficult. The problem itself was not necessarily complex, but I was hoping that MATLAB would have a built-in function I could use to compute the circumference fairly easily. I spent most of my time on Google trying to see if anyone had attempted a similar problem in MATLAB, but without success. On the upside, I garnered a lot more knowledge of MATLAB as well as of the general problem of fitting curves to points (known as interpolation). After talking with Indri, we decided that I would have to program my own script from scratch to solve the problem. My solution is only an approximation, but hopefully a useful one. Basically, it goes around the contour of the head pixel by pixel and computes the straight-line distance between consecutive pixels, arriving at the circumference as the sum of all these distances.
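  The core of the script is only a few lines; a simplified version, assuming the contour comes as an ordered list of pixel coordinates:

      % contour: m-by-2 list of (x, y) pixel coordinates tracing the head outline in order
      d = diff([contour; contour(1,:)]);        % steps between consecutive pixels, closing the loop
      circumference = sum(sqrt(sum(d.^2, 2)));  % sum of straight-line distances between pixels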
  Once I had that script working, I returned to Linda's idea of placing equally spaced points on the circumference and then computing the distance between each pair of these points. I used the same basic algorithm from computing the circumference to find points that were equally spaced. Side note: we were going for points that are the same distance from each other along the curve (i.e. by arc length), rather than by straight-line distance.
  So, the overall product of this week is the script I've written that computes the circumference, places 10 equally spaced points, computes the distances between these points, and then puts these distances into a 10x10 matrix. The structure of the matrix is such that any entry [n, m] is the distance between the nth and mth points of the head (the points being numbered with 1 as the point at the back and center of the head).
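  A simplified sketch of the script (the real one has more bookkeeping, and the starting point at the back and center of the head is found beforehand):

      % contour: ordered (x, y) pixel coordinates, starting at the back-center of the head
      n = 10;
      step = circumference / n;        % target arc length between consecutive marked points
      pts = zeros(n, 2);
      pts(1,:) = contour(1,:);         % point 1: back and center of the head
      walked = 0; k = 2;
      for i = 2:size(contour, 1)
          walked = walked + norm(contour(i,:) - contour(i-1,:));
          if k <= n && walked >= (k-1)*step
              pts(k,:) = contour(i,:); % mark the next equally spaced point
              k = k + 1;
          end
      end
      D = zeros(n);                    % D(a,b) = straight-line distance between points a and b
      for a = 1:n
          for b = 1:n
              D(a,b) = norm(pts(a,:) - pts(b,:));
          end
      end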

Week 5:
  There were still a few bugs in my script that I hadn't anticipated, so it took me a little while to realize that they existed and then fix them. The circumference of the head, being a closed shape, caused some weird things to happen when I tried to compute the Euclidean distance around it - namely, some distances would come out to zero.
  Indri also forwarded me a paper documenting the use of this distance matrix in a previous project by former grad students of my mentor, Linda. This was interesting because they went about it differently than I did. When they calculated the distances between all the points on the circumference, they 'normalized' each distance by dividing it by the maximum length of the head - in other words, the length of the head's bounding box. Also, rather than 10 points, they used hundreds - anywhere from 100 to 500. Indri and I realized that 500 points were not possible for our project, since the contour for a head in our dataset only contains 300-400 points. However, it did seem wise to try increasing the number of points to see what results that would yield.
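  The normalization itself is essentially a one-liner, reusing the names from the sketch above (and assuming the y coordinate runs front-to-back, which may not match our actual data):

      headLength = max(contour(:,2)) - min(contour(:,2));  % bounding-box length of the head
      Dnorm = D / headLength;                               % each distance as a fraction of it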
  So after I made several adjustments to my script, I ran it again for the Brachy dataset, using n = 10, 50, and then 100 points. Also, for each n I calculated not only the original Euclidean distances but also the normalized distances, to see what the difference between the two would be. After a cursory glance at the results, I did not immediately pick up on any patterns or drastic differences between the different sets of results. However, Indri put each set through a classifier, and next week I intend to go through the data that the classifier returned. We hope that the classifier will tell us which areas of the distance matrix are more important for classifying Brachy, and that from that we will be able to devise a score that will quantify the disease.

Week 6:
  The distance matrices were all run through two different classifiers, which gave some really interesting and fairly accurate results. The classifiers identified Brachycephaly with around 92% - 96% accuracy, which is not bad. The normalized distance matrices did slightly better, so we have decided to just use them from now on. One of the classifiers returned the 10 bins in the matrix that it found to be the most important in classifying Brachycephaly, so I spent some time making note of those and going through the images of the distance matrices to see if there was a visible pattern in those bins. Side note: by 'images of the distance matrices', I mean the MATLAB function 'imagesc' applied to the distance matrices. This turns the matrices into color-coded grid images, where higher numbers are shown by warmer-colored bins (yellow, orange, red) and lower numbers by cooler-colored bins (green, purple, blue) - the same function Indri used to get the images of the histograms that she used to quantify Plagiocephaly. I have saved the images of all the distance matrices, so it is fairly simple to scan through them all and detect similarities/differences between them. As I was doing this, looking particularly at the bins marked important by the classifier, I noticed that a lot of these important bins were clustered around blue diagonal lines in the upper left-hand corner of the images. Comparing Brachy cases directly to normal cases, I realized it was possible that the Brachy cases were simply a shift of these lower-valued bins relative to the normals.
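  For anyone reading this later, generating one of these images is as simple as (exact colors depend on the colormap in use):

      imagesc(D);     % color-coded grid: warmer colors = higher values, cooler = lower
      colorbar;       % show the mapping from color to value
      axis square;    % keep the matrix bins square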
  As I was looking at all the images, however, it started coming to my attention that the bottom and right sides of the images didn't look quite right. For the 100-point images, it was as if each row of the matrix, once it got to column 90, stayed the same value to the end of the matrix, and the same for each column once it got to row 90. For the 50-point images, this same effect happened from row/column 40 on. Only the 10-point matrices didn't do this. I looked at the actual numbers, and in fact many values were duplicated at the bottom right of the matrices. Unfortunately, it turned out that my MATLAB script was still not computing the distances correctly. It was a really small thing: when the script marked the n points around the contour of the head, it would sometimes overshoot the fixed distance slightly, so it would return to the point on the head from which it had started too soon, and would then have to mark each remaining point in the same exact place. So once I fixed this, the matrices all had to be rerun through the classifiers.
  Also, I had noticed that the classifier always returned 10 bins marked as important, no matter how large the matrix was (i.e. 10, 50, or 100 points). I suggested to Indri that it might be interesting to see what the classifier would return if we asked it for more important bins, and so she had the classifier run three times for each size matrix (10, 50, and 100 points), so that it would give us 10, 20, and then 30 important bins.
  The new results from the classifiers were not quite the same as the previous results. The 'shift' that I had seen in the images was not as clear, so I started over, looking at the bins that the new classifier results noted as important. Using MATLAB, I drew these important distances on the image of the head so that I could see roughly where they were. This was interesting, because for all sizes of matrix and across all the important bins (no matter how many were marked - 10, 20, or 30) there was always an upside-down 'v' pattern at the back of the head - in other words, the distances from the middle point of the back of the head to the sides of the head were important. This 'v' was not always in the same exact place, but I am curious as to what the results will be if I attempt some analysis solely on the distances from the middle point of the back of the head to the middle points of the sides of the head, as well as to the points in between (in other words, looking at progressively shorter and wider v's as you approach the middle back of the head). I will work more on this next week.
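  To draw the important distances, I did roughly the following - the format of the classifier's output is simplified here to a list of index pairs, which is not quite how it actually arrives:

      % pts: n-by-2 coordinates of the marked points; important: list of [a b] point pairs
      imagesc(headImage); hold on;      % headImage: top-down image of the head contour
      for j = 1:size(important, 1)
          a = important(j,1); b = important(j,2);
          plot([pts(a,1) pts(b,1)], [pts(a,2) pts(b,2)], 'w-', 'LineWidth', 2);
      end
      hold off;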
  There was something slightly odd in the results the classifier returned, which was that it would mark the same bin as important multiple times, either at different thresholds or the same threshold. Indri and I are not sure exactly what this means, and I know too little about how classifiers work to really investigate. It will be interesting to see what the rest of the research team thinks, though our next meeting isn't for a few weeks since my mentor, Linda, is going out of town.

Week 7:
  It was so frustrating, but this week I discovered and had to fix yet another problem with my script, and then the results had to be run through the classifier again. The good news is that I'm now 100% positive that the script is doing exactly what I want it to. The bad news is that, in the process of fixing it, I realized that it is physically impossible for the script to compute exactly equal distances around the head. When the script computes the distances between each point on the circumference, each distance will always be 1 or sqrt(2), because it steps from pixel to adjacent pixel. When it finds the circumference and divides it by 10, 50, or 100, however, that distance is never evenly divisible by 1 or sqrt(2). So it marks points that are spaced by something close to the target distance (circumference/10, 50, or 100), but each spacing can be off by up to sqrt(2) - and that difference adds up, especially when it is marking 100 points. So it is impossible to avoid the problem I noticed last week, where the last few points are very close to each other or are even the same point. I am hoping this won't throw the results off by too much. However, if it does, we may have to go back to the set of contour points and just divide that set up into even groups - which does not use the actual distance, but if the accumulated error in the distance is that serious, it might actually end up more accurate.
  In any case, the classifier again gave different results. When I was looking at the distances it marked as important, it seemed that a lot of them stretched diagonally across the head; it was curious because many of them seemed nearly parallel with each other. I was curious as to what would happen if we took the distances that cut through the points dividing the head into 4 equal parts, starting with the first point at the middle of the back of the head - in other words, looked at the distances that made a diamond on the head - and took their ratio. A method for classifying Brachy has already been introduced that takes the ratio of the length and width of the head, but that can be thrown off if the patient has Plagio in addition to Brachy, because the Plagio will make the head longer on one side. So I thought that if we took the ratio of the diagonal width and length of the head (the diamond), it would help correct for the asymmetry of Plagio. Unfortunately, this did not seem to give a clear threshold to divide the affected cases from the controls; however, it led me to something else. Since the classifier had returned several distances that ran diagonally across the head, I decided to take all the diagonal distances on the head between the distances that made the 'diamond'. The ratio of all of these distances did not do any better, but curiously enough their sum seemed to give a really good threshold between the controls and the affected.
  Then I started thinking about the distance matrix, and how the distance matrix for a perfect circle would look, and I realized that there could be a fairly simple way to classify Brachy from this matrix. Brachy cases, as opposed to controls, have heads that are more similar in length and width. Therefore, in the distance matrix for a Brachy case there are no clear maximum lengths; the greatest distances are all around the same value. In contrast, the distance matrix for a normal head has very clear maximums, because the maximum distance for a normal head is always its length. Going from this, I decided to look at the sum of each row in the distance matrix, getting a new vertical vector. The images of these vectors were very interesting; on the whole, the Brachy case images were made of much cooler colors than the normals (the distances were much shorter). This image also gave an indication of any Plagio asymmetry, because a 'hotter' line on one side of the image indicated longer distances on one side of the head. So I took the maximum value in this vector, and this number also seemed to give a good threshold to distinguish between controls and affected.
  From there, I realized that if I was looking at the maximum row sum in the distance matrix, I could probably just sum the entire matrix - and indeed, in general the distance matrices for controls have a much higher sum than those of Brachy cases. I am not sure, however, that this technique would be technically and medically sound.
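  Both of these candidate scores are trivial to compute from the distance matrix, which is part of their appeal:

      rowSums = sum(D, 2);         % one value per point: its total distance to all others
      scoreMax = max(rowSums);     % clear maximums in normals, flatter profile in Brachy
      scoreSum = sum(D(:));        % whole-matrix sum: generally higher for controls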
  The next step is to take the results that I have been getting and see if they correlate with severity - since the ultimate goal is to derive a score for each head. The numbers I've been computing may give good thresholds, but it is something else altogether to see if they indicate how severe an affected case is. This will be my focus for next week, to see if I can take my findings and realizations from this week and translate them into a score.

Week 8:
  I created Excel scatter charts of the results from the three methods I came up with last week; the separation between affected and controls is fairly clear. Side note: I graphed each method for each number of points (10, 50, and 100), and all were very comparable. I graphed each set of results against the set of Cephalic Indices for each case, since the Cephalic Index (CI) presented by Hutchison et al. is the current leading method for classifying Brachycephaly. All my methods had fewer false negatives than the CI. The charts also showed a decent spread in the severity of cases; in general, the cases that were barely affected were on one side whereas the severe cases were on the other. It is worth noting, however, that I am working with a somewhat imbalanced data set; the cases are ranked on a scale of 0-3, 3 being the most severe, and I only have one case ranked as 3. On all the charts, however, that case is clearly separated from the rest. There is some overlap between the 1 and 2 cases, but in general the charts show the progression from unaffected through the increasing severities of affected. I also calculated the ROC curves for all methods; the results were very good and gave me the best threshold to use.
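  For the record, the ROC calculation in MATLAB looks something like this - perfcurve is from the Statistics Toolbox, and the rule shown for picking the 'best' threshold (the point closest to the top-left corner) is just one common choice:

      % labels: 1 for affected, 0 for control; scores: one number per case from a method
      [fpr, tpr, thresholds, auc] = perfcurve(labels, scores, 1);
      plot(fpr, tpr); xlabel('False positive rate'); ylabel('True positive rate');
      [~, best] = min(fpr.^2 + (1 - tpr).^2);  % operating point nearest the top-left corner
      bestThreshold = thresholds(best);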
  At the research meeting this week, I showed images of my distance matrices and demonstrated the point I realized last week: for the normal cases there are very clear maximum values (which show up in the images as red spots). The team found the images highly interesting and gave some great suggestions. I had already realized that the images were symmetrical about the main diagonal (since the distance from point 2 to point 8 is the same as the distance from point 8 to point 2), but they made the point that I shouldn't be using the entire distance matrix with the classifier, since it might be double-counting distances. So I changed my script to only give the upper triangle of the distance matrix (since there wasn't any difference between the upper and lower, I just made an arbitrary choice). Then Indri noticed that the main diagonal was being masked; the main diagonal is always 0 (because the distance from a point to itself is 0), so when the lower triangle was just 0 there wasn't a way to differentiate between it and the main diagonal. So I found the maximum value across all the matrices, and filled the lower triangle with a constant above this value; this way, in the images the lower triangle is always the darkest red and none of the actual data in the upper triangle can ever match it.
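  The fix looks something like this, where globalMax stands for the maximum I gathered across all the matrices:

      Dupper = triu(D);                             % keep the upper triangle; diagonal stays 0
      fillValue = globalMax + 1;                    % constant above every real data value
      Dupper(tril(true(size(D)), -1)) = fillValue;  % lower triangle now always maps to the
                                                    % darkest red in imagesc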
  Another suggestion I received from the team was to use PCA (Principal Components Analysis) on the distance matrices. It was not a technique I had ever heard of before, not having taken any upper-division computer vision classes. So I spent most of this week giving myself an intensive introduction to PCA; the mechanics of it are not terribly complicated, but the intuition of exactly what it is doing is very elusive. The member of the research team who gave me this suggestion, Kasia, was really helpful; she sent me the MATLAB script to accomplish PCA and exchanged emails with me to answer my questions about the script and about PCA in general. I still don't understand PCA inside and out, but I have a fairly good intuition of it. The results it gave were highly interesting; as one might suspect, the first eigenvector picked out the maximum values in the distance matrix perfectly. Thus the data in the first eigenvector space gave the spread between the controls and the affected. The ROC curve for this data is excellent. It also gives a fairly good spread of severity for quantification; it is slightly better than the 3 methods I've worked with previously.
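  The essence of the computation, as I understand it - the data layout, with one vectorized upper triangle per row, is my own framing rather than exactly Kasia's script:

      % X: one row per case, each row the vectorized upper triangle of that case's
      % normalized distance matrix
      mu = mean(X, 1);
      Xc = X - repmat(mu, size(X,1), 1);   % center the data on the overall mean
      [U, S, V] = svd(Xc, 'econ');         % columns of V are the principal components
      proj1 = Xc * V(:,1);                 % each case projected onto the first eigenvector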
  There are still quite a few things to try, including using only the mean of the affected versus the mean of the controls, as well as trying to relate the PCA results back to Plagio quantification. However, at this point I only have 2 weeks left, so I am having to wind down and put together my final report. I will continue to fine-tune my results, and I do want to try the mean of the controls versus the affected, but I think I will leave the correlation back to Plagio for the rest of the team; that could easily be another project in itself.

Week 9:
  I've written a first draft of my final report and given it to Indri and Linda so they can give me some feedback. In the meantime, I've been trying some of the things I mentioned last week and fine-tuning my results.
  PCA on just the mean of the controls versus PCA on just the mean of the affected did not give significantly different results. The resulting transformations were simply a shift from the original transformations, either towards the controls or towards the affected (when using the mean of the controls or of the affected, respectively). Now that I've done the experiment, this result makes perfect sense, though I hadn't anticipated it; I am not sure why the research team felt that this method might be significant. PCA works by adjusting each data point by the mean of the entire data set, making the mean of the data set zero. Thus if you change that to take the mean of only part of the data set, the results shift because the mean is moved towards that part of the data set. As for subtracting the results of the controls from the mean of the controls, and likewise for the affected, this does not give significant results either, because of the wide spread of the control results.
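  A quick way to see the shift, reusing the names from the sketch in Week 8 and under the simplifying assumption that the first eigenvector itself doesn't change (controls here is a hypothetical index set):

      w = V(:,1);                              % first eigenvector from the original PCA
      muC = mean(X(controls, :), 1);           % mean of only the controls
      projC = (X - repmat(muC, size(X,1), 1)) * w;
      % projC equals proj1 + (mu - muC)*w for every case: a constant offset, so the
      % spread between controls and affected is unchanged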
  I've also come up with an idea for expanding my project to include localization of Brachy deformation; this wouldn't be quite as impressive as Indri's 3D localization system (which works on heads that have any kind of deformation, as it simply highlights flat areas), but I think it would tie up the project nicely, as well as help anyone trying to understand my project visualize it better. I will work on this more next week.

Week 10:
  I completed the localization system and included it in my final report; basically it locates the maximum values in the distance matrix and draws those distances on the head. Finding the maximum values in a 2-dimensional array (while keeping track of where those values are) in MATLAB was a little trickier than I expected, but I accomplished it after a little trial and error. The results are fairly interesting, because the method demonstrates the differences in head length between controls and affected by locally drawing the distances. For controls, almost all the distances run along the midline from the forehead to the back of the head. For pure Brachy cases, most of the distances actually span the width of the head, rather than the length. And when a head is affected with Plagio, the length is skewed towards one side. All of this can be seen simply in the distance matrix images, but it is harder to visualize it intuitively. So at the very least, this localization system helps with visualizing the distance matrix data.
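  The trick that finally worked for me was linear indexing plus ind2sub; something like this for the single largest distance:

      [maxVal, idx] = max(D(:));      % maximum over the whole matrix, as a linear index
      [r, c] = ind2sub(size(D), idx); % convert back to row/column: points r and c
      plot([pts(r,1) pts(c,1)], [pts(r,2) pts(c,2)], 'r-', 'LineWidth', 2);  % draw it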
  This is my final week, but Indri and Linda plan to continue with the results of my research. In order to make it possible for them to do that, I've been cleaning up my files and copying them all over to the server they use for their project data. They'll keep me in the loop, so I'll be able to keep up with any developments they make; they might even be able to publish this research in a journal in the near future, which would be amazing.
  I'm really glad I had the opportunity to work on this project, and I'm proud of what I was able to accomplish. It's been a great summer!