
Madelyn Gatchel
DREU at Brown University
Summer 2019

Blog

This is my weekly blog -- check back for new posts each week!

Week 1 Accomplishments:
  • Met the members of Professor Bahar's research lab
  • Learned the basics about convolutional neural networks (CNNs) and their use in object detection
  • Began learning HTML

Monday, May 27:

  • (Memorial Day) First full day in Providence
  • Spent the morning walking around Brown's campus and figuring out which route I would walk to get to and from work
  • In the afternoon, my dad and I drove up to Groton, Massachusetts to visit the prep school he attended for high school. We ate lunch at my dad's favorite restaurant too: Groton House of Pizza.
  • On the way back to Providence, we drove through Boston, which was super cool! I definitely want to go back at some point this summer.

Tuesday, May 28:

  • Got my Brown ID!
  • Went on a short tour of the engineering building where I will be working
  • Learned about Professor Bahar's current projects and met her research team
  • Began to read the project proposal for the project I will be working on
    • Lots of terms and acronyms I had never heard of
    • Tried to look up many of these terms and acronyms and didn't have much success (the definitions referenced other terms I was not familiar with), which was pretty stressful and frustrating
  • Began outlining a calendar for what needs to happen/when both on the research side and on the DREU program side
  • Began learning HTML/started to set up this website on my Davidson domain
  • Got to meet Kevin, Jasmine, and Giuseppe (grad students on my project team) and talk about life-related topics (New England, grocery stores, the South, study abroad, good restaurants in town, etc.). This was especially helpful considering it is my first time north of D.C.

Wednesday, May 29:

  • Jasmine, project leader and CS PhD student at Brown, sent me a "Getting started" email with many helpful and relevant resources
  • Watched/listened to/took notes on first two lectures from Stanford's Convolutional Neural Networks for Visual Recognition course
  • Since I ate lunch while watching the last part of the second lecture, I decided to take a walk during my lunch break; I walked up Thayer Street to see what restaurants are there (have heard good things); crossed over to Hope Street and stopped by the Nelson Fitness Center to see about getting a summer membership
  • Spent afternoon learning more HTML and working on my website

Thursday, May 30:

  • Spent the morning talking with Giuseppe about my CS background; he explained many concepts that were familiar but slightly modified (particularly because I come from a CS background and this project crosses into computer engineering); it was a good opportunity for us to bond more
  • Watched/listened to/took notes on the third and fourth lecture videos; even though I'm a math and computer science double major, learning about backpropagation and trying to remember Calculus III (which I took almost 2 years ago) was a bit of a challenge
  • It feels like I'm starting to get the big picture of the project, but everything is still pretty abstract

Friday, May 31:

  • Giuseppe said he watched the same Stanford YouTube lectures and finished them all in less than a week by playing them at 1.5x speed; I've not even finished half of the videos and have had to stop them often to process/think about what was being said
  • Spent rest of morning rereading the project proposal; overall it makes more sense but I still don't know what generative-discriminative techniques are; also, the math seems pretty complicated (I feel like I've forgotten so much math)
  • Finished up lesson 4; took a long time to complete because I paused it to work out each example and do the math
  • Ate a late lunch with Giuseppe and Elahe at a Korean restaurant on Thayer Street.
  • Had a meeting with Zhiqiang Sui from the University of Michigan to finalize the project proposal; was actually able to be helpful/find things to be fixed; this meeting took a decent part of the afternoon
  • Talked a little bit with Giuseppe about the project and looked at my website again before heading out for the weekend

Week 2 Accomplishments:
  • Coded two different sorting algorithms for Giuseppe in Vivado (and even though he didn't end up needing them, it was still good practice to work with his code)
  • Learned about specific layers in CNNs--what they do, strengths/weaknesses, how they affect the dimensions of matrices/the depth--and also how they come together to make neural networks; also clarified how the training and testing stages work in practice (see the dimension sketch after this list)
  • Finished structuring/setting up this website
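
For reference, the standard formula for how a conv or pool layer changes the spatial dimensions, as a minimal Python sketch (the function name is mine, just for illustration):

    def conv_out_size(n, f, stride=1, pad=0):
        """Spatial output size of a conv or pool layer: (N - F + 2P) / S + 1."""
        return (n - f + 2 * pad) // stride + 1

    # A 32x32 input through a 5x5 conv with stride 1 and no padding -> 28x28
    assert conv_out_size(32, 5) == 28
    # The same input through a 2x2 max pool with stride 2 -> 16x16
    assert conv_out_size(32, 2, stride=2) == 16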

Monday, June 3:

  • Spent some time talking with Giuseppe about sorting algorithms
  • Resumed watching lecture 5
  • A little less than half-way through, Giuseppe asked me for more help with sorting algorithms. This time he gave me more context for what needed to be sorted; we spent about two hours talking about how we could implement the algorithm, and he said he's going to actually have me write it!
  • Finished the day with "tea time," getting to know members from the other project group (in particular, Casey, another undergraduate student)

Tuesday, June 4:

  • Got my gym membership and have started going to classes in the morning before work (BodyPump, cycling, etc.)!
  • Spent the entire day looking over, writing, and analyzing the sorting algorithm for Giuseppe; learned how to run the project in Vivado

Wednesday, June 5:

  • Determined an even more efficient sorting algorithm and spent the morning coding it
  • Went to the Coffee Exchange with Elahe and Giuseppe and then to PVDonuts
  • When we got back, Jasmine told us (Giuseppe and me) that last semester an undergraduate student implemented a sorting algorithm but Giuseppe doesn't know where it is; it was frustrating to find this out since I had spent a day and a half working on the problem, but it shows the importance of communication within the team
  • Feeling a bit overwhelmed by all of the information related to convolutional neural networks (CNNs) because I'm not sure if it's "sticking" (get lots of different aspects but am not sure how they all fit together/how they work in practice)
  • Going to try to take notes on my notes and then try to learn how to use PyTorch
  • Talked with Jasmine in the early afternoon and am understanding a lot more
  • After work I went to a "social" for undergraduate CS students doing summer research at Brown; we played a round of Catchphrase and then everyone introduced themselves/broke out into groups to chat. It was kind of fun but awkward at times, since some people already knew each other from going to school together

Thursday, June 6:

  • Read more information about convolutional neural networks
  • Watched lecture video 6
  • Began assignment 2 (learning how to use Jupyter notebooks, anaconda, etc.) from the CS231n class from Stanford

Friday, June 7:

  • Spent pretty much the entire day working on this website!

Week 3 Accomplishments:
  • Submitted URL for this site to DREU for 2nd milestone (Mon)
  • Finished all the Stanford CS231n lecture videos on Jasmine's list
  • Completed most of the PyTorch Jupyter notebook from CS231n Assignment 2 (first time using PyTorch)

Monday, June 10:

  • Spent the morning transferring more information from OneNote to this website (particularly this blog)!
  • Went to lunch with Casey (undergraduate student on other project) and Professor Bahar at a lunch for summer undergraduate researchers at Brown
  • Watched CS231n lectures 7 and 8; am looking forward to learning how to use PyTorch in the next few days

Tuesday, June 11:

  • Watched lecture 9, the last on Jasmine's list
  • Researched FPGAs--the basics, and also their role in our project
  • Began reading about how to use PyTorch
  • Completed parts I-IV of PyTorch tutorial in assignment #2 of CS231n. It was pretty challenging but interesting!

Wednesday, June 12:

  • Spent morning working on website
  • Worked to create a CNN that is over 70% accurate in the test phase; I tried various CNN architectures, and from what I found, adding more convolution layers significantly boosted the accuracy rate (a sketch appears after this list)
  • Skype meeting with University of Michigan about project and later with just Professor Bahar about the project
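
Here is a rough PyTorch sketch of the kind of architecture I was experimenting with; the channel counts are made up for illustration, and it assumes the assignment's CIFAR-10 setup (32x32 images, 10 classes):

    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                       # 32x32 -> 16x16
        nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                       # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(128 * 8 * 8, 10),            # 10 CIFAR-10 classes
    )

Stacking the extra conv layers before the fully-connected layer is what seemed to give the biggest boost in test accuracy.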

Thursday, June 13:

  • (Out for brother's graduation)
  • First time flying by myself!

Friday, June 14:

  • (Out for brother's graduation)

Week 4 Accomplishments:
  • Successfully ran SpooNN!
  • Learned how small mistakes can have a huge impact on progress--I incorrectly trained the network a few times, and although each time the errors were minor, I had to retrain the entire network, which took up to 2 hours in some cases (until I realized that I had set the batch size to 1, which meant the network was processing the 2600 training images one at a time, one image per iteration)
  • Wrote Intersection over Union evaluation method

Monday, June 17:

  • Spent the entire day working with SpooNN (halfsqueezenet)
    • Read about how this network is a modified SqueezeNet ("half" SqueezeNet); the biggest differences are that there is one fewer "halffire" module (SpooNN) than "fire" module (SqueezeNet), the maxpool layer placement is slightly altered, and the halffire expand phase has only one convolution layer with 3x3 filters instead of two convolution layers (one with 1x1 filters and one with 3x3 filters); a sketch appears at the end of this day's entry
    • Restructured halfsqueezenet_objdetect.py by breaking up the file into multiple files (there were multiple classes defined within the one file, which made it extremely lengthy and hard to follow at times)
    • Spent most of the afternoon installing various packages/modules in an attempt to run the network; had some issues because the code is written in Python 3 (not 2.7) but the TensorFlow installed on the computer is for 2.7; the GPU can only run Python up to version 3.6 (not 3.7), and pip was being weird about installing TensorFlow/TensorPack/other modules for 3.6
  • Professor Bahar is lending me her Weekend Walks in Rhode Island: 40 Trails for Hiking, Birding & Nature Viewing book and I'm excited to go hiking (although I wish I had brought my actual hiking boots with me...tennis shoes will do though)
  • The day felt pretty slow overall, probably because I was pretty tired (my flight back to PVD got delayed so I didn't get back until about 3 am...)
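
To make the halffire idea concrete: SpooNN itself is written in TensorFlow/TensorPack, but a PyTorch paraphrase of the block looks roughly like this (channel counts are illustrative, not SpooNN's actual ones):

    import torch.nn as nn

    class HalfFire(nn.Module):
        """Squeeze with a 1x1 conv, then expand with a single 3x3 conv.
        SqueezeNet's fire module instead expands with two parallel conv
        layers (1x1 and 3x3) and concatenates their outputs."""
        def __init__(self, in_ch, squeeze_ch, expand_ch):
            super().__init__()
            self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
            self.expand = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(self.expand(self.relu(self.squeeze(x))))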

Tuesday, June 18:

  • Added an accomplishments section for each week on this blog
  • Finished installing required modules
  • Followed the instructions to train the network on the ycb image dataset and then tested the network; just training the network took almost two hours :/ (I later found out why)
  • I did not realize that the ycb dataset folder Jasmine gave me contained another folder with .txt files that specified which images from the dataset were train images and which were test images (I had trained and tested the network on the entire dataset); I wrote a shell script that sorted the images and .xml files into the appropriate test and train folders and then retrained the network on the specified train images
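
My actual script was a shell script, but the logic in Python looks roughly like this (the folder and file names here are hypothetical, not the real dataset's):

    import shutil
    from pathlib import Path

    dataset = Path("ycb_dataset")
    for split in ("train", "test"):
        (dataset / split).mkdir(exist_ok=True)
        # Each line of e.g. train.txt names one image (without extension)
        for stem in (dataset / (split + ".txt")).read_text().split():
            for ext in (".jpg", ".xml"):
                src = dataset / (stem + ext)
                if src.exists():
                    shutil.move(str(src), str(dataset / split / src.name))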

Wednesday, June 19:

  • I spent the morning trying to extract the classify label that corresponds to the bounding box guess, but had no luck
  • In the afternoon I wrote a function to evaluate the overlapping area (or intersection) between two bounding boxes; I later found out that this evaluation method for bounding boxes is called "Intersection over Union" (IoU) because it equals the area of intersection of the boxes divided by the area of their union (a sketch appears after this list).
  • I also realized that it's okay to look on GitHub, etc. to see if other people have already written the function you need (which they have); not searching is just a habit, because at Davidson we are not allowed to search for code (which makes sense...they want us to have the practice of writing it)
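
Here is a minimal sketch of the IoU idea, assuming boxes in (x_min, y_min, x_max, y_max) corner format (my actual function also had to handle SpooNN's coordinate scaling):

    def iou(box_a, box_b):
        """Intersection over Union of two axis-aligned boxes."""
        # Corners of the intersection rectangle
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0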

Thursday, June 20:

  • Even though I had separated the SpooNN code into multiple programs so that it was easier to read/follow (one class per program), I had still been running the network from the original code because I had some import problems with my edited version; I fixed these import problems and began running from my edited version, which helped a lot
  • I realized that the SpooNN authors had hard-coded the class count as an out_channel dimension in a layer in their network; because the code is so long, I missed this when changing the code for the ycb dataset, which meant that the classifier was trained on 12 classes instead of 22; at first I thought this was why the classification labels were wrong, but after rerunning with the appropriate number of classes (this time I used a variable equal to the length of the classes list), the labels were still wrong
  • I also discovered that at some point early on I had changed the batch number to 1 when I meant to change another global variable to 1; this is why the network was taking so long to train, and when I changed the batch size to 100, the training only took about 3 or 4 minutes

Friday, June 21:

  • We had our usual Skype meeting with the University of Michigan; sometimes I get frustrated during these meetings because I really want to be able to contribute, but often don't fully understand what is going on (I guess I get the big picture but the specifics I don't get)
  • I spent almost the entire day trying to get classify to work (adding softmax layers, multiplying obj_detect and classify, changing how loss is calculated, etc.), but still didn't have any success. This network just wants to classify almost everything as a mustard bottle :/
  • I was frustrated that classify wasn't working and stressed because I thought Jasmine needed this to work as soon as possible, but when I apologized for how long it was taking, she explained that she thought that fixing classify/adding multiple object detection might take a good part of the summer, which made me feel a little better

Week 5 Accomplishments:
  • Finished formatting/writing information on the homepage!
  • Began formatting photo gallery (even though I'm not particularly artistic, I'm excited to be able to share my adventures with others)
  • Got SpooNN working on a new dataset
  • Fixed the classification aspect of SpooNN!!! I was also able to run my Intersection over Union analysis on the network
  • Made a lot of progress on various parts of this website

Monday, June 24:

  • Last week I got so caught up in trying to run SpooNN that I forgot to update this blog! I spent a large portion of the day updating the blog, reformatting parts of this website (and learning the HTML to do so), and writing the introduction that will appear under the "Overview" section on the Home page (I will uncomment it at the end of the summer and will add stuff as needed)
  • Since Jasmine said that fixing SpooNN to correctly classify the objects in the bounding box guess/identifying multiple objects per image might take the whole summer, I decided to regroup and create a checklist and timeline for everything I could think of for the rest of the summer (a surprising amount); the checklist and timeline also include items related to this website, the Brown Summer Research Symposium and the DREU final report, which makes me feel more organized
  • I think going forward I would like to learn a little more TensorFlow so I may better understand the SpooNN code (at this point, I'm stuck and don't know how to fix the classification aspect of the network)
  • Since I watched the USWNT play Spain in the World Cup during lunch, I stayed until 7 pm; it was a long day, but I finished the day by working on the photo gallery which was fun

Tuesday, June 25:

  • I worked a little more on the photo gallery, and I like the way it's starting to look. I'm going to have to learn how to sort the images based on their orientation and location so that the orientations aren't mixed (on a given line) under either the "show all" tab or a specific location tab
  • Jasmine said that she thinks the classification feature might work on a dataset where just one object is identified. She sent me the new dataset and I cleaned it up (fixed folder hierarchy, randomly selected 10% of each object/position to be a test image, sorted images using a shell script again, etc.)
  • I ran into some problems initially because the images in this dataset are .png whereas in the other dataset they were .jpg so I had to make appropriate changes to the code
  • I also ran into some problems with mysterious ghost files (not actually ghosts, but files that would show up in the folder but not in the terminal); when I did select the "non-ghost" file, the compiler would give some error or another that didn't make sense in context, which I've never had happen before

Wednesday, June 26:

  • I started the morning frustrated because this time the network was classifying everything as a blue cup and later everything as background, so I decided to go through the code again to eliminate code that we won't need (code specifically for the DAC); I also consolidated the analysis code I had written
  • I got classify to work on the new dataset!! For some reason the default was to pass through this label array as all 0s instead of an array of indices corresponding to the various class labels. Once I added two lines of code to fix the array, the classification stage worked!
  • Additionally, my analysis code worked, and the IoU accuracy was 82% with only 4 misclassifications! Pretty cool!
  • This time I also knew that I could apply a softmax layer to their logits tensor (the classify tensor after global average pooling, so it's just 16x1) and see the object probabilities for each label. This was especially helpful when looking at the 4 misclassifications because I could see how far off (or not) the network was from picking the correct label
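
The softmax trick is simple enough to sketch in a few lines of NumPy (this is the standard formula, not SpooNN's exact code):

    import numpy as np

    def softmax(logits):
        """Turn a vector of class logits into probabilities."""
        z = logits - np.max(logits)   # subtract the max for numerical stability
        e = np.exp(z)
        return e / e.sum()

Applying this to the 16x1 logits tensor gives one probability per class, so for a misclassified image I could see whether the correct label was a close second or nowhere near the top.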

Thursday, June 27:

  • I have started to run the network on the original dataset, but I think adapting the classify part will be harder than I originally thought because of the fact that there are multiple objects per image and the network has been trained on all of them.
  • Since there are multiple labels per image and the network expects one, I added a section of code that reads the .xml file and gets the name of the first object that appears in the file (a sketch appears after this list). I'm slowly getting more comfortable reading .xml files...
  • After some frustration with weird errors from SpooNN, I decided to spend the rest of the day working on this website. I had noticed that there were some formatting issues with the background image as well as the navigation bar, so I learned how to fix these issues and then made the appropriate changes. Giuseppe was also helpful. I also reorganized and redesigned this blog so now it is a collapsible accordion as opposed to one long file with various links.
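
Grabbing that first object name is only a few lines with Python's standard library, assuming the usual PASCAL VOC-style <object><name> layout (the function name here is mine):

    import xml.etree.ElementTree as ET

    def first_object_name(xml_path):
        """Return the name of the first annotated object in a
        VOC-style .xml annotation file, or None if there is none."""
        root = ET.parse(xml_path).getroot()
        obj = root.find("object")
        return obj.find("name").text if obj is not None else None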

Friday, June 28:

  • Our meeting with Michigan lasted longer than it usually does, but I understood more today and was even able to give an update on my progress with SpooNN!
  • Worked on and off on various parts of the website. I know how much I have looked at previous DREU students' websites, so I want to make sure this website looks good/provides useful information!
  • Today I ate lunch with Professor Bahar, Jasmine and Giuseppe at Flatbread Company. It was fun!
  • I wrote up an informal summary of the SpooNN results on the VOC dataset (hopefully I'll be able to use some, if not all, of the results in my final report/poster for the research symposium)
  • The current problem with SpooNN and the ycb dataset is that the bounding box guesses are way off, and I think it has something to do with the training phase (even though I haven't touched that part of the code). When it is training, the mean squared error (corresponding to the bounding boxes) is zero from the beginning. Maybe that means it's overfitting? I'm not exactly sure. I have traced the error back to the labels keyword argument for the mean squared error function, but can't seem to trace it any further. For context, the labels keyword argument in another function call was the reason why classify wasn't working before, so clearly I'm not fully understanding what the parameter does.

Week 6 Accomplishments:
  • I have an idea to try for adding detection of multiple objects, which is promising!
  • I began the implementation of this idea, and by the end of the week my network was detecting multiple objects (the bounding boxes were way off and my threshold idea might not be working the way I thought it would, but this is still progress)!

Monday, July 1:

  • After about an hour and a half, I found the line that was causing the mean squared error to be 0 (I was missing one [0]...); I fixed it and then retrained/reran with the proper label numbers (since there are multiple objects, I just used the index corresponding to the first label found in the .xml file). This resulted in an increase in overall accuracy, but only to about 35%.
  • While watching the images with their bounding boxes as well as their classifications, I noticed that sometimes the label was wrong but the bounding box guess was around the same object as the ground truth; there were also a few examples where the model detected a different object than the ground truth (one that wasn't annotated in the .xml file). I went through each image and looked at the label guess, the ground truth label, the object in the green bounding box (guess) and the object in the blue bounding box (one ground truth), and discovered that about half the time, the bounding box was correct but the label was incorrect. There were also cases where both bounding boxes were on one object and both labels were for another object. I concluded that the next step is to look into writing code for multiple object detection.

Tuesday, July 2:

  • Spent most of the day trying to determine what changes need to be made for multiple object detection; I'm definitely starting to understand the details of the training process more, and after talking with Jasmine, have a direction (or two) for the implementation
  • I read the YOLO paper that Jasmine recommended. I was surprised that I understood as much of it as I did. I got a little confused by the way they kept track of their bounding box information (turns out their h and w variables mean something slightly different than h and w in SpooNN), but eventually I figured it out. I will definitely have to do more research into their loss function if my current idea doesn't end up working out.
  • While looking at the training stage code again, I noticed that there was a call to a function called intersection that was used in filling an array called iou. I realized this stood for Intersection Over Union, which I had written a function to compute for the analysis part (I hadn't looked at this part of the code before). That being said, when I looked at the function itself, I discovered some differences, particularly with scaling and how they calculated union, so Jasmine said to use my IOU function for the analysis part.
  • Today was pretty frustrating though because I found out I didn't get into Machine Learning for the fall. This is significant because it's only offered once every other fall, so I will have already graduated the next time it is offered. I'm hoping to be able to switch into it, but I've also been thinking about trying to do research with a professor as an independent study (a few professors have asked me in previous semesters, but I already had a full schedule at that point)
  • Sometimes it gets so hard to keep track of all of the dimensions and how they change throughout the model, but I've found that drawing it out helps

Wednesday, July 3:

  • Wrote my DREU 3rd Milestone report. In doing so, I realized just how much I have learned in the past 5 weeks!
  • Spent part of the day going through the code and tracking how the dimensions of everything would change (both in their construction and in operations) with my multiple object detection (MOD) implementation; I also began leaving comments based on where the code needs to be rewritten/what is no longer needed
  • Began the MOD implementation in the training phase (dataset.py); so far, it seems to be working as intended

Thursday, July 4:

  • (Out for Independence Day)

Friday, July 5:

  • After our weekly phone meeting with the University of Michigan, I spent the rest of the day working on the implementation of multiple object detection (both in the training phase and the testing phase)

Week 7 Accomplishments:
  • SpooNN is detecting multiple objects (that are actually there) and correctly classifying them! There are still entire objects the network misses, but it's still major progress!
  • Implemented AP/mAP code to analyze the accuracy of the network--with iou_threshold = 0, mAP is about 90%*! Multi-SpooNN is showing some promise!
  • Submitted my abstract and poster title to the DREU to be considered for the Grace Hopper Celebration!
*See next week's blog posts for some sad news :(

Monday, July 8:

  • Researched non-maximum suppression (NMS) algorithms (a sketch of the greedy version appears after this list). This will definitely be useful in combining bounding boxes that encompass the same object, but I will have to do more research to see which variant is the most effective.
  • I fixed a bounding box error--when I left on Friday, all of the bounding boxes were limited to the bottom lefthand corner. Evidently I had switched an 'h' and a 'w' in scaling the bounding boxes back to the original image size, which was the culprit. When I fixed this, the bounding boxes were definitely more accurate.
  • I have been playing around with how I choose which bounding boxes to keep (I have 14x14=196 different bounding boxes, and I only want the ones that are around objects in my image). For example, I have tried changing the loss function for objdetect as well as eliminating an activation layer for the classify stage. So far I haven't had any luck, but I will keep trying.
  • Because I was feeling a little frustrated with the MOD implementation, I spent the last hour or so updating this website (blog, places I've visited, etc.). I'm hoping that tomorrow I have a few more ideas to try.
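
Here is a sketch of the greedy NMS variant I read about, assuming boxes as rows of an Nx4 NumPy array in corner format (this is the textbook algorithm, not code from our repo):

    import numpy as np

    def nms(boxes, scores, iou_thresh=0.5):
        """Keep the highest-scoring box, drop remaining boxes whose IoU
        with it exceeds iou_thresh, and repeat; returns kept indices."""
        x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
        areas = (x2 - x1) * (y2 - y1)
        order = np.argsort(-scores)
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            rest = order[1:]
            # IoU of the kept box with every remaining box
            xx1, yy1 = np.maximum(x1[i], x1[rest]), np.maximum(y1[i], y1[rest])
            xx2, yy2 = np.minimum(x2[i], x2[rest]), np.minimum(y2[i], y2[rest])
            inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
            overlap = inter / (areas[i] + areas[rest] - inter + 1e-9)
            order = rest[overlap < iou_thresh]
        return keep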

Tuesday, July 9:

  • I spent the day tweaking various parts of the model to see if they had a positive impact on the bounding box accuracy or classification accuracy. For example, I tried different loss functions, added confidence thresholds for objdetect and classify, and changed the kernel shape to 3 on the last conv layer. Changing the kernel shape seemed to be the most effective change.
  • I later tried to vary the iou_threshold in the training stage and found that with a very low iou_threshold, the bounding boxes are very accurate but the labels are incorrect, and with a higher iou_threshold (like 0.1), the labels are correct but the bounding boxes are way off. Tomorrow I will try to see if there is a value in between 0.0 and 0.1 that produces good bounding boxes as well as accurate labels
  • It was a little frustrating/almost boring to have to retrain the network after making each little change. Even though it takes less than 5 minutes to train, when you train the network many times in one day, that time adds up

Wednesday, July 10:

  • After not having much success by changing the iou_threshold this morning, I decided to try using the hinge loss function for objdetect (I had not tried it since I changed the kernel shape to 3)
  • My network actually detects multiple objects per image with decent bounding boxes and correct labels!! There are still several images where it doesn't detect any objects (??) but there are also significantly more images where it detects 3+ objects! Definitely a step in the right direction
  • I wrote an abstract draft for this project (to be submitted to the DREU and Brown Summer Research Symposium); Professor Bahar and Jasmine made many helpful suggestions, which I appreciated!

Thursday, July 11:

  • Today Jasmine said that maybe the best approach is to analyze all of the bounding boxes based on their class probabilities rather than using multiple objects to find the "best" bounding boxes.
  • Jasmine sent me some code she thinks I will be able to use parts of to do this type of analysis. I spent the day trying to understand the code and figure out how parts would fit into Multi-SpooNN.
  • We also had our meeting with Michigan since Professor Bahar will be out of town tomorrow.

Friday, July 12:

  • I worked to implement the Average Precision (AP) and mean Average Precision (mAP) analysis in Multi-SpooNN (a simplified sketch appears after this list). When I first got it working, the mAP was only about 11%, which was disappointing and a little frustrating. Jasmine thought it might just be the accuracy of the network, but I wanted to try changing the iou_threshold back to 0.0 to see if there would be improvements. I'm glad I did, because the new mAP is about 90%!
  • Afterwards, I began working to generate heatmaps based on the classification probabilities. I've run into some problems with dimensions not aligning though, so I'll have to continue to look at this next week.
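
A simplified sketch of the per-class AP computation (the matching of detections to ground-truth boxes happens before this call; the function and argument names are mine, not the repo's):

    import numpy as np

    def average_precision(scores, is_true_positive, n_ground_truth):
        """Rank detections by confidence, then integrate precision over
        recall (non-interpolated). is_true_positive[i] is 1 when detection
        i matches a previously unmatched ground-truth box above the IoU
        threshold."""
        order = np.argsort(-np.asarray(scores))
        tp = np.asarray(is_true_positive, dtype=float)[order]
        tp_cum = np.cumsum(tp)
        fp_cum = np.cumsum(1.0 - tp)
        recall = tp_cum / max(n_ground_truth, 1)
        precision = tp_cum / (tp_cum + fp_cum)
        ap, prev_recall = 0.0, 0.0
        for p, r in zip(precision, recall):
            ap += p * (r - prev_recall)   # area under the P-R curve
            prev_recall = r
        return ap

mAP is then just the mean of the per-class APs.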

Week 8 Accomplishments:
  • Implemented code to generate heatmaps for each object in a given image
  • Found "settings" that will produce decent heatmaps (but the bounding boxes are awful); found other settings that will produce a slightly better mAP (20% to 25%), but the heatmaps are awful
  • Hoping to have more success next week in finding settings that produce decent heatmaps and bounding boxes; I also haven't seen an mAP value higher than 25% yet, which is a bit disappointing

Monday, July 15:

  • I spent the morning updating this blog/the photo gallery. It's going to be a lot of work to annotate all of these photos, but hopefully someone will be able to appreciate it!
  • Spent the rest of the day working to get viz_utils.py to work (specifically to generate heatmaps); I was successful, but the heatmaps don't make the model look particularly accurate. As a result, Jasmine asked to look at the eval_det.py code again, and she said she had added some code that didn't penalize for false positives, which is why my model had been performing at 90% (or so I thought). When I reran eval_det.py without her modifications, the model only performed at about 20% accuracy :(

Tuesday, July 16:

  • Since I now have a quantitative way to evaluate the model, Jasmine suggested testing the model at various iou thresholds. Since this involved a lot of training and retraining (with a new iou threshold value each time), I multitasked and also worked on the photo gallery (the photos still take so much time to load!)
  • In the afternoon, Jasmine and I spent almost 2 hours discussing what could be going wrong with the model

Wednesday, July 17:

  • Brainstormed a bunch of different things I wanted to try over the next few days to see if they would have a positive impact on bounding box accuracy or heatmap accuracy
  • More tweaking, retraining, testing, and working on photo gallery

Thursday, July 18:

  • More tweaking, retraining, testing, and working on photo gallery

Friday, July 19:

  • Meeting with Michigan
  • More tweaking, retraining, testing, and working on photo gallery

Week 9 Accomplishments:
  • Implemented anchor boxes with k-means clustering
  • Improved mAP from low 20s to mid 20s

Monday, July 22:

  • Brainstormed a new list of things to try. In particular, I am going to try combining the settings that produce decent heatmaps with those that produce decent bounding boxes to see if the network can produce both
  • Hoping I will have some success this week considering that the Brown Summer Research Symposium is next Friday...

Tuesday, July 23:

  • Began researching anchor boxes, YOLOv2, and k-means clustering
  • Attended thesis defense presentation, which was interesting
  • Began determining how to change current version of code to implement anchor boxes
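
The idea from YOLOv2 is to cluster the ground-truth box shapes with k-means, using 1 - IoU as the distance so big and small boxes are treated fairly. A sketch of what I set out to implement, assuming box_whs is an Nx2 array of ground-truth (width, height) pairs:

    import numpy as np

    def iou_wh(wh, centroids):
        """IoU between one (w, h) pair and each centroid, treating all
        boxes as if they share a corner (only shape matters)."""
        inter = np.minimum(wh[0], centroids[:, 0]) * np.minimum(wh[1], centroids[:, 1])
        union = wh[0] * wh[1] + centroids[:, 0] * centroids[:, 1] - inter
        return inter / union

    def kmeans_anchors(box_whs, k, iters=100, seed=0):
        """k-means on box shapes with d(box, centroid) = 1 - IoU."""
        rng = np.random.default_rng(seed)
        centroids = box_whs[rng.choice(len(box_whs), k, replace=False)]
        for _ in range(iters):
            # Assign each box to its highest-IoU (lowest-distance) centroid
            assign = np.array([np.argmax(iou_wh(wh, centroids)) for wh in box_whs])
            new = np.array([box_whs[assign == i].mean(axis=0) if np.any(assign == i)
                            else centroids[i] for i in range(k)])
            if np.allclose(new, centroids):
                break
            centroids = new
        return centroids  # the k anchor (w, h) shapes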

Wednesday, July 24:

  • Spent the entire day implementing the anchor box code

Thursday, July 25:

  • In the morning, I debugged the anchor box code
  • Had lunch from Baja's outside with other lab members/Professor Bahar
  • In the afternoon, had meeting with Michigan and discussed many possible reasons our ideas hadn't been working

Friday, July 26:

  • Tested anchor box code with different threshold values
  • Tested anchor box code with two classify outputs (training on two anchor boxes per grid cell); this caused training time to increase significantly, but mAP improved
  • Worked on website while waiting for network to train

Week 10 Accomplishments:
  • Created and presented poster at Brown's Undergraduate Summer Research Symposium
  • Had lots of ideas for things to test and got to actually test them (unfortunately no luck so far)

Monday, July 29:

  • Worked on poster while testing on both datasets
  • Also looked at Jasmine's version of the network

Tuesday, July 30:

  • Continued working on poster while testing; in particular, implemented a background class mask

Wednesday, July 31:

  • Spent the entire day revising poster (and eventually finalizing poster)

Thursday, August 1:

  • Got poster printed (lots of walking across campus and back)
  • Tested choosing one anchor box to train on
  • Also reran class-SpooNN to watch how different losses changed during the training process
  • As more time passes, it increasingly seems that the model spends so many epochs training on the bounding boxes that it neglects the classifier, which is why the classifications are so off (but the bounding boxes are so good).

Friday, August 2:

  • Had final meeting with Michigan; continued conversation from last week regarding why the model might not work (not training on enough data, etc.)
  • Presented poster at Brown's Summer Undergraduate Research Symposium
  • Began testing model on larger dataset (with synthetic data); left overnight to continue training because it was taking so long to train

Week 11 Accomplishments:
  • Outlined, wrote and finalized final report
  • Found other settings that produce good mAP (2 classify, one to train, mask)

Monday, August 5:

  • Trained model on larger dataset with two different settings (took all day to train twice); training on larger dataset produced mainly the same results
  • Outlined and began to write final report

Tuesday, August 6:

  • Continued to write final report; also made figures/got pictures as needed
  • Met with Professor Bahar to reflect on summer; conversation went well

Wednesday, August 7:

  • Continued conversation with Professor Bahar
  • Updated blog
  • Finalized final report
  • Tested classify with mask; tested choosing only one anchor box to train on with mask

Madelyn Gatchel
magatchel@davidson.edu
madelynegatchel.com