Ian Torres

Computer Science: UMass Amherst

Year: Senior

Undergraduate Graduation: Spring 2018

Current Research: Automatic Semantic-Aware Furniture Composition

Past Research: Biophysics Lipid Membrane Research

Email: itorres@umass.edu

CRA-W Distributed Research Experience for Undergraduates

Host University: University of Virginia, Charlottesville VA

Mentorship Professor: Vicente Ordonez Roman
Website: Vicente's Website
Area of Research: Computer Vision

Research Project Description

My research project over the summer is to develop a program capable of
taking image backgrounds of various rooms (living rooms, dining rooms,
bedrooms, etc.) and compositing these backgrounds with image foregrounds
(furniture). Only foregrounds that match the background semantically
(i.e., does this chair belong in an eclectic dining room?) will be
considered for composition. If this first criterion is met, the program
will then place the furniture foreground in a position that matches the
context of the room (i.e., making sure a chair is not floating in the
air). Once the position is determined, the colors of the furniture
foreground will be blended with respect to the room background to ensure
proper colorization. The last step is to relight (or harmonize) the
furniture foreground to match the lighting of the room.

Research Goals

  1. Develop a web scraper to collect image backgrounds and foregrounds
  2. Create a database to store said images based on their semantics
  3. Design a Convolutional Neural Network (CNN) to analyze the context
    of an image background and calculate possible foreground placement
  4. Use an alpha matting implementation to properly colorize the image
  5. Harmonize the image foreground with respect to the background
  6. Display original image backgrounds next to their modified counterparts

Weekly Journal

Week 1

My first week at UVA was mainly used to set myself up in my living space
and register with the University as Student Staff over the summer. The
Computer Science faculty/administrative staff at UVA have taken great
efforts to ensure that my stay at the university is comfortable/productive.

The remainder of my time was used to get up to speed with Vicente's current
research topics as well as reading as much background information on these
topics as I could. Unfortunately, there is much more background information
to be learned on my part, but I am confident that I will be able to absorb
the proper amount of background materials to successfully complete my project.

Week 2

After the progress I made during my first week, I am now more up to speed
when it comes to using UVA systems (i.e. NetID, WiFi, CS servers, etc.).
I have also been assigned an official UVA workstation computer to use over
the summer (see picture below)! This week my goal is to finish all of the
PyTorch (a deep-learning Python framework) beginner tutorials and to
start working on my web scraper for image backgrounds. Hopefully by the end
of this weekend it will be functional (fingers crossed). I would also like
to make this website look more professional by the end of next week, so it
does not seem so dull.

Week 3

Unfortunately, I did not get to finish all of the PyTorch tutorials or the
web scraper like I had planned, but I am actively working on them. On Sunday,
Vicente, Xuwang (Vicente's grad student), and I went to Walnut Creek
to join Dan Weller, Scott Acton, and their graduate students for a BBQ
picnic. I also got to hike around Walnut Creek and see some beautiful
Virginia wilderness.

In order to make my life easier when counting the number of files in a
working directory, I decided to write a Bash script. It will certainly
be useful for keeping track of the number of pictures I am scraping
from the internet. If my fellow Linux users wish to access this script
from anywhere in their terminal, all you need to do is add the following
line to the '.bashrc' file in your home directory:
alias countFiles='directory_location_of_file/countFiles.sh'.
However, if the file is not executable (chances are it will not be after
downloading), run the following from the directory where the file is
located: [prompt > chmod +x countFiles.sh]. Also note that when you
download it from pastebin the name of the file may be lowercase, so
adjust the commands accordingly.
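
For reference, here is a minimal sketch of what such a file-counting script can look like (the pastebin version may differ in details; this is just the general idea):

```shell
#!/usr/bin/env bash
# countFiles.sh -- print the number of regular files in a directory
# (defaults to the current working directory when no argument is given)
count_files() {
    find "${1:-.}" -maxdepth 1 -type f | wc -l
}

count_files "$@"
```

With the alias configured, running `countFiles` in any directory prints how many files it contains, which is handy for watching a scrape fill up a folder.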

Week 4

After a grueling amount of research and some clever scripting I,
with Vicente's help, was able to draft an image crawler. This
crawler collected over 200,000 images of various
types of chairs, from "thrones" to "s chairs" (yes, there is
such a thing as an "s chair"), all with transparent backgrounds.
My next task is to filter through these images to create a set
of 5,000 images we can use as reliable training data for our
chair convolutional neural network. Since we want the most
accurate data possible, I will unfortunately have to manually
go through the first 100 images of each category of chair until
I have 5,000 usable images. However, achieving this first
benchmark is invigorating, so I will keep up this current
momentum and focus on the task at hand! Also, yes, I know the
website still looks bland, but it will improve soon.
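
To give a flavor of what the crawler does at its lowest level, here is a hedged sketch of a single-image download helper using only the standard library (the function name and layout are illustrative, not the actual crawler code):

```python
import urllib.parse
import urllib.request
from pathlib import Path

def download_image(url: str, out_dir: str) -> Path:
    """Fetch one image URL into out_dir, named after the URL's last path component."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # e.g. "https://example.com/chairs/throne_01.png" -> "throne_01.png"
    dest = out / Path(urllib.parse.urlparse(url).path).name
    urllib.request.urlretrieve(url, dest)
    return dest
```

The real crawler wraps something like this in a loop over search-result pages for each chair n-gram, which is how the image count climbed past 200,000.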

Week 5

The grueling work of going through over 5,000 images to end up
with 5,000 keepers certainly proved to be monotonous, but
nevertheless the task was finished in time. Once I had sorted
through all of the images, I set up a simple Python HTTP
server so the computer vision research group could view
the images at their leisure.
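
The server itself is nothing more than Python's built-in static file server; something along these lines (the port handling here is my own choice, not the exact command I ran):

```python
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# Serve the current directory (where the sorted images live) over HTTP.
# Port 0 asks the OS for any free port; a fixed port (e.g. 8000) works too.
# This is equivalent to running `python3 -m http.server` in that directory.
httpd = ThreadingHTTPServer(("", 0), SimpleHTTPRequestHandler)
print("serving on port", httpd.server_address[1])
# httpd.serve_forever()  # uncomment to block and serve requests
```

Anyone on the lab network can then browse the image directory from a URL instead of shelling into my workstation.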

Vicente and I were later able to continue drafting the CNN
model that would use these images as training, validation,
and testing data to identify positive chair objects.
The images left over from the chair n-gram categories
I went through, which I deemed unfit to be positive chair
results, were used to identify negative chair objects (not
a chair). Results from the first set of experiments, using
artificially negated chair images along with the images that
were deemed unfit, suggest that this model could, perhaps,
help us distinguish chair foregrounds that have been properly
separated from their backgrounds by GrabCut.
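
To give a sense of the kind of model involved, here is a hedged PyTorch sketch of a small binary chair / not-chair classifier; the architecture and input size are illustrative only, not the model we actually drafted:

```python
import torch
import torch.nn as nn

class ChairNet(nn.Module):
    """Tiny CNN for binary chair / not-chair classification (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # two output logits: chair vs. not-chair; assumes 64x64 RGB inputs
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))
```

Training then pairs the 5,000 curated positives against the rejected images (and artificially negated ones) as negatives.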

Week 6

This weekend Vicente invited his lab group and me on a weekend
excursion to Luray Caverns, which included an automobile/
Virginia settlement museum along with the cavern tour (see
pictures below). After our tours we ate good food at the dining
facility at the caverns. Since they also offered a wine tasting,
I partook; I wasn't disappointed, as the wine was wonderful. The
next activity was our journey through a garden maze, and
though some were concerned that we might get lost, I semi-
jokingly said that was the point. However, I was confident
that a group of computer scientists would find their way
through, and of course we did. The final leg of
the excursion was our hike up Hawksbill Mountain,
which had an amazing view from the summit. All in
all it was a wonderful time.

The first half of my week consisted of running GrabCut on
more stock photos from the internet that had non-transparent
backgrounds. By running GrabCut on these images we could
expand our set of test masks to see whether the CNN is
functioning properly. After running through these images we
created almost 1,400 more test masks for our dataset. But,
since we did not annotate any of the results ourselves, the
data was ambiguous to us as well. So, keeping this in mind,
we fed the images through the CNN as a test set under an
unknown classification to see what would happen. Our results
were surprisingly accurate: the CNN classified the top 20
results as chair masks, and with additional GrabCut data we
could enhance its classification accuracy.

Week 7

From here on out I am just trying to gather more data to improve
the performance of the CNN. I am now placing furniture foregrounds
on room backgrounds to confuse the GrabCut algorithm, in order to
create a large number of negative GrabCut masks to feed into the CNN.

I have also prepared a poster for the upcoming poster
presentation that UVA is hosting for undergraduate researchers
over the summer. Next week I will be presenting the poster, so,
hopefully, all will go well.

Week 8

During this week I presented my poster at a Center for Undergraduate
Excellence event at UVA. The poster session was full of people, and
from what I could see there were at least 200 people attending the
event. After the poster session I was able to keep my poster print,
which I thought was pretty cool, considering I had not been able to
keep the posters I made during my past REUs. By week's end I
was able to analyze some more GrabCut data that could be used for the
model.
Week 9

After some long hours of toil getting the automated generation of
GrabCut data working efficiently, the week came to an end.

Week 10

This week was more or less dedicated to moving myself out of my
apartment, incorporating the newly generated data into the model,
and starting to draft my final report for the REU.

I enjoyed working at UVA doing computer vision research. It is valuable
experience that I will carry with me throughout my career as a researcher
and computer scientist.

PDF link