Working with Prof. Shapiro and the Structural Informatics group was a great experience. I got to work with people from different backgrounds (anatomists, neurologists, computer scientists). The experience not only made me aware of new avenues but genuinely broadened my horizons.
For the first couple of weeks I worked with Emily, learning how it worked, its relationship with the Foundational Model, what the Image Manager was, and some aspects of the Scene Generator. From the third week onwards I worked on the Image Manager. It took time to understand the database: how it was maintained, the relationships between its tables, how WIRM worked (unfortunately there was very little documentation for WIRM), and how the Image Manager is built on the WIRM toolkit. Another couple of weeks went into getting familiar with all of this while learning Perl at the same time.

My task was to add the ability to upload a collection of images to the Image Manager in a single operation. Previously only a single image could be added at a time, and its annotation file (which exists in IML format) had to be uploaded separately. I finished this task around the sixth week. Meanwhile, I also gave a presentation at the SIRS meeting where I laid out some ideas for the image-retrieval interface in Emily: what it would look like and what the requirements might be. I also had some ideas for changes and improvements to the Image Manager; by this point I was very comfortable working with the Image Manager and understood the system well, having spent a lot of time with it.
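The single-operation upload can be illustrated with a small sketch. This is not the actual Perl/WIRM code; the Java below, the `pairImagesWithAnnotations` helper, and the naming convention (each `slice01.jpg` paired with a `slice01.iml` annotation file) are assumptions for illustration only.

```java
import java.util.*;

// Sketch of the batch-upload idea: pair each image file with its IML
// annotation file so that both can be submitted together in one
// operation, instead of uploading images and annotations one by one.
public class BatchUpload {

    // Given a list of file names, match each image with the annotation
    // file that shares its base name (e.g. slice01.jpg <-> slice01.iml).
    // The naming convention is a hypothetical one for this sketch.
    static Map<String, String> pairImagesWithAnnotations(List<String> files) {
        Set<String> annotations = new HashSet<>();
        for (String f : files) {
            if (f.endsWith(".iml")) {
                annotations.add(f);
            }
        }
        Map<String, String> pairs = new LinkedHashMap<>();
        for (String f : files) {
            if (f.endsWith(".jpg")) {
                String iml = f.substring(0, f.length() - 4) + ".iml";
                if (annotations.contains(iml)) {
                    pairs.put(f, iml);
                }
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList(
            "slice01.jpg", "slice01.iml", "slice02.jpg", "slice02.iml");
        for (Map.Entry<String, String> e :
                 pairImagesWithAnnotations(files).entrySet()) {
            // In the real system, each pair would be handed to the
            // Image Manager's upload routine in a single batch pass.
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
```

The pairing step is the interesting part; once images and annotations are matched up, the batch operation is just a loop over the pairs.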
After finishing that task I was supposed to integrate image retrieval with Emily, but at one of the meetings Prof. Shapiro gave a presentation on content-based image retrieval, and I got interested in that. I had only three and a half weeks left to do something substantial, so I started building a Java interface for image retrieval: a simple Java program that talks to the image_repo database and retrieves images. There are two aspects to it. The first is retrieval based on annotations, whereby the system looks for an exact match for the query in the database. The second is retrieval based on captions: for queries like 'lungs' or 'heart', the program retrieves and displays all images whose captions contain the stemmed form of the query (for 'lungs', the stem 'lung'), and also retrieves all annotated regions (parts) associated with each base image.

The next step was to order the retrieved images using content-based retrieval techniques. I ranked the images by the area their regions occupy, on the simple principle that the larger the area a region occupies, the better that particular image matches the query. My last week and a half were spent figuring out how to calculate the area. The image_repo database stores all the boundary coordinates of each annotated region, so I converted this vector notation into a raster one, i.e. I generated all the pixels that lie inside a particular annotated region. For each file I generated a labeled image, which currently exists as a 2D array from which the area can easily be calculated. This 2D array can also be written out to a file, a false image can be generated from it if required, and various neighborhood operations can be performed on it. The last couple of days have been spent documenting the code and improving the interface.
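The caption-based retrieval step can be sketched as follows. The one-suffix stemmer and the in-memory caption list are stand-ins for illustration only; the real program queried the image_repo database, and a production system would use a proper stemming algorithm (such as Porter's) rather than this crude one.

```java
import java.util.*;

// Sketch of caption-based retrieval: a query like "lungs" is stemmed
// to "lung", and every caption containing a word with the same stem
// is returned as a match.
public class CaptionSearch {

    // Deliberately crude stemmer for illustration: strips a trailing
    // 's' from words longer than three letters.
    static String stem(String word) {
        String w = word.toLowerCase();
        return (w.endsWith("s") && w.length() > 3)
            ? w.substring(0, w.length() - 1) : w;
    }

    // Return every caption containing a word whose stem matches the
    // stemmed query term.
    static List<String> search(String query, List<String> captions) {
        String q = stem(query);
        List<String> hits = new ArrayList<>();
        for (String caption : captions) {
            for (String word : caption.toLowerCase().split("\\W+")) {
                if (stem(word).equals(q)) {
                    hits.add(caption);
                    break;
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> captions = Arrays.asList(
            "Left lung, coronal section",
            "Heart and great vessels",
            "Both lungs, axial view");
        // Matches both the singular and plural captions.
        System.out.println(search("lungs", captions));
    }
}
```

Because both the query and the caption words are stemmed before comparison, 'lungs' matches captions containing either 'lung' or 'lungs'.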
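The vector-to-raster conversion and the area computation can be sketched like this. A standard even-odd (ray-casting) point-in-polygon test per pixel center turns the boundary coordinates into a labeled 2D array, and the area is then just the count of labeled pixels. The method names and the square test region are illustrative assumptions; the actual boundary coordinates came from image_repo, and I am not claiming this is the exact scan-conversion used.

```java
// Sketch: scan-convert a polygonal region boundary into a labeled 2D
// array (label[y][x] == 1 inside the region, 0 outside), then compute
// the region's area as the number of labeled pixels.
public class RegionArea {

    // Even-odd (ray-casting) test applied at each pixel center.
    static int[][] rasterize(double[] xs, double[] ys, int width, int height) {
        int[][] label = new int[height][width];
        int n = xs.length;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                double px = x + 0.5, py = y + 0.5; // pixel center
                boolean inside = false;
                // Walk the polygon edges; toggle on each crossing of a
                // horizontal ray cast from the pixel center.
                for (int i = 0, j = n - 1; i < n; j = i++) {
                    if ((ys[i] > py) != (ys[j] > py)
                        && px < (xs[j] - xs[i]) * (py - ys[i])
                                    / (ys[j] - ys[i]) + xs[i]) {
                        inside = !inside;
                    }
                }
                if (inside) label[y][x] = 1;
            }
        }
        return label;
    }

    // Area of the labeled region, in pixels.
    static int area(int[][] label) {
        int count = 0;
        for (int[] row : label)
            for (int v : row) count += v;
        return count;
    }

    public static void main(String[] args) {
        // Axis-aligned square with corners (2,2) and (8,8): the pixels
        // whose centers fall inside form a 6x6 block. -> area 36
        double[] xs = {2, 8, 8, 2};
        double[] ys = {2, 2, 8, 8};
        System.out.println(area(rasterize(xs, ys, 12, 12)));
    }
}
```

The same labeled array supports the other uses mentioned above: it can be written out to a file, rendered as a false image, or fed to neighborhood operations.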