With help from John and the OpenCV documentation, I managed to get a matchTemplate method working. However, it was difficult to tell whether it was working correctly. matchTemplate is supposed to take the full image and the template image, compare them using the normalized correlation coefficient, and produce a result image. According to the documentation, the best matches are the global maxima of the result, which can be found with the cvMinMaxLoc function. Again using the OpenCV documentation, I tried to implement a MinMaxLoc method for the Camera Mouse, but so far the output points have always come out as (0, 0).
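For my own reference, here is a rough sketch of what the matchTemplate/minMaxLoc pipeline is supposed to do, written in plain numpy rather than through OpenCV (the function names `match_template_ccoeff_normed` and `min_max_loc` are my own; they mimic OpenCV's TM_CCOEFF_NORMED mode and cvMinMaxLoc as I understand them from the documentation):

```python
import numpy as np

def match_template_ccoeff_normed(image, template):
    """Slide `template` over `image`, scoring each position with the
    normalized correlation coefficient (like OpenCV's TM_CCOEFF_NORMED)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()            # zero-mean template
    t_norm = np.sqrt((t ** 2).sum())
    result = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(result.shape[0]):
        for x in range(result.shape[1]):
            patch = image[y:y + th, x:x + tw].astype(float)
            p = patch - patch.mean()          # zero-mean patch under the window
            denom = t_norm * np.sqrt((p ** 2).sum())
            result[y, x] = (p * t).sum() / denom if denom else 0.0
    return result

def min_max_loc(result):
    """Rough equivalent of cvMinMaxLoc: returns (min_val, max_val,
    min_loc, max_loc), with locations as (x, y) the way OpenCV reports them."""
    min_idx = np.unravel_index(np.argmin(result), result.shape)
    max_idx = np.unravel_index(np.argmax(result), result.shape)
    return (result[min_idx], result[max_idx],
            (min_idx[1], min_idx[0]), (max_idx[1], max_idx[0]))

# Tiny demo: cut the template out of a larger image and find it again.
rng = np.random.default_rng(0)
image = rng.random((20, 20))
template = image[5:9, 12:16].copy()
result = match_template_ccoeff_normed(image, template)
_, max_val, _, max_loc = min_max_loc(result)
# The best match should sit exactly where the template was cut from: (12, 5).
```

If the real cvMinMaxLoc were working, the max location here would be (12, 5) with a score of 1.0, not (0, 0), so a toy case like this is a useful sanity check.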
To see what the problem was, I started saving the result images as bitmaps. Here is what one of the result images looked like:
This is problematic because the eyes in the image are not distinguishable as either the maximum (closest to white) or the minimum (darkest) value. So even if I get the MinMaxLoc method working, it won't help find the eye templates. No matter how I reformat the output of the matchTemplate method, I cannot get it to show the eyes as either a global maximum or a global minimum. It is quite frustrating that the OpenCV documentation is not very detailed; after a week of experimenting, I still cannot figure out how matchTemplate builds the result image.
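One possibility I want to rule out (this is my guess, not something the documentation spells out): the result of matchTemplate in the normalized-coefficient mode is a single-channel float image with values roughly in [-1, 1], so writing it straight to an 8-bit bitmap would clip the negative values and squash everything near zero, hiding the true peak even when it exists. A small numpy sketch of rescaling the float result to the full 0-255 range before saving (the function name `result_to_8bit` is mine):

```python
import numpy as np

def result_to_8bit(result):
    """Linearly rescale a float match-result array to 0..255 so that a
    bitmap viewer shows its full range (the true peak becomes the
    brightest pixel instead of being clipped or quantized away)."""
    r_min, r_max = float(result.min()), float(result.max())
    if r_max == r_min:                      # flat result: avoid divide-by-zero
        return np.zeros(result.shape, dtype=np.uint8)
    scaled = (result - r_min) / (r_max - r_min) * 255.0
    return scaled.astype(np.uint8)

# Example: TM_CCOEFF_NORMED scores lie roughly in [-1, 1].
result = np.array([[-1.0, 0.0],
                   [ 0.5, 1.0]])
img8 = result_to_8bit(result)
# -1.0 maps to 0 and 1.0 maps to 255, so the maximum stands out clearly.
```

If the bitmaps I saved so far skipped a step like this, that could explain why the eye locations never show up as a visible bright spot.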