The interviews are underway; it’s difficult to work on the function or prototype during the day while also conducting interviews, because each interview takes about an hour, and traveling between interviews can consume a lot of time. For each interview, the participant is recorded by an audio-only device, a camera on a tripod behind them capturing their movements, and a GoPro harness attached to their chest that provides a "first person" view as they complete the three given tasks.
The interview portion with the GoPro harness on the participant’s chest, or the "first person" view, has proven to be a great tool. From that angle, we can see their movements exactly and, in some ways, experience their confusion (or lack thereof). One thing we are noticing is that the longer a participant has owned a smartphone or used navigation applications, the more effective they are at completing the tasks. Clearly, repetition and familiarity play a huge role in this particular study.
The final probability function depends on map movement, specifically panning the viewport. Each panning movement executed by the user is categorized into one of eight standard compass directions, and all points of interest that lie in that direction are placed into a TRUE set; all other points of interest go into a FALSE set. Likelihoods are then assigned to each set; I chose 85% for TRUE and 15% for FALSE. After each movement, every point's probability is multiplied by its set's likelihood, and a normalizer is calculated and applied so that the probabilities still sum to one. In the end, the goal is to have these probabilities constantly shifting in the background, allowing the system to suggest points of interest based on past movement commands and to compensate for human error when necessary.
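To make the update concrete, here is a minimal sketch of the idea in Python. The function names, the point-of-interest dictionary shape, and the assumption that each point's compass direction relative to the viewport has already been computed are all my own illustration, not the actual prototype code; the 85%/15% likelihoods and the renormalization step come from the description above.

```python
def update_probabilities(pois, pan_direction, p_true=0.85, p_false=0.15):
    """One update step of the sketched probability function.

    pois: list of dicts like {"name": ..., "direction": "NE", "prob": 0.25},
    where "direction" is the point's precomputed compass direction from the
    current viewport center (hypothetical representation).
    pan_direction: one of the eight compass directions, e.g. "NE".
    """
    # Multiply each point's probability by the TRUE or FALSE likelihood,
    # depending on whether it lies in the direction the user panned.
    for poi in pois:
        likelihood = p_true if poi["direction"] == pan_direction else p_false
        poi["prob"] *= likelihood

    # Renormalize so the probabilities sum to one again.
    total = sum(poi["prob"] for poi in pois)
    for poi in pois:
        poi["prob"] /= total
    return pois


pois = [
    {"name": "cafe",    "direction": "N", "prob": 0.5},
    {"name": "library", "direction": "S", "prob": 0.5},
]
update_probabilities(pois, "N")
# The northern point is scaled by 0.85 and the southern by 0.15,
# then both are divided by the new total of 0.5, giving 0.85 and 0.15.
```

Repeated pans in the same direction compound quickly under this scheme, which is what lets the background probabilities "shift" toward whatever the user keeps panning toward, while a single stray pan (human error) only dents, rather than zeroes out, the other candidates.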