Summer 2008 DMP Project at Brown
June 17

Trying to deal with criticalities

So I figured out my problem from last week: all of the faults wouldn't fit into OpenOffice. The solution the people in the lab gave: use Excel. It can load all but 40-ish lines, which in my opinion is okay. It'll work for now. I brought my laptop to work and have it set up for graphing.

Now I am trying to think of ways to detect the faults that have high criticality but are not detected often. The problem is that calculating criticality is very expensive; we just happen to have those values for the circuit I am looking at (the color converter). Criticality means 'how bad is it if I miss this fault'. So the goal is to do as much pre-processing as possible so that we don't have to calculate criticalities for so many faults.

I am checking the performance of my selected vectors against those that Yiwen selects. She gave me a file that ranks vectors from best to worst, and I take the same number of vectors that my script produces from the top of her list. Then I check which faults are detected by my vectors versus hers. They are generally similar... but Yiwen's vectors still catch practically all of the critical faults.
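
The comparison itself is just set arithmetic over the fault lists. Here is a toy sketch of that check; the `detects` mapping, vector IDs, and fault IDs are placeholders rather than the lab's actual data or file formats:

```python
# Toy sketch: compare which faults two vector selections cover.
# Placeholder data; the real mapping comes from fault-simulation output.
detects = {
    "v1": {"f1", "f2", "f3"},
    "v2": {"f2", "f4"},
    "v3": {"f5"},
}

def covered(vectors):
    """Union of all faults detected by the given vectors."""
    faults = set()
    for v in vectors:
        faults |= detects.get(v, set())
    return faults

mine, hers = covered(["v1", "v3"]), covered(["v1", "v2"])
print("caught by both: ", sorted(mine & hers))
print("only my set:    ", sorted(mine - hers))
print("only ranked set:", sorted(hers - mine))
```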

I am not sure how to proceed now. For the moment I have set a script running (which in retrospect might be a bad idea) to create a file for each fault, listing the vectors that detect it. Now I realize that there are 66,000+ faults and 5,700 vectors, so this might take a while, and be hard to... manage. It would still be better than recomputing the detecting vectors each time, though. I'll see how much space it takes up when it finishes... if it finishes. The core of the script is just inverting the vector-to-fault mapping, sketched below.
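
A minimal sketch of that inversion, assuming an input file where each line looks like `<vector_id>: <fault_id> <fault_id> ...` (the filename and format are my assumptions, not the lab's actual ones):

```python
from collections import defaultdict

detected_by = defaultdict(list)  # fault ID -> vectors that detect it

with open("detections.txt") as f:        # hypothetical filename and format
    for line in f:
        vector, _, faults = line.partition(":")
        for fault in faults.split():
            detected_by[fault].append(vector.strip())

# One file per fault, as in the running script. With 66,000+ faults that
# is a lot of small files; a single index file might be easier to manage.
for fault, vectors in detected_by.items():
    with open("fault_%s.txt" % fault, "w") as out:
        out.write("\n".join(vectors))
```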

June 20

ELF Value: Calculating Vector Sets

I've worked on a couple of different approaches to my problem this week. I generated a lot of graphs that plot the log of the detection counts of the faults detected by my selected vector sets, versus the same quantity for an equally sized vector set taken from the top of Yiwen's list (she has a list of vectors, best to worst, and I just grab the top 'x'). I have found that on the overall scale they correlate pretty well, but with some outliers. I am assuming Yiwen's list of vectors is the optimal selection, which might not be the case. I am trying to find a less expensive way of obtaining it. The problem is that my selection method does not include criticalities, so I've been poking at that subject.

After talking to Prof. Dworak, I have a couple of new approaches. The first, which was easy to implement, was to change my existing implementation (which finds the faults least detected by the list of vectors) so that it stops searching for a fault as soon as it is found in any vector. Before, I had a 1-to-1 ratio of faults I am looking for to vectors; now it's much larger. I'm waiting on those graphs until I can get to Excel. The early-exit change is sketched below.
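
A hedged sketch of that early exit; the names and data layout here are illustrative, not the actual script:

```python
def first_detecting_vector(fault, vectors, detects):
    """Return the first vector that detects `fault`, or None if none do.
    Stops scanning the vector list as soon as any vector detects it."""
    for v in vectors:
        if fault in detects[v]:
            return v
    return None

# Toy data; the real mapping comes from the fault-simulation output.
detects = {"v1": {"f1"}, "v2": {"f1", "f2"}, "v3": {"f3"}}
vectors = ["v1", "v2", "v3"]
for fault in ["f1", "f2", "f3", "f4"]:
    print(fault, "->", first_detecting_vector(fault, vectors, detects))
```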

The other approach that is promising is ELF values. An ELF value is calculated for every vector: it is the sum, over every fault detected by that vector, of that fault's criticality times its number of detections. This is what Yiwen uses for her ranking of vectors. Yet criticalities are very expensive to calculate, so I wrote a script that calculates ELF values for each vector based only on faults that have fewer than a certain number of detections. This minimizes the number of faults that criticalities have to be calculated for. Then I rank the vectors from highest ELF to lowest and grab the top 'x' of them to see the number of detections of ALL faults for that set of vectors. Will see how those look when I can graph them, once again. Looks very promising. The only thing is, I'm trying to decide whether ELF should be num detections * criticality, or the inverse of num detections * criticality; the latter would give more weight to faults with fewer detections. A sketch of the thresholded computation is below.
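
A minimal sketch of the thresholded ELF ranking, assuming `detects` maps each vector to its detected faults, `num_det` holds per-fault detection counts, and `crit()` stands in for the expensive criticality computation (all names and data here are illustrative):

```python
THRESHOLD = 3   # only weigh faults detected fewer than this many times

# Toy data; the real values come from the fault-simulation results.
detects = {"v1": {"f1", "f2"}, "v2": {"f2", "f3"}, "v3": {"f3"}}
num_det = {"f1": 1, "f2": 2, "f3": 5}

def crit(fault):
    return 1.0                      # placeholder criticality value

def elf(vector, inverse=False):
    """Sum of criticality * weight over the vector's rarely detected faults."""
    total = 0.0
    for f in detects[vector]:
        if num_det[f] >= THRESHOLD:
            continue                # skip well-detected faults: no criticality needed
        weight = 1.0 / num_det[f] if inverse else num_det[f]
        total += crit(f) * weight   # inverse form favors rarely detected faults
    return total

x = 2
top = sorted(detects, key=elf, reverse=True)[:x]   # top-x vectors by ELF
print(top)
```

The `inverse=True` form is the alternative weighting from the paragraph above: dividing by the detection count instead of multiplying by it boosts faults that few vectors catch.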