Published 22 Jun 2013
Week Two Accomplishments:
- Added some code to the spice simulations script! Ported the command-line argument parsing
from a deprecated module (optparse) to its modern replacement (argparse)
- Commented and cleaned up code in a few different scripting files
- Fixed fastscan and most other simulation errors
Week Three TODOs:
- Consolidate find-chains into one all-encompassing script that meets the finalized
set of parameters.
- Start investigating fault simulations and how they work
- Start SPICE simulations (maybe? hopefully?)
Day 5 (6/21)
- Today I spent a little bit more time commenting up a storm. I filled in a lot of
function explanations in gen_spice.py, then moved on to comment up some of find-chains.py,
the behemoth script with 11 different versions. Unfortunately, when I wrote the file and quit
back out to the shell, I realized that I had been commenting up the WRONG VERSION of
the script!! Bad news. I should have planned that out a little better, because I was
nowhere near done going over that script, and I'd originally planned on opening up all the
different versions and figuring out what to throw and what to keep, and then
commenting up the resulting script. But I guess that's okay, because now I have some pre-written
comments, and the purposes of the functions themselves haven't changed version-to-version, so
my work wasn't wasted.
- There were some places in some of the fault analysis scripts with hard-coded path names
and Python versions, so I went ahead and changed them to dynamically look up the home directory
of whoever is running the script and append the relative path to our newer version of Python from there.
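- A rough sketch of the idea, with a made-up subpath standing in for our actual install location:

    import os

    # sketch only: derive paths from whoever is running the script instead of
    # hard-coding one person's home directory and one Python install
    home = os.path.expanduser("~")
    # "tools/python2.7/bin/python" is a placeholder, not our real layout
    new_python = os.path.join(home, "tools", "python2.7", "bin", "python")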
- Also, yesterday I noted that the scripts weren't working as I thought they should, but I
decided to leave that battle for today. So today I tried to isolate and hopefully
solve whatever the problem was. Essentially, when I ran "make all" (which, to my knowledge,
was working fine previously),
the program started crashing during the spice-file generation, claiming that there
were "Too many imps". "Too many imps"?! What does that mean?! It means that there
are still a lot of ambiguous error messages in this code base, and I need to fix them up.
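- For my own notes, the kind of error-message cleanup I have in mind is roughly this (a made-up
example; MAX_IMPS and the surrounding check aren't the real generator code, and I'm only guessing
that "imps" is short for implications):

    MAX_IMPS = 64  # made-up limit, just for the example

    def check_implication_count(imps):
        # hypothetical sketch of a clearer error message, not the actual generator code
        if len(imps) > MAX_IMPS:
            raise RuntimeError(
                "spice-file generation: found %d implications for this chain, but "
                "only %d are supported (MAX_IMPS); split the chain or raise the limit"
                % (len(imps), MAX_IMPS))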
- Still waiting on my Brown ID. This is frustrating. I'd love to just run some simulation
on Oscar.
Day 4 (6/20)
- Today I've busied myself by updating the command-line argument parsing for the
spice script. When I was dissecting the code I noticed that the module used
for parsing command-line arguments (optparse) had been deprecated, and Python now
only supports a newer argument parser (aptly named argparse) for defining your
own arguments and flags.
- I read over the documentation for argparse quickly and figured I could just go
ahead and replace the code (obviously after saving a copy and making sure svn
was tracking both versions) and everything would be okay. Wrong move: once
I finished, Python started throwing all kinds of errors, and I realized that the new
module handles positional and optional arguments differently from the older one.
I had to take a little bit of time to understand how argparse differs from
optparse (most critically, everything is an argument, rather than differentiating
arguments and options).
- After I properly replaced all the positional and optional argument code, I then
had to go through every place the spice script uses those arguments internally,
which I didn't realize would also pose a problem (but should have;
different modules = different access functions, duh).
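- For future me, the gist of the difference looks something like this (a toy example with made-up
flags, not the real gen_spice options):

    # old way (optparse): only options get declared; positionals come back as a
    # list, and parse_args() returns an (options, args) pair
    from optparse import OptionParser
    parser = OptionParser()
    parser.add_option("-o", "--out", dest="out")
    (options, args) = parser.parse_args()
    netlist = args[0]          # leftover positional argument
    out_file = options.out     # option accessed off the options object

    # new way (argparse): positionals and optionals are both declared as
    # arguments, and parse_args() returns a single namespace
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("netlist")        # positional
    parser.add_argument("-o", "--out")    # optional
    args = parser.parse_args()
    netlist = args.netlist
    out_file = args.out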
- Marco had suggested that, with respect to documenting the code, I should just comment all of
the existing find-chains code appropriately, starting with helpers.py.
So I did just that. It was a pretty tedious but useful process: I was able to clarify
a lot of the input and output names and cleared a lot of things up for myself.
- While testing the gen_spice argument-parsing updates, I found out that a lot of other
parts of the script suite were not working as intended either, which was not great.
But I have successfully completed my first contribution to the code base, so I'll
take my victories where I can get them.
Day 3 (6/19)
- Today was a pretty slow day overall. I thought I felt comfortable with my understanding
of how the find-chains code worked, but then I moved on to go through find-chains-v2 and
find-chains-rd53 (the circuit-specific find-chains scripts, which include a lot more tests and
noise-immunity heuristics), got totally overwhelmed, and realized there was a lot I still
didn't understand about how the code works. It's becoming super clear to me that one of the
biggest pieces of operational overhead in a summer project is just figuring out how everything
works. So then I went back and really really convinced myself of almost every line in the original
script, and after about three hours of going back and forth between different function definitions,
I feel much more satisfied with my understanding of how the code works.
- I still don't have a Brown ID, so no SPICE simulations yet. I am very excited for the day
when I won't have to include a line about this in my log.
Day 2 (6/18)
- It took me about five hours to get through the helpers.py file that
houses most of the implication and file-parsing functions. I really wanted to
understand (and didn't yet) how the code parses the Verilog, creates the
implications, and calculates things from the gate-level description, so I stepped
through each line of every function in the Python interpreter at my own pace
until I could really figure out what was going on.
There are one or two variable names here and there that I still don't understand,
but they will probably clear themselves up as I see them being used elsewhere.
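- If I end up doing this again, pdb would make the stepping less manual. A quick sketch of the
workflow, with a toy function standing in for the real helpers.py ones:

    import pdb

    def toy_parse(line):
        # stand-in for one of the helpers.py parsing functions, just to
        # demonstrate the debugger workflow
        tokens = line.split()
        return tokens

    # 's' steps into the call, 'n' runs the next line, 'p tokens' prints a variable
    pdb.run('toy_parse("AND2 g1 a b y")')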
- For now, I have all the functions and classes from that Python file documented
in my notebook, but I do need to figure out what the documentation should finally look
like and start planning. I looked up a few documentation packages for Python,
and it seems like Sphinx is a popular one, but because there are also some
command-line tools for earlier parts of the code in Perl and C, I may want to look
for a language-agnostic tool. Maybe I'll just write the whole thing in LaTeX, which would
be ideal for me because I find that enjoyable.
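- If Sphinx wins out, the nice part is that the comments I'm already writing could double as
docstrings that autodoc picks up. Something like this (a made-up function, just to show the
reStructuredText format):

    def count_implications(chain):
        """Toy example of a Sphinx-friendly docstring (not a real helpers.py function).

        :param chain: list of implications making up one candidate chain
        :returns: the number of implications in the chain
        """
        return len(chain)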
- I went over the find-chains code also, but after stepping through helpers.py it seems like
the actual find-chains code just invokes all the relevant methods at the right time, so it
wasn't too exciting or novel. I thought at one point I found a really big oversight in the code,
but upon further investigation it seems that it was just a problem with the way I'd inspected
the variables and not the actual computation itself. And the find-chains scripts do work properly, so I'm
not sure why I was so quick to assume that there was a big error.
Day 1 (6/17)
- Surprise of the day: I finished two of my next-week TODOs in like 5 minutes!
I got fastscan working over the weekend, with help from Marco (again), and once
that got fixed, I was able to find the clearly broken paths much more easily.
Now I have the whole scripting bundle operating properly. Turns out I was
missing a fairly critical script that sets up all the fastscan permissions and paths,
and now that it's in place, the scripts seem to be doing
a lot more of what they're supposed to.
- I spent most of the morning trying to figure out what's going on in the "validation"
part of the script (part two of the circuit-synthesis workflow). It's all in Perl,
which is really nebulous to me, but the scripts seem to be missing some parameter
that I can't figure out.
- Then Dr. Bahar chatted with Marco and me about what I should be
getting up to this week and next, especially because she's going to be away soon, and
we decided that my efforts would be best spent looking at the
"find-chains", "fault", and "spice" scripts, because that was where the least generic
code had been written and where the most work was needed. So I tabled my investigation of the
"validation" scripts until Marco finishes prelims, and then we can sort out annoying things
like that.
- The afternoon was spent combing through all the different versions (ten of them!) of the
find-chains script. It's a little dense, and after combing through all of the initial find-chains
script only to find that v2 had changed most of the organization, I decided a better
tactic would be to pick a critical function that spans most or all of the versions and
map how it changes across iterations of the script. The first function I picked was "score_chains",
because in order to make the best chain of implications, the code needs to be able to
rank all the possible chains somehow. There were some interesting differences in how
the chain scoring was evaluated, and the comments in the later versions noted that
most weights were chosen arbitrarily (I'm sure there was some testing involved
to come up with reasonable numbers), and it got me thinking about how we could automate
finding the optimal weights for the different heuristics used (a rough sketch of what I mean is below).
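- Just to jot down the idea: something like a brute-force sweep over weight combinations could
work (a rough sketch only; the heuristic names, the weight grid, and the evaluate step are all
placeholders, not the real score_chains interface):

    import itertools

    def evaluate_with(weights):
        # placeholder standing in for re-running the scoring plus whatever
        # downstream check (e.g. fault coverage) tells us the chains are good
        return sum(weights.values())  # dummy value just so the sketch runs

    # try every combination of candidate weights and keep whichever one
    # produces the best chains by the quality measure above
    candidate_values = [0.5, 1.0, 2.0]
    best_quality, best_weights = None, None
    for w_len, w_noise, w_overlap in itertools.product(candidate_values, repeat=3):
        weights = {"length": w_len, "noise": w_noise, "overlap": w_overlap}
        quality = evaluate_with(weights)
        if best_quality is None or quality > best_quality:
            best_quality, best_weights = quality, weights
    print("best weights:", best_weights, "quality:", best_quality)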
- I'm still a little confused about how the fault simulations connect to
the implication-chain scripts as they currently stand. It would be helpful to
clarify that, but it can also wait until I finish consolidating the find-chains
script, which is looking to be on the Herculean side.