kmod's blog

25 Mar 2014

Troubles of using GNU Make

There seem to be lots of posts these days about people "discovering" how build-process automation can be a good thing.  I've always felt like the proliferation of new build tools is largely a result of people's excitement at discovering something new; I've always used GNU Make and have always loved it.

As I use Make more and more, I feel like I'm getting more familiar with some of its warts.  I wouldn't say they're mistakes or problems with Make, but simply consequences of the assumptions it makes.  These assumptions are also what make it so easy to reason about and use, so I'm not saying they should be changed, but they're things I've been running into lately.

Issue #1: Make is only designed for build tasks

Despite Make's purpose as a build manager, I tend to use it for everything in a project.  For instance, I use a makefile target to program microcontrollers, where the "program" target depends on the final build product, like this:

program.bin: $(SOURCES)
    ./build.py

.PHONY: program
program: program.bin
    ./program.py program.bin

 

This is a pretty natural usage of Make: typing "make program" will rebuild whatever needs to be rebuilt, and then call a hypothetical program.py to program the device.

 

Asking for a slightly more complicated outcome, though, quickly makes the required setup much messier.  Let's say that I also want to use Make to control my actual program -- let's call it run.py -- which communicates with the device.  I want to be able to change my source files, type "make run", and have Make recompile the program, reprogram the microcontroller, and then call run.py.  The attractive way to write this would be:

.PHONY: run
run: program other_run_input.bin
    ./run.py other_run_input.bin

 

This has a big issue, however: because "program" is defined as a phony target, Make will run its recipe every time, regardless of whether its prerequisites have changed.  This is the only logical thing for Make to do in this situation, but it means that we'll be reprogramming the microcontroller every time we want to run our program.

How can we avoid this?  One way is to make "program" an actual file that gets touched by the recipe, so that it's no longer a phony target; the file's timestamp then records the last time the microcontroller was programmed, and Make will only reprogram if the binary is newer.  This is pretty workable, although ugly, and for more complicated examples it can get very messy.
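A minimal sketch of that approach might look like the following, where the ".program.stamp" filename is just something I'm making up for illustration:

program.bin: $(SOURCES)
    ./build.py

# The stamp file's timestamp records the last successful programming.
.program.stamp: program.bin
    ./program.py program.bin
    touch .program.stamp

.PHONY: program
program: .program.stamp

.PHONY: run
run: .program.stamp other_run_input.bin
    ./run.py other_run_input.bin

With this, "make run" only reprograms the microcontroller when program.bin is newer than the stamp.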

 

Issue #2: Make assumes that it has no overhead

There are two main ways to structure a large Makefile project: using included Makefiles, or using recursive Makefiles.  While the "included Makefiles" approach often seems to be touted as better, many projects tend to use a recursive Make setup.  I can't speak to why other projects choose that, but one thing I've noticed is that Make itself can take a long time to execute, even if no recipes end up being run.  This isn't too surprising: in a project with hundreds or thousands of source files, and many, many rules (which can themselves spawn exponentially more implicit search paths), it can take a long time just to determine whether anything needs to be done.
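Roughly, the two structures look like this (the directory and file names are hypothetical):

# Included-Makefiles style: one Make instance sees the entire dependency graph.
include src/rules.mk
include tests/rules.mk

# Recursive style: each subdirectory gets its own Make invocation.
.PHONY: src tests
src:
    $(MAKE) -C src
tests:
    $(MAKE) -C tests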

This often isn't an issue, but for my current project it is: I have a source dependency on a large third-party project, LLVM, which is big enough that it's expensive even to check whether anything needs to be rebuilt.  Fortunately, I very rarely modify my LLVM checkout, so most of the time I just skip checking whether I need to rebuild it.  But sometimes I do need to dive into the LLVM source code and make some modifications, in which case I want my builds to depend on the LLVM build.

This, as you might guess, is not as easy as it sounds.  The problem is that Make doesn't understand a recursive make invocation as a build rule -- it's just an arbitrary command to run -- and so my solution to this problem runs straight into issue #1.

 

My first idea was to have two build targets: a normal one called "build", and one called "build_with_llvm" which also checks LLVM.  Simple enough, but to reduce duplication between them it'd be nice to have a third target, "build_internal", which holds all the rules for building my project, and then let "build" and "build_with_llvm" decide how to use it.  We might have a Makefile like this:

.PHONY: build build_internal build_with_llvm llvm
build_internal: $(SOURCES)
    ./build_stuff.py

build: build_internal
build_with_llvm: build_internal llvm

 

This mostly works: typing "make build" will rebuild just my stuff, and typing "make build_with_llvm" will build both my stuff and LLVM.  The problem, though, is that Make doesn't know that build_internal depends on llvm, so nothing forces LLVM to be built first when running build_with_llvm.  The natural way to express this would be to add llvm to build_internal's list of dependencies, but that would have the effect of making "build" also depend on llvm.

Enter "order-only dependencies": these are dependencies that are similar to normal dependencies, but slightly different: it won't trigger the dependency to get rebuilt, but if the dependency will be rebuilt anyway, the target won't be rebuilt until the dependency is finished.  Order-only dependencies sound like the thing we want, but they unfortunately don't work with phony targets (I consider this a bug): phony order-only dependencies will always get rebuilt, and behave exactly the same as normal phony dependencies.  So that's out.

The only two solutions I've found are to either 1) use dummy files to break the phony-ness, or 2) use recursive make invocations like this:

build_with_llvm: llvm
    $(MAKE) build_internal

This latter pattern solves the problem nicely, but Make no longer understands the dependence of build_with_llvm on build_internal, so if there's another target that depends on build_internal, you can end up doing duplicate work (or in the case of a parallel make, simultaneous work).
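For completeness, here is roughly what option 1 could look like (llvm.stamp and the llvm_src directory are names I'm making up).  The stamp file makes the LLVM build visible to Make as a real target, so Make can see both the ordering and the fact that build_with_llvm depends on build_internal; the remaining question of what the stamp itself should depend on is exactly the original "expensive to even check" problem:

# Recurse into LLVM and record that we did; choosing prerequisites for this
# rule is the hard part, since checking LLVM's sources is itself expensive.
llvm.stamp:
    $(MAKE) -C llvm_src
    touch llvm.stamp

# Order-only: wait for llvm.stamp if it is being built, but don't rebuild
# build_internal just because the stamp is newer.
build_internal: $(SOURCES) | llvm.stamp
    ./build_stuff.py

build: build_internal
build_with_llvm: llvm.stamp build_internal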

Issue #3: Make assumes that all build steps result in exactly one modified file

I suppose this is more-or-less the same thing as issue #1, but it comes up in a different context: I'm using a Makefile to control the building and programming of some CPLDs I have.  The Makefile looks somewhat like this:

# Converts my input file (in a dsl) into multiple cpld source files:
cpld1.v: source.dsl
    ./process.py source.dsl # generates cpld1.v and cpld2.v

# Compile a cpld source file into a programming file (in reality this is much more complicated):
cpld%.svf: cpld1.v
    ./compile.py cpld$*.v    # $* is the pattern stem ("1" or "2"); a bare "%" isn't expanded in recipes

program: cpld1.svf cpld2.svf
    ./program.py cpld1.svf cpld2.svf

 

I have a single input file, "source.dsl", which I process into two Verilog sources, cpld1.v and cpld2.v.  I then use the CPLD tools to compile each of those into an SVF (programming) file, and then program those onto the devices.  Let's ignore the fact that we might want to be smart about knowing when to program the CPLDs, and just say that "make program" is the only target we call.

The first oddity is that I had to choose a single file to represent the output of processing the source.dsl file.  Make could definitely represent that both files depend on processing that file, but I don't know of any other way of telling it that they can both use the same execution of that recipe, i.e. that one run generates both files.  We could also make both cpld1.v and cpld2.v depend on a third phony target, maybe called "process_source", but this runs into the same phony-target issue: it will always get run.  We'll need to make sure that process.py spits out another file that we can use as a build marker, or perhaps create one ourselves in the Makefile -- something like the sketch below.
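A minimal version of that build-marker idea (the ".process.stamp" name is mine) might be:

# A single run of process.py generates both Verilog files; the stamp file
# records that the processing step has been run.
.process.stamp: source.dsl
    ./process.py source.dsl    # generates cpld1.v and cpld2.v
    touch .process.stamp

# Both programming files depend on the processing step having run:
cpld1.svf cpld2.svf: .process.stamp
    ./compile.py $(patsubst %.svf,%.v,$@)    # compile the matching .v file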

In reality, I'm actually handling this using a generated Makefile.  When you include another Makefile, by default Make will check whether the included Makefile itself needs to be rebuilt, either because it is out of date or because it doesn't exist; if so, Make rebuilds it and then re-executes itself.  This is interesting because every rule in the generated Makefile implicitly becomes dependent on the rule used to generate it.
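The mechanism looks roughly like this (cpld_rules.mk and generate_rules.py are hypothetical names):

# Make will remake cpld_rules.mk if needed before reading the rest of the rules,
# and then restart itself with the fresh version.
include cpld_rules.mk

cpld_rules.mk: source.dsl
    ./generate_rules.py source.dsl > cpld_rules.mk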

 

Another issue, which is actually what I originally meant to talk about, is that process.py doesn't always generate new cpld files!  It's common that modifying the source file only changes one of the cpld.v outputs, and process.py deliberately doesn't update the timestamp of the file that didn't change.  I do this because compiling a CPLD file is actually quite expensive, with about 45 seconds of overhead (darn you Xilinx and your prioritization of large projects over small ones), and I like to avoid it whenever possible.  This is another situation that took quite a bit of hacking to figure out.

 

 

Conclusion

Well, this post has gotten quite a bit more meandering than I was originally intending, and I think my original point got lost (or maybe I didn't realize I didn't have one), but it was supposed to be this: despite Make's limitations, the fact that it has a straightforward, easy-to-understand execution model means that it's always possible to work around the issues.  If you work with a more contained build system this might not be possible, which is my guess as to why people branch off and build new ones: they run into something that can't be worked around within their tool, so they have no choice but to build another one.  I think this is really a testament to the Unix philosophy of making tools simple and straightforward, because that directly leads to adaptability, and then to longevity.

18 Mar 2014

More BGA testing

In my previous post I talked about my first attempts at some BGA soldering.  I ran through all three of my test chips and ordered some more; they came in recently, so I took another crack at it.

I went back to using liquid flux, since I find it easier to use (the felt-tip flux pen is a brilliant idea), and it didn't have any of the "getting in the way of the solder" issues I found with the tack flux I have.

Test #1: ruined due to bad alignment.  I have some alignment markers on the board which usually let me align it reliably, but I tricked myself this time by having my light shine on the board at an angle; the chip's shadow gave a misleading indication of how it was aligned.  Later, I discovered that if I applied pressure on the chip, it would actually snap into place -- I think this is from the solder balls fitting into the grooves defined by the solder mask openings.  This probably isn't the best way to do things but I haven't had any alignment issues since trying this.

Test #2: ruined by taking it out of the oven too quickly, and the chip fell off.  After I saw that, I tested some of the other solder indicators on the board and saw that they were still molten...

Test #3: fixed those issues, took it out of the oven, and the chip was simply not attached.  I don't really know what happened to this one; my theory is that I never melted the solder balls.  My explanation is that I've only ever used my reflow process with leaded solder; I think these BGA parts have lead-free solder balls, which most likely have a higher melting point, and I might not have been hitting it.

Test #4: kept it in the oven for longer, and let it cool longer before moving it.  And voila, the chip came out attached to the board!  This was the same result I got on my first attempt but hadn't been able to recreate for five subsequent attempts; a lot of those failures were due to silly mistakes on my part, though, usually due to being impatient.

 

So I had my chip on the board, and I connected it up to my JTAG programmer setup.  Last time I did this I got a 1.8V signal from TDO (the JTAG output), but this time I wasn't getting anything.  One thing I wanted to test was the connectivity of the solder balls, but needless to say this is no easy task.  Luckily, the JTAG balls are all on the outer ring, i.e. directly next to the edge of the chip with no balls in between.  I tried using 30-gauge wire to get at the balls, but even that was too big; what I ended up doing was using a crimper to flatten the wire, at which point it was thin enough to fit under the package and reach the balls.  I had some issues with my scope probe, but I was eventually able to determine that three of the four JTAG lines were connected all the way to their balls -- a really good sign.  The fourth one was obstructed by the header I had installed -- I'll have to remember to test this on my next board before soldering the header.

The ground and power supply pins are all on the inner ring, though, so I highly doubt that I could get in there to access them.  I'm optimistically assuming that I got one of the three ground balls connected and that that is sufficient; that means I just have to have two of the power pins connected.  At this point I feel like it's decently likely that there's an issue with my testing board; it was something I whipped up pretty quickly.  I already made one mistake on it: apparently BGA pinouts are all given as a bottom view -- this makes sense, since that's how you'd see the balls, but I didn't expect it at first, since all other packages come with top-down diagrams.  Also, to keep things simple, I only used a 1.8V supply for all the different power banks; re-reading the datasheet, it seems like this should work, but it's just another difference between this particular board and things that I've gotten working in the past.

 

So what I ended up doing was making a slightly altered version of the board, where the only difference is that the CPLD comes in a TQFP package instead of a BGA; all electrical connections are the same.  Hopefully it'll turn out that there's some simple thing I messed up in the circuit and it's not actually a problem with the assembly, so that with a quick fix it'll all work out -- being able to do BGAs reliably would open up a lot of possibilities.

10 Mar 2014

First trials with BGA parts

So I decided to try my hand at BGA reflow; this is something I've wanted to do for a while, and recently I read about some people having success with it, so I gave it a shot.  I'm trying to start moving up the high-speed ladder to things like larger FPGAs, DRAM, and flash chips, which largely come in BGA packages (especially FPGAs: except for the smallest ones, they *only* come in BGA packages).  I've generally been happy with the reflow results I've been getting, so I felt confident and tried some BGAs.  It didn't work out very well.

First of all, the test:

[photo: 2014-03-10 02.20.40]

This is a simple Xilinx CPLD on the left, along with a simple board that has just the BGA footprint and a JTAG header.  The CPLD costs $1.60 on Digikey, and the boards were incredibly cheap due to their small size, so the entire setup is cheap enough to be disposable (good thing).  This CPLD uses 0.5-mm pitch balls, which is really, really small; it's so small that I can only route it because only two of the rows are used.  It also means that getting this to work properly is probably in some ways much more difficult than a larger-pitch BGA, which is good, since the larger-pitch parts I use in the future will have many more balls that need to be soldered reliably.

Test 1

The first test I did was, in hindsight, probably the one that worked out the best.  What I did was apply some flux to the board using a flux pen, carefully place the BGA, and reflow it.  An unexpected complication was that since the only thing being reflowed was a BGA, I had no visual indication of how the reflow was going!  This matters to me because I don't have a reflow controller for my toaster oven, so I just control it manually (which works better than you might think).  I did my best to guess how it was going, and at the end the chip was pretty well attached to the board, but I didn't feel very confident in it.

I hooked up my JTAG programmer and had it spew out a stream of commands that should get echoed back: what I got back was a constant 1.8V signal (i.e. Vcc).  I was disappointed with this result, since I really had no idea how to test this board.  In retrospect, that constant signal was quite a bit better than I thought: it means that at least three of the pins (1.8V, TDO, and presumably GND) were connected.

I was still feeling pretty unsure about the reflow, so I decided to try putting it back in the oven for a second go.  Turns out that that's a pretty bad idea:

[photo: 2014-03-10 03.17.41]

So I moved on to test #2.

Test 2

I used exactly the same process for the second test as I did for the first, though this time I put some "indicator solder" on a separate board just to get a visual gauge of the temperature.  This test ended up being pretty quick, since I was too hasty in aligning the BGA (or maybe I knocked it out of place when transferring it to the oven), and it came out clearly misaligned.  I put it through the tester anyway, for good measure, but then moved on to the next test.

Test 3

For the third test, I used tacky gel flux, instead of liquid flux from a flux pen, to see if that would help.  Unfortunately, I think the problem was that I added way too much flux, and there was so much residue that the solder balls did not make good contact with their pads.  In fact, as I was soldering on the test pins, the CPLD came off entirely.

 

At this point, I was out of CPLDs to test on -- you only get one shot at reflowing them, unless you're willing to re-ball them (which is probably much more expensive than just buying new ones anyway).  I ordered a bunch more, so I'll take another crack at it soon.  The things I'm going to try are:

  • Using liquid flux again but with a solder indicator and not messing up the alignment
  • Using gel flux again but with less of it
  • Using a stencil with solder paste, instead of just flux

 

 

4 Mar 2014

Followup: picking an ARM chip

In my last post I talked a little about the process of picking an ARM microcontroller to start using.  After doing some more research, I've decided for now to start using the STM32 line of chips.  I don't really know how it stands on the technical merits vs the other similar lines; the thing I'm looking at the most is suitability for hobbyist usage.  It looks like Freescale is pursuing the hobbyist market more actively, but from what I can make out it looks like the STM32 line of chips has been around longer, gathered some initial adoption, and now there's enough user-documentation to make it usable.  A lot of the documentation is probably applicable to all Cortex-M3/M4's, but it's all written for the STM32's and that's what most people seem to be using.  So I feel pretty good about going with the STM32 line for now -- I'll be sure to post once I make something with one.

Within the STM32 line, though, there are still a number of options; I've also always been confused about why different chips within the same line are so non-portable with each other.  Even within the same sub-line (e.g. STM32 F3), at the same speed and package, there will be multiple different chips, each defined by its own datasheet.  I ran into this with the ATmegas too -- even though they're "pin and software compatible", there are still a number of breaking differences between the different chips!  I guess I had always hoped that picking a microcontroller within a line would be like selecting a CPU for your computer: you select the performance/price tradeoff you want and just buy it.  But with microcontrollers I guess there's a bit more lock-in, since you have to recompile things, change programmer settings, etc.

At first I was planning on going with a mid-range line of theirs, the STM32 F3, since that's the cheapest Cortex-M4.  It turns out that the difference between Cortex-M3's and M4's is pretty small: M4's add DSP functionality (things like single-cycle multiply-accumulate and SIMD instructions), and an optional FPU (which is included in all STM32 M4's).  It looks like the F3 is targeted at "mixed signal applications", since they have a bunch of ADCs, including some high-precision SDADCs.  I thought about moving down a line to the F1's, which are cheaper M3's that can have USB support, but in the end I decided to move up a line to their top-end F4's.  Even a 168MHz chip only ends up being about $11; a fair bit more than the $3 for an ATmega328P, but once you consider the total cost of a one-off project, I think it'll end up being a pretty small expense.

My first intended use is to build a new JTAG adapter; my current one uses an ATmega328 and is limited to about 20KHz.  My development platform project, which I keep on putting off finishing, uses a lot of CPLDs and will soon start using FPGAs as well, so JTAG programming time would be a nice thing to improve.  Hopefully I'll have something to post about in a few weeks!