I happened to be watching the Oscars last night, and I was pretty shocked to see the mistake with the Best Picture award. Thinking back on it, this is a bit surprising to me: many things are happening that should be more "shocking" (all the craziness in Washington) but don't seem to affect me the same way.
I think this comes down to selection bias: the internet has made it so much easier to find extreme examples that their impact is dulled. In contrast, seeing something for yourself -- such as watching the Oscars mistake live -- has a realness to it that is much more impactful. Maybe another way of putting it is that it has become much easier to cherry-pick examples now.
I thought of some other examples of this: I don't feel very persuaded when someone says to me "there was a paper that shows X", because there's probably also a paper that shows the opposite of X. Similarly, quoting an "expert" on something doesn't mean that much to me anymore either. Particularly when their qualification is simply "[subject] expert", but even quotes from generally-respected people don't have that much impact, since I'm sure someone else famous said the opposite.
Maybe this is all wishful thinking. There's the meme that "a terrorist attack is more frightening than X even though X kills more people", which, if true, is fairly opposite to what I'm saying here. And I don't really know how to solve the selection bias problem -- words seem to hold less value in the new internet regime where anyone can say anything they want, and it's not clear what to replace words with. Or maybe this whole thing is just me being a bit jaded. Either way, it will be interesting to see how society ends up adapting.
Hey all, I'm investigating an idea and it's gotten to the point that I'd like to solicit feedback. The idea is to use Theano or TensorFlow to accelerate existing NumPy programs. The technical challenges here are pretty daunting, but I feel like I have a decent understanding of its feasibility (I have a prototype that I think is promising). The other side of the equation is how valuable this would be. The potential benefits seem very compelling (cross-op optimizations, GPU and distributed execution "for free"), and I've heard a lot of people ask for better NumPy performance. The worrying thing, though, is that I haven't been able to find anyone willing to share their code or workflow. Not that I'm blaming anyone, but that situation makes me worried about the demand for something like this.
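To make the "cross-op optimizations" benefit concrete, here's a tiny illustrative sketch (not from my prototype) of the kind of NumPy expression a graph-based backend could speed up:

```python
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)
c = np.random.rand(n)

# Plain NumPy evaluates this eagerly, materializing a full-size
# temporary for (a * b) before computing the sum -- two passes
# over the data plus an extra allocation.
d = a * b + c

# A backend that sees the whole expression graph could fuse it into a
# single pass over the data, and potentially run it on a GPU or a
# cluster, without any changes to the user's code.
```

The savings from fusion alone are modest on an expression this small, but they compound quickly across a large numerical program.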
So, what do you think, would this be valuable or useful? Is it worth putting more time into this? Or will it be just another NumPy accelerator that doesn't get used? If you have any thoughts, or want to chime in about your experiences with NumPy performance, I'd definitely be interested to hear about it in the comments.
I recently ordered some junk food from Amazon, despite my wife's objections. I ordered it from an Amazon Marketplace (aka third-party) seller since that was the choice picked by Amazon for one-click ordering.
The food arrived, and the interesting thing is that it arrived in a Walmart box, with a Walmart packing slip. Evidently, someone savvy recognized that the Walmart price was lower than the Amazon price, and undercut Amazon's price using Walmart for fulfillment. I was pretty annoyed to have been caught by this, but at the same time I have to respect that they pulled it off, and that I got the food cheaper than if they hadn't done this.
Anyway, just thought that it is interesting that people are out there doing this!
I meant to post more of these, but here's one for fun:
```python
class A(object):
    def __eq__(self, rhs):
        return True

class B(object):
    def __eq__(self, rhs):
        return False

print A() in [B()]
print B() in [A()]
```
Maybe not quite as surprising once you see the results and think about it, but getting this wrong was the source of some strange bugs in Pyston.
There seems to be a consensus that register bytecodes are superior to stack bytecodes. I don't quite know how to cite "common knowledge", but doing a google search for "Python register VM" or "stack vs register vm" suggests that many people believe this. There was a comment on this blog to this effect as well.
Anyway, regardless of whether it truly is something that everyone believes or not, I thought I'd add my two cents. Pyston uses a register bytecode for Python, and I wouldn't say it's as great as people claim.
Lifetime management for refcounting
Why? One of the commonly-cited reasons that register bytecodes are better is that they don't need explicit push/pop instructions. I'm not quite sure I agree that you don't need push instructions -- you still need an equivalent "load immediate into register". But the more interesting one (at least for this blog post) is pop.
The problem is that in a reference-counted VM, we need to explicitly kill registers. While the Python community has made great strides to support deferred destruction, there is still code (especially legacy code) that relies on immediate destruction. In Pyston, we've found that it's not good enough to just decref a register the next time it is set: we need to decref a register the last time it is used. This means that we had to add explicit "kill flags" to our instructions that say which registers should be killed as a result of the instruction. In certain cases we need to add explicit "kill instructions" whose only purpose is to kill a register.
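As an illustration of the kind of code that relies on immediate destruction (a generic sketch, not taken from Pyston):

```python
import os
import tempfile

# Set up a scratch file for the example.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("hello")

# This common idiom relies on immediate destruction: the anonymous
# file object's refcount drops to zero as soon as the expression
# finishes, which closes the underlying file descriptor right away.
# In a register bytecode, the register holding the file object must be
# decref'ed at its last use -- not when the register is next
# overwritten -- to preserve this behavior.
data = open(path).read()

os.remove(path)
```

In a stack bytecode, the file object would simply be popped (and decref'ed) once `read` returns, which is why stack VMs get this for free.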
In the end it's certainly manageable. But because we use a register bytecode, we need to add explicit lifetime management, whereas in a stack bytecode you get that for free.
I don't think it's a huge deal either way, because I don't think interpretation overhead is the main factor in Python performance, and a JIT can smooth over the differences anyway. But the lifetime-management aspect was a surprise to me and I thought I'd mention it.
I've seen this question come up a couple times, most recently on the python-dev mailing list. When you want to benchmark something, you naturally want to run the workload multiple times. But what is the best way to aggregate the multiple measurements? The two common ways are to take the minimum of them, and to take the average (but there are many more, such as "drop the highest and lowest and return the average of the rest"). The arguments I've seen for minimum/average are:
- The minimum is better because it better reflects the underlying model of benchmark results: that there is some ideal "best case", which can be hampered by various slowdowns. Taking the minimum will give you a better estimate of the true behavior of the program.
- Taking the average provides better aggregation because it "uses all of the samples".
These are both pretty abstract arguments -- even if you agree with the logic, why does either argument mean that that approach is better?
I'm going to take a different approach to try to make this question a bit more rigorous, and show that in different cases, different metrics are better.
The first thing to do is to figure out how to formally compare two aggregation methods. I'm going to do this by saying the statistic which has lower variance is better. And by variance I mean variance of the aggregation statistic as the entire benchmarking process is run multiple times. When we benchmark two different algorithms, which statistic should we use so that the comparison has the lowest amount of random noise?
Quick note on the formalization -- there may be a better way to do this. This particular way has the unfortunate result that "always return 0" is an unbeatable aggregation. It also slightly penalizes the average, since the average will be larger than the minimum so might be expected to have larger variance. But I think as long as we are not trying to game the scoring metric, it ends up working pretty well. This metric also has the nice property that it only focuses on the variance of the underlying distribution, not the mean, which reduces the number of benchmark distributions we have to consider.
The variance of the minimum/average is hard to calculate analytically (especially for the minimum), so we're going to make it easy on ourselves and just do a Monte Carlo simulation. There are two big parameters to this simulation: our assumed model of benchmark results, and the number of times we sample from it (aka the number of benchmark runs we do). As we'll see the results vary pretty dramatically on those two dimensions.
The first distribution to try is probably the most reasonable-sounding: we assume that the results are normally-distributed. For simplicity I'm using a normal distribution with mean 0 and standard deviation 1. It's not entirely realistic for benchmark results to be negative, but as I mentioned, we are only interested in the variance and not the mean.
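The simulation is easy to sketch in a few lines of Python (a minimal version of the idea, not the exact code I ran):

```python
import random
import statistics

def simulate(sampler, n_samples, stat, trials=100_000):
    # Run the whole benchmarking process `trials` times: each run
    # draws `n_samples` benchmark results and aggregates them with
    # `stat` (e.g. min or average).
    runs = [stat([sampler() for _ in range(n_samples)])
            for _ in range(trials)]
    # Report the variability of the aggregated statistic itself.
    return statistics.pstdev(runs)

def avg(xs):
    return sum(xs) / len(xs)

normal = lambda: random.gauss(0, 1)
print("stddev of min:", simulate(normal, 3, min))
print("stddev of avg:", simulate(normal, 3, avg))
```

Swapping in a different sampler (such as `random.lognormvariate(0, 1)`) reproduces the other experiments.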
If we say that we sample one time (run the benchmark only once), the results are:
stddev of min: 1.005
stddev of avg: 1.005
Ok good, our testing setup is working. If you only have one sample, the two statistics are the same.
If we sample three times, the results are:
stddev of min: 0.75
stddev of avg: 0.58
And for 10 times:
stddev of min: 0.59
stddev of avg: 0.32
So the average pretty clearly is a better statistic for the normal distribution. Maybe there is something to the claim that the average is just a better statistic?
Let's try another distribution, the log-normal distribution. This is a distribution whose logarithm is a normal distribution with, in this case, a mean of 0 and standard deviation of 1. Taking 3 samples from this, we get:
stddev of min: 0.45
stddev of avg: 1.25
The minimum is much better. But for fun we can also look at the max: it has a standard deviation of 3.05, which is much worse. Clearly the asymmetry of the lognormal distribution has a large effect on the answer here. I can't think of a reasonable explanation for why benchmark results might be log-normally-distributed, but as a proxy for other right-skewed distributions this gives some pretty compelling results.
Update: I missed this the first time, but the minimum in these experiments is significantly smaller than the average, which I think might make these results a bit hard to interpret. But then again I still can't think of a model that would produce a lognormal distribution so I guess it's more of a thought-provoker anyway.
Or, the "random bad things might happen" distribution. This is the distribution that says "We will encounter N events. Each time we encounter one, with probability p it will slow down our program by 1/Np". (The choice of 1/Np is to keep the mean constant as we vary N and p, and was probably unnecessary)
Let's model some rare-and-very-bad event, like your hourly cron jobs running during one benchmark run, or your computer suddenly going into swap. Let's say N=3 and p=.1. If we sample three times:
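The sampler for this distribution is straightforward to write. Here's a self-contained sketch (the `simulate` helper is the same Monte Carlo harness as before, repeated so the snippet stands alone):

```python
import random
import statistics

def simulate(sampler, n_samples, stat, trials=100_000):
    runs = [stat([sampler() for _ in range(n_samples)])
            for _ in range(trials)]
    return statistics.pstdev(runs)

def avg(xs):
    return sum(xs) / len(xs)

def bad_events(N, p):
    # Each of the N potential events independently occurs with
    # probability p and adds 1/(N*p) to the run's time, so the
    # expected total slowdown is 1 regardless of N and p.
    return lambda: sum(1.0 / (N * p)
                       for _ in range(N) if random.random() < p)

print("stddev of min:", simulate(bad_events(3, 0.1), 3, min))
print("stddev of avg:", simulate(bad_events(3, 0.1), 3, avg))
```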
stddev of min: 0.48
stddev of avg: 0.99
Sampling 10 times:
stddev of min: 0.0
stddev of avg: 0.55
So the minimum does better. This seems to match with the argument people make for the minimum, that for this sort of distribution the minimum does a better job of "figuring out" what the underlying performance is like. I think this makes a lot of sense: if you accidentally put your computer to sleep during a benchmark, and wake it up the next day at which point the benchmark finishes, you wouldn't say that you have to include that sample in the average. One can debate about whether that is proper, but the numbers clearly say that if a very rare event happens then you get less resulting variance if you ignore it.
But many of the things that affect performance occur on a much more frequent basis. One would expect that a single benchmark run encounters many "unfortunate" cache events during its run. Let's try N=1000 and p=.1. Sampling 3 times:
stddev of min: 0.069
stddev of avg: 0.055
Sampling 10 times:
stddev of min: 0.054
stddev of avg: 0.030
Under this model, the average starts doing better again! The casual explanation is that with this many events, all runs will encounter some unfortunate ones, and the minimum can't pierce through that. A slightly more formal explanation is that a binomial distribution with large N looks very much like a normal distribution.
There is a statistic of distributions that can help us understand this: skewness. This has a casual understanding that is close to the normal usage of the word, but also a formal numerical definition, which is scale-invariant and just based on the shape of the distribution. The higher the skewness, the more right-skewed the distribution. And, IIUC, we should be able to compare the skewness across the different distributions that I've picked out.
The skewness of the normal distribution is 0. The skewness of this particular log-normal distribution is 6.2 (and the poor-performing "max" statistic is the same as taking the min on a distribution with skewness -6.2). The skewness of the first binomial distribution (N=3, p=.1) is 1.54; the skewness of the second (N=1000, p=.1) is 0.08.
I don't have any formal argument for it, but on these examples at least, the larger the skew (more right-skewed), the better the minimum does.
So which is "better", taking the minimum or average? For any particular underlying distribution we can emprically say that one is better or the other, but there are different reasonable distributions for which different statistics end up being better. So for better or worse, the choice of which one is better comes down to what we think the underlying distribution will be like. It seems like it might come down to the amount of skew we expect.
Personally, I understand benchmark results to be fairly right-skewed: you will frequently see benchmark results that are much slower than normal (several standard deviations out), but you will never see any that are much faster than normal. When I see those happen, if I am taking a running average I will get annoyed since I feel like the results are then "messed up" (something that these numbers now give some formality to). So personally I use the minimum when I benchmark. But the Central Limit Theorem is strong: if the underlying behavior repeats many times, it will drive the distribution towards a normal one at which point the average becomes better. I think the next step would be to run some actual benchmark numbers a few hundred/thousand times and analyze the resulting distribution.
While this investigation was a bit less conclusive than I hoped, at least now we can move on from abstract arguments about why one metric appeals to us or not: there are cases when either one is definitively better.
One thing I didn't really write about is that this analysis all assumes that, when comparing two benchmark runs, the mean shifts but the distribution does not. If we are changing the distribution as well, the question becomes more complicated -- the minimum statistic will reward changes that make performance more variable.
I've done a number of projects involving Xilinx FPGAs and CPLDs, and honestly I'm frustrated with them enough to be interested in trying out one of their competitors. This is pretty rant-y, so take it with a grain of salt, but some of my gripes include:
- Simply awful toolchain support. The standard approach is to reverse-engineer the Xilinx file formats and write your own tooling on top of them.
- Update -- looks like someone else posted a much lengthier blog post about this, which is a good read.
- Terrible software speed. I suppose they care much more about large design teams where the entire synthesis time will be measured in hours, but for a simple hobby project, it's pretty infuriating that even a syntax error entails a 10-second edit-compile-debug cycle. This is not due to any complexities in the language they support (as opposed to C++ templates, for example), but is just plain old software overhead on their part: it takes 5 seconds for them to determine that the input file doesn't exist. If you use their new 7-series chips, you can use their new Vivado software, which may or may not be better, but rather than learn a new chip line and new software I decided to try the competitor.
- Expensive prices. They don't seem to feel like they need to compete on price -- I'm sure they do for the large contracts, but for the "buy a single item on digikey" they seem to charge whatever the market will bear. And I was paying it, so I guess that's their prerogative, but it makes me frustrated.
So anyway, I had gone with Xilinx, the #1 (in sales I believe) FPGA vendor, since when learning FPGAs I think that makes sense: there's a lot of third-party dev boards for them, a lot of documentation, and a certain "safety in numbers" by going with the most common vendor. But now I feel ready to branch out and try the #2 vendor, Altera.
I saw a cheap little dev kit for Altera: the BeMicro CV. This is quite a bit less-featured than the Nexys 3 that I have been using, but it's also quite a bit cheaper: it's only $50. The FPGA it has is quite a bit beefier as well: it has "25,000 LEs [logic elements]", which as far as I can tell is roughly equivalent to the Xilinx Spartan-6 LX75. The two companies keep inflating the way they measure the size of their FPGAs so it's hard to be sure (and they put two totally different quantities in the sort fields on Digikey, with Xilinx's being more inflated), but I picked the LX75 (a $100 part) by assuming that "1 Xilinx slice = 2 Altera LEs": the Cyclone V on this board has 25k LEs, and the LX75 has 11k slices.
My first experience with Altera was downloading and installing the software. They seem to have put some thought into this and have broken the download into multiple parts so that you can pick and choose what you want to download based on the features you want -- a feature that sounds trivial but is nice when Xilinx just offers a monolithic 6GB download. I had an issue right off the bat, though: the device file was judged to be invalid by the installer, so when I started up Quartus (their software), it told me there were no devices installed. No problemo, let's try to reinstall it -- "devices already installed" it smugly informs me. Luckily the uninstaller lets you remove specific components, so I was able to remove the half-installed device support, but since the software quality was supposed to be one of their main selling points, this was an ominous beginning.
Once I got that out of the way, I was actually pretty impressed with their software. Their "minimum synthesis time" isn't much different from Xilinx's, which I find pretty annoying, and it also takes them a while to spot syntax errors, so unfortunately that gripe isn't fully addressed. Overall the software feels snappier though -- it doesn't take forever to load the pin planner or any other window or view. There's still an annoying separation between the synthesis and programming flows -- the tools know exactly what file I just generated, but I have to figure out what it was so that I can tell the programmer which file to program. And then the programmer asks me every time if I would like to save my chain settings.
The documentation seems a bit lighter for Altera projects, especially with this dev board -- I guess that's one drawback of not buying from Digilent. Happily and surprisingly, the software was intuitive enough that I was able to take a project all the way through synthesis without reading any documentation! While it's not perfect, I can definitely see why people say that Altera's software is better. I had some issues with the programmer where the USB driver hadn't installed, so I ended up having to search on how to do that, but once I got that set up I got my little test program on the board without any trouble.
So at this point, I have a simple test design that connects some of the switches to some of the LEDs. Cool! I got this up way faster than I did for my first FPGA board; that's not really a comparison of the two vendors since there's probably a large experience component, but it's still cool to see. Next I'll try to find some time to do a project on this new board -- this FPGA is quite a bit bigger than my previous one, so it could possibly fit a reasonable Litecoin miner.
Overall it's hard to not feel like the FPGA EDA tools are far behind where software build tools are. I guess it's a much smaller market, but I hope that some day EDA tools catch up.
I remarked to a friend recently that technology seems to increase our expectations faster than it can meet them: "why can't my pocket-computer get more than 6 hours of battery life" would have seemed like such a surreal complaint 10 years ago. For that reason I want to recognize an experience I had lately that actually did impress me even in our jaded ways.
The background is that I wanted a dedicated laptop for my electronics work. Normally I use my primary laptop for the job, but it's annoying to connect and disconnect it (power, ethernet [the wifi in my apartment is not great], mouse, electronics projects), and worries about lead contamination led me to be diligent about cleaning it after using it for electronics. So, I decided to dust off my old college laptop and resurrect it for this new purpose.
I didn't have high hopes for this process, since now my college laptop is not just "crappy and cheap" (hey I bought it in college) but also "ancient"! But anyway I still wanted to try it, so I pulled out my old laptop, plugged it in... and was immediately shown the exact screen I had left three years ago. Apparently the last day I used it was May 1 2011, and I had put it into hibernation. Everything worked after all these years! This thing had been banged around like crazy during college, and sat around for a few years afterwards, and yet it still worked. I'm pretty happy when a piece of electronics lives through its 3 year warranty, but this thing was still going strong after 7 years -- crazy.
I was generally impressed by the laptop too -- this is comparing my 7-year-old college laptop with my 3-year-old current one. The screen was a crisp 1920x1200 (quite a bit better than my new laptop), and it didn't feel sluggish at all. I checked out the processor info and some online benchmarks, and it looks like the processor was only ~10% slower than my new one. Of course, not everything was great: the old laptop feels like it is definitely over 6lbs, and I can't believe I lugged that around campus. But it's just going to sit on a desk now so it doesn't matter.
Part 2: Ubuntu
This laptop was running 10.04, which I remember being a major pain to get running at the time. I decided to upgrade it to 14.04, but I was worried about this process as well. I had spent several days getting Linux to work on this laptop when I first decided to switch to it, which involved some crazy driver work from some friends to get the wifi card working. I was worried that I would run into the same problems and have to give up on this.
So, first I tried an in-place Ubuntu upgrade to 14.04, and to my surprise everything worked! I wanted a clean slate, though, so I tried a fresh install of 14.04: again, everything worked. I haven't done an extensive run through the peripherals but all the necessary bits were certainly working.
I know that it's probably just a single driver that got added to the Linux kernel, but the experience was night-and-day compared to the headache I endured the first time.
So anyway, this was crazy! I have always panned Dell and my old laptop as being "crappy", and Linux as "not user friendly", but at least in this particular case the hardware proved to be remarkably robust (let's just ignore the bezel that came loose), and the software remarkably smooth.
Part 3: Weird desktop
Freshly bolstered by this experience, and with a 14.04 CD in hand, I decided to upgrade my work desktop as well. I had for some reason decided to install 11.04 on that machine, which has been causing me no end of pain recently. This Ubuntu release is so unsupported that all the apt mirrors are gone, and the only supported upgrade path is a clean install. (Side note: because of this experience, I've decided to never use a non-LTS release again.) I've put off reinstalling it with a new version since I also had a horrible experience getting it up and running: I'm running a three-monitor setup and it took me forever (a few days of work) to figure out the right combination of drivers and configurations.
This one didn't go quite as smoothly with this transition, but within a day I was able to get 14.04 up and running and everything pretty much back to the way it was before, but minus the random memory corruptions I used to get from a buggy graphics driver! I also no longer get warnings from every web app out there that I am running an ancient version of Chrome.
All in all, I've been extremely impressed with the reliability of the electronics hardware and the comprehensiveness of modern Linux / Ubuntu.
Part 4: Using the new setup
While this post is mostly about how easy it apparently has become to get Ubuntu running on various hardware, I'm also extremely happy with the new electronics setup of having a dedicated laptop. It is definitely nice to not have to swap my main laptop in and out, and it also means that I can do the software side of my electronics work from anywhere. I set up an SSH server on this laptop, and I can log into it remotely (even from outside my apartment) and work with any electronics projects I left attached! (I plan to point my Dropcam at the workbench so that I can see things remotely, though I haven't gotten around to that.) I made use of this ability over the Thanksgiving break to work on an FPGA design (got DDR3 ram working with it!), which I will hopefully have time to blog about shortly.
Overall, I'm definitely glad I decided to go through this process: the dedicated laptop is very helpful and getting it set up was way less painful than I expected.
I was excited to see recently that ARM announced their new Cortex-M7 microcontroller core, and that ST announced their line using that core, the STM32F7. I had briefly played around with the STM32 before, and I talked about how I was going to start using it -- I never followed up on that post, but I got some example programs working, built a custom board, didn't get that to work immediately, and then got side-tracked by other projects. With the release of the Cortex M7 and the STM32F7, I thought it'd be a good time to get back into it and work through some of the issues I had been running into.
First of all though, why do I find these chips exciting? Because they present a tremendous value opportunity, with a range of competitive chips from extremely low-priced options to extremely powerful options.
The comparison point here is the ATmega328: the microcontroller used on the Arduino, and what I've been using in most of my projects. They currently cost $3.28 [all prices are for single quantities on Digikey], for which you get a nice 20MHz 8-bit microcontroller with 32KB of flash and 2KB of ram. You can go cheaper by getting the ATmega48, which costs $2.54, but you only get 4KB of program space and 512B of ram, which can start to be limiting. There aren't any higher-performance options in this line, though I believe that Atmel makes some other lines (AVR32) that could potentially satisfy that, and they also make their own line of ARM-based chips. I won't try to evaluate those other lines, though, since I'm not familiar with them and they don't have the stature of the ATmegas.
Side note -- so far I'm talking about CPU core, clock speeds, flash and ram, since for my purposes those are the major differentiators. There are other factors that can be important for other projects -- peripheral support, the number of GPIOs, power usage -- but for all of those factors, all of these chips are far far more than adequate for me so I don't typically think about them.
The STM32 line has quite a few entries in it, which challenge the ATmega328 on multiple sides. On the low side, there's the F0 series: for $1.58, you can get a 48MHz 32-bit microcontroller (Cortex M0) with 32KB of flash and 4KB of RAM. This seems like a pretty direct competitor to the ATmega328: get your ATmega power (and more) at less than half the price. It even comes in the same package, for what that's worth.
At slightly more than the cost of an ATmega, you can move up to the F3 family, and get quite a bit better performance. For $4.14 you can get a 72MHz Cortex M3 with 64KB of flash and 16KB of RAM.
One of the most exciting things to me is just how much higher we can keep going: you can get a 100MHz chip for $7.08, a 120MHz chip for $8.26, a 168MHz chip for $10.99, and -- if you really want it -- a 180MHz chip for $17.33. The STM32F7 has recently been announced and there's no pricing, but is supposed to be 200MHz (with a faster core than the M4) and is yet another step up.
When I saw this, I was pretty swayed: assuming that the chips are at least somewhat compatible (but who knows -- read on), if you learn about this line, you can get access to a huge number of chips that you can start using in many different situations.
But if these chips are so great, why doesn't everyone already use them? As I dig into trying to use it myself, I think I'm starting to learn why. I think some of it has to do with the technical features of these chips, but it's mostly due to the ecosystem around them, or lack thereof.
Working with the STM32 and the STM32F3 Discovery board I have (their eval board), I'm gaining a lot of appreciation for what Arduino has done. In the past I haven't been too impressed -- it seems like every hobbyist puts together their own clone, so it can't be too hard, right?
So yes, maybe putting together the hardware for such a board isn't too bad. But I already have working hardware for my STM32, and I *still* had to do quite a bit of work to get anything running on it. This has shown me that there is much more to making these platforms successful than just getting the hardware to work.
The Arduino takes some fairly simple technology (ATmega) and turns it into a very good product: something very versatile and easy to use. There doesn't seem to be anything corresponding for the STM32: the technology is all there, and probably better than the ATmega technology, but the products are intensely lacking.
Ok so I've been pretty vague about saying it's harder to use, so what actually causes that?
Family compatibility issues
One of the most interesting aspects of the STM32 family is its extensiveness; it's very compelling to think that you can switch up and down this line, either within a project or for different projects, with relatively little migration cost. It's exciting to think that with one ramp-up cost, you gain access to both $1.58 microcontrollers and 168MHz microcontrollers.
I've found this to actually be fairly lackluster in practice -- quite a bit changes as you move between the different major lines (ex: F3 vs F4). Within a single line, things seem to be pretty compatible -- it looks like everything in the "F30X" family is code-compatible. It also looks like they've tried hard to maintain pin-compatibility for different footprints between different lines, so at a hardware level you can take an existing piece of hardware and simply put a different microcontroller onto it. But I've learned the hard way that pin compatibility in no way implies software compatibility -- I would have thought pin compatibility was a stricter criterion than software compatibility, but they're just not related.
To be fair, even the ATmegas aren't perfect when it comes to compatibility. I've gotten bitten by the fact that even though the ATmega88 and ATmega328 are supposed to be simple variations on the same part (they have only a single datasheet), there are some differences there. There's also probably much more of a difference between the ATmegaX8 and the other ATmegas, and even more of a difference with their other lines (XMEGA, AVR32).
For the ATmegas, people seem to have somewhat standardized on the ATmegaX8, which keeps things simple. For the STM32, people are split between the different lines, which leads to a large number of incompatible projects out there. The family incompatibilities can hurt you even if you're not trying to port code: the STM32 "community" ends up more fragmented than it could be, with lots of mutually incompatible example code, so the effective community for any particular chip is smaller.
What exactly differs between lines? Pretty much all the registers can be different, and the interactions with the core architecture can differ too (peripherals sit on different buses, and so on). This means that either 1) you write different code for different families, or 2) you use a compatibility library that masks the differences. #1 seems to be the common case, at least for small projects; it mostly works, but it makes porting hard, and it can be hard to find example code for your particular processor. Option #2 (using a library) presents its own set of issues.
Lack of good firmware libraries
This issue of software differences seems like the kind of problem that a layer of abstraction could solve. Arduino has done a great job of doing this with their set of standardized libraries -- I think the interfaces even get copied to unrelated projects that want to provide "Arduino-compatibility".
For the STM32, the situation is interesting: there are too many library options, and none of them are great, presumably because none has gained enough traction to sustain a community. ST themselves provide libraries, but they have a number of issues (licensing, general usability) and people don't seem to use them. I have tried libopencm3, and it seems quite good, but it has been defunct for a year or so. There are a number of other libraries, such as libmaple, but none of them seem to be taking off.
Interestingly, this doesn't seem to be a problem for more complex chips, such as the Allwinner Cortex-A's I have been playing with -- despite the fact that they are far more complicated, people have standardized on a single "firmware library" called Linux, so we don't have this same fragmentation.
So what did I do about this problem of too many options leading to none of them being good? Decided to create my own, of course. I don't expect my homebrew version to take off or be competitive with existing libraries (even the defunct ones), but it should be educational and hopefully rewarding. If you have any tips about other libraries, I would love to hear them.
Down the rabbit hole...
Complexity of minimal usage
I managed to get some simple examples working on my own framework, but it was surprisingly complicated (and hence that's all I've managed so far). I won't go into all the details -- you can check out the code on my GitHub -- but there are quite a few things to get right, most of which are not well advertised. I ended up using some of the startup code from the STM32 example projects, but ran into a bug in the linker script (yes, you read that right) that was causing crashes due to an improper setting of the initial stack pointer. I had to set up and learn to use GDB to remotely debug the STM32 -- immensely useful, but much harder than anything you need to do for an Arduino. The linker script bug was that it hardcoded the initial stack pointer at 64KB into the SRAM, but the chip I'm using only has 40KB of SRAM; this was an easy fix, but I don't know why they hardcoded it at all, especially since it was in the "generic" part of the linker script. I was really hoping to avoid having to mess with linker scripts just to get an LED to blink.
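For reference, the fix amounts to deriving the stack top from the memory region instead of hardcoding an address. A generic GNU ld fragment would look something like this (symbol and region names vary between projects; `_estack` is the convention in ST's startup files, and the 40K length is for my particular part):

```
/* Describe the part's actual RAM instead of assuming 64K. */
MEMORY
{
  RAM (rwx) : ORIGIN = 0x20000000, LENGTH = 40K
}

/* Top of RAM: the startup code places this symbol in the first
   vector-table entry, which the Cortex-M core loads as the initial
   stack pointer at reset. */
_estack = ORIGIN(RAM) + LENGTH(RAM);
```

With the symbol computed from `ORIGIN + LENGTH`, switching chips only requires updating the MEMORY block.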
Once I fixed that bug, I got the LEDs to blink and was happy. But as I played with the code, having it blink in different patterns, I noticed that sometimes it "didn't work" -- the LEDs wouldn't flash at all. The changes that triggered this seemed entirely unrelated: I would change the number of initial flashes and suddenly get no flashes at all.
The issue turned out to be that I needed a delay between enabling a GPIO port (that is, enabling its clock) and writing the mode registers that control the port. Otherwise, the mode register would get reset again, setting all the pins back to inputs instead of outputs. I guess this is the kind of issue one runs into when working at this level on a chip of this complexity.
So overall, the STM32 chips are way, way more complicated to use than the ATmegas. I was able to build custom ATmega circuits and boards very easily and switch away from the Arduino libraries and IDE without too much hassle, but I'm still struggling to do that with the STM32 despite having spent more time and now having more experience on the subject. I really hope that someone will come along and clean up this situation, since I think the chips look great. ST seems like they are trying to offer more libraries and software, but I just don't get an optimistic sense from looking at it.
So, I'm back where I was a few months ago: I got some LEDs to blink on an evaluation board. Except now it's running on my own framework (or lack thereof), and I have a far better understanding of how it all works.
The next step is to move this setup to my custom board, which uses a slightly different microcontroller (an F4 instead of an F3), and get its LEDs to blink. Then I want to learn how to use the USB peripheral and implement a USB-based virtual serial port. The whole goal of this exercise is to get the 168MHz chip working as a replacement for the Arduino-like microcontroller that runs my other projects, which ends up being both CPU- and bandwidth-limited.
Sometimes I start a project thinking it will be about one thing: I thought my FPGA project was going to be about developing my Verilog skills and building a graphics engine, but at least at first, it was primarily about getting JTAG working. (Programming Xilinx FPGAs is actually a remarkably complicated story, typically involving people reverse engineering the Xilinx file formats and JTAG protocol.) I thought my 3D printer would be about designing 3D models and then making them in real life -- but it was really about mechanical reliability. My latest project, which I haven't blogged about since I was trying to hold off until it was done, is building a single board computer (pcb photo here) -- I thought it'd be about the integrity of high-speed signals (DDR3, 100Mbps ethernet), but it's actually turned out to be about BGA soldering.
I've done some BGA soldering in the past -- I created a little test board for Xilinx CPLDs, since those are 1) the cheapest BGA parts I could find, and 2) equipped with a nice JTAG interface, which gives an easy way of testing the external connectivity. After a couple of rough starts with that, I thought I had the hang of it, so I used a BGA FPGA in my (ongoing) raytracer project. I haven't extensively tested the soldering on that board, but the basic functionality (JTAG and VGA) was brought up successfully, so for at least ~30 of the pins I had a 100% success rate. I thought I had conquered BGA soldering, and was starting to wonder whether I could move on to 0.8mm BGAs, and so on.
My own SBC
Fast forward to trying to build my own single board computer (SBC). This is something I've been thinking about doing for a while -- not because I think the world needs another Raspberry-Pi clone, but because I want to make one as small as possible and socket it into a backplane for a small cluster computer. Here's what I came up with:
Sorry for the lack of reference scale, but these boards are 56x70mm, and I should be able to fit 16 of them into a mini-ITX case. The large QFP footprint is for an Allwinner A13 processor -- not the most performant option out there, but widely used, so I figured it'd be a good starting point. The assembly went fairly smoothly: I had to do a tiny bit of trace cutting and add a discrete 10k resistor, and I forgot to solder the exposed pad of the A13 (which is not just for thermal management, but is also the only ground pin for the processor), but after that, it booted up and I got a console!
The console was able to tell me that there was some problem initializing the DDR3 DRAM, at which point the processor would freeze. I spent some time hacking around in the U-Boot firmware to figure out what was going wrong, and the problems started with the processor failing "training", the process of learning the optimal signal timings. I spent some time investigating that, and wasn't able to get it to work.
So I bought an Olimex A13 board and decided to try my brand of memory on it, since it's not specified to be supported. I used my hot air tool to remove the DDR3 chip from the Olimex board and attach one of mine, and... got the same problem. I was actually pretty happy about that, since it meant the problem was with my soldering or the DRAM part, which is much more tractable than a problem with trace length matching or signal integrity.
I tried quite a few times to solder the DRAM onto the Olimex board, using a number of different approaches (no flux, flux, solder paste). On the fifth attempt, the Olimex board booted! So the memory was supported, but my "process yield" was abysmal. Undeterred, I tried again on my own board, with no luck. So I went back to the Olimex board: another attempt, another failure. Then I noticed that my hot air tool was now outputting only 220C air, which isn't really hot enough for BGA reflow. (I left a 1-star review on Amazon -- my hopes weren't high for that unit, but 10-reflows-before-breaking was not good enough.)
I ordered myself a nicer hot air unit (along with some extra heating elements for the current one in case I can repair it, but it's not clear that the heating element is the issue), which should arrive in the next few days. I'm still holding out hope that I can get my process to be very reliable, and that there aren't other problems with the board. Hopefully my next blog post will be about how much nicer my new hot air tool is, and how it let me nail the process down.