kmod's blog

1 Feb 2017

Personal thoughts about Pyston’s outcome

I try not to read HN/Reddit too much about Pyston, since while there are certainly some smart and reasonable people on there, there also seem to be quite a few people with axes to grind (*cough cough* Python 3).  But there are some recurring themes I noticed in the comments on our announcement about Pyston's future, so I wanted to try to talk about some of them.  I'm not really aiming to change anyone's mind, but since I haven't really talked through our motivations and decisions for the project, I wanted to make sure to put them out there.

Why we built a JIT

Let's go back to 2013 when we decided to do the project: CPU usage at Dropbox was an increasingly large concern.  Despite the common wisdom that "Python is IO-bound", requests to the Dropbox website were spending around 90% of their time on the webserver CPU, and we were buying racks of webservers at a worrying pace.

At a technical level, the situation was tricky, because the CPU time was spread around in many areas, with the hottest areas accounting for a small (single-digit?) percentage of the entire request.  This meant that potential solutions would have to apply to large portions of the codebase, as opposed to something like trying to Cython-ize a small number of functions.  And unfortunately, PyPy was not, and still is not, close to the level of compatibility to run a multi-million-LOC codebase like Dropbox's, especially with our heavy use of extension modules.

So, we thought (and I still believe) that Dropbox's use-case falls into a pretty wide gap in the Python-performance ecosystem, of people who want better performance but who are unable or unwilling to sacrifice the ecosystem that led them to choose Python in the first place.  Our overall strategy has been to target the gap in the market, rather than trying to compete head-to-head with existing solutions.

And yes, I was excited to have an opportunity to tackle this sort of problem.  I think I did as good a job as I could of discounting that, but it's impossible to know what effect it actually had.

Why we started from scratch

Another common complaint is that we should have at least started with PyPy or CPython's codebase.

For PyPy, it would have been tricky, since Dropbox's needs are both philosophically and technically opposed to PyPy's goals.  We needed a high level of compatibility and reasonable performance gains on complex, real-world workloads.  I think this is a case that PyPy has not been able to crack, and in my opinion is why they are not enjoying higher levels of success.  If this was just a matter of investing a bit more into their platform, then yes it would have been great to just "help make PyPy work a bit better".  Unfortunately, I think their issues (lack of C extension support, performance reliability, memory usage) are baked into their architecture.  My understanding is that a "PyPy that is modified to work for Dropbox" would not look much like PyPy in the end.

For CPython, this was more of a pragmatic decision.  Our goal was always to leverage CPython as much as we could, and now in 2017 I would recklessly estimate that Pyston's codebase is 90% CPython code.  So at this point, we are clearly a CPython-based implementation.

My opinion is that it would have been very tough to start out this way.  The CPython codebase is not particularly amenable to experimentation in these fundamental areas.  And for the early stages of the project, our priority was to validate our strategies.  I think this was a good choice because our initial strategy (using LLVM to make Python fast) did not work, and we ended up switching gears to something much more successful.

But yes, along the way we did reimplement some things.  I think we did a good job of understanding that those things were not our value-add and to treat them appropriately.  I still wonder if there were ways we could have avoided more of the duplicated effort, but it's not obvious to me how we could have done so.

Issues people don't think about

It's an interesting phenomenon that people feel very comfortable having strong opinions about language performance without having much experience in the area.  I can't judge, because I was in this boat -- I thought that if web browsers made JS fast, then we could do the same thing and make Python fast.  So instead of trying to squelch the "hey they made Lua fast, that means Lua is better!" opinions, I'll try to just talk about what makes Python hard to run quickly (especially as compared to less-dynamic languages like JS or Lua).

The thing I wish people understood about Python performance is that the difficulties come from Python's extremely rich object model, not from anything about its dynamic scopes or dynamic types.  The problem is that every operation in Python will typically have multiple points at which the user can override the behavior, and these features are used, often very extensively.  Some examples are inspecting the locals of a frame after the frame has exited, mutating functions in-place, or even something as banal as overriding isinstance.  These are all things that we had to support, and are used enough that we have to support efficiently, and don't have analogs in less-dynamic languages like JS or Lua.
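As a small self-contained illustration (my own toy example, not code from Dropbox or Pyston's test suite): even isinstance() can be hooked on a per-class basis, so a seemingly trivial check can end up running arbitrary user code, and the runtime has to handle that without giving up too much speed:

class AlwaysMatches(type):
    # A metaclass can override __instancecheck__, which isinstance() consults,
    # so the runtime can't assume isinstance is a simple type-pointer check.
    def __instancecheck__(cls, obj):
        return True  # arbitrary user logic runs inside isinstance()

class Anything(object):
    __metaclass__ = AlwaysMatches  # Python 2 syntax, matching the rest of this blog

print isinstance(42, Anything)  # True, even though 42 was never an Anything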

On the flip side, the issues with Python compatibility are also quite different than most people understand.  Even the smartest technical approaches will have compatibility issues with codebases the size of Dropbox.  We found, for example, that there are simply too many things that will break when switching from refcounting to a tracing garbage collector, or even switching the dictionary ordering.  We ended up having to re-do our implementations of both of these to match CPython's behavior exactly.
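Here's a toy example of the kind of code that breaks (mine, not actual Dropbox code): it silently relies on refcounting to flush and close the file the moment the function returns.  Under a tracing GC the close happens at some later collection, so the read below can see a partially-written or empty file, and a busy service can run out of file descriptors:

def write_config(path):
    f = open(path, "w")
    f.write("done\n")
    # No explicit f.close() and no `with` block: CPython's refcounting closes
    # (and flushes) the file as soon as `f` goes out of scope, right here.

write_config("/tmp/example.cfg")
print open("/tmp/example.cfg").read()  # works on CPython because of the refcount-triggered close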

Memory usage is also a very large problem for Python programs, especially in the web-app domain.  This is, unintuitively, driven in part by the GIL: while a multi-process approach is conceptually similar to a multi-threaded approach, the multi-process approach uses much more memory.  This is because Python cannot easily share its memory between different processes, both for logistical reasons and for some deeper reasons stemming from reference counting.  Regardless of the exact reasons, there are many parts of Dropbox that are actually memory-capacity-bound, where the key metric is "requests per second per GB of memory".  We thought a 50% speed increase would justify a 2x memory increase, but that trade is a net loss for a memory-capacity-bound service.  Memory usage is not something that gets talked about that often in the Python space (except for MicroPython), and would be another reason that PyPy would struggle to be competitive for Dropbox's use-case.

 

So again, this post is me trying to explain some of the decisions we made along the way, and hopefully stay away from being too defensive about it.  We certainly had our share of bad bets and schedule overruns, and if I were to do this all over again my plan would be much better the second time around.  But I do think that most of our decisions were defensible, which is why I wanted to take the time to talk about them.

Filed under: Pyston 8 Comments
17 Jan 2017

NumPy to Theano / TensorFlow: Yea or Nay?

Hey all, I'm investigating an idea and it's gotten to the point that I'd like to solicit feedback.  The idea is to use Theano or TensorFlow to accelerate existing NumPy programs.  The technical challenges here are pretty daunting, but I feel like I have a decent understanding of its feasibility (I have a prototype that I think is promising).  The other side of the equation is how valuable this would be.  The potential benefits seem very compelling (cross-op optimizations, GPU and distributed execution "for free"), and I've heard a lot of people ask for better NumPy performance.  The worrying thing, though, is that I haven't been able to find anyone willing to share their code or workflow.  Not that I'm blaming anyone, but that situation makes me worried about the demand for something like this.
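To give a sense of the kind of translation I have in mind, here's a minimal hand-written sketch (illustrative only, not output from my prototype): the NumPy-style expression is rebuilt as a symbolic graph, which gives the framework a chance to fuse ops and decide where to run them:

import numpy as np
import theano
import theano.tensor as T

# NumPy version: each op materializes an intermediate array.
def numpy_version(x, y):
    return np.sum(x * y + 1.0)

# Theano version: the same expression becomes a symbolic graph, compiled once,
# which can then be optimized across ops (and potentially run on a GPU).
x = T.dvector("x")
y = T.dvector("y")
f = theano.function([x, y], T.sum(x * y + 1.0))

a = np.arange(5.0)
b = np.ones(5)
print numpy_version(a, b), f(a, b)  # same result, different execution strategies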

So, what do you think, would this be valuable or useful?  Is it worth putting more time into this?  Or will it be just another NumPy accelerator that doesn't get used?  If you have any thoughts, or want to chime in about your experiences with NumPy performance, I'd definitely be interested to hear about it in the comments.

Filed under: Uncategorized 3 Comments
15 Jan 2017

Amazon-Walmart arbitrage

I recently ordered some junk food from Amazon, despite my wife's objections. I ordered it from an Amazon Marketplace (aka third-party) seller, since that was the option Amazon picked for one-click ordering.

The food arrived, and the interesting thing is that it arrived in a Walmart box, with a Walmart packing slip. Evidently, someone savvy recognized that the Walmart price was lower than the Amazon price, and undercut Amazon's price while using Walmart for fulfillment. I was pretty annoyed to have been caught by this, but at the same time I have to respect that they pulled it off, and that I got the food cheaper than if they hadn't done this.

Anyway, I just thought it was interesting that people are out there doing this!

Filed under: Uncategorized 1 Comment
3 Oct 2016

What does this print, #2

I meant to post more of these, but here's one for fun:

class A(object):
    def __eq__(self, rhs):
        return True

class B(object):
    def __eq__(self, rhs):
        return False

print A() in [B()]
print B() in [A()]

Maybe not quite as surprising once you see the results and think about it, but getting this wrong was the source of some strange bugs in Pyston.

Filed under: Uncategorized No Comments
28 Jul 2016

Stack vs Register bytecodes for Python

There seems to be a consensus that register bytecodes are superior to stack bytecodes.  I don't quite know how to cite "common knowledge", but doing a google search for "Python register VM" or "stack vs register vm" supports the fact that many people believe this.  There was a comment on this blog to this effect as well.

Anyway, regardless of whether it truly is something that everyone believes or not, I thought I'd add my two cents.  Pyston uses a register bytecode for Python, and I wouldn't say it's as great as people claim.

Lifetime management for refcounting

Why?  One of the commonly-cited reasons that register bytecodes are better is that they don't need explicit push/pop instructions.  I'm not quite sure I agree that you don't need push instructions -- you still need an equivalent "load immediate into register".  But the more interesting one (at least for this blog post) is pop.

The problem is that in a reference-counted VM, we need to explicitly kill registers.  While the Python community has made great strides to support deferred destruction, there is still code (especially legacy code) that relies on immediate destruction.  In Pyston, we've found that it's not good enough to just decref a register the next time it is set: we need to decref a register the last time it is used.  This means that we had to add explicit "kill flags" to our instructions that say which registers should be killed as a result of the instruction.  In certain cases we need to add explicit "kill instructions" whose only purpose is to kill a register.

In the end it's certainly manageable.  But because we use a register bytecode, we need to add explicit lifetime management, whereas in a stack bytecode you get that for free.
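Here's a toy sketch of what I mean (illustrative only, not Pyston's actual bytecode format): each instruction carries an explicit tuple of registers whose last use is that instruction, and the VM drops those references as soon as the instruction finishes:

class ToyRegisterVM(object):
    def __init__(self, nregs):
        self.regs = [None] * nregs

    def run(self, code):
        for op, args, kills in code:
            if op == "LOAD_CONST":
                dst, value = args
                self.regs[dst] = value
            elif op == "BINARY_ADD":
                dst, a, b = args
                self.regs[dst] = self.regs[a] + self.regs[b]
            elif op == "PRINT":
                (src,) = args
                print self.regs[src]
            # Explicit lifetime management: clearing a register here is the
            # moral equivalent of the decref a stack VM gets for free on POP.
            for r in kills:
                self.regs[r] = None

# r2 = r0 + r1 (killing r0 and r1), then print r2 (killing r2)
program = [
    ("LOAD_CONST", (0, 1.5), ()),
    ("LOAD_CONST", (1, 2.5), ()),
    ("BINARY_ADD", (2, 0, 1), (0, 1)),
    ("PRINT", (2,), (2,)),
]
ToyRegisterVM(4).run(program)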

 

I don't think it's a huge deal either way, because I don't think interpretation overhead is the main factor in Python performance, and a JIT can smooth over the differences anyway.  But the lifetime-management aspect was a surprise to me and I thought I'd mention it.

Filed under: Uncategorized 2 Comments
2 Jul 2016

Why is Python slow

In case you missed it, Marius recently wrote a post on the Pyston blog about our baseline JIT tier.  Our baseline JIT sits between our interpreter tier and our LLVM JIT tier, providing better speed than the interpreter tier but lower startup overhead than the LLVM tier.

There's been some discussion over on Hacker News, and the discussion turned to a commonly mentioned question: if LuaJIT can have a fast interpreter, why can't we use their ideas and make Python fast?  This is related to a number of other questions, such as "why can't Python be as fast as JavaScript or Lua", or "why don't you just run Python on a preexisting VM such as the JVM or the CLR".  Since these questions are pretty common I thought I'd try to write a blog post about it.

The fundamental issue is:

Python spends almost all of its time in the C runtime

This means that it doesn't really matter how quickly you execute the "Python" part of Python.  Another way of saying this is that Python opcodes are very complex, and the cost of executing them dwarfs the cost of dispatching them.  Another analogy I give is that executing Python is more similar to rendering HTML than it is to executing JS -- it's more of a description of what the runtime should do rather than an explicit step-by-step account of how to do it.
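To make that concrete, here's a small demonstration using the standard library (CPython 2.7): a one-line expression compiles down to just a handful of opcodes, and essentially all of the work hides behind each opcode inside the C runtime:

import dis

def f(a, c, d):
    # A few opcodes (LOAD_ATTR, BINARY_SUBSCR, BINARY_ADD), each of which
    # hands off to a substantial amount of C runtime code: the attribute
    # lookup protocol, the indexing protocol, numeric coercion, etc.
    return a.b + c[d]

dis.dis(f)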

Pyston's performance improvements come from speeding up the C code, not the Python code.  When people say "why doesn't Pyston use [insert favorite JIT technique here]", my question is whether that technique would help speed up C code.  I think this is the most fundamental misconception about Python performance: we spend our energy trying to JIT C code, not Python code.  This is also why I am not very interested in running Python on pre-existing VMs, since that will only exacerbate the problem in order to fix something that isn't really broken.

 

I think another thing to consider is that a lot of people have invested a lot of time into reducing Python interpretation overhead.  If it really was as simple as "just porting LuaJIT to Python", we would have done that by now.

I gave a talk on this recently, and you can find the slides here and a LWN writeup here (no video, unfortunately).  In the talk I gave some evidence for my argument that interpretation overhead is quite small, and some motivating examples of C-runtime slowness (such as a slow for loop that doesn't involve any Python bytecodes).

One of the questions from the audience was "are there actually any people that think that Python performance is about interpreter overhead?".  They seem to not read HN :)

 

Update: why is the Python C runtime slow?

Here's the example I gave in my talk illustrating the slowness of the C runtime.  This is a for loop written in Python, but that doesn't execute any Python bytecodes:

import itertools
sum(itertools.repeat(1.0, 100000000))

The amazing thing about this is that if you write the equivalent loop in native JS, V8 can run it 6x faster than CPython.  In the talk I mistakenly attributed this to boxing overhead, but Raymond Hettinger kindly pointed out that CPython's sum() has an optimization to avoid boxing when the summands are all floats (or ints).  So it's not boxing overhead, and it's not dispatching on tp_as_number->tp_add to figure out how to add the arguments together.

My current best explanation is that it's not so much that the C runtime is slow at any given thing it does, but it just has to do a lot.  In this itertools example, about 50% of the time is dedicated to catching floating point exceptions.  The other 50% is spent figuring out how to iterate the itertools.repeat object, and checking whether the return value is a float or not.  All of these checks are fast and well optimized, but they are done every loop iteration so they add up.  A back-of-the-envelope calculation says that CPython takes about 30 CPU cycles per iteration of the loop, which is not very many, but is proportionally much more than V8's 5.

 

I thought I'd try to respond to a couple other points that were brought up on HN (always a risky proposition):

If JS/Lua can be fast why don't the Python folks get their act together and be fast?

Python is a much, much more dynamic language than even JS.  Fully talking about that would probably take another blog post, but I would say that the increase in dynamism from JS->Python is larger than the increase going from Java->JS.  I don't know enough about Lua to compare, but it sounds closer to JS than to Java or Python.

Why don't we rewrite the C runtime in Python and then JIT it?

First of all, I think this is a good idea in that it's tackling what I think is actually the issue with Python performance.  I have my worries about it as a specific implementation plan, which is why Pyston has chosen to go a different direction.

If you're going to rewrite the runtime into another language, I don't think Python would be a very good choice.  There are just too many warts/features in the language, so even if you could somehow get rid of 100% of the dynamic overhead I don't think you'd end up ahead.

There's also the practical consideration of how much C code there is in the C runtime and how long it would take to rewrite (CPython is >400kLOC, most of which is the runtime).  And there are a ton of extension modules out there written in C that we would like to be able to run, and ideally some day be able to speed up as well.  There's certainly disagreement in the Python community about the C-extension ecosystem, but my opinion is that that is as much a part of the Python language as the syntax is (you need to support it to be considered a Python implementation).

Filed under: Pyston 7 Comments
10 Jun 2016

Benchmarking: minimum vs average

I've seen this question come up a couple times, most recently on the python-dev mailing list.  When you want to benchmark something, you naturally want to run the workload multiple times.  But what is the best way to aggregate the multiple measurements?  The two common ways are to take the minimum of them, and to take the average (but there are many more, such as "drop the highest and lowest and return the average of the rest").  The arguments I've seen for minimum/average are:

  • The minimum is better because it better reflects the underlying model of benchmark results: that there is some ideal "best case", which can be hampered by various slowdowns.  Taking the minimum will give you a better estimate of the true behavior of the program.
  • Taking the average provides better aggregation because it "uses all of the samples".

These are both pretty abstract arguments -- even if you agree with the logic, why does either argument mean that that approach is better?

I'm going to take a different approach to try to make this question a bit more rigorous, and show that in different cases, different metrics are better.

Formalization

The first thing to do is to figure out how to formally compare two aggregation methods.  I'm going to do this by saying the statistic which has lower variance is better.  And by variance I mean variance of the aggregation statistic as the entire benchmarking process is run multiple times.  When we benchmark two different algorithms, which statistic should we use so that the comparison has the lowest amount of random noise?

Quick note on the formalization -- there may be a better way to do this.  This particular way has the unfortunate result that "always return 0" is an unbeatable aggregation.  It also slightly penalizes the average, since the average will be larger than the minimum so might be expected to have larger variance.  But I think as long as we are not trying to game the scoring metric, it ends up working pretty well.  This metric also has the nice property that it only focuses on the variance of the underlying distribution, not the mean, which reduces the number of benchmark distributions we have to consider.

Experiments

The variance of the minimum/average is hard to calculate analytically (especially for the minimum), so we're going to make it easy on ourselves and just do a Monte Carlo simulation.  There are two big parameters to this simulation: our assumed model of benchmark results, and the number of times we sample from it (aka the number of benchmark runs we do).  As we'll see the results vary pretty dramatically on those two dimensions.

Code
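For reference, here's a minimal numpy sketch of this kind of simulation (not necessarily identical to the linked code): run the whole benchmarking process many times, aggregate each run's samples with min and with mean, and see how noisy each aggregate is:

import numpy as np

def aggregate_stddevs(draw, samples_per_run, n_runs=100000):
    # Each row is one simulated benchmarking session of samples_per_run results.
    results = draw((n_runs, samples_per_run))
    return results.min(axis=1).std(), results.mean(axis=1).std()

rng = np.random.RandomState(0)

# Normal(0, 1), 3 samples per run:
print aggregate_stddevs(lambda shape: rng.standard_normal(shape), 3)

# Lognormal whose log is Normal(0, 1):
print aggregate_stddevs(lambda shape: rng.lognormal(0.0, 1.0, shape), 3)

# "Random bad things happen": N events, each adding 1/(Np) with probability p.
N, p = 3, 0.1
print aggregate_stddevs(lambda shape: rng.binomial(N, p, shape) / (N * p), 3)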

Normal distribution

The first distribution to try is probably the most reasonable-sounding: we assume that the results are normally-distributed.  For simplicity I'm using a normal distribution with mean 0 and standard deviation 1.  Not entirely reasonable for benchmark results to have negative numbers, but as I mentioned, we are only interested in the variance and not the mean.

If we say that we sample one time (run the benchmark only once), the results are:

stddev of min: 1.005
stddev of avg: 1.005

Ok good, our testing setup is working.  If you only have one sample, the two statistics are the same.

If we sample three times, the results are:

stddev of min: 0.75
stddev of avg: 0.58

And for 10 times:

stddev of min: 0.59
stddev of avg: 0.32

So the average pretty clearly is a better statistic for the normal distribution.  Maybe there is something to the claim that the average is just a better statistic?

Lognormal distribution

Let's try another distribution, the log-normal distribution.  This is a distribution whose logarithm is a normal distribution with, in this case, a mean of 0 and standard deviation of 1.  Taking 3 samples from this, we get:

stddev of min: 0.45
stddev of avg: 1.25

The minimum is much better.  But for fun we can also look at the max: it has a standard deviation of 3.05, which is much worse.  Clearly the asymmetry of the lognormal distribution has a large effect on the answer here.  I can't think of a reasonable explanation for why benchmark results might be log-normally-distributed, but as a proxy for other right-skewed distributions this gives some pretty compelling results.

Update: I missed this the first time, but the minimum in these experiments is significantly smaller than the average, which I think might make these results a bit hard to interpret.  But then again I still can't think of a model that would produce a lognormal distribution so I guess it's more of a thought-provoker anyway.

Binomial distribution

Or, the "random bad things might happen" distribution.  This is the distribution that says "We will encounter N events.  Each time we encounter one, with probability p it will slow down our program by 1/Np".  (The choice of 1/Np is to keep the mean constant as we vary N and p, and was probably unnecessary)

Let's model some rare-and-very-bad event, like your hourly cron jobs running during one benchmark run, or your computer suddenly going into swap.  Let's say N=3 and p=.1.  If we sample three times:

stddev of min: 0.48
stddev of avg: 0.99

Sampling 10 times:

stddev of min: 0.0
stddev of avg: 0.55

So the minimum does better.  This seems to match with the argument people make for the minimum, that for this sort of distribution the minimum does a better job of "figuring out" what the underlying performance is like.  I think this makes a lot of sense: if you accidentally put your computer to sleep during a benchmark, and wake it up the next day at which point the benchmark finishes, you wouldn't say that you have to include that sample in the average.  One can debate about whether that is proper, but the numbers clearly say that if a very rare event happens then you get less resulting variance if you ignore it.

But many of the things that affect performance occur on a much more frequent basis.  One would expect that a single benchmark run encounters many "unfortunate" cache events during its run.  Let's try N=1000 and p=.1.  Sampling 3 times:

stddev of min: 0.069
stddev of avg: 0.055

Sampling 10 times:

stddev of min: 0.054
stddev of avg: 0.030

Under this model, the average starts doing better again!  The casual explanation is that with this many events, all runs will encounter some unfortunate ones, and the minimum can't pierce through that.  A slightly more formal explanation is that a binomial distribution with large N looks very much like a normal distribution.

Skewness

There is a statistic of distributions that can help us understand this: skewness.  This has a casual understanding that is close to the normal usage of the word, but also a formal numerical definition, which is scale-invariant and just based on the shape of the distribution.  The higher the skewness, the more right-skewed the distribution.  And, IIUC, we should be able to compare the skewness across the different distributions that I've picked out.

The skewness of the normal distribution is 0.  The skewness of this particular log-normal distribution is 6.2 (and the poor-performing "max" statistic is the same as taking the min on a distribution with skewness -6.2).  The skewness of the first binomial distribution (N=3, p=.1) is 1.54; the skewness of the second (N=1000, p=.1) is 0.08.

I don't have any formal argument for it, but on these examples at least, the larger the skew (more right-skewed), the better the minimum does.

Conclusion

So which is "better", taking the minimum or average?  For any particular underlying distribution we can emprically say that one is better or the other, but there are different reasonable distributions for which different statistics end up being better.  So for better or worse, the choice of which one is better comes down to what we think the underlying distribution will be like.  It seems like it might come down to the amount of skew we expect.

Personally, I understand benchmark results to be fairly right-skewed: you will frequently see benchmark results that are much slower than normal (several standard deviations out), but you will never see any that are much faster than normal.  When I see those happen, if I am taking a running average I will get annoyed since I feel like the results are then "messed up" (something that these numbers now give some formality to).  So personally I use the minimum when I benchmark.  But the Central Limit Theorem is strong: if the underlying behavior repeats many times, it will drive the distribution towards a normal one at which point the average becomes better.  I think the next step would be to run some actual benchmark numbers a few hundred/thousand times and analyze the resulting distribution.
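A quick sketch of what that next step could look like (hypothetical; I haven't run this analysis for this post), assuming scipy is available for the skewness calculation:

import timeit
from scipy.stats import skew

# Collect many timings of one workload, then look at the shape of the
# distribution to decide whether min or mean is the more stable aggregate.
samples = timeit.repeat("sorted(xrange(10000), key=lambda x: -x)",
                        repeat=1000, number=5)
print "min=%.5f  mean=%.5f  skewness=%.2f" % (
    min(samples), sum(samples) / len(samples), skew(samples))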

 

While this investigation was a bit less conclusive than I hoped, at least now we can move on from abstract arguments about why one metric appeals to us or not: there are cases when either one is definitively better.

 

Addendum

One thing I didn't really write about is that this analysis all assumes that, when comparing two benchmark runs, the mean shifts but the distribution does not.  If we are changing the distribution as well, the question becomes more complicated -- the minimum statistic will reward changes that make performance more variable.

Filed under: Uncategorized 2 Comments
27 Mar 2016

Xilinx Zynq: Initial Impressions

I've been passively watching the FPGA space for the past few years.  Partially because I think they're a really interesting technology, but also because, as The Next Platform says:

[T]here are clear signs that the FPGA is set to become a compelling acceleration story over the next few years.

From the relatively recent acquisition of Altera by chip giant Intel, to less talked-about advancements on the programming front (OpenCL progress, advancements in both hardware and software from FPGA competitor to Intel/Altera, Xilinx) and of course, consistent competition for the compute acceleration market from GPUs, which dominate the coprocessor market for now.

I'm not sure it's as sure a thing as they are making it out to be, but I think there are several reasons to think FPGAs have a good chance of becoming much more mainstream over the next five years.  I think there are some underlying technological forces at work (FPGAs' power efficiency becomes more and more attractive over time), as well as some "the time is ripe" elements such as the Intel/Altera acquisition and the possibility that deep learning will continue to drive demand for computational accelerators.

One of the commonly-cited drawbacks of FPGAs [citation needed] is their difficulty of use.  I've thought about this a little bit in the context of discrete FPGAs, but with the introduction of CPU+FPGA hybrids, I think the game has changed pretty considerably and there are a lot of really interesting opportunities to come up with new programming models and systems.

There are some exciting Xeon+FPGA parts coming out later this year (I've seen rumors that Google have already had their hands on similar parts), but there are already options out on the market: the Xilinx Zynq.

 

Zynq

I'm not going to go into too much detail about what the Zynq is, but basically it is a CPU+FPGA combo.  Unlike the upcoming Intel parts, which look like separate dies in a single chip, the Zynq I believe is a single die where the CPU and FPGA are tightly connected.  Another difference is that rather than a 15-core Xeon, the Zynq comes with a dual-Cortex-A9 (aka a smartphone processor from a few years ago).  I pledged for a snickerdoodle, but I got impatient and bought a Zybo.  There's a lot that could be said about the hardware, but my focus was on the state of the software so I'm just going to skip to that.

I've ranted (er, blogged) about how much I dislike the Xilinx tools in the past, but all my experience has been with ISE, the previous-generation version of their software.  Their new line of chips (which includes the Zynq) work with their new software suite, Vivado, which is supposed to be much better.  I was also curious about the state of FPGA+CPU programming models, and Xilinx's marketing is always talking about how Vivado has such a great workflow and is so great for "designer productivity", yadda yadda.  So I wanted to try it out and see what the current "state of the art" is, especially since I have some vague ideas about what a better workflow could look like.  Here are my initial impressions.

 

Vivado

Fair warning -- rant follows.

My experience with Vivado was pretty rough.  It took me the entire day to get to the point that I had some LEDs blinking, and then shortly thereafter my project settings got bricked and I have no idea how to make it run again.  This is even when running through a Xilinx-sponsored tutorial that is specifically for the Zybo board that I bought.

The first issue is the sheer complexity of the design process.  I think the most optimistic way to view this is that they are optimizing for large projects, so the complexity scales very nicely as your project grows, at the expense of high initial complexity.  But still, I had to work with four or five separate tools just to get my LED-blinky project working.  The integration points between the tools are very... haphazard.  Some tools will auto-detect changes made by others.  Some will detect when another tool is closed, and only then look for any changes that it made.  Some tools will only check for changes at startup, so for instance to load certain kinds of changes into the software-design tool, you simply have to quit that tool and let the hardware tool push new settings to it.  Here's the process for changing any of the FPGA code:
- Open up the Block Diagram, right click on the relevant block and select "Edit in IP Packager"
- In the new window that pops up, make the changes you want
- In that new window, navigate tabs and then sub-tabs and select Repackage IP.  It offers to let you keep the window open.  Do not get tricked by this, you have to close it.
- In the original Vivado window, nothing will change.  So go to the IP Status sub-window, hit Refresh.  Then select the module you just changed, and click Upgrade.
- Click "Generate Bitstream".  Wait 5 minutes.
- Go to "File->Export->Export Hardware".  Make sure "include bitstream" is checked.
- Open up the Eclipse-based "SDK" tool.
- Click "Program FPGA".
- Hopefully it works or else you have to do this again!

Another issue is the "magic" of the integrations.  Some of that is actually nice at "just works".  Some of it is not so nice.  For example, I have no idea how I would have made the LEDs blink without example code, because I don't know how I would have known that the LEDs were memory-mapped to address XPAR_LED_CONTROLLER_0_S00_AXI_BASEADDR.  But actually for me, I had made a mistake and re-did something, so the address was actually XPAR_LED_CONTROLLER_1_S00_AXI_BASEADDR.  An easy enough change if you know to make it, but with no idea where that name comes from, and nothing more than a "XPAR_LED_CONTROLLER_0_S00_AXI_BASEADDR is not defined" error message, it took quite a while to figure out what was wrong.

What's even worse, though, was that due to a bug (which must have crept in after the tutorial was written), Vivado passed off the wrong value for XPAR_LED_CONTROLLER_1_S00_AXI_BASEADDR.  It's not clear why -- this seems like a very basic thing to get right, and an easy one to spot.  But regardless of why, it passed off the wrong value.  It's worth checking out the Xilinx forum thread about the issue, since it's representative of what dealing with Xilinx software is like: you find a forum thread with many other people complaining about the same problem.  Some users step in to try to help but the guidance is for a different kind of issue.  Then someone gives a link to a workaround, but the link is broken.  After figuring out the right link, it takes me to a support page that offers a shell script to fix the issue.  I download and run the shell script.  First it complains because it mis-parses the command line flags.  I figure out how to work around that, and it says that everything got fixed.  But Vivado didn't pick up the changes, so it still builds the broken version.  I try running the tool again.  Then Vivado happily reports that my project settings are broken and the code is no longer findable.  This was the point that I gave up for the day.

Certain issues I had with ISE are still present with Vivado.  The first thing one notices is the long compile times.  Even though it is hard to imagine a simpler project than the one I was playing with, it still takes several minutes to recompile any changes made to the FPGA code.  Another gripe I have is that certain should-be-easy-to-check settings are not checked until very late in this process.  Simple things like "hey you didn't say what FPGA pin this should go to".  That may sound easy enough to catch, but in practice I had a lot of trouble getting this to work.  I guess that "external ports" are very different things from "external interfaces", and you specify their pin connections in entirely different ways.  It took me quite a few trial-and-error cycles to figure out what the software was expecting, each of which took minutes of downtime.  But really, this could easily be validated much earlier in the process.  There even is a "Validate Design" step that you can run, but I have no idea what it actually checks because it seems to always pass despite any number of errors that will happen later.

There's still a lot of cruft in Vivado, though they have put a much nicer layer of polish on top of it.  Simple things still take very long to happen, presumably because they still use their wrapper-upon-wrapper architecture.  But at least now that doesn't block the GUI (as much), and instead just gives you a nice "Running..." progress bar.  Vivado still has a very odd aversion to filenames with spaces in them.  I was kind enough to put my project in a directory without any spaces, but things got rough when Vivado tried to create a temporary file, which ended up in "C:\Users\Kevin Modzelewski\" which it couldn't handle.  At some point it also tried to create a ".metadata" folder, which apparently is an invalid filename in Windows.

 

These are just the things I can remember being frustrated about.  Xilinx sent me a survey asking if there is anything I would like to see changed in Vivado.  Unfortunately I think the answer is that there is a general lack of focus on user-experience and overall quality.  It seems like an afterthought to a company whose priority is the hardware and not the software you use to program it.  It's hard to explain, but Xilinx software still feels like a team did the bare-minimum to meet a requirements doc, where "quality beyond bare minimum" is not seen as valuable.  Personally I don't think this is the fault of the Vivado team, but probably of Xilinx as a company where they view the hardware as what they sell and the software as something they just have to deal with.

end rant.  for now

Programming model

Ok now on to the fun stuff -- the programming model.  I'm not really sure what to call this, since I think saying "programming model" already incorporates the idea of doing programming, whereas there are a lot of potential ways to engineer a system that don't require something that would be called programming.

In fact, I think Xilinx (or maybe the FPGA community which Xilinx is catering to) does not see designing FPGAs as programming.  I think fundamentally, they see it as hardware, which is designed, rather than as software, which is programmed.  I'm still trying to put my finger on exactly what I mean by that -- after all, couldn't those just be different words for the same thing?  There are just a large number of places where this assumption is baked in.  Such as: the FPGA design is hardware, the processor software lives on top, and there is a fundamental separation between the two.  Or: FPGAs are tools to build custom pieces of hardware.  Even the terminology comes from the process of building hardware: the interface between the hardware and the software is called an SDK (which, confusingly, is also the name of the tool you use to create the software in Vivado).  The software also makes use of a BSP, which stands for Board Support Package, but in this case describes the FPGA configuration.  The model is that the software runs on a "virtual board" that is implemented inside the FPGA.  I guess in context this makes sense, and to teams that are used to working this way, it probably feels natural.

But I think the excitement for FPGAs is for using them as software accelerators, where this "FPGAs are hardware" model is quite hard to deal with.  Once I get the software working again, my plan is to create a programming system where you only create a single piece of software, and some of it runs on the CPU and some runs on the FPGA.

It's exciting for me because I think there is a big opportunity here.  Both in terms of the existence of demand, but also in the complete lack of supply -- I think Xilinx is totally dropping the ball here.  Their design model has very little room for many kinds of abstractions that would make this process much easier.  You currently have to design everything in terms of "how", and then hope that the "what" happens to work out.  Even their efforts to make programming easier -- which seems to mostly consist of HLS, or compiling specialized C code as part of the process -- is within a model that I think is already inherently restrictive and unproductive.

 

But that's enough of bashing Xilinx.  Next time I have time to work on this, I'm going to implement one of my ideas on how to actually build a cohesive system out of this.  Unfortunately that will probably take me a while since I will have to build it on top of the mess that is Vivado.  But anyway, look for that in my next blog post on the topic.

Filed under: fpga No Comments
3 Nov 2015

Pyston 0.4 released!

I haven't been very active on this blog since I've been busy with Pyston -- and we just released version 0.4, check it out on the Pyston blog!

Filed under: Pyston No Comments
28 Aug 2015

What’s happening on Pyston

People sometimes ask me how Pyston is going and what we're currently working on.  It's a bit hard to answer, partly because we haven't had a release recently with headline-worthy features, but also because a lot of the stuff we're working on is individually pretty small.  Sometimes I try to find some way of expressing this, maybe saying something like "there are a lot of small optimizations that we have to include" or "there is a very long tail of compatibility work".  It never feels that satisfying, so I thought I'd just jot down some of the random things that I've done lately and hope that maybe it ends up being somewhat representative.

  • Single-character string optimizations.  I noticed that we were running the following code somewhat slowly:
    query_string = url.split('?')[1]

    It turned out that we actually did a pretty good job at most of this: we would get into url.split quickly, and we would take the result and find element 1 in it quickly.  It was just that our str.split implementation was much slower than CPython's.  In particular, we were using a string-search function of the form string.find(string), which, even though it was fast and had special-casing for small strings, was not as fast as the corresponding string.find(char) function.  So we needed to add an optimization: if the string we are splitting on is a single character, call string.find(char) instead.  (CPython also has this optimization.)

  • Tracing-jit aggressiveness backoff.  This is probably the most along the lines of what I thought I'd be working on: some JIT level features dealing with some cool dynamic-language properties.  Cool.
  • Running code inside execs quickly.  Well, I haven't actually done this yet but I'm going to.  Currently we bail on efficient handling of execs, since they have some special name-resolution rules [or rather, they are vastly more likely to use those rules than normal Python code], so we restrict that code to the interpreter.  I'm noticing that this is starting to affect us: collections.namedtuple creates your class by constructing a class-definition string and exec'ing it (there's a small illustration of this after the list).  Even though the resulting code is small, every time we have to run through it we pay some extra cost via the not-as-fast interpreter.
  • Efficient unicode attribute lookup.  I didn't anticipate this at all, but there are definitely cases where it's important for us to be able to handle unicode-based attribute lookups quickly, such as getattr(obj, u"foo").  People don't often explicitly request unicode attribute names, but any code that does "from __future__ import unicode_literals" will get this behavior by default.
  • Initializing sets in __new__ vs __init__.  This is the kind of "long tail" compatibility issue I mentioned.  You wouldn't think that it would matter to the user whether the set did its initialization work in __new__ or __init__.  Sure, there are ways that the user could tell if they really wanted to, but surely "real code" doesn't depend on it?  Turns out it does: this causes errors in sqlalchemy.  So I need to go back and make sure we do the initialization at the same time that CPython does, so that we can support sqlalchemy's use of set-subclassing.
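To illustrate the namedtuple point from the list above (this is stock CPython 2.7 behavior, nothing Pyston-specific): passing verbose=True prints the class-definition source that namedtuple builds and then exec's.

import collections

# Prints the generated "class Point(tuple): ..." source before exec'ing it;
# every namedtuple definition goes through exec under the hood.
Point = collections.namedtuple("Point", ["x", "y"], verbose=True)
print Point(x=1, y=2)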

So anyway, that's just some of the random stuff that I've been up to lately (or am about to do).  There are definitely way more details to be worked out than I expected.

Filed under: Pyston No Comments