kmod's blog

22 Jan 2019

PoolJacking: easy 51% attacks against Bitcoin and Ethereum

With the recent Ethereum Classic double-spend attack, I decided to finally write this post about an existing vulnerability in Bitcoin and Ethereum.  The attack vector has been used in the past to steal a small amount of Bitcoin, but it can be used more cleverly to pull off 51% attacks and double-spend an unbounded amount of cryptocurrency.

This blog post talks about Bitcoin, but Ethereum and some other coins have the exact same vulnerability.

Background: Stratum

Bitcoin has a lot of parts that are secure.  And it has some parts that are not secure.  Unfortunately, having lots of secure parts doesn't count for much if you also have parts that are not secure.

In particular, the Bitcoin ecosystem contains a protocol called Stratum, which coordinates miners and their mining pools.  Since 85% of mining happens via pools, 85% of the hashrate is dependent on Stratum's security.

Stratum is notoriously insecure.  Not in the sense that there are clever attacks on it, but in the sense that it contains zero security measures.  It's like letting anyone with an armored truck drive up to a bank and load up the truck with money: if the request is formatted right, Stratum will let it through.  Stratum has been attacked in the past by an ISP employee to steal mining rewards from their network, which is a clever, although limited, way of abusing the lack of security.  You can do much more damage than that.

The ISP employee simply redirected where the mining rewards went.  But with Stratum, you also control what block the miners are mining.  This means that if you can intercept Stratum, you can control the entire hashpower of a mining pool.  Not only does this still steal the mining rewards, but if done on a large enough scale it allows 51% attacks which are devastating to the network as a whole.
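
To make the lack of protection concrete, here is a rough sketch of the kind of job message a pool pushes to its miners: newline-delimited JSON-RPC over a plain TCP connection, with no signatures or authentication anywhere.  (The field values below are placeholders, not a byte-accurate capture.)

# A Stratum "mining.notify" job, written as a Python dict for readability.
job = {
    "id": None,
    "method": "mining.notify",
    "params": [
        "job_id",            # opaque job identifier
        "prevhash",          # which block the miner should build on top of
        "coinb1", "coinb2",  # two halves of the coinbase transaction, which
                             #   contains the reward payout address
        [],                  # merkle branch for the rest of the block
        "version", "nbits", "ntime",
        True,                # clean_jobs: throw away any previous work
    ],
}
# Whoever can deliver this message to a miner chooses both the payout address
# (via the coinbase) and the chain tip being extended (via prevhash).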

I call this PoolJacking: using Stratum to hijack an entire pool at a time, and then controlling their hashpower to rewrite the blockchain.

It's not a total break of Bitcoin's security, but PoolJacking allows attackers to double-spend, DDOS the entire network, or simply collect $5MM a day in mining rewards.

It's exploitable

This attack relies on intercepting specific internet traffic.  There are a few ways to do it, and the easiest among them is BGP hijacking (another is DNS poisoning).  BGP hijacking allows an attacker to use BGP, the core internet routing protocol, to redirect where traffic is sent, and it already happens thousands of times a day.

BGP access is limited, but still broadly available.  There are 62,970 licenses that have been given out to send BGP updates, and there are 16k "Network Engineers" on LinkedIn, most of whom presumably have this access.  It's conservative to say that there are 10k people who have the ability to pull off this attack.

Worryingly, these people are distributed throughout the world, all located in different legal and political regimes.  It would be easy for Pakistan to intercept Bitcoin traffic in the same way that it intercepted YouTube traffic in 2008.  I'm no expert but it looks like North Korea has the level of access to pull this off as well.

The worst part is that while this attack is detectable, there is not much that can be done about it.  An attacker only needs about 2 hours to execute a double-spend, giving a very small window for people to notice and raise confirmation requirements.  Even if the attack is noticed, it will take a very long time to roll out the required protocol fixes, since this will require updating the firmware on hardware miners (or more realistically, waiting for the next generation of miners).

 

Stratum's weakness is already known to the community, yet there has been no progress in replacing it with something secure.  I hope this blog post reveals just how damaging the vulnerability can be, and brings attention to this lingering problem.

Filed under: bitcoin No Comments
25 Jun 2017

Update on NumPy acceleration

I've been looking into accelerating NumPy using TensorFlow, and the results have been pretty interesting.  Here's a quick update.

tldr: TensorFlow adds a lot of overhead and doesn't speed up CPU execution, which makes a NumPy->TensorFlow conversion less promising than hoped.  TensorFlow's ability to target GPUs does make up for the overheads, but not all NumPy programs are GPU-friendly.

 

Overview

The thesis of this project is that TensorFlow (TF) implements many of the sorts of optimizations that could speed up NumPy, such as GPU offload and loop fusion.  So my idea has been to take the work of the TF team and use it to speed up vanilla (non-AI / DL) NumPy workloads.  This is difficult because TF has a restricted programming model compared to NumPy, so the crux of the project is building a tool that can automatically convert from NumPy to TF.

The problem can be broken up into two parts: the first is the "frontend", or understanding NumPy code enough to translate it.  The second is the "backend", where we emit an efficient TF program.
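
To give a flavor of what the backend has to emit, here is a hand translation of a trivial NumPy computation into the TF 1.x graph API (my own toy example, not output from the tool):

import numpy as np
import tensorflow as tf  # TF 1.x graph API

a = np.random.rand(1000, 1000).astype(np.float32)
b = np.random.rand(1000, 1000).astype(np.float32)

# NumPy: eager, one expression
c_np = np.dot(a, b).sum()

# TensorFlow: build a graph of placeholders and ops, then run it in a session
ta = tf.placeholder(tf.float32, shape=(1000, 1000))
tb = tf.placeholder(tf.float32, shape=(1000, 1000))
tc = tf.reduce_sum(tf.matmul(ta, tb))
with tf.Session() as sess:
    c_tf = sess.run(tc, feed_dict={ta: a, tb: b})

Even for a one-liner the structure of the program changes (deferred graph construction plus an explicit session), which is part of why this needs a real tool rather than a search-and-replace.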

Coming into the project, I thought the work would be in the frontend.  Understanding NumPy code, or Python code in general, is a very tough problem, but I think I have a promising dynamic analysis approach that could crack this for many kinds of programs.  As I'll talk about in the rest of this post, though, this didn't end up being the project blocker.  I'm happy with the dynamic analysis system I wrote, but I'll have to leave that to a future blog post.

k-means

The first challenge in a project like this is to set up a good goal.  It's easy to find some code that your project can help, but to be more than an intellectual curiosity it would have to help out "real" code.

I looked around to find a benchmark program that I could use as a yardstick.  I settled on scikit-learn, and specifically their k-means implementation.  k-means is a clustering algorithm, and Wikipedia has a good overview.

I immediately ran into a few issues trying to speed up scikit-learn's k-means algorithm:

  • scikit-learn uses Elkan's algorithm, which uses a neat triangle-inequality optimization to eliminate much of the work that has to be done.  This is a pretty clever idea, and is helpful when run on a CPU, but unfortunately for this project it means that the algorithm is not amenable to vectorization or GPU offload.  Drat.
  • Even scikit-learn's implementation of the straightforward algorithm is hand-optimized Cython code.  This makes it fast, but also not quite suitable for Python-analysis tools, since it is not quite Python.

I converted their straightforward Cython implementation to Python so that it could run through my system, but the bar has become quite high: not only does my tool need to match the low-level optimizations that Cython does, but it needs to beat the higher-level algorithmic optimizations as well.
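
For reference, the straightforward (non-Elkan) update step looks roughly like this in plain NumPy; this is my own sketch of the algorithm, not scikit-learn's code:

import numpy as np

def kmeans_step(X, centers):
    # X: (n, d) points; centers: (k, d) current cluster centers
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)          # nearest center for each point

    # Scatter-add each point into its assigned cluster; this accumulation is
    # the inner loop discussed in the next section.
    sums = np.zeros_like(centers)
    np.add.at(sums, labels, X)
    counts = np.bincount(labels, minlength=centers.shape[0])
    return sums / np.maximum(counts, 1)[:, None]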

Translating k-means to TF

Although the overall goal is to automatically convert NumPy to TF, I started by doing the conversion manually to see the resulting performance.  I had to start with just the inner loop, because the outer loop uses some functionality that was not present in TF.  TF has a function (scatter_add) that exactly implements the semantics of the inner loop, so I thought it would have an optimized implementation that would be very fast.
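
The call in question looks roughly like this (TF 1.x API; the synthetic data and setup are mine, not scikit-learn's code or the tool's output):

import numpy as np
import tensorflow as tf  # TF 1.x

n, d, k = 100000, 2, 8
X_np = np.random.rand(n, d).astype(np.float32)
labels_np = np.random.randint(0, k, size=n).astype(np.int32)

X = tf.constant(X_np)
labels = tf.constant(labels_np)
sums = tf.Variable(tf.zeros((k, d)))        # scatter_add needs a mutable Variable
scatter = tf.scatter_add(sums, labels, X)   # adds X[i] into sums[labels[i]]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(scatter)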

In fact, it was not.  It was much slower.  This was for a few reasons:

  • TF makes a lot of copies.  The situation (at least at the time) was bad enough that it gives a bit of credibility to the rumor that the public TF code is crippled compared to what Google uses internally.
  • scatter_add supports arbitrary-rank tensors and indices, whereas the scikit-learn version hard-codes the fact that it is a rank-2 matrix.
  • It looks like TF uses an abstraction internally where they first record the updates to execute, and then execute them.  Since scatter_add is (or should be) entirely memory-bound, this is a very expensive overhead.

I fixed the first issue, but the other two would be relatively hard to solve.  It looks like TF will not be able to beat even moderately hand-optimized Cython code, at least on CPUs.

GPU

TensorFlow still has an ace up its sleeve: GPU offload.  This is what makes the restrictive programming model worthwhile in the first place.  And as promised, it was simply a matter of changing some flags, and I had k-means running on the GPU.

But again, it was not any faster.  The main problem was that the copy-elision I had done for the CPU version did not apply to CPU->GPU offload.  I had to change my TF code to keep the input matrix on the GPU; this is an entirely valid optimization, but would be extremely hard for a conversion tool to determine to be safe, so I feel like it's not quite fair.
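
Concretely, the change amounts to holding the data in a GPU-resident variable instead of feeding it from host memory on every step.  A minimal sketch (TF 1.x, my own code, not the actual benchmark):

import numpy as np
import tensorflow as tf  # TF 1.x

X_np = np.random.rand(1000000, 2).astype(np.float32)

with tf.device('/gpu:0'):
    X = tf.Variable(X_np)        # uploaded to the GPU once, then stays there
    result = tf.reduce_sum(X)    # stand-in for the k-means step built on X

config = tf.ConfigProto(allow_soft_placement=True)  # fall back to CPU if no GPU
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(result))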

Once I fixed that, if I cranked up the problem size a lot, I started to see speedups!  Nothing amazing (about 25% faster), and I didn't pursue this too far since the setup was starting to feel pretty synthetic to me.  But it was quite nice to see that even on a memory-bound program -- which GPUs can help with but not as much as they can help with compute-bound programs -- there was a speedup from running on the GPU.  Perhaps on other algorithms the speedup would be even greater.

Future work

Even though the end result wasn't that impressive, I think a lot was learned from this experiment:

  • Using a very-popular algorithm as a benchmark has drawbacks.  It hides a large benefit of an automated tool like this, which is to help speed up programs that have not already been hand-tuned.
  • TensorFlow is not a great fit for memory-bound computations, but it can still speed them up by using a GPU.
  • TensorFlow does not automatically make algorithms GPU-friendly.

I don't know when I'll come back to this project, mostly because I am less convinced that it will be worthwhile.  But if it does turn out that people really want their NumPy code accelerated, I'll take a look again since I think this approach is promising.

Filed under: Uncategorized 3 Comments
11 Apr 2017

Monitor crashes

I've gotten a new problem in my life: my monitor has started crashing.  To be fair, the steps that cause it are fairly esoteric (using the USB ports, then switching the video input), but the failure mode is pretty remarkable: the monitor becomes completely unresponsive.  As in, I can't switch the video mode again.  And most remarkably, I can't turn the monitor off.  The power button becomes unresponsive, and I have to power cycle the monitor by unplugging it.

And this isn't a cheap monitor either; it's a high-end Dell monitor.  As a software engineer I usually feel pretty confident in our profession, but this was certainly a reminder that even things we take for granted run software and can fail!

Filed under: Uncategorized No Comments
27 Feb 2017

Persuasiveness and selection bias

I happened to be watching the Oscars last night, and I was pretty shocked to see the mistake with the Best Picture award.  Thinking back on it, this is a bit surprising to me: many things are happening that should be more "shocking" (all the craziness in Washington) but don't seem to affect me the same way.

I think this comes down to selection bias: the internet has made it so much easier to find extreme examples that the impact of them is dulled.  In contrast, seeing something for yourself -- such as watching the Oscars mistake live -- has a realness to it that is much more impactful.  Maybe another way of putting it is that it has become much easier to cherry-pick examples now.

I thought of some other examples of this: I don't feel very persuaded when someone says to me "there was a paper that shows X", because there's probably also a paper that shows the opposite of X.  Similarly, quoting an "expert" on something doesn't mean that much to me anymore either.  Particularly when their qualification is simply "[subject] expert", but even quotes from generally-respected people don't have that much impact, since I'm sure someone else famous said the opposite.

 

Maybe this is all wishful thinking.  There's the meme that "a terrorist attack is more frightening than X even though X kills more people", which, if true, is fairly opposite to what I'm saying here.  And I don't really know how to solve the selection bias problem -- words seem to hold less value in the new internet regime where anyone can say anything they want, and it's not clear what to replace words with.  Or maybe this whole thing is just me being a bit jaded.  Either way, it will be interesting to see how society ends up adapting.

Filed under: Uncategorized No Comments
1 Feb 2017

Personal thoughts about Pyston’s outcome

I try to not read HN/Reddit too much about Pyston, since while there are certainly some smart and reasonable people on there, there also seem to be quite a few people with axes to grind (*cough cough* Python 3).  But there are some recurring themes I noticed in the comments about our announcement about Pyston's future so I wanted to try to talk about some of them.  I'm not really aiming to change anyone's mind, but since I haven't really talked through our motivations and decisions for the project, I wanted to make sure to put them out there.

Why we built a JIT

Let's go back to 2013 when we decided to do the project: CPU usage at Dropbox was an increasingly large concern.  Despite the common wisdom that "Python is IO-bound", requests to the Dropbox website were spending around 90% of their time on the webserver CPU, and we were buying racks of webservers at a worrying pace.

At a technical level, the situation was tricky, because the CPU time was spread around in many areas, with the hottest areas accounting for a small (single-digit?) percentage of the entire request.  This meant that potential solutions would have to apply to large portions of the codebase, as opposed to something like trying to Cython-ize a small number of functions.  And unfortunately, PyPy was not, and still is not, close to the level of compatibility to run a multi-million-LOC codebase like Dropbox's, especially with our heavy use of extension modules.

So, we thought (and I still believe) that Dropbox's use-case falls into a pretty wide gap in the Python-performance ecosystem, of people who want better performance but who are unable or unwilling to sacrifice the ecosystem that led them to choose Python in the first place.  Our overall strategy has been to target the gap in the market, rather than trying to compete head-to-head with existing solutions.

And yes, I was excited to have an opportunity to tackle this sort of problem.  I think I did as good a job as I could to discount that, but it's impossible to know what effect it actually had.

Why we started from scratch

Another common complaint is that we should have at least started with PyPy or CPython's codebase.

For PyPy, it would have been tricky, since Dropbox's needs are both philosophically and technically opposed to PyPy's goals.  We needed a high level of compatibility and reasonable performance gains on complex, real-world workloads.  I think this is a case that PyPy has not been able to crack, and in my opinion is why they are not enjoying higher levels of success.  If this was just a matter of investing a bit more into their platform, then yes it would have been great to just "help make PyPy work a bit better".  Unfortunately, I think their issues (lack of C extension support, performance reliability, memory usage) are baked into their architecture.  My understanding is that a "PyPy that is modified to work for Dropbox" would not look much like PyPy in the end.

For CPython, this was more of a pragmatic decision.  Our goal was always to leverage CPython as much as we could, and now in 2017 I would recklessly estimate that Pyston's codebase is 90% CPython code.  So at this point, we are clearly a CPython-based implementation.

My opinion is that it would have been very tough to start out this way.  The CPython codebase is not particularly amenable to experimentation in these fundamental areas.  And for the early stages of the project, our priority was to validate our strategies.  I think this was a good choice because our initial strategy (using LLVM to make Python fast) did not work, and we ended up switching gears to something much more successful.

But yes, along the way we did reimplement some things.  I think we did a good job of understanding that those things were not our value-add and to treat them appropriately.  I still wonder if there were ways we could have avoided more of the duplicated effort, but it's not obvious to me how we could have done so.

Issues people don't think about

It's an interesting phenomenon that people feel very comfortable having strong opinions about language performance without having much experience in the area.  I can't judge, because I was in this boat -- I thought that if web browsers made JS fast, then we could do the same thing and make Python fast.  So instead of trying to squelch the "hey they made Lua fast, that means Lua is better!" opinions, I'll try to just talk about what makes Python hard to run quickly (especially as compared to less-dynamic languages like JS or Lua).

The thing I wish people understood about Python performance is that the difficulties come from Python's extremely rich object model, not from anything about its dynamic scopes or dynamic types.  The problem is that every operation in Python will typically have multiple points at which the user can override the behavior, and these features are used, often very extensively.  Some examples are inspecting the locals of a frame after the frame has exited, mutating functions in-place, or even something as banal as overriding isinstance.  These are all things that we had to support, and are used enough that we have to support efficiently, and don't have analogs in less-dynamic languages like JS or Lua.
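
Two of those, as toy snippets (my own illustrations, not Pyston code):

import sys

# isinstance() itself is overridable, via a metaclass __instancecheck__:
class AlwaysMeta(type):
    def __instancecheck__(cls, obj):
        return True

Anything = AlwaysMeta('Anything', (object,), {})
print(isinstance(3, Anything))          # True

# A frame's locals can be inspected after the frame has exited:
def f():
    secret = 42
    return sys._getframe()

print(f().f_locals['secret'])           # 42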

On the flip side, the issues with Python compatibility are also quite different than most people understand.  Even the smartest technical approaches will have compatibility issues with codebases the size of Dropbox.  We found, for example, that there are simply too many things that will break when switching from refcounting to a tracing garbage collector, or even switching the dictionary ordering.  We ended up having to re-do our implementations of both of these to match CPython's behavior exactly.
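
Two examples of the kind of code that breaks (my own illustrations, not Dropbox code):

def append_log(path, msg):
    f = open(path, 'a')
    f.write(msg + '\n')
    # No close(): with refcounting the file is flushed and closed as soon as
    # f goes away; with a tracing GC the write can sit unflushed until some
    # arbitrary later collection.

def cache_key(params):
    # Implicitly bakes in dict iteration order: change the ordering and
    # previously-written keys silently stop matching.
    return ','.join('%s=%s' % (k, v) for k, v in params.items())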

Memory usage is also a very large problem for Python programs, especially in the web-app domain.  This is, unintuitively, driven in part by the GIL: while a multi-process approach will be conceptually similar to a multi-threaded approach, the multi-process approach uses much more memory.  This is because Python cannot easily share its memory between different processes, both for logistical reasons and for some deeper reasons stemming from reference counting.  Regardless of the exact reasons, there are many parts of Dropbox that are actually memory-capacity-bound, where the key metric is "requests per second per GB of memory".  We thought a 50% speed increase would justify a 2x memory increase, but in a memory-bound service that trade is a net loss: 1.5x the throughput at 2x the memory works out to only 0.75x requests per second per GB.  Memory usage is not something that gets talked about that often in the Python space (except for MicroPython), and would be another reason that PyPy would struggle to be competitive for Dropbox's use-case.

 

So again, this post is me trying to explain some of the decisions we made along the way, and hopefully stay away from being too defensive about it.  We certainly had our share of bad bets and schedule overruns, and if I were to do this all over again my plan would be much better the second time around.  But I do think that most of our decisions were defensible, which is why I wanted to take the time to talk about them.

Filed under: Pyston 16 Comments
17 Jan 2017

NumPy to Theano / TensorFlow: Yea or Nay?

Hey all, I'm investigating an idea and it's gotten to the point that I'd like to solicit feedback.  The idea is to use Theano or TensorFlow to accelerate existing NumPy programs.  The technical challenges here are pretty daunting, but I feel like I have a decent understanding of its feasibility (I have a prototype that I think is promising).  The other side of the equation is how valuable this would be.  The potential benefits seem very compelling (cross-op optimizations, GPU and distributed execution "for free"), and I've heard a lot of people ask for better NumPy performance.  The worrying thing, though, is that I haven't been able to find anyone willing to share their code or workflow.  Not that I'm blaming anyone, but that situation makes me worried about the demand for something like this.

So, what do you think, would this be valuable or useful?  Is it worth putting more time into this?  Or will it be just another NumPy accelerator that doesn't get used?  If you have any thoughts, or want to chime in about your experiences with NumPy performance, I'd definitely be interested to hear about it in the comments.

Filed under: Uncategorized 3 Comments
15 Jan 2017

Amazon-Walmart arbitrage

I recently ordered some junk food from Amazon, despite my wife's objections. I ordered it from an Amazon Marketplace (aka third-party) seller since that was the choice picked by Amazon for one-click ordering.

The food arrived, and the interesting thing is that it arrived in a Walmart box, with a Walmart packing slip. Evidently, someone savvy recognized that the Walmart price was lower than the Amazon price, and undercut Amazon's price using Walmart for fulfillment. I was pretty annoyed to have been caught by this, but at the same time I have to respect that they pulled this off, and that I got the food cheaper than if they hadn't done this.

Anyway, just thought that it is interesting that people are out there doing this!

Filed under: Uncategorized 1 Comment
3 Oct 2016

What does this print, #2

I meant to post more of these, but here's one for fun:

class A(object):
    def __eq__(self, rhs):
        return True

class B(object):
    def __eq__(self, rhs):
        return False

print A() in [B()]
print B() in [A()]

Maybe not quite as surprising once you see the results and think about it, but getting this wrong was the source of some strange bugs in Pyston.

Filed under: Uncategorized No Comments
28 Jul 2016

Stack vs Register bytecodes for Python

There seems to be a consensus that register bytecodes are superior to stack bytecodes.  I don't quite know how to cite "common knowledge", but a Google search for "Python register VM" or "stack vs register vm" suggests that many people believe this.  There was a comment on this blog to this effect as well.

Anyway, regardless of whether it truly is something that everyone believes or not, I thought I'd add my two cents.  Pyston uses a register bytecode for Python, and I wouldn't say it's as great as people claim.

Lifetime management for refcounting

Why?  One of the commonly-cited reasons that register bytecodes are better is that they don't need explicit push/pop instructions.  I'm not quite sure I agree that you don't need push instructions -- you still need an equivalent "load immediate into register".  But the more interesting one (at least for this blog post) is pop.

The problem is that in a reference-counted VM, we need to explicitly kill registers.  While the Python community has made great strides to support deferred destruction, there is still code (especially legacy code) that relies on immediate destruction.  In Pyston, we've found that it's not good enough to just decref a register the next time it is set: we need to decref a register the last time it is used.  This means that we had to add explicit "kill flags" to our instructions that say which registers should be killed as a result of the instruction.  In certain cases we need to add explicit "kill instructions" whose only purpose is to kill a register.

In the end it's certainly manageable.  But because we use a register bytecode, we need to add explicit lifetime management, whereas in a stack bytecode you get that for free.

 

I don't think it's a huge deal either way, because I don't think interpretation overhead is the main factor in Python performance, and a JIT can smooth over the differences anyway.  But the lifetime-management aspect was a surprise to me and I thought I'd mention it.

Filed under: Uncategorized 2 Comments
2 Jul 2016

Why is Python slow

In case you missed it, Marius recently wrote a post on the Pyston blog about our baseline JIT tier.  Our baseline JIT sits between our interpreter tier and our LLVM JIT tier, providing better speed than the interpreter tier but lower startup overhead than the LLVM tier.

There's been some discussion over on Hacker News, and the discussion turned to a commonly mentioned question: if LuaJIT can have a fast interpreter, why can't we use their ideas and make Python fast?  This is related to a number of other questions, such as "why can't Python be as fast as JavaScript or Lua", or "why don't you just run Python on a preexisting VM such as the JVM or the CLR".  Since these questions are pretty common I thought I'd try to write a blog post about it.

The fundamental issue is:

Python spends almost all of its time in the C runtime

This means that it doesn't really matter how quickly you execute the "Python" part of Python.  Another way of saying this is that Python opcodes are very complex, and the cost of executing them dwarfs the cost of dispatching them.  Another analogy I give is that executing Python is more similar to rendering HTML than it is to executing JS -- it's more of a description of what the runtime should do rather than an explicit step-by-step account of how to do it.
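
A quick way to see this is to disassemble about the smallest function you can write (my own illustration):

import dis

def add(a, b):
    return a + b

dis.dis(add)
# Roughly:
#   LOAD_FAST    a
#   LOAD_FAST    b
#   BINARY_ADD
#   RETURN_VALUE
# Dispatching these four opcodes is cheap; essentially all of the time goes
# into the C implementation of BINARY_ADD: type checks, finding the right
# __add__ / tp_as_number slot, and allocating the boxed result.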

Pyston's performance improvements come from speeding up the C code, not the Python code.  When people say "why doesn't Pyston use [insert favorite JIT technique here]", my question is whether that technique would help speed up C code.  I think this is the most fundamental misconception about Python performance: we spend our energy trying to JIT C code, not Python code.  This is also why I am not very interested in running Python on pre-existing VMs, since that will only exacerbate the problem in order to fix something that isn't really broken.

 

I think another thing to consider is that a lot of people have invested a lot of time into reducing Python interpretation overhead.  If it really was as simple as "just porting LuaJIT to Python", we would have done that by now.

I gave a talk on this recently, and you can find the slides here and a LWN writeup here (no video, unfortunately).  In the talk I gave some evidence for my argument that interpretation overhead is quite small, and some motivating examples of C-runtime slowness (such as a slow for loop that doesn't involve any Python bytecodes).

One of the questions from the audience was "are there actually any people that think that Python performance is about interpreter overhead?".  They seem to not read HN :)

 

Update: why is the Python C runtime slow?

Here's the example I gave in my talk illustrating the slowness of the C runtime.  This is a for loop written in Python, but that doesn't execute any Python bytecodes:

import itertools
sum(itertools.repeat(1.0, 100000000))

The amazing thing about this is that if you write the equivalent loop in native JS, V8 can run it 6x faster than CPython.  In the talk I mistakenly attributed this to boxing overhead, but Raymond Hettinger kindly pointed out that CPython's sum() has an optimization to avoid boxing when the summands are all floats (or ints).  So it's not boxing overhead, and it's not dispatching on tp_as_number->tp_add to figure out how to add the arguments together.

My current best explanation is that it's not so much that the C runtime is slow at any given thing it does, but it just has to do a lot.  In this itertools example, about 50% of the time is dedicated to catching floating point exceptions.  The other 50% is spent figuring out how to iterate the itertools.repeat object, and checking whether the return value is a float or not.  All of these checks are fast and well optimized, but they are done every loop iteration so they add up.  A back-of-the-envelope calculation says that CPython takes about 30 CPU cycles per iteration of the loop, which is not very many, but is proportionally much more than V8's 5.
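
The back-of-the-envelope calculation itself, assuming a roughly 3 GHz machine (my assumption, not the talk's exact hardware):

import itertools
import time

N = 100000000
t0 = time.time()
sum(itertools.repeat(1.0, N))
elapsed = time.time() - t0

# cycles per loop iteration at an assumed ~3 GHz clock
print(elapsed * 3e9 / N)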

 

I thought I'd try to respond to a couple other points that were brought up on HN (always a risky proposition):

If JS/Lua can be fast why don't the Python folks get their act together and be fast?

Python is a much, much more dynamic language than even JS.  Fully talking about that would probably take another blog post, but I would say that the increase in dynamism from JS->Python is larger than the increase going from Java->JS.  I don't know enough about Lua to compare, but it sounds closer to JS than to Java or Python.

Why don't we rewrite the C runtime in Python and then JIT it?

First of all, I think this is a good idea in that it's tackling what I think is actually the issue with Python performance.  I have my worries about it as a specific implementation plan, which is why Pyston has chosen to go a different direction.

If you're going to rewrite the runtime into another language, I don't think Python would be a very good choice.  There are just too many warts/features in the language, so even if you could somehow get rid of 100% of the dynamic overhead I don't think you'd end up ahead.

There's also the practical consideration of how much C code there is in the C runtime and how long it would take to rewrite (CPython is >400kLOC, most of which is the runtime).  And there are a ton of extension modules out there written in C that we would like to be able to run, and ideally some day be able to speed up as well.  There's certainly disagreement in the Python community about the C-extension ecosystem, but my opinion is that that is as much a part of the Python language as the syntax is (you need to support it to be considered a Python implementation).

Filed under: Pyston 7 Comments