I think a decent amount about developer productivity, and there’s a rule of thumb I like that surprises most people when they hear it: if you simplistically convert developer salaries into $/hr and computer costs into $/hr, you get a ratio of about 6,000:1 (see later for the math). In other words, it costs roughly the same amount to run 6k CPU cores as it does to hire a single developer.
Personally I find these quantities hard to reason about, since I am not typically spinning up thousands of cores or hiring developers. But we can reformulate the ratio to get the title of this post: one second of developer time is worth roughly 1.7 hours of CPU time, which I round down to one to keep the numbers nice.
This framing has a lot more kick for me: an hour of CPU time sounds like a lot, and a second of your own time sounds like very little, so seeing them equated this way is striking.
This is even more powerful when you consider that developer salaries are going up over time while compute costs (per unit of work) are going down. When I first noticed this relationship, the value was pretty close to one CPU-hour per developer-second; now it is considerably higher.
Developer productivity
To me this implies that we should be willing to explore vastly more CPU-intensive developer tools. There’s the usual incentive problem that an individual developer is unlikely to pay anything close to the full value (no one would spend 100% of their salary to double their output), but their employer might consider it.
There’s some trickiness because the situations where we could plausibly save a second of time generally have tight latency requirements, so we can’t wait for even a 32-core computer to grind for two minutes. We could imagine a fleet of 10k CPUs that takes a third of a second to save you one second (Google search comes to mind), or nightly computations that save you a few minutes the next day.
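To make the budget concrete, here is a rough back-of-envelope sketch in Python using the round numbers from the appendix below; the `cpu_cost` helper and the specific scenarios are purely illustrative, not a real design.

```python
# Rough budget check for the hypothetical scenarios above, using the appendix numbers.

CPU_COST_PER_HOUR = 0.016          # $/CPU-hour (see appendix)
DEV_COST_PER_SECOND = 100 / 3600   # ~$0.028/s at $100/hr

def cpu_cost(n_cpus: int, seconds: float) -> float:
    """Dollar cost of running n_cpus for the given wall-clock seconds."""
    return n_cpus * (seconds / 3600) * CPU_COST_PER_HOUR

# A fleet of 10k CPUs running for 1/3 of a second to save one developer-second:
fleet = cpu_cost(10_000, 1 / 3)            # ~$0.015 of compute
saved = 1 * DEV_COST_PER_SECOND            # ~$0.028 of developer time saved

# A 32-core machine grinding for two minutes also fits within about one
# developer-second of value, but the latency makes it useless interactively.
background = cpu_cost(32, 120)             # ~$0.017 of compute

print(f"10k-CPU burst: ${fleet:.3f} vs ${saved:.3f} saved")
print(f"32 cores x 2 min: ${background:.3f}")
```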
I’m not saying that this is easy — it would probably need to save a noticeable amount of time to be worth the complexity. But I think there’s a whole unexplored field of ideas here and I hope this post is thought-provoking.
Appendix: math
These are all round numbers because I’m not claiming to be that precise, and the ratio is considerably greater than the headline requires. You can disagree with my numbers and think it’s only worth thirty minutes of CPU time to save a second, or consider different economies with significantly different ratios, but we’re still talking about a ratio orders of magnitude larger than what current tooling implicitly assumes.
For developer costs, I used a yearly cost of $200k. We could try to make this precise by looking up salary information (one source puts the average software engineering salary in the US at roughly $100k), but a significant multiplier gets applied on top of salary to reach a fully loaded cost, and it’s fairly subjective what goes into that multiplier. $200k is nice because when you divide it by 2,000 hours a year you get a round $100/hr, or about $0.03/second. I’m not an expert on the subject, but I expect $200k is lower than the average all-in developer cost at a FAANG-like company.
For CPU costs, I looked up the price of a three-year c6i.4xlarge reserved instance, which comes out to an effective $0.26/hour. That instance has 16 vCPUs, for a final cost of about $0.016/CPU-hour.
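For concreteness, here is the whole calculation as a few lines of Python; it just restates the round numbers above, and none of it is meant to be precise.

```python
# Back-of-envelope arithmetic using the round numbers above.

DEV_COST_PER_YEAR = 200_000        # fully loaded developer cost, $/year
HOURS_PER_YEAR = 2_000             # working hours per year
CPU_COST_PER_HOUR = 0.26 / 16      # c6i.4xlarge 3-year RI: $0.26/hr across 16 vCPUs

dev_cost_per_hour = DEV_COST_PER_YEAR / HOURS_PER_YEAR              # $100/hr
dev_cost_per_second = dev_cost_per_hour / 3600                      # ~$0.028/s

ratio = dev_cost_per_hour / CPU_COST_PER_HOUR                       # ~6,150:1
cpu_hours_per_dev_second = dev_cost_per_second / CPU_COST_PER_HOUR  # ~1.7 CPU-hours

print(f"developer: ${dev_cost_per_hour:.0f}/hr, ${dev_cost_per_second:.3f}/s")
print(f"CPU: ${CPU_COST_PER_HOUR:.4f}/CPU-hour")
print(f"ratio: {ratio:,.0f} CPU-hours per developer-hour")
print(f"one developer-second buys {cpu_hours_per_dev_second:.1f} CPU-hours")
```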
Again, there’s a lot of subjectivity in these numbers. To do a rigorous analysis you’d want to include the cost of management, but I’m not sure whether people management is a higher multiplier than systems management, so I left it out. Three-year RIs are pretty cost-optimized, but spot instances represent another 40% savings on top of that, which may or may not be workable depending on the use case. I don’t want to create separate ratios for different geos, or for working in an office versus remotely, and so on. The whole thing is a rule of thumb, and hopefully I’ve at least convinced you that it’s a reasonable general ratio even if it’s not perfect.