There’s a cool-looking competition being held right now, called The Hackaday Prize. I originally tried to do this super-ambitious custom-SBC project (there’s no writeup yet, but you can see some photos of the PCBs here), but it’s looking difficult enough that it’s not going to happen in time. So instead I’ve decided to finally get around to building something I’ve wanted to build for a while: an FPGA raytracer.
I’ve been excited for a while about the possibility of using an FPGA as a low-level graphics card suitable for interfacing with embedded projects: I often have projects where I want richer output than an LCD display can give, but I don’t like the idea of having to shuttle the data back to a PC for display (it defeats the purpose of being embedded). I thought for a while about doing either a 2D renderer or a 3D renderer of the typical rasterizing variety, but both would be a fair amount of work to build something people already have. Why not spend that time and do something a little bit different? And so the idea was born to make it a raytracer instead.
I’m not sure how well this is going to work out; even a modest resolution of 640×480@10fps is about 3M pixels per second. That isn’t too high in itself, but with a straightforward implementation of raytracing, rendering even 1000 triangles with no lighting at this resolution would require doing three *billion* ray-triangle intersections per second. Even if we cut the pixel rate by a factor of 8 (320×240@5fps), that’s still roughly 380M ray-triangle intersections per second. We would need 8 intersection cores running at 50MHz, or 16 intersection cores at 25MHz. That seems like a fairly aggressive goal: it’s probably doable, but 320×240@5fps isn’t too impressive. But who knows, maybe I’ll be way off and it’ll be possible to fit 64 intersection cores in there at 50MHz! The problem is also highly parallelizable, so in theory the rendering performance could be improved pretty simply by moving to a larger FPGA. I’m thinking of trying out the new Artix series of FPGAs: they have a better price per logic element than the Spartans and are supposed to be faster. Plus there are some software licensing issues with trying to use larger Spartans that don’t apply to the Artix parts. I’m currently using a Spartan-6 LX16, and maybe eventually I’ll try an Artix-7 100T, which has roughly 6 times the potential rendering capacity.
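For reference, here’s the arithmetic behind those numbers as a quick Python check; the assumption that each core retires one intersection per clock cycle is mine (real cores would likely be pipelined to approximate it):

```python
def intersections_per_sec(width, height, fps, triangles):
    """Brute force: every ray gets tested against every triangle."""
    return width * height * fps * triangles

def cores_needed(rate, clock_hz):
    """Assumes one intersection per core per cycle (ceiling division)."""
    return -(-rate // clock_hz)

full = intersections_per_sec(640, 480, 10, 1000)   # 3,072,000,000 ~ 3 billion/s
small = intersections_per_sec(320, 240, 5, 1000)   # 384,000,000 ~ 380M/s

print(cores_needed(small, 50_000_000))  # 8 cores at 50MHz
print(cores_needed(small, 25_000_000))  # 16 cores at 25MHz
```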
These calculations assume we need to intersect every ray with every triangle, which I doubt anyone serious about raytracing does: I could try to implement octrees in the FPGA to reduce the number of intersection tests required. But then you get a lot more code complexity, as well as the problem of harder data parallelism (different rays will need to be intersected with different triangles). There’s the potential for a massive decrease in the number of ray-triangle intersections required (a few orders of magnitude), so it’s probably worth it if I can get it to work.
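The cheap test that makes octree culling pay off is a ray-vs-axis-aligned-box check: a ray only descends into an octree node (and the triangles inside it) if it hits the node’s bounding box. Here’s a software sketch of the standard “slab” method; the Python rendering and names are mine, and the real thing would be fixed-point hardware:

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does a ray hit an axis-aligned box?
    origin, box_min, box_max are (x, y, z) tuples; inv_dir is the
    per-component reciprocal of the ray direction, precomputed so the
    inner loop is multiply-only (multipliers map to hardware much more
    cheaply than dividers). Degenerate axis-parallel rays need extra care.
    """
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        # Parametric distances to the two slab planes on this axis
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        if t1 > t2:
            t1, t2 = t2, t1
        # Shrink the interval where the ray is inside all slabs so far
        t_near, t_far = max(t_near, t1), min(t_far, t2)
    return t_near <= t_far

# A ray along (1,1,1) hits the unit box at (1,1,1)-(2,2,2)...
print(ray_hits_aabb((0, 0, 0), (1, 1, 1), (1, 1, 1), (2, 2, 2)))  # True
# ...but misses a box shifted up the y axis.
print(ray_hits_aabb((0, 0, 0), (1, 1, 1), (1, 5, 1), (2, 6, 2)))  # False
```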
Part of the Hackaday Prize is that they’re promoting their new website, hackaday.io. I’m not quite sure how to describe it; maybe as a “project-display website”, where project-doers can talk and post about their projects and get comments and “skulls” (similar to Likes) from people browsing them. It seems like an interesting idea, but I’m not quite sure what to make of it, or how to split posts between this blog and the hackaday.io project page. I’m thinking it could be an interesting place to post project-level updates (e.g. “got the DRAM working”, “achieved this framerate”) which don’t feel quite right for this, my personal blog.
Anyway, you can see the first “project log” here, which just talks about some of the technical details of the project and has a picture of the test pattern it produces to validate the VGA output. Hopefully soon I’ll have more exciting posts about the actual raytracer implementation. And I’m still holding out hope for the SBC project I was working on, so hopefully you’ll see more about that too 😛