The code on this page is distributed under the terms of the GPL license.
WS2811 pixels are awesome. I've seen lots of great projects using them to display all sorts of cool things, including video. Most of these projects have one thing in common though: The pixels have to be carefully arranged into rows and columns to make it easy to display patterns or video on them. Personally, I'm far too lazy to do that, so I decided to write some software that could display video on randomly arranged pixels instead.
How it works
The key to making all this work is to have a software-controlled camera take pictures of the lights while different patterns are displayed, and then analyze the pictures to figure out where the lights are in the image. Originally I did this by just turning on one light at a time and taking a picture, but with 7200 lights, this would take far too long to finish. Instead, I've worked out an algorithm that lets me turn on a bunch of lights in a specific pattern, then alter the pattern for each image. With this method, a couple of hundred pictures is sufficient. Collecting the pictures generally takes about an hour.
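One way to realize the "alter the pattern for each picture" idea is to binary-encode each light's index across the picture sequence, so every light blinks out its own index. This is a sketch of that scheme under my own assumptions, not necessarily the exact patterns the real programs use:

```python
# Sketch of pattern-based light localization (an assumed scheme, not
# necessarily the one used here): picture k turns on every light whose
# index has bit k set, so a light's on/off history spells its index.

NUM_LIGHTS = 7200
NUM_PICTURES = NUM_LIGHTS.bit_length()  # 13 pictures in the noise-free ideal

def pattern_for_picture(k, num_lights=NUM_LIGHTS):
    """Return the on/off state of every light for picture number k."""
    return [(i >> k) & 1 == 1 for i in range(num_lights)]

def decode_index(per_picture_states):
    """Recover a light's index from whether it appeared lit in each picture."""
    index = 0
    for k, lit in enumerate(per_picture_states):
        if lit:
            index |= 1 << k
    return index
```

In the ideal case 13 pictures would identify all 7200 lights; in practice camera noise and overlapping lights force redundancy (repeated or inverted patterns), which is presumably why a couple of hundred pictures are needed.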
Once the pictures are collected and analyzed, another program decodes a video, samples it at the points corresponding to the positions of the lights, and stores the result in a data file. Finally, a playback program plays the data file while also playing some music to put everything together.
The Hardware
The hardware has evolved quite a bit over the years. The only component that hasn't changed is the Raspberry Pi that I use to control everything.
The software is relatively simple, but it took some experimentation to get working. The first program I use is called "acquire", which runs on the Raspberry Pi. It is in charge of turning on a set of lights, then executing a command which takes a picture. Since I don't actually store the pictures on the Raspberry Pi, the command uses ssh to log into another computer. From there it uses wget to fetch an image from a webcam program running on my phone.
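The capture chain described above could be wired up like this. The hostname, webcam URL, and output path below are placeholders I made up for illustration, not the actual setup:

```python
import subprocess

# Hypothetical capture command for the "acquire" step. "desktop" and
# the phone's webcam URL are assumed placeholders, not the real config.
def capture_command(picture_number,
                    capture_host="desktop",
                    camera_url="http://phone:8080/photo.jpg"):
    out = "/tmp/capture_%04d.jpg" % picture_number
    # ssh into the storage machine, which fetches the frame from the
    # phone's webcam server with wget and saves it there.
    return ["ssh", capture_host, "wget", "-q", "-O", out, camera_url]

def take_picture(picture_number):
    subprocess.run(capture_command(picture_number), check=True)
```

The point of the indirection is that the Raspberry Pi only triggers the capture; the image never touches its SD card.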
After capturing a whole bunch of pictures, I run them through the "solve" program, which figures out how much each light contributes to each pixel in the pictures taken. This program takes about 15 minutes to run on my fastest computer. Then it's just a matter of finding where the greatest concentration of contributions is, which tells us where the light is located within the image.
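Given a per-light contribution map, the last step above can be sketched as an intensity-weighted centroid. This is my own minimal version of that idea, not the actual "solve" code:

```python
def light_position(contrib):
    """Locate a light as the intensity-weighted centroid of its
    contribution map (a 2-D list of per-pixel contribution weights)."""
    total = sx = sy = 0.0
    for y, row in enumerate(contrib):
        for x, w in enumerate(row):
            total += w
            sx += w * x
            sy += w * y
    if total == 0:
        return None  # this light was never visible to the camera
    return (sx / total, sy / total)
```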
The third step is to run some video through the "video" program. For each frame, it samples the video at the location of each light. The data is stored in a very simple data file. The data is not compressed, which makes it easy to seek to a particular frame; that comes in handy when syncing playback to sound.
The download contains all the above programs, and a bunch more which can play patterns or generate videos with pretty patterns. Please be aware that there are no instructions or readme files, and the code is very sparse on comments. It shouldn't be that difficult to figure out if you know some programming, though.
The Download
If you want to see my code, it is available here: chaosdisplay2018.tar.gz
The Prequel
If you're really curious, this is what it looked like the first time I tried this.
The End
Problems? Questions? Suggestions? Mail me at firstname.lastname@example.org.
Last modified: November 30th, 2019 - Design by Monica & Fredrik Hübinette