Recurse SP2'23 #2: Vidrato
Just a quick update for today. I got to do a lot of pairing on different things, but my personal focus was on continuing the video filter from yesterday.
I also learned about https://fly.io/dist-sys/ , AKA “Gossip Glomers”, a set of distributed systems challenges similar to Protohackers, Cryptopals, etc. If I get through Protohackers, this could be a good follow-up.
Since daily Recurse blog posts will get pretty spammy, I’m hoping to explore Hugo’s features for providing separate RSS/Atom feeds for different kinds of posts, e.g. separating articles from journal entries.
Improving the webcam filter
I got to work on this with Shannon, who’s working on a cool side-scroller game in Go that you can play on her blog!
We did some refactoring on the webcam video filter I wrote about yesterday, then added a fader to adjust the delay time. It’s great to have a solid pairing buddy to help break inertia on a new feature.
After pairing, I did a lot (not enough) of cleanup on the project, added CLI options, and added additional faders to handle more parameters. In total, you can now tweak the delay time for red and for blue (as a multiple of red), plus the modulation depth and modulation speed.
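The actual Vidrato code isn’t shown here, but the per-channel delay idea can be sketched roughly like this, assuming frames are NumPy BGR arrays and a hypothetical `history` list of recent frames (modulation omitted for brevity):

```python
import numpy as np

def delayed_frame(history, delay_red, blue_mult):
    """Combine color channels pulled from different points in a frame history.

    history: list of HxWx3 BGR frames, newest last (hypothetical layout).
    delay_red: how many frames back to pull the red channel from.
    blue_mult: blue delay expressed as a multiple of the red delay.
    """
    newest = history[-1]
    red_src = history[max(0, len(history) - 1 - delay_red)]
    blue_delay = int(delay_red * blue_mult)
    blue_src = history[max(0, len(history) - 1 - blue_delay)]
    out = newest.copy()
    out[:, :, 2] = red_src[:, :, 2]   # red channel (BGR order)
    out[:, :, 0] = blue_src[:, :, 0]  # blue channel
    return out
```

Green stays live while red and blue lag behind, which is what produces the color-smearing effect.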
I haven’t set up OBS for screen recording yet, so I haven’t been able to record what the faders look like in action. In the meantime, you can just go over and check out the project, Vidrato, for yourself on GitHub!
It’s still very much a work in progress, but it’s nice to have a little tool come together.
Thoughts
I expected this to be more of a one-off task, but realizing I could add trackbars sent me down more of a rabbit hole. The addition of interactivity made me want to package it up so that others could use it, which made me want to add even more interactivity.
Switching my delay line from a FIFO queue to a ring buffer complicated the code a good bit more. I honestly felt a little self-conscious pushing the project up as-is, but I wanted to stay on track with posting here. And, like improving at writing, I want to get comfortable just showing code that I had fun with, even if it’s not what I would want to deliver professionally.
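For a sense of what the ring-buffer version of a delay line involves, here’s a minimal sketch (not Vidrato’s actual implementation), with plain values standing in for frames:

```python
class FrameRingBuffer:
    """Fixed-capacity ring buffer used as a frame delay line."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = [None] * capacity
        self.head = 0   # index where the next frame will be written
        self.count = 0  # number of frames stored so far

    def push(self, frame):
        self.buf[self.head] = frame
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def tap(self, delay):
        """Read the frame `delay` steps behind the newest, clamping while the
        buffer is still filling up."""
        delay = min(delay, self.count - 1)
        return self.buf[(self.head - 1 - delay) % self.capacity]
```

The index arithmetic around `head` is exactly the part that makes this messier than a FIFO queue, but it avoids shifting or reallocating frames on every push.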
The project also spawned some interesting questions for me, given the nature of the code. There’s a lot of threaded IO for reading video frames, reading control input, loading frames from the webcam, writing frames to the monitor, and possibly writing frames to disk.
How do I modularize this kind of code?
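One common shape for this (a sketch, not necessarily how Vidrato should do it) is to isolate each IO role in its own function that communicates over a queue, so the threading lives at the edges and the processing stays pure. The `read_frame` callable here is a hypothetical stand-in for something like a webcam read:

```python
import queue
import threading

def capture_loop(read_frame, frames, stop):
    """Pull frames from a source and hand them to the processing stage.

    read_frame: callable returning a frame or None (stands in for a
                webcam read; hypothetical).
    frames: a bounded queue.Queue shared with the consumer.
    stop: a threading.Event used to shut the loop down.

    Dropping the oldest frame when the queue is full keeps latency bounded
    instead of letting stale frames pile up.
    """
    while not stop.is_set():
        frame = read_frame()
        if frame is None:
            continue
        try:
            frames.put_nowait(frame)
        except queue.Full:
            try:
                frames.get_nowait()  # drop the oldest to make room
            except queue.Empty:
                pass
            frames.put_nowait(frame)
```

The same pattern works for the control-input and disk-writer threads, each with its own queue and stop event.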
OpenCV’s trackbar API has a feature I can use to reduce my usage of globals, but I’ll have to think a bit on how to cleanly incorporate that.
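OpenCV’s C++ `createTrackbar` accepts a `userdata` pointer for exactly this; in Python, a closure or a bound method plays the same role. A sketch of the bound-method approach (the `cv2` call is shown commented out, and the callback is invoked directly here just to illustrate):

```python
class Controls:
    """Holds fader state so trackbar callbacks don't need module globals."""

    def __init__(self):
        self.delay = 10
        self.depth = 0.0

    def on_delay(self, pos):
        self.delay = pos

    def on_depth(self, pos):
        self.depth = pos / 100.0  # trackbars are integer-valued

controls = Controls()
# In the real app, each callback would be registered like:
# cv2.createTrackbar("delay", "vidrato", controls.delay, 60, controls.on_delay)
controls.on_delay(25)  # simulate the user dragging the fader
```

The processing loop then reads `controls.delay` each frame instead of reaching for a global.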
In addition to the threading, there’s also just a lot of configuration code relative to the actual algorithmic code.
Maybe I can break this out to a stricter configuration struct of sorts?
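A dataclass is one lightweight way to do that; the field names here are hypothetical, just mirroring the parameters mentioned above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VidratoConfig:
    """Consolidated settings, kept separate from the processing code."""
    device: int = 0               # webcam index
    delay_frames: int = 10        # red-channel delay
    blue_multiple: float = 2.0    # blue delay as a multiple of red
    mod_depth: float = 0.0
    mod_speed: float = 1.0
    record_path: Optional[str] = None  # write frames to disk if set
```

CLI parsing can then construct one of these up front, and everything downstream takes a `VidratoConfig` instead of a pile of loose arguments.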
How do I test this kind of code? Even if I can break it down into smaller components and isolate the computational logic, how do I cleanly and thoroughly test data running through a delay line? How do I account for users tweaking a control during testing? How do I account for the possibility that I’m missing frames from the camera?
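One partial answer to the first question: if the delay logic is a pure function over a sequence, it can be tested with plain integers standing in for frames, no camera required. A sketch:

```python
def run_delay_line(frames, delay):
    """Pure reference model: output[i] is input[i - delay], clamped at the
    start while the line is still filling."""
    out = []
    for i in range(len(frames)):
        out.append(frames[max(0, i - delay)])
    return out

# Dropped camera frames can be simulated by deleting entries from the
# input, and a mid-stream control tweak by switching `delay` partway
# through and checking the output settles onto the new offset.
```

The user-tweaking and missing-frames cases are harder, but a pure model like this at least gives a reference to compare the real ring-buffer implementation against.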