Looking for a talented EE/embedded Linux/signal processing engineer. Email me at thing a magig at mailcyr.us

FOSS guitar karaoke (with lights)

Like an automated roadie... or a PDF for small shows
Configure your computer to the Thingamagig standard (on GitHub), download a session (also on GitHub), and press play. The backing track will play and everything will just happen - guitar tones, vocal harmonies, lighting effects, loopers, etc.

Philosophy - Why does anyone need this?

TL;DR - The SONG should drive, not the equipment.

Think about a typical guitar signal chain: Guitar → pedalboard → amp. In this chain there are dozens of knobs and a handful of switches, enabling millions of possible tonal combinations, yet paradoxically still limited to your specific equipment.

Dialing in the exact same tone twice is almost impossible, or at least a massive pain in the ass. "But what about Kemper, Helix, etc., and their preset systems?" you ask... They're a step in the right direction, but with several major drawbacks:

1a. The equipment is still the master - The ***SONG*** should be in charge. That way, it can automate tone changes, loopers (with mechanical precision), lights, effects, and effect parameters (like syncing delay time to BPM). This is the central philosophy of Thingamagig.

1b. You're still tap-dancing - Thingamagig requires zero external equipment, including pedals. (Currently, Thingamagig relies on a TC Helicon Voicelive device for vocal effects, but these will be brought into the sessions eventually.)

2. Limited presets - Unless the Kemper/Helix comes with an exact preset for your song, you'll have to spend time dialing it in and saving it to one of a limited number of preset slots. With Thingamagig, the "preset" is already in the session when you load it.

3. Cost - $1000 minimum for a Helix or Kemper.

4. Proprietary lock-in - Self-explanatory
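To make point 1a concrete, here's a minimal sketch of what "the song drives the equipment" could look like: a per-song event timeline that the playback engine dispatches. The data model, target names, and parameters here are all hypothetical illustrations, not the actual Thingamagig session format:

```python
# Hypothetical sketch of a song-driven automation timeline.
# Event targets, fields, and the session layout are invented for illustration.
from dataclasses import dataclass

@dataclass
class Event:
    beat: float   # position in the song, measured in beats
    target: str   # "guitar_tone", "looper", "delay", "lights", ...
    action: dict  # parameters for that target

BPM = 96

def beat_to_seconds(beat, bpm=BPM):
    """Convert a beat position (or duration) to seconds at the song's tempo."""
    return beat * 60.0 / bpm

# The session file carries the events; the equipment just obeys the song.
session = [
    Event(0.0,  "guitar_tone", {"preset": "clean_chorus"}),
    Event(32.0, "looper",      {"cmd": "record"}),
    Event(64.0, "looper",      {"cmd": "play"}),
    Event(64.0, "delay",       {"time_s": beat_to_seconds(0.5)}),  # eighth-note delay, synced to BPM
    Event(64.0, "lights",      {"scene": "chorus_strobe"}),
]

# A real engine would fire these against the audio clock; here we just list them.
for ev in sorted(session, key=lambda e: e.beat):
    print(f"{beat_to_seconds(ev.beat):7.2f}s  {ev.target}: {ev.action}")
```

Because delay time is computed from the song's BPM rather than dialed in on a pedal, reloading the session reproduces the exact tone every time.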

Thingamagig 2.0
"Alexa, tell Thingamagig to play Back in Black by AC/DC."

Right now, even though I've spent more than a year simplifying it, Thingamagig requires high technical aptitude to use. It requires a person to install Linux (preferably Ubuntu Studio), clone multiple GitHub repos, load a lighting config into QLC+, load a session into Ardour, and be able to navigate that interface. Further, it requires the person to be at a computer to read the screen and operate the play/stop functions (at a minimum).
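For reference, the current manual workflow looks roughly like this. The repo names and file paths below are placeholders, not the real ones - check the Thingamagig GitHub repos for the actual setup:

```shell
# Hypothetical sketch of the current manual setup on Ubuntu Studio.
# Repo URLs and file names are placeholders, not the real project layout.
sudo apt install ardour qlcplus                          # DAW + lighting controller

git clone https://github.com/thingamagig/standard.git    # machine config (assumed name)
git clone https://github.com/thingamagig/sessions.git    # per-song sessions (assumed name)

qlcplus --open standard/lighting.qxw &                   # load the lighting workspace
ardour sessions/back_in_black &                          # open the song, then press play
```

Collapsing all of this into a screenless Pi appliance is exactly what the 2.0 vision below is about.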

This is suboptimal. My ultimate vision for Thingamagig is a small, inexpensive, screenless Raspberry Pi-based device that pairs with a smart speaker. You plug in your guitar (or maybe wireless?), ask Alexa (e.g.) to play a song, and that's it. An external lighting device would be optional.

I'd like to apply to YC with this vision. If you're a talented EE/embedded Linux/signal processing engineer, please email me at thing a magig at mailcyr.us.

Gallery (click to play)