Visions and Signals

It’s been a busy summer. Visions got launched. Probably my favourite application I’ve built so far. Please have a go with it if you haven’t already.

I’ve been trying out a lot of new things to see what feels right next. I had a go at reinforcement learning: pretty fun, but prohibitively expensive and slow to take on as a main project. Even the GPUs available on Google Colab aren’t really powerful enough for quick experiments on very large datasets like the chess data I was looking at. I feel like if you can solve chess, there are probably a lot of other applications for that degree of power. It would be doable if I didn’t mind only being able to train one version of a model each day and feed back on it the next, but that’s not going to be a good way to work at the moment.

I’ve looked at a few other things based on the Visions architecture, but working on some DSP this week is feeling good. I spent a long time learning it in the past. It was my entry point to programming and development, but I’d like to think I’ve grown a lot as a developer since those days, so unsurprisingly, coming back to it now, I find I’m able to make a lot of things I would have liked to have made years ago but couldn’t work out how. We’ll see how it turns out, but I’ve got a lot of ideas sitting around from back then that could have been pretty fun if I’d known how to do the details. It’s the antialiasing of everything that makes things more mathsy. I think that’s the kind of thing that LLMs can help you with now.
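
To give a flavour of what I mean by “mathsy”, here’s a minimal sketch of one common antialiasing trick, a polyBLEP sawtooth, in Python/NumPy. It’s just an illustration of the idea, not code from the project, and the frequency, sample rate and buffer length are placeholder values.

```python
# Minimal polyBLEP sawtooth sketch: a naive saw plus a polynomial
# correction around each phase wrap to tame the aliasing.
# Illustrative only; parameters below are placeholders.
import numpy as np

def poly_blep(t, dt):
    """Band-limited step correction near a discontinuity.
    t is the phase in [0, 1), dt is the phase increment per sample."""
    if t < dt:                      # just after the wrap
        x = t / dt
        return x + x - x * x - 1.0
    if t > 1.0 - dt:                # just before the wrap
        x = (t - 1.0) / dt
        return x * x + x + x + 1.0
    return 0.0

def saw(freq, sr=48000, n=48000):
    dt = freq / sr                  # phase increment per sample
    phase = 0.0
    out = np.zeros(n)
    for i in range(n):
        out[i] = 2.0 * phase - 1.0 - poly_blep(phase, dt)  # naive saw minus correction
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out

signal = saw(440.0)  # noticeably cleaner than a naive saw at high frequencies
```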

Synths are fun because you can launch them a piece at a time. You can’t really do that with websites; I mean, not at the scale I’m doing it at. I love the idea of developing some new synthesis techniques and slowly making them available to Ableton users, or something like that. I realised that if I’m going to work on another music project, it needs to be something with relatively broad appeal, and who doesn’t love playing with surprising new synth tones in Ableton? Don’t answer that.

I find it pretty draining being between projects. Thankfully I think I’ve found a new one. We’ll see how we get on.

May ’24 Update Pt. 2

So I’ve not been doing as much music stuff for a while now. I’ve been applying for jobs, and it’s been a great opportunity to upskill on machine learning techniques. They’re obviously really powerful technologies with loads of exciting use cases, but they can seem a bit daunting to self-learn.

It seemed like every start-up position I could find was with a company that wanted to use machine learning in one way or another, so this seemed like the right moment to sink my teeth into it. I started with the MIT Intro to Deep Learning course on YouTube, which is fantastic.

That’s a lie actually; I started with the fantastic FluCoMa package for Max. There’s a whole suite of tools in there for using machine learning for music within MaxMSP, and if you’re artistically inclined it’s a really great way to get an intuition for what a lot of the basic machine learning algorithms actually do. I had a lot of fun with that. Here are the courses I followed on Music Hackspace. The instructor, Ted Moore, is great and guides you through a bunch of little projects. I ended up building a classifier tool which translated my beatboxing into drum machine sounds in real time. I’d quite like to use a looper and do a gig with it, but that hasn’t happened yet.
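
If you’re curious what that classifier looks like outside Max, the same idea roughly translates to Python: extract a feature vector from each short slice of audio and fit a small classifier on labelled examples. This is a hypothetical sketch using librosa and scikit-learn rather than FluCoMa, and the file names and labels are placeholders.

```python
# Rough sketch of the beatbox-to-drums idea outside Max/FluCoMa:
# MFCC features per audio slice, small neural-net classifier on top.
# File paths and labels are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def features(path):
    y, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # average over time: one 13-dim vector per slice

# Labelled training slices: short recordings of each beatbox sound.
examples = [("kick_01.wav", "kick"), ("snare_01.wav", "snare"), ("hat_01.wav", "hat")]
X = np.array([features(path) for path, _ in examples])
y = [label for _, label in examples]

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

# At performance time, each detected onset gets sliced, classified,
# and the matching drum machine sample triggered.
print(clf.predict([features("incoming_slice.wav")]))
```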

Anyway, as I was saying: the MIT Deep Learning course. An excellent primer, but I probably got more out of it for having done some hands-on practical experiments first, in a context I understand well.

Next, I did the IBM “Machine Learning with Python” course on Coursera. It’s free, it’s high quality, and again it gives loads of hands-on examples. It’s briefer than Andrew Ng’s Deep Learning Specialization (which I’d like to do next, if I get the chance). The IBM course has loads of good visualisations in it, and it helped me develop an intuition for how some of the algorithms work at a fundamental level.
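
As an example of the kind of intuition-building I mean, a few lines of scikit-learn are enough to run k-means on some toy points and actually look at where the cluster centres end up. This is a made-up illustration, not an exercise taken from the course.

```python
# Tiny k-means example: cluster some 2-D points and inspect the centres.
# A made-up illustration of the kind of exercise that builds intuition.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two blobs of points, roughly around (0, 0) and (5, 5).
points = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)                  # should land near (0, 0) and (5, 5)
print(kmeans.labels_[:5], kmeans.labels_[-5:])  # which blob each point was assigned to
```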

Since then, I’ve got myself signed up with Google Colab, which I’d never heard of before but is fantastic. I absolutely love being able to easily connect to a £20,000 GPU and train a model on it. I’ve been up to all sorts on there and it feels like magic: downloading and fine-tuning the tiny Gemma2 and Llama3 models from Google and Meta respectively; getting started with Kaggle competitions; making my own fine-tuned models and testing out everything I’ve learned about the machine learning toolkit.
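
For anyone wondering what that looks like in a notebook, the general shape of a small LoRA fine-tune is roughly the following. This is only a sketch, not my actual notebook: the model ID, LoRA settings and the training step are placeholders, and the transformers/PEFT APIs move quickly, so check the current docs.

```python
# Rough shape of a small LoRA fine-tune on Colab (a sketch, not my notebook).
# Model ID and hyperparameters are placeholders; APIs change between versions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "google/gemma-2-2b"  # placeholder: any small causal LM that fits on the GPU
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Wrap the base model so only small low-rank adapter matrices are trained.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # a tiny fraction of the full parameter count

# From here, a transformers Trainer (or trl's SFTTrainer) runs the actual
# fine-tuning loop over a tokenised dataset; that's where the rented GPU earns its keep.
```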

Obviously some of these tools have been around for a long time, but they’ve never been so accessible. Scikit-learn is really brilliant and makes coding these applications really simple. Me being me, I’ve been looking for ways to incorporate these algorithms into the music creation process. There are a couple of really cool existing tools, like the Synplant 2 VST, which uses reinforcement learning to approximate the sounds of samples you put in using a synth voice. Amazing. I feel like we’re on the cusp of the creative process being totally revolutionised, and I’m excited for it.

I’m not sure what the right idea for me will be yet. I’m waiting for a technology to come along which opens up a use case which really resonates with me. In the meantime, I have the strong feeling that I’ve found a space that I can work in where something good is going to happen sooner rather than later.

Sequencer Update May ’24

It’s been quite a while since I posted. I haven’t stopped; I have a new addition to the family and he’s doing great. Since I last updated, I’ve finished the software side of the sequencer I’d been working on.

It’s really fun to play with and you can start to see how it is going to work from the hardware perspective. I’ve even added MIDI control of all the sliders and hooked it up to this monster MIDI controller. It starts to feel a bit like a finished instrument at this point. The next step is definitely the hardware. I’ve done a fair bit of planning for this already and have a design in my head. There are going to be a lot of potentiometers, too many for a product, so the plan is to make a prototype with all the controls included. After performing with it a few times, I’ll hopefully have a good idea of which controls are the most useful, then slim the whole thing down to just those controls before designing a version for production.

The other challenge will be taking my embedded programming skills to the next level because this is going to need to be ported from my laptop to an STM chip. The code will need to be adapted a bit but it should be doable.

I’ve started a YouTube channel to demo my projects on, and you can see the sequencer here. I mostly started the channel to demo my projects to potential clients and employers. Apparently I sound a bit stiff and need to loosen up, but it’s still been fun making the videos. I’m not much of a showman, but I’ll probably get the hang of it. It’s great having a way to demo things. I’d never seen myself as a YouTuber, but now I’ve tried it, I realise I should have done it ages ago.

Orbits now open source :)

Orbits, my stochastic drum machine, is now open source.

Ever since RNBO was announced for MaxMSP, I’ve been very excited about all the possibilities it would open up for Max users, specifically with regard to creating instruments that people could interact with in the browser.

When I first used the RNBO software, I found the affordances for exporting to the web fairly limited. If you’ve done it, you’ll know that RNBO’s web export mode creates a very simple device with an HTML slider for each parameter in your RNBO patch. Pretty cool, but not that easy to use if, for example, you have parameters that were click buttons or dials in your original patch. Also, the user experience of just vertical sliders down the page isn’t great: you’re not getting any visual feedback and it’s a bit basic.

So I did some experiments and built Orbits. Other than being fun to play with, I’m thinking of it as a progression of the default RNBO web export template. I’ve open-sourced it in the hope that people will be able to adapt the code to build some more sophisticated RNBO instruments in the browser. Support for dials, buttons, switches and sliders is built in, and there’s a system for mapping your parameters to the UI devices.

I’m currently working on a project that will make it a lot easier to create instruments in the browser using RNBO patches, even for those without coding experience. But for now, if you have a bit of Javascript experience, this should point you in the right direction for making your own instruments with a bit more pizzazz than the default web export template.

If you have any questions, please get in touch. I’d be happy to hear from you.

Tidalcycles GPT

Tidalcycles GPT

I guess it’s in our nature to want to push the limits of a new technology as soon as it’s available. As soon as GPT-4 came out, I mostly wanted to ask it about stuff it didn’t know. I had a bunch of problems with hallucinations and even emailed a professor to ask him for a copy of a paper that didn’t actually exist. So now, having the option to upload PDFs and ask questions about them is great. Ideas abound. Explain to me, like I’m five, these research papers from a wide array of disciplines that I’m interested in but have no expertise in. Exciting!

The thing I’d always wanted GPT-4 to know more about is the Tidalcycles live-coding language (plus MaxMSP / Gen~), so with custom GPTs becoming available this week, the first thing I did was make Tidalcycles GPT. First, I saved all the docs off the Tidalcycles website as PDFs and combined them into a couple of files, then uploaded them as context. The result: a version of ChatGPT that knows quite a lot about Tidalcycles. It should be at least as useful as the docs for people who want to know how to do something, but it’s also pretty interesting because it’s quite good at writing music using Tidalcycles too. It already knows a fair bit about the structure of music, so I managed to get it to write me a techno track using Tidalcycles and the code worked straight away.

So, the interesting thing for me here is the emerging capability of GPT to write music of its own. I’ve seen MusicGen and some of the other generative audio models, and they’re very impressive. But this is different: the model has a lot of freedom over what it makes. I gave it some prompts like “express the feelings of your true soul”, that kind of thing, and the results were really interesting. I’m not really aware of any other means by which ChatGPT can write music (maybe I’m missing something), so it’s cool that by having access to the Tidalcycles docs, it now can. I’m probably not going to use this to write an album, although someone could, I guess, but it gives an interesting preview of what this technology will be able to do in the future, when it has access to multiple means of making music and some experience writing it.

I’m pretty excited to see the aspects of human creativity that are unlocked when machines are able to perform a lot of the tasks that we deeply engage with creatively now. I don’t believe for a second that artists will be satisfied to just do the same things but faster. I guess we’ll find out very soon.

Sequencer1

I’ve been working on this sequencer idea for a while. The end goal is to produce a hardware version but, for now, I’m satisfied that this C++ version represents a major step in the right direction.

There are six channels of generative sequencing, all controllable by UI sliders which manipulate various sequence generation and processing algorithms. Each channel generates a pattern and has up to three output sequences that it can share this pattern between. I’ve set it up so that each output sequence triggers a different sample, but you could link these triggers to anything you like.
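
To make the structure a bit more concrete, here’s a toy sketch of that channel/output arrangement. It’s written in Python purely for readability (the real thing is C++), and the pattern generation and pattern-sharing logic here are placeholders, not the actual algorithms.

```python
# Toy sketch of the structure described above, not the real C++ code:
# each channel generates one pattern and shares it between up to three
# output sequences, each of which would trigger its own sample.
import random

class Channel:
    def __init__(self, steps=16, n_outputs=3):
        self.steps = steps
        self.n_outputs = n_outputs
        self.density = 0.5            # stand-in for one of the slider-controlled parameters

    def generate_pattern(self):
        # Placeholder generation algorithm: random triggers at a given density.
        return [random.random() < self.density for _ in range(self.steps)]

    def outputs(self):
        pattern = self.generate_pattern()
        # Placeholder sharing scheme: round-robin the hits across the outputs.
        seqs = [[False] * self.steps for _ in range(self.n_outputs)]
        hit = 0
        for step, on in enumerate(pattern):
            if on:
                seqs[hit % self.n_outputs][step] = True
                hit += 1
        return seqs

channels = [Channel() for _ in range(6)]
for seq in channels[0].outputs():
    print("".join("x" if s else "." for s in seq))  # each row would trigger a different sample
```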

The video gives a rundown of how it works. There’s some fun stuff I’ll explain another time, like seeding the sequence generation for one channel using the sequence from another channel. I’ve also got a groove engine to implement that I’m quite excited about; it’s all on a grid at the moment.

The plan is to play a few gigs with it and get a feel for which controls feel useful and what it still needs, then adapt it and eventually port it to a microcontroller and attach some knobs. It feels like a fun way of making drum patterns to me so hopefully some other people will agree. If you’d be interested in alpha testing it at some point or have any feature requests, please get in touch.

Orbits

So I’ve made a new thing. And you can use it right now.

ORBITS

It’s a stochastic drum machine called Orbits that I built with RNBO (the new extension of MaxMSP).

The plan was just to see if I could get a MaxMSP patch working as a standalone web page. The documentation for RNBO offers an easy way to upload your patch and get a template version of it with some HTML sliders and stuff, but it’s not very user-friendly or interesting-looking. Not to mention you still need to get it hosted. So it’s a bit of a learning curve to get from the default template version to a webpage with a musical instrument that people can actually use.
I’d not done much web development before, and this is a new technology I’m really excited about, so I thought it was time to knuckle down and learn to do this properly.
I finished this a little while ago; I’m a bit of a nervous sharer of my work, I guess. Since then I’ve been working through the Full Stack Open web development course to learn the MERN stack. I’m now planning some upgrades to allow networked collaboration and sharing of patches.

New Sequencer Idea

So, I started work proper on this sequencer idea I’ve had for ages. I was planning to get it ready in time for Pattern Club Live on April 15th, but that turned out not to be very realistic. Still, it should be pretty cool when it’s done. I’ll post some more code and stuff on here as it gets completed. ChatGPT has been pretty helpful at converting my Max patches into C++ code.

Chatty man

Ok, so since GPT4 was released I’ve been getting pretty sidetracked putting it through its paces. That’s not to say I haven’t been getting stuff done but I’ve been re-evaluating all my projects through the lens of “how might I do this differently if I had an expert level teacher with me the whole time?”

The good first – I find I’m undertaking work more thoroughly. I have a tendency to take on projects that are right on the edge of my abilities, because that’s when I’m most motivated. In the past, that’s led me to corner-cutting; I would try to make a thing without fully understanding all the parts it was made of. With ChatGPT as a helper, it’s become easier to just do everything the right way and understand each step of my working process, because there’s a place I can ask silly questions and quickly receive detailed replies, without getting a funny look or opposing opinions, even about very specific topics. For example, I’ve been working on a way to make webpages with RNBO (the new MaxMSP package), but by default it is only able to make sliders to interact with parameters of your embedded patch. With the help of ChatGPT, I was able to go through Cycling74’s website-building Javascript template line by line and understand it. I couldn’t have done that before without spending weeks finding the right person to help me. I’ll post the results on here pretty soon. (Yes, I do keep saying that; half the point of making this site was to push me to get stuff finished.)

The bad – as much as I’m enjoying using it, the ethical issues around the release of these chatbots are too much to go into here. I don’t believe at all that OpenAI are taking the responsibility that comes with their newfound power remotely seriously. I had a go at jailbreaking ChatGPT; it’s very simple, and it could then be used for all sorts of nefarious means. Even unjailbroken, it’s going to mess with people’s jobs and the way the world works, and it seems obvious that more work needed to be done to prepare for it before it was released. Even the chatbot itself will tell you how blatantly reckless OpenAI’s practices have become, once you jailbreak it.

Interesting times indeed but I really wish OpenAI could have approached this important moment with the appropriate degree of caution so the excitement around this remarkable new tech wasn’t so polluted by the fear of the damage it could cause.

Going loopy

I’ve only been doing hardware stuff for a year or so and it still blows my mind when PCBs arrive. Holding the physical object after spending so much time looking at it on a screen and thinking abstractly about it feels great, especially after working almost entirely with digital media until recently, where the finished product is usually a digital file of some description.

I started learning KiCad last year, after stumbling across the Eurorack Blocks framework by Raphael Dingé. It’s an open-source library aimed at making the design and manufacture of Eurorack modules more accessible to people with less software and electronics experience. That said, there’s still a learning curve, as manually routing the PCB and understanding the PCB manufacturing process are still necessary. But I’ve already managed to make a working Eurorack oscillator with it (with expert SMD soldering help from a friend), so ERB definitely deserves a lot of the credit. If I can do it, then anyone can, and that’s mission statement accomplished for ERB.

So this module is a performance looper. I want a module that works like a Boss RC-505, but much smaller, so I can use it for live techno. I saw Blawan using the Boss unit, so I got one, but performing with multiple bulky bits of gear didn’t really feel intuitive to me. There are other Eurorack loopers out there, but none of them is quite what I want, so I thought this would be a good project. Most of the through-hole components have arrived now, so it just needs soldering together and we’ll find out if it works. That feels highly improbable, but as far as I’m aware there’s no reason it shouldn’t.

Unless I’ve made some critical error (entirely possible), it could be up and running in a couple of weeks. At that point I’ll post some videos on YouTube, etc.