Last time (admittedly, a very long time ago!) I painted the backdrop of my church and its A/V control setup, and gave some of the motivation for wanting a cheaper, more flexible replacement. I had a proof of concept up and running, the design and architecture of which I'll go into later in this post. But I also left you with a cliffhanger: just what exactly did happen on Sunday 10th February, and what bearing did it have on the project?
Somebody had thoughtfully left a glass of water hidden right next to the AMX touchscreen. That morning, the inevitable happened; the touchscreen lost its programming after a short bath, and we were left with no way of controlling our video switchers short of manually dialling in each take, from a different room. The team stumbled through the morning service as best as they could, and I got a phone call at lunchtime: was my replacement system ready, and could I install it right now please?
The answers were no and yes, respectively. That glass of water, strangely enough, was one of the most useful things to be added to the A/V system in some time. ;)
One of the things I had agreed with the church leadership after my week of working in the Parish Centre was that I wasn't going to spend any more than I already had on components for the replacement system. I'd already invested a couple of hundred pounds all told, and knowing what happened to the last few upgrade plans we'd heard of, I wasn't going to commit more until the plan had been given both approval and budget. So when the call came, I didn't have all the necessary components to install a complete solution - I had little more than the bare proof of concept from November (though I had been developing the software in that time). Nevertheless, some control was better than no control, so I grabbed what I had and headed to church.
What I had at this point was a Raspberry Pi with the control software installed, a USB hub, and enough serial adapters to connect exactly two devices. A quick committee meeting decided that the most important things were the main switcher and one of the cameras, so those were wired up first; I made a Kanban-style board using post-its to track which devices had been moved to the new system and which were still wired in to the AMX. (Two of those post-it notes, "Please do not remove these post-its" and "thank you", are still there. That's just how our sense of humour works.)
We didn't have a control surface (the touchscreen being the single most expensive part of the system), but I did have an old netbook that was capable enough for the time being. I was also fortunate enough to have received back the router that church had "borrowed" which meant we could connect the server and client machines together reliably. With the help of several of the team, we managed to bring the first phase of the system online in just a couple of hours - finishing just as the 6pm team arrived to set up.
Enough narrative. Show me teh codez.
Because I work as a technical architect, I drew a pretty, if also pretty basic, diagram of the new system:
Essentially, the "controller" running on the server is a big interface onto all the devices in its bucket. It's worth noting that you don't interact with the devices themselves from the client; you ask the controller to prod them for you. This is partly by design and partly a consequence of choosing Pyro: because the devices themselves need access to the serial ports on the physical server, each one would need its own proxy object to be reachable over RPC. (Actually, I'm not sure quite how much work that would be.)
Having the controller as this union of all device interfaces means that it is the only object that needs to be made available through Pyro. The disadvantage (if it so proves) is that all the devices have to be physically on one server. It doesn't take much imagination to come up with a scenario in which a more distributed system is useful - for example, controlling something which doesn't have serial cabling into the server room. But that can be a future feature when it's needed. I imagine that keeping track of all possible devices on a network is one headache too far right now!
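To make that concrete, here's a sketch of the "controller as union of device interfaces" pattern. Every name in it (Controller, Switcher, Camera, take, pan, invoke) is my own invention for illustration, not the real project's API - the actual code is on GitHub.

```python
class Switcher:
    """Stands in for a serial-attached video switcher."""
    def take(self, source):
        return f"switcher: cut to input {source}"

class Camera:
    """Stands in for a serial-attached PTZ camera."""
    def pan(self, degrees):
        return f"camera: pan {degrees} degrees"

class Controller:
    """The one object exposed over RPC. Clients never touch the
    devices directly; they ask the controller to prod them."""
    def __init__(self):
        self._devices = {"switcher": Switcher(), "camera1": Camera()}

    def invoke(self, device, method, *args):
        # Look up the named device and call the named method on it.
        return getattr(self._devices[device], method)(*args)

controller = Controller()
print(controller.invoke("switcher", "take", 2))
print(controller.invoke("camera1", "pan", 45))
```

In the real system it's this single controller instance that gets registered with the Pyro daemon, so the clients only ever hold one remote proxy; the devices stay local to the server where the serial ports are.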
Another interesting feature of the system is the controller's "sequencer". It essentially lets you queue up a sequence of commands to be executed in turn, at intervals of (at the moment and somewhat arbitrarily) a second. The first use case, and the one for which it was included, is the perhaps surprising candidate of the highly desirable "turn the system on" feature: it needs to turn on one power distribution unit, then pause for a short time before turning on the next one, and so on. I have no doubt that more clever things will make their way in here as use cases in future. The ability to record and play back macros is an interesting idea that I'll certainly be looking into in the future.
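The sequencer boils down to a queue of commands drained one at a time with a pause in between. Here's a toy version of the idea, again with names of my own choosing rather than the real code's; the interval is a constructor argument so the demo can run faster than the real system's one-second gap.

```python
import time

class Sequencer:
    """Queues commands and runs them in order, pausing between each."""
    def __init__(self, interval=1.0):
        self.interval = interval
        self._queue = []

    def enqueue(self, func, *args):
        self._queue.append((func, args))

    def run(self):
        results = []
        while self._queue:
            func, args = self._queue.pop(0)
            results.append(func(*args))
            if self._queue:  # no pointless pause after the last command
                time.sleep(self.interval)
        return results

# The "turn the system on" use case: power up each distribution
# unit in turn, with a pause between them.
def power_on(unit):
    return f"PDU {unit}: on"

seq = Sequencer(interval=0.01)  # short interval so the demo runs quickly
for unit in (1, 2, 3):
    seq.enqueue(power_on, unit)
print(seq.run())  # -> ['PDU 1: on', 'PDU 2: on', 'PDU 3: on']
```

Recording a macro would then just be a matter of capturing invocations into a list and feeding that list to the sequencer later.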
The code itself is probably the least interesting part of the system. It's up at GitHub if you're interested (the project with all the UI is up separately). More interesting is what we can do now it's running, where I'm thinking of taking it, and how I intend to make sure getting from here to there doesn't accidentally destroy the universe on the way. (The phone calls would be insufferable.) More of that in part 3, which I'll hopefully not take quite so long to write as I did this one!