Feeling positive after an interesting night attending the Games Co-Op meet-up in Dublin this evening.
I went along to put Jukebox in front of as many game developers as I could to get their feedback, and to see if I could enlist anyone in trying out the pre-release system.
I was very conscious that I didn’t want to give a sales pitch – we’re not selling anything at the moment – but at the same time wanted to convey the value that we are trying to create for game developers with Jukebox. Plus, I’m the world’s worst salesman; so I figured that it was better to let the technology do the talking…
I was very glad I brought along an external speaker to amplify the output. Trying to demo a music-based app from a laptop in a noisy pub was never going to be optimal conditions; but at least people could hear that there was some music coming out when it was supposed to.
All told it was a great experience for me. I’ve demo’d tech products many times before; but this was the first time I’d demo’d something where it was my own product, rather than that of a company I was working for; so emotionally there was a lot more at stake. People seemed to respond very positively to the idea – I got some really useful and constructive feedback, and met some interesting people that I hope will be sufficiently intrigued by Jukebox to give it a go.
It’s been a long time – mainly because I’ve been very busy!
The story so far:
After several false starts, I have been heads down working with a team to develop our product, test it, get beta users for it and generally take a great concept and turn it into reality.
As such there has been lots of learning along the way! If I had been diligent (or less busy actually working on it) I would have blogged about the experience as I was going along. Starting with a small prototype system and then refactoring it and growing it to make it secure, scalable, and production ready is quite a journey, with lots of enlightening steps along the way.
So, the plan now is to begin writing up those steps, bit-by-bit; and to shift the focus of this site towards the technology, and the technology choices, that let us successfully build and deploy a complex software system.
The system? It’s called Jukebox, and may be found here: https://www.jukeboxaudio.co. We’re still working on the visual design, and expecting a make-over in the next week or so; but in the meantime I’ll begin talking about what we did to get where we are; and where we’re planning on going…
You need to think of the Start-Up process as a game – you learn the rules, and then try to play to the best of your abilities.
Going through a pre-seed pitch with the NDRC was a good introduction into the world of pitching for investment – the folks we were pitching to were ‘friendly’ – they were the same people (mostly) who had been mentoring us for the last five weeks, and we knew what they were looking for.
Today I am over in London and we are pitching to TechStars. The stakes are higher (we are asking for more money) and what we are walking into is far less well known.
The last few days have been a case of frantically polishing the prototype app ready to demo, and it is now working pretty well. We were planning to do a live demo; but this morning sense prevailed, and I realised that there were so many risk factors that it made much more sense to record a video of a walk-through. We can always show the app working live afterwards if the panel want real proof.
Today I feel like I need to write something, as I’ve assembled my first ever Deep-Learning based application, and I’m a little bit freaked out (and pleased) by the result. One of the key components to the product I’m working on is an audio track comparison mechanism, to allow the system to find similar tracks to an initial ‘seed’ track.
I’ve used a Keras-based neural network to analyze tracks and produce matching metrics, and then wrapped this in some playlist generation logic, that takes a seed track and then returns the 5 closest matches.
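As a rough sketch of that last step (this isn’t the actual Jukebox code – I’m just assuming each track has already been reduced to a fixed-length embedding vector by the network, and all the names below are illustrative):

```python
import numpy as np

def closest_tracks(seed_id, embeddings, n=5):
    """Return the ids of the n tracks whose embeddings are closest to the seed's.

    `embeddings` maps track id -> 1-D feature vector (e.g. taken from a
    late layer of a Keras model); similarity is measured by cosine distance.
    """
    seed = embeddings[seed_id]

    def cosine_dist(vec):
        # 0.0 for identical direction, up to 2.0 for opposite direction.
        return 1.0 - np.dot(seed, vec) / (np.linalg.norm(seed) * np.linalg.norm(vec))

    # Rank every other track by its distance from the seed.
    ranked = sorted(
        ((tid, cosine_dist(vec)) for tid, vec in embeddings.items() if tid != seed_id),
        key=lambda pair: pair[1],
    )
    return [tid for tid, _ in ranked[:n]]
```

The lookup itself is trivial – all of the interesting behaviour (and all of the eeriness) lives in how the network produces the embeddings, not in this ranking step.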
The whole system is still very much an initial prototype, and there are lots of improvements we will need to make; however, the initial results are eerily more accurate than I had hoped. Sometimes the matches it comes up with seem strange, but on further consideration I can see that there are notable similarities. What is strange to me is that I have no understanding of the criteria the matcher used – just that what it has come up with kinda makes sense.
As I get more familiar with machine learning I’m sure this sense of wonder will be replaced by understanding; but at the moment I am feeling a significant sense of wonder!
I feel like I’m learning some valuable, if frustrating, lessons.
I’ve made no blog updates for the last few weeks as I’ve been flat-out trying to plan and prototype the idea I had started working on in my last post.
After a lot of hard work we have a road-map, a very viable business plan, and a working prototype that shows what we wanted to do is very possible…
But the realization that despite this, it is likely to take several months (at least) to actually create a sell-able system, means that there are some lean months ahead, unless we can get funding; and even this is likely to take time.
…and not everyone on the team is prepared to batten down for the long haul.
Things are still in flux, but it’s looking as though one of our critical team members, our domain expert, is going to walk away; leaving the remainder of the team significantly disadvantaged, and probably unable to continue.
I think there are several lessons here:
Setting up a start-up takes time. It takes time to create a viable plan; it takes time to build a product; it takes time to get funding (and the later you leave it the better in many ways); and if you are a founder, it can take a long time to get any kind of income – as investors will not expect founders to draw a salary. With this in mind, you have to be prepared to tighten your belt in the short-term if you want long-term success.
Everyone on the team needs to fully understand what they are committing to; and commit to it. If you are asking people to come with you on a journey, and you are critical to the success of that journey; then you need to be clear with each other how far you are prepared to go to reach your destination. Getting off the bus half-way along will be letting everyone else down.
There will be set-backs – nothing is ever plain sailing; and members of a team need to be prepared to weather the storm and overcome problems as they arise: adapting and pivoting in response to things that are learned along the way, rather than falling at the first hurdle. There will be lots of hurdles, but if you stay focussed and flexible, a lot of these can be overcome.
Communication is critical. Our current situation partially arose because one team member went out on a limb and did something which destabilized their financial situation before we had secured any kind of funding. If they had talked to the rest of the founders before doing this we would have strongly advised that their plan was a bad idea, and we wouldn’t now be in this situation. When you have a tight co-dependency within a group of people, it is critical to discuss anything that could have an effect on that group.
So, we live and learn, and time will tell what happens; but it feels as though it’s time to start getting back to the drawing board…
After a faltering week of trying to get a prototype finished for one project, yesterday felt like a rollercoaster ride…
I got up to take an early Sunday call at 9am to discuss the next steps for the start-up idea developed at the Dublin Techstars Start-up Weekend. This is starting to grow legs, and looks like it is showing real promise. It was a quick call for everyone to touch base, with more co-ordination later in the week.
Then, an hour or so later two collaborators and myself kicked off the inaugural planning meeting for a new start-up idea, to build a product for the recruitment industry.
I am cautiously optimistic that this may go somewhere – within the team lies excellent domain knowledge and contacts within the industry; and the day was extremely productive. 6 hours of intense, heads-down planning: firstly fleshing out the high-level mechanics of what we are trying to build; followed by a couple of hours building a Lean Canvas business plan; followed by another couple of hours creating a high-level product backlog to approximate what we were going to have to do to reach MVP.
Going through this process from start to finish was highly illuminating and a great experience – we were able to really unravel the problem into well-understood and actionable steps; and also to develop a longer-term vision for where we wanted to go. It was a fabulous learning process, and by the end of the day we felt exhausted, but very happy that we had a plan in place to start moving forward.
I was also delighted to find that my kitchen counter top worked brilliantly as a horizontal whiteboard, and being able to brainstorm onto a large flat surface and then annotate the post-its with arrows and lines between them was really useful. I see a potential business idea in selling kitchen counters specifically with this purpose in mind…
Quick post today – it’s been a while, and a lot has gone on. Last weekend I attended the TechStars start-up weekend in Dublin. (http://communities.techstars.com/ireland/dublin). It was a fabulous weekend – a great chance to meet other like-minded individuals and get coaching on how to go through the process of stress-testing a start-up idea, and refining the business model. I met some great people and had a brilliant (if exhausting) time. I think that attending was definitely the best €31 I have ever spent!
Over the last week I’ve also needed to diversify my skills into shop-fitting, as my wife is in the process of opening a shop; so I’ve spent a few days helping her build out shop furniture, and getting the space spruced up a bit. It’s great to see her on her journey too – finding a suitable premises has been really difficult (commercial rent prices in Dublin are generally ridiculous), but I have a really good feeling about this place; and I know she will work her magic to make the shop amazing…
So, amongst everything else that’s been going on, I have managed to make some progress on the speculative web application I’ve been building with a dispersed group of folks across the world (to be announced more publicly once it is a little more polished). The app has four key components: data acquisition, data processing, web service and web front end. I now have a basic system up and running working end-to-end; though it is still in need of a lot of refinement. The really tough part of the problem is the data acquisition and processing system; as this has to be able to acquire and classify unstructured data from multiple sources. Going for a long bike ride in the mountains yesterday gave me head-space to think a design through, and I think I have the basic algorithm figured out… I just need to implement it, see how it works, and then try to refine it.
But the really hard problems that take time and effort to solve are the things that make programming worth doing!
So, what about that work life balance?
Over the last couple of months I’ve been working under my own direction, on my own projects; and although I have been probably putting in several more hours per day of work than I did when I was working for a company, it has felt far less stressful, and far more productive and enjoyable.
So, I was asking myself last night – how come I was happily working at 11pm and feeling lively and engaged? I realised that as I was now completely free to choose the hours I work, and the location where I worked (home, cafe, hillside etc) – then I could fit my work around my needs, and how I felt on a particular day. My overall productivity has been high, but by being in control of when and where I work, the hours of work have had far less of an impact (actually a positive impact) on my well-being.
So why can’t companies allow their employees real flexibility when working – to choose when and where they do their work?
Trust? Do employers want their employees sitting under their nose, so that they can check that they are doing the work they’re being paid for?
Co-ordination? Do employers feel that their employees need to be in a single location in order to be able to attend meetings and co-ordinate with their teams?
To me it feels as though overcoming these obstacles would result in a more relaxed, happy, and productive workforce. If a company culture is based on trust, and employees are measured based on what they achieve, rather than the hours they work; then there is no reason to need employees to be present in the office at core hours. And if meetings are kept to a minimum, and co-ordination is done using other tools and using conferencing technology; then, again, there is no real need for teams to be continuously co-located.
An inspirational weekend at the Dublin Science Gallery #hackthebrain event, working with neuroscientists, neuro-tech experts, artists and engineers to experiment at the boundaries between brain-science and art.
It was amazing to have the opportunity to work alongside experts in this field, and to be able to learn from them in putting together our technology concept: Mind Anamorphosis – an application whereby the user manipulates a virtual world using their thoughts, to rotate a 3D anamorphic sculpture.
First, the technology:
I learned this weekend that there are many different types of so-called ‘Brain Computer Interfaces’, or BCIs – i.e. systems whereby brain signals can be used to control a computer. At the most basic there are systems that measure simple vital signs such as brain alpha waves, heart-rate, galvanic skin response, or muscle nerve signals, to derive a reading about the person’s relaxation state, stress levels etc.
Beyond this there are a number of relatively easy-to-determine signals that the brain produces in certain situations. One of the easiest to access is the so-called P300 response (https://en.wikipedia.org/wiki/P300_(neuroscience)): an EEG peak, measured at a certain position on the back of the head, around 300ms after the user sees a flash at the point in their field of vision on which they are concentrating (e.g. an area of the screen they are looking at). With practice this allows people to use on-screen keyboards and controls just by looking at the screen: each screen button flashes periodically, and the system determines which button’s flash produced the biggest P300 response. Whilst this is still significantly more awkward than using a keyboard, it provides a way for someone to interact with a computer using no more than eye movements.
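The selection logic at the heart of a P300 speller is conceptually simple. As an illustrative sketch (this is not any particular toolkit’s API, and the sample rate and P300 window are assumptions): average the EEG epochs recorded after each button’s flashes, then pick the button whose average shows the strongest response around the 300ms mark:

```python
import numpy as np

def select_button(responses, sample_rate_hz=250, window_ms=(250, 400)):
    """Pick the on-screen button the user was concentrating on.

    `responses` maps button label -> averaged EEG epoch (a 1-D array of
    samples, time-locked to that button's flash).  The button whose epoch
    has the largest mean amplitude inside the P300 window (roughly
    250-400 ms after the flash) is taken as the intended one.
    """
    # Convert the window from milliseconds to sample indices.
    lo = int(window_ms[0] / 1000 * sample_rate_hz)
    hi = int(window_ms[1] / 1000 * sample_rate_hz)
    return max(responses, key=lambda button: responses[button][lo:hi].mean())
```

Real systems average over many flash repetitions and use proper classifiers rather than a raw mean, which is exactly why the interface is slow compared to a keyboard.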
A similar technology (at least to me as a lay-observer) is something called SSVEP (https://en.wikipedia.org/wiki/Steady_state_visually_evoked_potential) whereby different frequency flashes are used to signify different commands; and then by looking for the flashing pattern using electrodes over the visual cortex it is possible to work out which command flasher (for instance an LED positioned next to the screen) the user is looking at.
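Conceptually, SSVEP detection reduces to finding which flicker frequency dominates the power spectrum of the signal recorded over the visual cortex. A minimal sketch (real systems use more robust spectral estimation and multiple channels; the function and parameter names here are my own illustration):

```python
import numpy as np

def detect_command(signal, sample_rate_hz, flicker_commands):
    """Guess which command flasher the user is looking at.

    `flicker_commands` maps flicker frequency (Hz) -> command name.  The
    EEG `signal` is transformed to the frequency domain, and the command
    whose flicker frequency shows the strongest spectral peak wins.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)

    def power_at(freq):
        # Magnitude at the FFT bin closest to the flicker frequency.
        return spectrum[np.argmin(np.abs(freqs - freq))]

    best_freq = max(flicker_commands, key=power_at)
    return flicker_commands[best_freq]
```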
And then comes Motor Intention Detection: (http://ieeexplore.ieee.org/abstract/document/6678728/?reload=true) which is where things move into the realm of ‘real’ brain control, where a person’s thoughts are directly read and interpreted by the computer – so, for instance, when a person thinks ‘move right’ the computer is able to interpret this as ‘move right’ and so directly respond to the thought pattern. This is achieved by placing electrodes over the motor cortex in the brain and then training the computer to recognise the patterns that occur when the user thinks about specific actions. At the start of the hackathon the experts stated that this was one of the hardest Brain Computer Interfaces to achieve; so it was a little worrying that we had chosen this as the planned method for our project…
Next, the concept – ‘Mind Anamorphosis’:
The concept we were trying to implement was an artistic one, built around using Motor Intention to give a user visual feedback of their intended or ‘thought about’ movement on the screen. Long-term, such a concept could be useful in giving stroke victims visual feedback on the movement they are trying to make, when the stroke has impaired their ability to actually carry this movement out. Research has shown that this visual feedback can significantly increase the brain’s ability to re-learn control of the body.
In the short-term, though, we used the idea of a 3D virtual anamorphic sculpture; i.e. one that appears to be an incoherent collection of objects unless viewed from a specific angle. A great example of the concept of anamorphism in real life is Salvador Dalí’s Mae West portrait (https://divisare.com/projects/304130-salvador-dali-oscar-tusquets-blanca-sala-mae-west-room-at-teatre-museu-dali); but we wanted to take this into the virtual world, and then use brain-control to move around the sculpture to try to control the position so that the true nature of the sculpture swung into view.
For the prototype we used a headset from g.tec (http://www.gtec.at/) connected to an OpenBCI (http://openbci.com/) Ganglion capture board that fed the brain signals into my laptop. The brain signals come in as a set of analogue waveforms, and so in order to understand what they meant we needed to filter the signals, and then train spatial and categorizing filters to understand that when a certain pattern of brain-waves occurred this meant a specific thing. To carry out this processing we used the OpenVIBE toolkit (http://openvibe.inria.fr/).
The output of our OpenVIBE configuration was basically a categorization of whether the user was thinking ‘left’ or ‘right’; or rather a measure of the relative probabilities of each.
The next step was to transfer this probability signal into the Unreal game engine (https://www.unrealengine.com/en-US/blog) to then control the user’s position within a scene. OpenVIBE has an output for the Virtual Reality Peripheral Network (VRPN) protocol; and so I built a VRPN controller client plug-in for Unreal in order to receive the output from OpenVIBE and use it as a controller within our virtual world.
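On the engine side, the controller logic boils down to turning the left/right probabilities into a small rotation increment each tick. The actual plug-in is C++ against the Unreal API, but the idea can be sketched in a few lines (the dead-zone parameter is my own addition to stop the camera drifting when the classifier is undecided):

```python
def rotation_step(p_left, p_right, max_deg_per_tick=2.0, dead_zone=0.1):
    """Convert the classifier's left/right probabilities into a camera
    rotation increment, in degrees, for one engine tick.

    A bias of -1.0 (certain 'left') rotates fully left; +1.0 (certain
    'right') rotates fully right; anything inside the dead zone is
    treated as 'undecided' and produces no movement.
    """
    bias = p_right - p_left  # ranges from -1.0 to +1.0
    if abs(bias) < dead_zone:
        return 0.0
    return bias * max_deg_per_tick
```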
For the hackathon we used a very simple Celtic Symbol that we exploded in three dimensions so that it only looked right from one vantage point. As the user thinks left and right, they are rotated around the symbol to (hopefully) reach the correct viewing angle to be able to see it correctly. When they did it looked like this:
Learnings…
I came along to the weekend with a bare minimum of understanding of how BCI worked and what could be done with it. Over the weekend I was able to talk to experts in the field, be guided in the use and configuration of the technology for our project, and watch other teams struggle and succeed in using BCI technology in different ways; and through this I learned several things:
Current BCI technology is finicky and difficult to use: even with professional equipment and experts on-hand, teams of smart people were really struggling to get the technology to work.
It is also messy – having spent two days with a rubber cap on my head covered in electroconductive gel and carrying around a bundle of wires connected to the processing box, I can say that in its current form this is definitely not technology that people would choose to use unless they had a specific reason to do so – this is still a long way from consumer technology.
For Motor Intention control, the difficult part is getting one’s brain into the right state of mind – quiet and relaxed, so that the only signals it is generating relate to the intention to move. For a day and a half I was really struggling to train the system with my thoughts, until I was given advice on how best to think in a way that creates a clear signal for the system to work with. The remaining hurdle was getting into the required calm, semi-meditative state at a hackathon, surrounded by a ton of noise and conversation. However, by the end of the second day we had reached the point where our prototype system was configured so that it was falteringly and inaccurately responding to my thoughts; but it was definitely responding to them. It was a small step; but it was still extremely exciting (and unnerving) to see a game world moving and responding to what I was thinking.
Conclusions…
Brain Control Interfaces are conceptually incredibly exciting; but it seems to me that currently, beyond medical use and some other very niche applications, there is no sign of a ‘killer’ consumer application for this; and the technology is sufficiently imprecise and difficult to use as an actual computer input device, that unless you really had to, you would not choose to use it over more traditional computer interface devices such as keyboards or mice. As one delegate from a BCI company said to me over the weekend: if all you can do is blink your eyes, then even this is easier to use than a BCI device.
However, it feels like there are still many possibilities just over the horizon; and one of the big issues blocking advancement is that it is so difficult to break into as an enthusiast. The EEG head-sets are very expensive, and almost impossible to obtain unless you are a research lab or university. Beyond the lofty goal of true brain control of computers, there are many interesting lesser applications that could be useful, creative, or just fun; such as emotion-responsive clothing; or brainwave-controlled music and art. If the technology could become more easily available to hackers and enthusiasts for experimentation, in the way that Arduino and Raspberry Pi have opened up hacking possibilities in the area of general electronics, then I think we would see many more interesting brain-controlled ideas popping up. And within one of these there may be the beginnings of a real killer app that brings this technology to the mainstream…
Spent half the day at the #redshirtdublin Microsoft Azure event at UCD watching #ScottGuthrie talking about new features in Azure and Visual Studio. Some great features – especially Cosmos DB, the Azure Logic Apps for easily harnessing machine learning functionality; and my personal favourite – the ability to build iOS apps in Visual Studio without the need for a Mac.
Although Microsoft is definitely moving in the right direction regarding support for industry standards (such as Git and Linux), it still very much feels that to get the most out of Azure you have to wholeheartedly embrace the entire ecosystem – options for plugging in third-party tools were not mentioned.
Great to bump into some old colleagues, and hang out on the UCD campus in the fabulous sunshine.
Although I’ve been exercising by going for walks and bike rides, and doing kettlebell workouts, I think that the amount of running backwards and forwards between meetings I used to do must have had some exercise value. Spending more time focused at my desk has meant a net decrease in exercise, and I’ve put on a bit of weight.
To rectify this I’m going to increase my daily exercise regime and adjust my diet (which I need to do anyway). Trying to get a regular bike ride into my routine; but I’m not sure how good it is finishing it off with a pizza :-S
I’m really starting to enjoy the flexibility of being able to work wherever I take my laptop – especially when it’s sunny it gives me a great opportunity to get out and about and enjoy the sunshine. Sitting here in a café up in Dublin, updating this blog and getting a prototype web-site set up.
It’s been a busy week – after starting to play around with Brain Control Interfaces, a proposal we have put together for brain-controlled art has been accepted by the Dublin Science gallery for its Hack the Brain event (https://dublin.sciencegallery.com/page/hackbraindublin). So I have been spending part of the time working on a way to connect the OpenVIBE Brain Control system to the Unreal 3D engine, so we can use brain-control to control elements within the game engine. It’s slow progress, but last night I got as far as causing a virtual ball to roll around the screen under the control of OpenVIBE, so we’re getting there.
Alongside this, my main focus is learning about Machine Learning and Natural Language Processing; both of which are fascinating, and key to some ideas I have; and on looking at the possibility of joining a team focussed on consolidating an existing start-up business and trying to make it really fly. More on this when I get fully involved…
All told there are now four different projects I’m involved in, alongside the learning I’m doing. A couple of these are potentially commercially viable; and the others are great experiences which I think will bring me a lot of value through the things I learn from them; so in general I’m feeling pretty positive. I’ve still not had that light-bulb moment of thinking ‘This is it!’ around a project; but I’m not really expecting that yet – I’m feeling that getting involved in a variety of ventures, learning relevant tech, and giving myself head-space to freewheel a bit, are all pushing me in the right direction.