Work-life balance…

Start-up planning

Quick post today – it’s been a while, and a lot has gone on. Last weekend I attended the TechStars start-up weekend in Dublin (http://communities.techstars.com/ireland/dublin). It was a fabulous weekend – a great chance to meet other like-minded individuals and get coaching on how to go through the process of stress-testing a start-up idea and refining the business model. I met some great people and had a brilliant (if exhausting) time. I think that attending was definitely the best €31 I have ever spent!

Over the last week I have also needed to diversify my skills into shop-fitting, as my wife is in the process of opening a shop; so I’ve spent a few days helping her build shop furniture and getting the space spruced up a bit. It’s great to see her on her journey too – finding a suitable premises has been really difficult (commercial rent prices in Dublin are generally ridiculous), but I have a really good feeling about this place; and I know she will work her magic to make the shop amazing…

So, amongst everything else that’s been going on, I have managed to make some progress on the speculative web application I’ve been building with a dispersed group of folks across the world (to be announced more publicly once it is a little more polished). The app has four key components: data acquisition, data processing, a web service and a web front end. I now have a basic system up and running end-to-end, though it is still in need of a lot of refinement. The really tough part of the problem is the data acquisition and processing system, as this has to be able to acquire and classify unstructured data from multiple sources. Going for a long bike ride in the mountains yesterday gave me head-space to think a design through, and I think I have the basic algorithm figured out… I just need to implement it, see how it works, and then try to refine it.

But the really hard problems that take time and effort to solve are the things that make programming worth doing!

So, what about that work-life balance?

Over the last couple of months I’ve been working under my own direction, on my own projects; and although I have probably been putting in several more hours of work per day than I did when I was working for a company, it has felt far less stressful, and far more productive and enjoyable.

So, I was asking myself last night – how come I was happily working at 11pm and feeling lively and engaged? I realised that because I was now completely free to choose the hours I worked, and the location where I worked (home, cafe, hillside etc.), I could fit my work around my needs and how I felt on a particular day. My overall productivity has been high, but by being in control of when and where I work, the hours of work have had far less of an impact (actually a positive impact) on my well-being.

So why can’t companies allow their employees real flexibility when working – to choose when and where they do their work?

  • Trust? Do employers want their employees sitting under their nose, so that they can check that they are doing the work they’re being paid for?
  • Co-ordination? Do employers feel that their employees need to be in a single location in order to be able to attend meetings and co-ordinate with their teams?

To me it feels as though overcoming these obstacles would result in a more relaxed, happy, and productive workforce. If a company culture is based on trust, and employees are measured on what they achieve rather than the hours they work, then there is no reason to require employees to be present in the office during core hours. And if meetings are kept to a minimum, and co-ordination is done using other tools and conferencing technology, then, again, there is no real need for teams to be continuously co-located.

Just a thought…

An Inspiring Weekend…

Presenting Mind Anamorphosis

An inspirational weekend at the Dublin Science Gallery #hackthebrain event, working with neuroscientists, neuro-tech experts, artists and engineers to experiment at the boundaries between brain-science and art.
It was amazing to have the opportunity to work alongside experts in this field, and to be able to learn from them in putting together our technology concept: Mind Anamorphosis – an application whereby the user manipulates a virtual world using their thoughts, to rotate a 3D anamorphic sculpture.

First, the technology:

I learned this weekend that there are many different types of so-called ‘Brain Computer Interfaces’, or BCIs – i.e. systems whereby brain signals can be used to control a computer. At the most basic level there are systems that measure simple physiological signals such as alpha brain waves, heart rate, galvanic skin response, or muscle nerve signals, to derive a reading of the person’s relaxation state, stress levels, etc.
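To make that simplest level concrete, here is a minimal sketch (in Python with numpy/scipy, my own choice of tools rather than anything used at the event) of how one channel of EEG could be turned into a crude ‘relaxation’ score from its relative alpha-band power. The sampling rate and the synthetic test signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def relaxation_index(eeg_channel, fs=250):
    """Crude 'relaxation' score: fraction of EEG power in the alpha band (8-12 Hz).

    eeg_channel: 1-D array of raw samples from a single electrode.
    fs: sampling rate in Hz (250 Hz is an assumption, typical of hobbyist boards).
    """
    # Estimate the power spectral density with Welch's method.
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=fs * 2)

    # Compare alpha-band power with the overall 1-40 Hz power (ignoring DC drift
    # and high-frequency noise), so the score is roughly amplitude-independent.
    total = (freqs >= 1) & (freqs <= 40)
    alpha = (freqs >= 8) & (freqs <= 12)
    return psd[alpha].sum() / psd[total].sum()

# Quick check with 8 seconds of synthetic data: noise plus a strong 10 Hz rhythm.
fs = 250
t = np.arange(0, 8, 1 / fs)
fake_eeg = np.random.randn(t.size) + 2 * np.sin(2 * np.pi * 10 * t)
print(f"relaxation index: {relaxation_index(fake_eeg, fs):.2f}")
```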
Beyond this there are a number of relatively easy-to-detect signals that the brain produces in certain situations. One of the easiest to access is the so-called P300 response (https://en.wikipedia.org/wiki/P300_(neuroscience)), an EEG peak measured at a certain position on the back of the head around 300ms after the user has seen a flash at the point in their field of vision on which they are concentrating (e.g. an area of the screen they are looking at). With practice this can allow people to use on-screen keyboards and controls just by looking at the screen: each button is flashed periodically, and the system determines which button’s flash produced the biggest P300 response. Whilst this is significantly more awkward than using a keyboard, it provides a way for someone to interact with a computer using no more than eye movements.
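As a rough illustration of that selection step (not the algorithm any particular speller actually uses), the sketch below averages the EEG epochs recorded after each button’s flashes and picks the button whose average shows the largest peak around 300ms. Real systems use trained classifiers rather than a raw peak, and the window and sampling rate here are assumptions.

```python
import numpy as np

def pick_p300_target(epochs_by_button, fs=250):
    """Guess which flashed button the user was concentrating on.

    epochs_by_button: dict mapping a button label to an array of shape
        (n_flashes, n_samples) - EEG from one electrode, time-locked to each
        flash of that button.
    fs: sampling rate in Hz (an assumption).

    Returns the label whose averaged response has the largest peak in a window
    around 300 ms after the flash.
    """
    start, end = int(0.25 * fs), int(0.45 * fs)    # roughly 250-450 ms post-flash
    scores = {}
    for label, epochs in epochs_by_button.items():
        averaged = epochs.mean(axis=0)             # averaging flashes suppresses noise
        scores[label] = averaged[start:end].max()  # peak amplitude in the P300 window
    return max(scores, key=scores.get)
```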
A similar technology (at least to me as a lay observer) is something called SSVEP (https://en.wikipedia.org/wiki/Steady_state_visually_evoked_potential), whereby flashes at different frequencies are used to signify different commands; by looking for the corresponding flicker frequency in electrodes over the visual cortex it is possible to work out which command flasher (for instance an LED positioned next to the screen) the user is looking at.
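A hedged sketch of the underlying idea: compare the EEG power at each stimulus’s flicker frequency and pick the strongest. The example frequencies and the use of a Welch spectral estimate are my assumptions, not details from the hackathon.

```python
import numpy as np
from scipy.signal import welch

def detect_ssvep_command(occipital_eeg, flash_freqs, fs=250):
    """Work out which flickering stimulus the user is looking at.

    occipital_eeg: 1-D array of EEG from an electrode over the visual cortex.
    flash_freqs: dict of command name -> flicker frequency in Hz,
        e.g. {"left": 10.0, "right": 15.0} (example values only).
    fs: sampling rate in Hz (an assumption).
    """
    freqs, psd = welch(occipital_eeg, fs=fs, nperseg=fs * 4)

    def power_near(f):
        # Power in a narrow band around the flicker frequency. Real systems
        # often also check harmonics; omitted here for brevity.
        band = (freqs >= f - 0.5) & (freqs <= f + 0.5)
        return psd[band].sum()

    return max(flash_freqs, key=lambda cmd: power_near(flash_freqs[cmd]))
```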
And then comes Motor Intention Detection (http://ieeexplore.ieee.org/abstract/document/6678728/?reload=true), which is where things move into the realm of ‘real’ brain control, where a person’s thoughts are directly read and interpreted by the computer – so, for instance, when a person thinks ‘move right’ the computer is able to interpret this as ‘move right’ and respond directly to the thought pattern. This is achieved by placing electrodes over the motor cortex and then training the computer to recognise the patterns that occur when the user thinks about specific actions. At the start of the hackathon the experts stated that this was one of the hardest types of BCI to get working; so it was a little worrying that we had chosen this as the planned method for our project…
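Conceptually, the training step amounts to extracting features from motor-cortex EEG recorded while the user imagines each movement, and fitting a classifier to them. The sketch below (log band-power features plus linear discriminant analysis, a common textbook combination rather than the exact method our setup used) shows the shape of that process; the data arrays are placeholders you would record yourself.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bandpower_features(trials, fs=250, lo=8.0, hi=30.0):
    """Log band-power (8-30 Hz, the mu/beta range) per channel, per trial.

    trials: array of shape (n_trials, n_channels, n_samples) of motor-cortex EEG.
    """
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(np.var(filtered, axis=-1))       # shape (n_trials, n_channels)

def train_and_predict(train_trials, train_labels, new_trial, fs=250):
    """Fit a classifier on labelled imagery trials, then score one new trial.

    train_labels: 0 for imagined 'left', 1 for imagined 'right' (placeholder data
    recorded and labelled during a training session).
    Returns the estimated probability that the new trial was a 'right' thought.
    """
    clf = LinearDiscriminantAnalysis()
    clf.fit(bandpower_features(train_trials, fs), train_labels)
    return clf.predict_proba(bandpower_features(new_trial[None, ...], fs))[0, 1]
```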

Next, the concept – ‘Mind Anamorphosis’:

The concept we were trying to implement was an artistic idea built around using Motor Intention Detection to give a user visual feedback of their intended, or ‘thought about’, movement on the screen. Long-term, such a concept could be useful in giving stroke victims visual feedback on the movement they are trying to make, when the stroke has impaired their ability to actually carry this movement out. Research has shown that this kind of visual feedback can significantly improve the brain’s ability to re-learn control of the body.
In the short term, though, we used the idea of a 3D virtual anamorphic sculpture; i.e. one that appears to be an incoherent collection of objects unless viewed from a specific angle. A great example of anamorphism in real life is Salvador Dalí’s Mae West portrait (https://divisare.com/projects/304130-salvador-dali-oscar-tusquets-blanca-sala-mae-west-room-at-teatre-museu-dali); we wanted to take this into the virtual world, and then use brain control to move around the sculpture until its true nature swung into view.
For the prototype we used a headset from g.tec (http://www.gtec.at/) connected to an OpenBCI (http://openbci.com/) Ganglion capture board that fed the brain signals into my laptop. The brain signals come in as a set of analogue waveforms, so in order to make sense of them we needed to filter the signals, and then train spatial filters and a classifier to recognise that a certain pattern of brain waves meant a specific thing. To carry out this processing we used the OpenVIBE toolkit (http://openvibe.inria.fr/).
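For anyone curious what ‘training spatial filters’ means here, the sketch below is a from-scratch approximation of a Common Spatial Patterns (CSP) filter of the kind OpenVIBE trains for motor imagery. It is an illustration of the idea only, not OpenVIBE’s implementation, and the array shapes are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def train_csp(trials_a, trials_b, n_components=2):
    """Train Common Spatial Patterns (CSP) filters from two classes of trials.

    trials_a, trials_b: arrays of shape (n_trials, n_channels, n_samples),
        EEG recorded while imagining each of the two movements.
    Returns filters of shape (2 * n_components, n_channels) whose projections
    have maximally different variance between the two classes.
    """
    def mean_cov(trials):
        # Average per-trial channel covariance (channels are the variables).
        return np.mean([np.cov(t) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalised eigenvalue problem: directions where class A's share of the
    # combined variance is largest (top of the spectrum) or smallest (bottom).
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_components], order[-n_components:]])
    return vecs[:, picks].T

def csp_features(trials, filters):
    """Log-variance of the spatially filtered trials - features for a classifier."""
    projected = np.einsum("fc,ncs->nfs", filters, trials)
    return np.log(np.var(projected, axis=-1))
```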

The output of our OpenVIBE configuration was basically a categorization of whether the user was thinking ‘left’ or ‘right’; or rather a measure of the relative probabilities of each.
The next step was to transfer this probability signal into the Unreal game engine (https://www.unrealengine.com/en-US/blog) to control the user’s position within a scene. OpenVIBE has an output for the Virtual Reality Peripheral Network (VRPN) protocol; so I built a VRPN controller client plug-in for Unreal in order to receive the output from OpenVIBE and use it as a controller within our virtual world.
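The plug-in itself was written in C++ inside Unreal; purely as an illustration of what it does with the signal, here is a toy sketch (in Python, with made-up function and parameter names) of turning the incoming ‘right’ probability into a change of orbit angle around the sculpture.

```python
# Illustration only: the real bridge was a VRPN client plug-in inside Unreal (C++).
# This toy sketch shows the kind of mapping it performs: turning the classifier's
# left/right probability into an orbit around the anamorphic sculpture.

def update_view_angle(angle_deg, p_right, dt, max_speed_deg_per_s=30.0):
    """Advance the camera's orbit angle around the sculpture.

    angle_deg: current orbit angle in degrees.
    p_right:   probability (0..1) that the user is thinking 'right'.
    dt:        time since the last update, in seconds.
    """
    # Centre the probability so 0.5 means "hold still", clamp, and scale to a speed.
    drive = max(-1.0, min(1.0, (p_right - 0.5) * 2.0))
    return angle_deg + drive * max_speed_deg_per_s * dt

# e.g. a 70%-confident 'right' thought nudges the view right over a 60 fps frame:
angle = update_view_angle(angle_deg=0.0, p_right=0.7, dt=1 / 60)
```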
For the hackathon we used a very simple Celtic symbol that we exploded in three dimensions so that it only looked right from one vantage point. As the user thinks ‘left’ and ‘right’, they are rotated around the symbol to (hopefully) reach the viewing angle from which they can see it correctly. When they did, it looked like this:

Learnings…

I came along to the weekend with a bare minimum of understanding of how BCI worked and what could be done with it. Over the weekend I was able to talk to experts in the field, be guided in the use and configuration of the technology for our project, and watch other teams struggle and succeed in using BCI technology in different ways; and through this I learned several things:

  • Current BCI technology is finicky and difficult to use: even with professional equipment and experts on hand, teams of smart people were really struggling to get the technology to work.
  • It is also messy – having spent two days with a rubber cap on my head covered in electroconductive gel and carrying around a bundle of wires connected to the processing box, I can say that in its current form this is definitely not technology that people would choose to use unless they had a specific reason to do so – this is still a long way from consumer technology.
  • For Motor Intention control, the difficult part is getting one’s brain into a quiet and relaxed state, so that the only signals it is generating relate to the intention to move. For a day and a half I was really struggling to train the system with my thoughts, until I was given advice on how best to think in a way that creates a clear signal for the system to work with. The remaining hurdle was getting into the required calm and semi-meditative state at a hackathon, surrounded by a ton of noise and conversations. However, by the end of the second day we had reached the point where our prototype system was configured so that it was falteringly and inaccurately responding to my thoughts; but it was definitely responding to them. It was a small step; but it was still extremely exciting (and unnerving) to see a game world moving and responding to what I was thinking.

Conclusions…

Brain Computer Interfaces are conceptually incredibly exciting; but it seems to me that currently, beyond medical use and some other very niche applications, there is no sign of a ‘killer’ consumer application for this; and the technology is sufficiently imprecise and difficult to use as an actual computer input device that, unless you really had to, you would not choose it over more traditional interface devices such as keyboards or mice. As one delegate from a BCI company said to me over the weekend: if all you can do is blink your eyes, then even this is easier to use than a BCI device.
However, it feels like there are still many possibilities just over the horizon; and one of the big issues blocking advancement is that the field is so difficult to break into as an enthusiast. The EEG headsets are very expensive, and almost impossible to obtain unless you are a research lab or university. Beyond the lofty goal of true brain control of computers, there are many interesting lesser applications that could be useful, creative, or just fun; such as emotion-responsive clothing, or brainwave-controlled music and art. If the technology could become more easily available to hackers and enthusiasts for experimentation, in the way that Arduino and Raspberry Pi have opened up hacking possibilities in general electronics, then I think we would see many more interesting brain-controlled ideas popping up. And within one of these there may be the beginnings of a real killer app that brings this technology to the mainstream…

If you’re interested, the code produced as part of the hackathon may be found here: https://github.com/HackTheBrain/MindAnamorphosis

Strange and beautiful neuro-technology at #hackthebrain
