An Inspiring Weekend…

Presenting Mind Anamorphosis

An inspirational weekend at the Dublin Science Gallery's #hackthebrain event, working with neuroscientists, neuro-tech experts, artists and engineers to experiment at the boundary between brain science and art.
It was amazing to have the opportunity to work alongside experts in this field, and to learn from them while putting together our technology concept: Mind Anamorphosis, an application in which the user manipulates a virtual world with their thoughts in order to rotate a 3D anamorphic sculpture.

First, the technology:

I learned this weekend that there are many different types of so-called ‘Brain Computer Interfaces’, or BCIs – i.e. systems whereby brain signals can be used to control a computer. At the most basic level there are systems that measure simple vital signs such as alpha brain waves, heart rate, galvanic skin response, or muscle nerve signals, to derive a reading of the person’s relaxation state, stress levels and so on.
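To make that first category concrete, here is a minimal sketch (not from the hackathon code; the channel, sampling rate and band edges are illustrative assumptions) of how a relaxation reading might be derived from a single EEG channel by measuring relative alpha-band power:

```python
import numpy as np
from scipy.signal import welch

def relative_alpha_power(eeg_window, fs=250):
    """Fraction of 1-40 Hz EEG power that falls in the alpha band (8-12 Hz).

    eeg_window: 1-D array of samples from one occipital channel (illustrative).
    fs: sampling rate in Hz (250 Hz is typical for hobbyist capture boards).
    """
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs * 2)
    total = psd[(freqs >= 1) & (freqs <= 40)].sum()
    alpha = psd[(freqs >= 8) & (freqs <= 12)].sum()
    return alpha / total

# A rising ratio over successive windows would suggest the user is relaxing
# (e.g. eyes closed, resting); a falling ratio suggests alertness or visual load.
```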
Beyond this there are a number of relatively easy-to-detect signals that the brain produces in certain situations. One of the easiest to access is the so-called P300 response (https://en.wikipedia.org/wiki/P300_(neuroscience)): an EEG peak, measured towards the back of the head, that occurs around 300 ms after the user sees a flash at the point in their field of vision on which they are concentrating (e.g. an area of the screen they are looking at). With practice this allows people to use on-screen keyboards and controls just by looking at the screen: each button is flashed periodically, and the system determines which button’s flash produced the biggest P300 response. Whilst this is still significantly more awkward than using a keyboard, it provides a way for someone to interact with a computer using no more than eye movements.
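As an illustration of that selection step (a sketch only, with made-up array shapes rather than anything from the hackathon): after every button has flashed several times, you average the EEG epochs following each button's flashes and pick the button whose average shows the strongest response around 300 ms.

```python
import numpy as np

def pick_p300_target(epochs_per_button, fs=250):
    """Choose which on-screen button the user was attending to.

    epochs_per_button: dict mapping button id -> array of shape
        (n_flashes, n_samples), each row being EEG from one parietal
        channel (e.g. Pz) time-locked to one flash of that button.
    Returns the button whose averaged response is largest ~250-400 ms
    after its flashes, where the P300 peak is expected.
    """
    window = slice(int(0.25 * fs), int(0.40 * fs))  # 250-400 ms post-flash
    scores = {
        button: epochs.mean(axis=0)[window].mean()  # average over flashes, then over the window
        for button, epochs in epochs_per_button.items()
    }
    return max(scores, key=scores.get)
```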
A similar technology (at least to me as a lay observer) is SSVEP (https://en.wikipedia.org/wiki/Steady_state_visually_evoked_potential), whereby flashes at different frequencies signify different commands; by looking for the corresponding flicker frequency in electrodes over the visual cortex, it is possible to work out which command flasher (for instance an LED positioned next to the screen) the user is looking at.
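The detection side of SSVEP is conceptually simple. A rough sketch (illustrative frequencies and parameters, not the event's code) is to compare the signal power at each candidate flicker frequency over the visual-cortex electrodes and pick the strongest:

```python
import numpy as np
from scipy.signal import welch

def detect_ssvep_target(occipital_eeg, stimulus_freqs=(8.0, 10.0, 12.0), fs=250):
    """Guess which flickering target the user is looking at.

    occipital_eeg: array (n_channels, n_samples) from electrodes over the
        visual cortex (e.g. O1, O2).
    stimulus_freqs: flicker rates of the candidate targets, in Hz.
    Returns the stimulus frequency (and hence command) with the most power.
    """
    freqs, psd = welch(occipital_eeg, fs=fs, nperseg=fs * 2, axis=-1)
    psd = psd.mean(axis=0)  # average the spectrum across channels

    def power_at(f, bw=0.5):
        return psd[(freqs >= f - bw) & (freqs <= f + bw)].sum()

    return max(stimulus_freqs, key=power_at)
```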
And then comes Motor Intention Detection (http://ieeexplore.ieee.org/abstract/document/6678728/?reload=true), which is where things move into the realm of ‘real’ brain control: a person’s thoughts are directly read and interpreted by the computer, so that when they think ‘move right’ the computer recognises this and responds to the thought itself. This is achieved by placing electrodes over the motor cortex and training the computer to recognise the patterns that occur when the user thinks about specific actions. At the start of the hackathon the experts stated that this was one of the hardest BCI approaches to get working; so it was a little worrying that we had chosen it as the planned method for our project…
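For context on why motor intention is readable at all (and why it is hard): imagining a left- or right-hand movement suppresses the mu rhythm (roughly 8-12 Hz) over the opposite side of the motor cortex. A toy sketch of that lateralisation check, using assumed channel names and a crude comparison rather than anything from our actual pipeline:

```python
import numpy as np
from scipy.signal import welch

def mu_power(channel, fs=250):
    """Mu-band (8-12 Hz) power of one EEG channel."""
    freqs, psd = welch(channel, fs=fs, nperseg=fs * 2)
    return psd[(freqs >= 8) & (freqs <= 12)].sum()

def guess_imagined_hand(c3, c4, fs=250):
    """Crude left/right guess from motor-cortex channels C3 (left hemisphere)
    and C4 (right hemisphere). Imagining the RIGHT hand tends to suppress mu
    power over C3, and vice versa, so the quieter hemisphere hints at the hand."""
    return "right" if mu_power(c3, fs) < mu_power(c4, fs) else "left"
```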

Next, the concept – ‘Mind Anamorphosis’:

The concept we were trying to implement was an artistic one, built around using Motor Intention Detection to give a user visual feedback of their intended, ‘thought about’ movement on the screen. Long-term, such a concept could be useful in giving stroke victims visual feedback on a movement they are trying to make when the stroke has impaired their ability to actually carry it out. Research has shown that this kind of visual feedback can significantly improve the brain’s ability to re-learn control of the body.
In the short term, though, we used the idea of a 3D virtual anamorphic sculpture; i.e. one that appears to be an incoherent collection of objects unless viewed from a specific angle. A great example of anamorphism in real life is Salvador Dalí’s Mae West room (https://divisare.com/projects/304130-salvador-dali-oscar-tusquets-blanca-sala-mae-west-room-at-teatre-museu-dali); we wanted to take this into the virtual world, and then use brain control to move around the sculpture until its true form swung into view.
For the prototype we used a headset from g.tec (http://www.gtec.at/) connected to an OpenBCI (http://openbci.com/) Ganglion capture board that fed the brain signals into my laptop. The brain signals arrive as a set of analogue waveforms, so in order to interpret them we needed to filter the raw signals and then train spatial and categorising filters to recognise that a certain pattern of brain waves corresponded to a specific intention. To carry out this processing we used the OpenViBE toolkit (http://openvibe.inria.fr/).
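OpenViBE handles this training stage graphically, but the same idea can be sketched in a few lines of Python using MNE and scikit-learn. This is an illustrative stand-in for what the spatial filter (here CSP) and the categorising filter (here a linear classifier) do, not a dump of our actual configuration:

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# X: band-pass-filtered training epochs, shape (n_trials, n_channels, n_samples),
#    recorded while the user repeatedly imagined 'left' or 'right' movements.
# y: the label of each trial (0 = left, 1 = right).
def train_motor_intention_classifier(X, y):
    """Fit a spatial filter (CSP) plus a linear classifier (LDA).

    CSP finds combinations of channels whose variance best separates the two
    classes; LDA then turns those variance features into a left/right decision
    with an associated probability."""
    clf = make_pipeline(CSP(n_components=4, log=True), LinearDiscriminantAnalysis())
    clf.fit(X, y)
    return clf

# At run time, clf.predict_proba(new_epoch[None])[0] gives the relative
# probabilities of 'left' vs 'right' for a fresh window of EEG.
```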

The output of our OpenViBE configuration was basically a categorisation of whether the user was thinking ‘left’ or ‘right’ – or rather, a measure of the relative probability of each.
The next step was to transfer this probability signal into the Unreal game engine (https://www.unrealengine.com/en-US/blog) to control the user’s position within a scene. OpenViBE can output over the Virtual Reality Peripheral Network (VRPN) protocol, so I built a VRPN client plug-in for Unreal to receive the output from OpenViBE and use it as a controller within our virtual world.
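On the receiving side, the value arriving each update is essentially just a number between 0 and 1. A minimal sketch of how such a value could drive the orbit around the sculpture (game-engine and VRPN plumbing omitted; the function name, gains and dead zone here are hypothetical, not the actual Unreal plug-in code):

```python
def update_orbit_angle(angle_deg, p_right, dt, max_speed_deg_per_s=30.0, dead_zone=0.1):
    """Advance the camera's orbit around the sculpture by one frame.

    angle_deg: current orbit angle around the sculpture, in degrees.
    p_right:   classifier output in [0, 1]; values above 0.5 mean 'thinking right'.
    dt:        frame time in seconds.
    A small dead zone around 0.5 stops the view drifting while the
    classifier is undecided."""
    drive = p_right - 0.5
    if abs(drive) < dead_zone:
        drive = 0.0
    return angle_deg + 2.0 * drive * max_speed_deg_per_s * dt
```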
For the hackathon we used a very simple Celtic symbol that we exploded in three dimensions so that it only looked right from one vantage point. As the user thought ‘left’ or ‘right’, they were rotated around the symbol to (hopefully) reach the viewing angle from which it could be seen correctly. When they did, it looked like this:

Learnings…

I came along to the weekend with a bare minimum of understanding of how BCIs work and what can be done with them. Over the weekend I was able to talk to experts in the field, be guided in the use and configuration of the technology for our project, and watch other teams struggle and succeed in using BCI technology in different ways. Through this I learned several things:

  • Current BCI technology is finicky and difficult to use: even with professional equipment and experts on hand, teams of smart people were really struggling to get the technology to work.
  • It is also messy – having spent two days with a rubber cap covered in electroconductive gel on my head, carrying around a bundle of wires connected to the processing box, I can say that in its current form this is definitely not technology that people would choose to use unless they had a specific reason to do so – it is still a long way from consumer technology.
  • For Motor Intention control, the difficult part is getting one’s brain into a quiet, relaxed state, so that the only signals it generates relate to the intention to move. For a day and a half I really struggled to train the system with my thoughts, until I was given advice on how best to think in a way that creates a clear signal for the system to work with. The remaining hurdle was getting into that calm, semi-meditative state at a hackathon, surrounded by a ton of noise and conversation. By the end of the second day, though, our prototype was configured to the point where it was responding to my thoughts – falteringly and inaccurately, but definitely responding. It was a small step, but it was still extremely exciting (and unnerving) to see a game world moving in response to what I was thinking.

Conclusions…

Brain-computer interfaces are conceptually incredibly exciting; but it seems to me that currently, beyond medical use and some other very niche applications, there is no sign of a ‘killer’ consumer application. The technology is sufficiently imprecise and difficult to use as an actual computer input device that, unless you really had to, you would not choose it over more traditional interface devices such as keyboards or mice. As one delegate from a BCI company said to me over the weekend: if all you can do is blink your eyes, then even that is easier to use than a BCI device.
However, it feels like there are still many possibilities just over the horizon; and one of the big issues blocking advancement is that the field is so difficult to break into as an enthusiast. The EEG headsets are very expensive, and almost impossible to obtain unless you are a research lab or university. Beyond the lofty goal of true brain control of computers, there are many interesting lesser applications that could be useful, creative, or just fun, such as emotion-responsive clothing or brainwave-controlled music and art. If the technology became more easily available to hackers and enthusiasts for experimentation, in the way that Arduino and Raspberry Pi have opened up hacking in general electronics, then I think we would see many more interesting brain-controlled ideas popping up. And within one of these may lie the beginnings of a real killer app that brings this technology to the mainstream…

If you’re interested, the code produced as part of the hackathon may be found here: https://github.com/HackTheBrain/MindAnamorphosis

Strange and beautiful neuro-technology at #hackthebrain
