Getting Some Good Feedback

Feeling positive after an interesting night attending the Games Co-Op meet-up in Dublin this evening.

I went along to put Jukebox in front of as many game developers as I could to get their feedback, and to see if I could enlist anyone in trying out the pre-release system.

I was very conscious that I didn’t want to give a sales pitch – we’re not selling anything at the moment – but at the same time wanted to convey the value that we are trying to create for game developers with Jukebox. Plus, I’m the world’s worst salesman; so I figured that it was better to let the technology do the talking…

I was very glad I brought along an external speaker to amplify the output. Trying to demo a music-based app from a laptop in a noisy pub was never going to offer optimal conditions; but at least people could hear that there was music coming out when it was supposed to.

All told it was a great experience for me. I’ve demo’d tech products many times before; but this was the first time I’d demo’d something where it was my own product, rather than that of a company I was working for; so emotionally there was a lot more at stake. People seemed to respond very positively to the idea – I got some really useful and constructive feedback, and met some interesting people that I hope will be sufficiently intrigued by Jukebox to give it a go.

 

Chaining Function Calls in Elixir

Having come from an almost exclusively Object-Oriented and Procedural Programming background, one of the things I initially struggled with when learning the functional programming paradigm of Elixir – but later grew to love – was thinking about solving problems in terms of data-processing pipelines of functions, rather than hierarchies of objects interacting with one another.
I found myself spending a significant amount of time thinking about how to elegantly express the solution to a data processing requirement as a sequence of data transforms. Although there are always numerous ways to achieve an aim in Elixir, the language is both more rewarding and more punishing than non-functional languages like C#, Java or Python when it comes to the quality of your implementation. If you think it through and get it right, you are rewarded with a beautifully elegant and concise sequence of data transformations; but if you approach the problem from the wrong direction, or try to hack it, things rapidly become messy.
In reality, putting the extra effort in to properly think through what you are trying to achieve rapidly becomes second nature; and there have been several times over the last few months where it has given me great pleasure to arrive at an elegant solution after a bit of head-scratching and careful thinking.

Elixir’s Language Features for Chaining Processing Steps

With the aim of creating a clean and elegant sequence of processing steps, I want to look at the relative merits of the different approaches to chaining the output of one processing step to the input of the next.

The main options (from most basic upwards) are:
– A simple ‘if-else’ statement
– The ‘cond’ and ‘case’ statements
– Expression chaining using the |> operator and pattern-matching
– The ‘with’ statement

Each of these has its merits and drawbacks in different situations. My (by no means definitive) take on when to use them is as follows:

if…else (official docs)

My general thought is that, unlike in procedural/OO languages – where several nested ‘if…else’ statements can be fairly standard and not too terrible – if you find yourself using if…else as more than a single top-level selector in Elixir, there are almost certainly better ways of expressing your logic. I get the feeling that ‘if’ statements in Elixir are tolerated, rather than recommended. They are useful where there is a simple binary decision to be made, but anything with multiple interrelated decisions is likely to get messy. Whilst if…else might seem like a familiar friend to procedural developers, it doesn’t work as well in the functional paradigm. In C# or Python it might make sense to have a function with multiple ‘if’s updating some internal object state based on input data; however, data immutability means that this kind of scenario doesn’t really apply in Elixir. It often makes more sense to have different versions of an Elixir function, matching different patterns of input data, rather than using ‘if’ statements inside the function to switch logic paths.
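As a concrete (and entirely hypothetical – the module and atoms below are my own invention, not from Jukebox) sketch of that last point, compare branching inside one function with defining one clause per input pattern:

```elixir
defmodule PlayerStatus do
  # One function clause per input pattern, rather than 'if'
  # branches inside a single function body.
  def describe(:playing), do: "Music is playing"
  def describe(:paused), do: "Playback is paused"
  def describe(_other), do: "Stopped"
end
```

Each clause is tried in order until one matches, so the final catch-all clause plays the role that a trailing ‘else’ would.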
One other shortcoming is that ‘if’ doesn’t easily handle the case where the item being evaluated may be one of a number of types. This situation occurs with one of the standard Elixir patterns, whereby functions return some data on success, or {:error, message} on failure. Testing such a return value with ‘if’ is awkward, and a naive match can raise a MatchError; whereas the scenario is easily handled using the pattern matching available in the options discussed below.

Case and Cond (official docs)

Elixir’s ‘cond’ and ‘case’ statements are both useful where an expression can evaluate to a number of different values (case), or where one of several paths should be taken based on which of a number of different conditions evaluates to ‘true’ (cond).
Case can also be more useful than ‘if’ in some binary branching situations, as it allows decisions to be made when the expression being evaluated can produce results of different types. Coming back to the return-type pattern mentioned above in the ‘if’ section – case can be used to determine the next processing step when a function has more than one possible return type; for example here, where success yields just a single atom, ‘:ok’, but failure yields a tuple:


case uploadAsset(localName, targetFilename) do
  :ok ->
    IO.puts("Uploaded #{targetFilename}")
    {:ok, "#{@mediaRoot}/#{targetFilename}"}
  {:error, reason} ->
    IO.puts("Upload of #{targetFilename} failed: #{reason}")
    {:error, "Unable to upload track to filestore: #{reason}"}
end

Case is useful wherever there are multiple possible patterns to match; and it brings a big benefit compared with the equivalent in most procedural languages: a) the match patterns within the case can be complex data types, and b) matched values can then be used in the subsequent action statement. For example, take this block of code that handles the result of a transactional database update:


case Repo.transaction(multi) do
  # if success, then return the profile
  {:ok, result} -> result.profile
  {:error, :profile, changeset, %{}} ->
    EventLogger.logError("PREFERENCE_PROFILE_CREATE_ERROR", "#{changeset |> inspect}", __MODULE__)
    {:error, "Unable to create Preference Profile for device"}
  {:error, :device, changeset, %{}} ->
    EventLogger.logError("DEVICE_CREATE_ERROR", "#{changeset |> inspect}", __MODULE__)
    {:error, "Unable to create consumption device record"}
end

The transaction update can fail in multiple ways, and the case statement allows all of these patterns to be matched and then acted upon.
One disadvantage is that nested case statements can rapidly become messy and difficult to understand; and so I’ve found they work best if each clause routes to just a few lines of processing – ideally to a single expression or function call.
Case statements can be made more flexible still by using a ‘when’ clause to limit matches further – this becomes most useful when there is a drop-through default that mops up any situation where there isn’t a match.
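For illustration, here is a minimal sketch of a case with ‘when’ guards and a drop-through default (the score thresholds and atoms are invented, not taken from Jukebox):

```elixir
defmodule MatchGrader do
  # 'when' guards narrow each clause further; the final '_' clause
  # is the drop-through default that mops up everything else.
  def grade(result) do
    case result do
      {:ok, score} when score >= 0.9 -> :strong_match
      {:ok, score} when score >= 0.5 -> :weak_match
      _ -> :no_match
    end
  end
end
```

Note that the clauses are tried top to bottom, so a score of 0.95 never reaches the :weak_match clause.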

Cond provides a similar multi-path selector to case, but is useful when each path is gated on unrelated (or only loosely related) conditions. For example, take the following authentication plug from a Phoenix app: it first matches the case where a user isn’t logged in, then the case where they are logged in and have access rights, and finally drops through to the case where the user has insufficient rights for the resource:


def call(conn, opts) do
  cond do
    conn.assigns.current_user == nil ->
      conn
      |> put_flash(:error, "Please log in or sign up to access that page")
      |> redirect(to: page_path(conn, :index))
      |> halt()
    matchRequirements(conn.assigns.current_user, opts) == true ->
      conn
    true ->
      conn
      |> put_flash(:error, "You have insufficient rights to access that page")
      |> redirect(to: page_path(conn, :index))
      |> halt()
  end
end

In general I have found ‘cond’ to be less frequently useful than ‘case’ or the other approaches mentioned here; but still a very useful part of the Elixir toolkit when the need emerges, and more elegant than the multiple nested ‘if’ statements that may be necessary without it.

The |> Operator (official docs)

If there was one language feature that really makes Elixir pop it has to be the |> pipeline operator, and the programming paradigm that goes along with it. You may begin to guess that I love this feature, and I’ll try to explain why in the next few paragraphs:
So what is the |> operator? Quite simply, it is an operator that feeds the result of the expression on its left-hand side into the first parameter of the function on its right-hand side. On the surface this doesn’t seem like a big deal; however, it facilitates a programming pattern which, in conjunction with the way many of the standard enumerable and stream-handling modules are written, allows complex sequences to be expressed concisely and elegantly.
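A trivial example of the mechanics: the two expressions below are equivalent, but the piped form reads in the order the transformations actually happen:

```elixir
# Nested calls read inside-out...
String.split(String.upcase("hello elixir world"))

# ...whereas the pipe feeds each result into the first argument
# of the next function, reading left to right:
"hello elixir world"
|> String.upcase()
|> String.split()
```

Both produce the same list of upper-cased words; the difference is entirely in readability, which compounds as the chain gets longer.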
I think the primary benefit that use of the |> operator brings is it really gets you to think in terms of a data processing pipeline. If you design your functional steps properly then you can have one function pass its output to the input of the next function, etc. etc.; and what you end up with is a brilliantly concise and clear representation of the data processing steps your code is making. I had a eureka moment early on in my Elixir programming days where I had written some processing logic that spread across several functions, case statements and if branches. It wasn’t terrible code, but it didn’t exactly jump out as elegant. It was then that I began to realise the power of Elixir’s ‘Enum’ module, and the way it allows sequences of data to be operated on using a single statement.
With this in mind, I started playing around with my code, which needed to:
– iterate over a dataset, creating a ‘match score’ for each data item
– sort the results so the best matches were first
– take the top results
– separate the data items from their match scores
– load a foreign-key associated data item from the database for each data item
– extract the associated data items out into their own results list.

Using Enum and the pipe operator, I was able to refactor the code as follows:


results = getTrackMetadata()
          |> Enum.map(fn x -> { measureDistance(seed.v1, x.v1), x } end)
          |> List.keysort(0)
          |> Enum.slice(0, @numResultsNeeded)
          |> Enum.map(fn {_score, metadata} -> metadata end)
          |> Repo.preload(:track)
          |> Enum.map(fn x -> x.track end)

Which resulted in a block of code that clearly illustrated the sequence of steps that were taking place.

So, the first two conclusions I came to were that
1) The |> operator allows a pipeline to be clearly defined as an elegant sequence of data processing steps.
2) When processing sets of data, the Enum module provides a powerful set of functions for applying map, reduce and other sequence-processing operations as part of a pipeline.

However, there was a problem. Take the following sequence, which is a block of code for doing an authenticated upload to Google Cloud Storage, and then logging the result:


def upload(sourcePath, targetFilename, type) do
  Goth.Token.for_scope("https://www.googleapis.com/auth/cloud-platform")
  |> createConnection
  |> uploadFile(sourcePath, targetFilename, type)
  |> EventLogger.logResult("FILE_OP", "Upload of #{targetFilename}")
end

This looks clean and concise, but what happens when any of the intermediate functions suffers a failure? Suddenly there is a match error in the pipeline and Elixir throws an exception, bringing the pipeline to a crashing halt. This may be ok in some situations, but in most cases I would expect that it is desirable to at least register an error and handle it properly.
In order to allow the pipeline to run end-to-end in the successful case, but to propagate errors in the case of failure, I’ve found that an effective pattern is to implement two versions of each pipeline function: an error propagating one, and the standard one. In other words, create different versions of the function to handle each of the possible outputs from the previous step; and in the case of that output being an error, just propagate that error unchanged.
For example, that ‘createConnection’ function above can be implemented as:


defp createConnection({:error, message}), do: {:error, message}
defp createConnection({:ok, token}) do
  GoogleApi.Storage.V1.Connection.new(token.token)
end

 

Now, if Goth.Token.for_scope returns an error, createConnection passes the error onwards; but if it returns {:ok, token} the second version of the function is invoked and a storage token is created.
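To show the whole pattern end-to-end, here’s a self-contained sketch with stand-in functions (the real versions depend on Goth and Google Cloud Storage, so every name below is invented for illustration):

```elixir
defmodule UploadSketch do
  # Stand-in for Goth.Token.for_scope/1: succeeds or fails.
  def get_token(:good), do: {:ok, %{token: "abc123"}}
  def get_token(:bad), do: {:error, "auth failed"}

  # Error-propagating clause: pass any upstream error straight through.
  def create_connection({:error, message}), do: {:error, message}
  # Happy-path clause: build the (pretend) connection.
  def create_connection({:ok, token}), do: {:conn, token.token}

  # The same two-clause pattern for the next pipeline stage.
  def upload_file({:error, message}, _name), do: {:error, message}
  def upload_file({:conn, _token}, name), do: {:ok, "uploaded #{name}"}
end
```

With this in place, UploadSketch.get_token(:bad) |> UploadSketch.create_connection() |> UploadSketch.upload_file("track.mp3") still runs the full pipeline, but returns {:error, "auth failed"} instead of crashing with a MatchError.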

With (official docs)

So the |> operator allows a concise sequence of steps to be defined; but what if you don’t want to go to the bother of creating duplicate versions of every function in the pipe for cases where the ‘happy path’ isn’t followed? Luckily, Elixir has an answer for this in the form of the ‘with’ macro.
Simply put, ‘with’ allows you to define a sequence of processing steps in a similar way to the |> operator; but it also lets you define the match condition for the result of each step, and an error handler for the case where that match doesn’t occur. In other words, it allows you to define the happy path, and also what to do when this path can’t be taken.
For example, the following code takes a base64-encoded token, decodes it, extracts its version header, validates the header and then validates the token; before returning the valid token. If there is a failure, it returns the error as a standard {:error, message} tuple:


def unwrap(wrappedToken, expectedTokenType) do
  with { :ok, decodedToken } <- base64Decode(wrappedToken, expectedTokenType),
       {header, encryptedToken} <- extractHeader(decodedToken),
       {:ok, actualTokenType} <- getValidatedTokenType(header, expectedTokenType),
       {:ok, validToken} <- getValidatedToken(encryptedToken, actualTokenType) do 
    {:ok, validToken} 
  else 
    {:error, message} -> {:error, "Invalid token data (#{message |> inspect})"}
    errorMessage -> {:error, "Invalid token data (#{errorMessage |> inspect})"}
  end
end

 

I’ll start by saying that I’m still on the fence with ‘with’. I like that it can express some powerful processing pipelines without the additional plumbing needed by the |> operator to elegantly handle error conditions; but I find the syntax significantly less readable. That said, it also allows a data processing pipeline to be defined when the function outputs can’t be directly fed into the next stage; and so offers a little extra flexibility when compared to |>.
I’m still playing with, and exploring the merits of, both approaches; but I still think the concise elegance enabled by the |> operator is often worth the modest overhead.

Turn, Turn, Turn…

It’s been a long time – mainly because I’ve been very busy!

The story so far:

After several false starts, I have been heads down working with a team to develop our product, test it, get beta users for it and generally take a great concept and turn it into reality.

As such there has been lots of learning along the way! If I had been diligent (or less busy actually working on it) I would have blogged about the experience as I was going along. Starting with a small prototype system and then refactoring it and growing it to make it secure, scalable, and production ready is quite a journey, with lots of enlightening steps along the way.

So, the plan now is to begin writing up those steps, bit by bit; and to shift the focus of this site towards the technology and technology choices made to successfully build and deploy a complex software system.

The system? It’s called Jukebox, and may be found here: https://www.jukeboxaudio.co. We’re still working on the visual design, and expecting a make-over in the next week or so; but in the meantime I’ll begin talking about what we did to get where we are; and where we’re planning on going…

 

One pitch down, next one today…

You need to think of the Start-Up process as a game – you learn the rules, and then try to play to the best of your abilities.

Going through a pre-seed pitch with the NDRC was a good introduction into the world of pitching for investment – the folks we were pitching to were ‘friendly’ – they were the same people (mostly) who had been mentoring us for the last five weeks, and we knew what they were looking for.

Today I am over in London and we are pitching to TechStars. The stakes are higher (we are asking for more money) and what we are walking into is far less well known.

The last few days have been a case of frantically polishing the prototype app ready to demo, and it is now working pretty well. We were planning to do a live demo; but this morning sense prevailed, and I realised that there were so many risk factors that it made much more sense to record a video walk-through. We can always show the app working live afterwards if the panel want real proof.

Presentation in a couple of hours. Wish me luck!

Ghost in the Machine!

It has been a long time since my last post – mainly because I’ve been flat-out busy with exciting stuff…

The start-up I got involved with during Dublin Start-Up Weekend back in July has gained momentum, and we’ve been busy developing our business plan and prototype system; and have been taking part in the NDRC’s pre-accelerator programme (http://www.ndrc.ie/news-events/news/12-teams-selected-for-ndrcs-autumn-pre-accelerator-programme).

Today I feel like I need to write something, as I’ve assembled my first ever Deep-Learning based application, and I’m a little bit freaked out (and pleased) with the result. One of the key components to the product I’m working on is an audio track comparison mechanism, to allow the system to find similar tracks to an initial ‘seed’ track.

I’ve used a Keras-based neural network to analyze tracks and produce matching metrics, and then wrapped this in some playlist generation logic, that takes a seed track and then returns the 5 closest matches.

The whole system is still very much an initial prototype, and there are lots of improvements we will need to make; however, the initial results are eerily accurate – more so than I would have hoped. Sometimes the matches it comes up with seem strange, but on further consideration I can see that there are notable similarities. What is strange to me is that I have no understanding of the criteria the matcher used – just that what it has come up with kinda makes sense.

As I get more familiar with machine learning I’m sure this sense of wonder will be replaced by understanding; but at the moment I am feeling a significant sense of wonder!

Frustration…

I feel like I’m learning some valuable, if frustrating, lessons.

I’ve made no blog updates for the last few weeks as I’ve been flat-out trying to plan and prototype the idea I had started working on in my last post.

After a lot of hard work we have a road-map, a very viable business plan, and a working prototype that shows what we wanted to do is very possible…

But the realization that despite this, it is likely to take several months (at least) to actually create a sell-able system, means that there are some lean months ahead, unless we can get funding; and even this is likely to take time.

…and not everyone on the team is prepared to batten down for the long haul.

Things are still in flux, but it’s looking as though one of our critical team members and domain expert is going to walk away, leaving the remainder of the team significantly disadvantaged; and probably unable to continue.

I think there are several lessons here:

  1. Setting up a start-up takes time. It takes time to create a viable plan; it takes time to build a product; it takes time to get funding (and the later you leave it the better in many ways); and if you are a founder, it can take a long time to get any kind of income – as investors will not expect founders to draw a salary. With this in mind, you have to be prepared to tighten your belt in the short-term if you want long-term success.
  2. Everyone on the team needs to fully understand what they are committing to; and commit to it. If you are asking people to come with you on a journey, and you are critical to the success of that journey; then you need to be clear with each other how far you are prepared to go to reach your destination. Getting off the bus half-way along will be letting everyone else down.
  3. There will be set-backs – nothing is ever plain sailing; and members of a team need to be prepared to weather the storm and overcome problems as they arise: adapting and pivoting in response to things that are learned along the way, rather than falling at the first hurdle. There will be lots of hurdles, but if you stay focussed and flexible, a lot of these can be overcome.
  4. Communication is critical. Our current situation partially arose because one team member went out on a limb and did something that destabilized their financial situation before we had secured any kind of funding. If they had talked to the rest of the founders before doing this we would have strongly advised against their plan, and we wouldn’t now be in this situation. When there is a tight co-dependency within a group of people, it is critical to discuss anything that could affect that group.

So, we live and learn, and time will tell what happens; but it feels as though it’s time to start getting back to the drawing board…

Opportunities arise…

After a faltering week of trying to get a prototype finished for one project, yesterday felt like a rollercoaster ride…

I got up to take an early Sunday call at 9am to discuss the next steps for the start-up idea developed at the Dublin Techstars Start-up Weekend. This is starting to grow legs, and looks like it is showing real promise. It was a quick call for everyone to touch base, with more co-ordination later in the week.

Then, an hour or so later two collaborators and myself kicked off the inaugural planning meeting for a new start-up idea, to build a product for the recruitment industry.

I am cautiously optimistic that this may go somewhere – within the team lies excellent domain knowledge and contacts within the industry; and the day was extremely productive: six hours of intense, heads-down planning, first fleshing out the high-level mechanics of what we are trying to build; followed by a couple of hours building a Lean Canvas business plan; and another couple of hours creating a high-level product backlog to approximate what we would have to do to reach MVP.

Going through this process from start to finish was highly illuminating and a great experience – we were able to really unravel the problem into well-understood and actionable steps; and also to develop a longer-term vision for where we wanted to go. It was a fabulous learning process, and by the end of the day we felt exhausted, but very happy that we had a plan in place to start moving forward.

I was also delighted to find that my kitchen counter top worked brilliantly as a horizontal whiteboard, and being able to brainstorm onto a large flat surface and then annotate the post-its with arrows and lines between them was really useful. I see a potential business idea in selling kitchen counters specifically with this purpose in mind…

 

Work life balance…

Start-up planning

Quick post today – it’s been a while, and a lot has gone on. Last weekend I attended the TechStars start-up weekend in Dublin. (http://communities.techstars.com/ireland/dublin). It was a fabulous weekend – a great chance to meet other like-minded individuals and get coaching on how to go through the process of stress-testing a start-up idea, and refining the business model. I met some great people and had a brilliant (if exhausting) time. I think that attending was definitely the best €31 I have ever spent!

I have been needing to diversify my skills into working as a shop-fitter as well over the last week, as my wife is in the process of opening a shop; and so I’ve spent a few days helping her build out shop furniture, and getting the space spruced up a bit. It’s great to see her on her journey too – finding a suitable premises has been really difficult (commercial rent prices in Dublin are generally ridiculous), but I have a really good feeling about this place; and I know she will work her magic to make the shop amazing…

So, amongst everything else that’s been going on, I have managed to make some progress on the speculative web application I’ve been building with a dispersed group of folks across the world (to be announced more publicly once it is a little more polished). The app has four key components: data acquisition, data processing, web service and web front end. I now have a basic system up and running working end-to-end; though it is still in need of a lot of refinement. The really tough part of the problem is the data acquisition and processing system; as this has to be able to acquire and classify unstructured data from multiple sources. Going for a long bike ride in the mountains yesterday gave me head-space to think a design through, and I think I have the basic algorithm figured out… I just need to implement it, see how it works, and then try to refine it.

But the really hard problems that take time and effort to solve are the things that make programming worth doing!

So, what about that work life balance?

Over the last couple of months I’ve been working under my own direction, on my own projects; and although I have probably been putting in several more hours of work per day than I did when I was working for a company, it has felt far less stressful, and far more productive and enjoyable.

So, I was asking myself last night – how come I was happily working at 11pm and feeling lively and engaged? I realised that because I am now completely free to choose the hours I work and the location where I work (home, cafe, hillside etc.), I can fit my work around my needs and how I feel on a particular day. My overall productivity has been high, but by being in control of when and where I work, the hours have had far less of an impact (actually a positive impact) on my well-being.

So why can’t companies allow their employees real flexibility when working – to choose when and where they do their work?

  • Trust? Do employers want their employees sitting under their nose, so that they can check that they are doing the work they’re being paid for?
  • Co-ordination? Do employers feel that their employees need to be in a single location in order to be able to attend meetings and co-ordinate with their teams?

To me it feels as though overcoming these obstacles would result in a more relaxed, happy, and productive workforce. If a company culture is based on trust, and employees are measured based on what they achieve, rather than the hours they work; then there is no reason to need employees to be present in the office at core hours. And if meetings are kept to a minimum, and co-ordination is done using other tools and using conferencing technology; then, again, there is no real need for teams to be continuously co-located.

Just a thought…

An Inspiring Weekend…

Presenting Mind Anamorphism

An inspirational weekend at the Dublin Science Gallery #hackthebrain event, working with neuroscientists, neuro-tech experts, artists and engineers to experiment at the boundaries between brain-science and art.
It was amazing to have the opportunity to work alongside experts in this field, and to be able to learn from them in putting together our technology concept: Mind Anamorphosis – an application whereby the user manipulates a virtual world using their thoughts, to rotate a 3D anamorphic sculpture.

First, the technology:

I learned this weekend that there are many different types of so-called ‘Brain Computer Interfaces’, or BCIs – i.e. systems whereby brain signals can be used to control a computer. At the most basic there are systems that measure simple vital signs such as brain alpha waves, heart-rate, galvanic skin response, or muscle nerve signals, to derive a reading about the person’s relaxation state, stress levels etc.
Beyond this there are a number of relatively easy-to-determine signals that the brain produces in certain situations. One of the easiest to access is the so-called P300 response (https://en.wikipedia.org/wiki/P300_(neuroscience)) – an EEG peak, measured at a certain position on the back of the head, that occurs around 300ms after the user sees a flash at the point in their field of vision on which they are concentrating (e.g. an area of the screen they are looking at). With practice this can allow people to use on-screen keyboards and controls just by looking at the screen: each screen button is made to flash periodically, and the system determines after which button’s flash the biggest P300 response occurred. Whilst this is still significantly more awkward than using a keyboard, it provides a way for someone to interact with a computer using no more than eye movements.
A similar technology (at least to me as a lay-observer) is something called SSVEP (https://en.wikipedia.org/wiki/Steady_state_visually_evoked_potential) whereby different frequency flashes are used to signify different commands; and then by looking for the flashing pattern using electrodes over the visual cortex it is possible to work out which command flasher (for instance an LED positioned next to the screen) the user is looking at.
And then comes Motor Intention Detection (http://ieeexplore.ieee.org/abstract/document/6678728/?reload=true), which is where things move into the realm of ‘real’ brain control, where a person’s thoughts are directly read and interpreted by the computer – so, for instance, when a person thinks ‘move right’ the computer is able to interpret this and respond directly to the thought pattern. This is achieved by placing electrodes over the motor cortex and then training the computer to recognise the patterns that occur when the user thinks about specific actions. At the start of the hackathon the experts stated that this was one of the hardest Brain Computer Interfaces to achieve; so it was a little worrying that we had chosen this as the planned method for our project…

Next, the concept – ‘Mind Anamorphosis’:

The concept we were trying to implement was an artistic idea built around using Motor Intention to give a user visual feedback of their intended, or ‘thought about’, movement on the screen. Long-term, such a concept could be useful in giving stroke victims visual feedback on the movement they are trying to make, when the stroke has impaired their ability to actually carry that movement out. Research has shown that this visual feedback can significantly increase the brain’s ability to re-learn control of the body.
In the short-term, though, we used the idea of a 3D virtual anamorphic sculpture; i.e. one that appears to be an incoherent collection of objects unless viewed from a specific angle. A great example of the concept of anamorphism in real life is Salvador Dalí’s Mae West portrait (https://divisare.com/projects/304130-salvador-dali-oscar-tusquets-blanca-sala-mae-west-room-at-teatre-museu-dali); but we wanted to take this into the virtual world, and then use brain-control to move around the sculpture to try to control the position so that the true nature of the sculpture swung into view.
For the prototype we used a headset from g.tec (http://www.gtec.at/) connected to an OpenBCI (http://openbci.com/) Ganglion capture board that fed the brain signals into my laptop. The brain signals come in as a set of analogue waveforms, so in order to make sense of them we needed to filter the signals, and then train spatial and categorizing filters to map specific patterns of brain-waves to specific meanings. To carry out this processing we used the OpenVIBE toolkit (http://openvibe.inria.fr/).
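We built this stage graphically in OpenVIBE rather than in code, but the underlying idea – extract band-power-style features from each filtered channel, then categorize a new trial by similarity to trained examples – can be sketched in Python. This is a toy illustration of the principle, not OpenVIBE’s actual algorithms:

```python
import math

def log_variance(channel):
    """Log-variance of a filtered EEG channel – a classic motor-imagery feature."""
    mean = sum(channel) / len(channel)
    var = sum((x - mean) ** 2 for x in channel) / len(channel)
    return math.log(var + 1e-12)

def train_means(trials):
    """Average feature vector per class from labelled (label, channels) trials."""
    sums = {}
    for label, channels in trials:
        feats = [log_variance(ch) for ch in channels]
        total, count = sums.setdefault(label, ([0.0] * len(feats), 0))
        sums[label] = ([t + f for t, f in zip(total, feats)], count + 1)
    return {label: [t / count for t in total] for label, (total, count) in sums.items()}

def classify(channels, means):
    """Return {label: probability}, scored by closeness to each class mean."""
    feats = [log_variance(ch) for ch in channels]
    scores = {lbl: -sum((f - m) ** 2 for f, m in zip(feats, mu))
              for lbl, mu in means.items()}
    z = max(scores.values())
    exps = {lbl: math.exp(s - z) for lbl, s in scores.items()}
    total = sum(exps.values())
    return {lbl: e / total for lbl, e in exps.items()}

def fake_trial(amp_c3, amp_c4, n=100):
    """Hypothetical two-channel trial with a given amplitude per channel."""
    return [[amp_c3 * math.sin(0.3 * t) for t in range(n)],
            [amp_c4 * math.sin(0.3 * t) for t in range(n)]]

# 'left' imagery suppresses one hemisphere's rhythm more than the other's
means = train_means([('left', fake_trial(5, 1)), ('right', fake_trial(1, 5))])
probs = classify(fake_trial(4, 1), means)
print(max(probs, key=probs.get))  # -> left
```

The real pipeline also needs band-pass filtering and a proper spatial filter (OpenVIBE offers CSP for this), but the train-then-categorize shape is the same.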

The output of our OpenVIBE configuration was basically a categorization of whether the user was thinking ‘left’ or ‘right’ – or rather, a measure of the relative probability of each.
The next step was to transfer this probability signal into the Unreal game engine (https://www.unrealengine.com/en-US/blog) to control the user’s position within a scene. OpenVIBE has an output for the Virtual Reality Peripheral Network (VRPN) protocol; so I built a VRPN client plug-in for Unreal in order to receive the output from OpenVIBE and use it as a controller within our virtual world.
For the hackathon we used a very simple Celtic symbol that we exploded in three dimensions so that it only looked right from one vantage point. As the user thinks left or right, they are rotated around the symbol to (hopefully) reach the correct viewing angle. When it worked, it looked like this:
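The actual control ran through the VRPN plug-in inside Unreal, but the frame-by-frame mapping is simple enough to sketch. The Python class below – names and constants are illustrative, not taken from the real plug-in – smooths the classifier’s ‘right’ probability into an angular velocity and integrates it into a viewing angle:

```python
class MindRotator:
    """Turns a stream of 'right' probabilities (0..1) into a viewing angle."""

    def __init__(self, max_deg_per_sec=30.0, smoothing=0.1, dead_zone=0.1):
        self.angle = 0.0            # current yaw around the sculpture, degrees
        self.velocity = 0.0         # smoothed angular velocity, degrees/second
        self.max_speed = max_deg_per_sec
        self.smoothing = smoothing  # low-pass factor: smaller = steadier but laggier
        self.dead_zone = dead_zone  # ignore near-50/50 classifier output

    def tick(self, p_right, dt):
        """Advance one frame; p_right is the classifier's P('right')."""
        drive = (p_right - 0.5) * 2.0      # map [0, 1] -> [-1, +1]
        if abs(drive) < self.dead_zone:    # uncertain output: coast to a stop
            drive = 0.0
        target = drive * self.max_speed
        self.velocity += self.smoothing * (target - self.velocity)
        self.angle = (self.angle + self.velocity * dt) % 360.0
        return self.angle
```

In Unreal the equivalent update would live in an actor’s Tick, with the probability arriving on a VRPN analog channel; the smoothing and dead zone matter because raw classifier output jitters constantly around 50/50.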

Learnings…

I came along to the weekend with a bare minimum of understanding of how BCI worked and what could be done with it. Over the weekend I was able to talk to experts in the field, be guided in the use and configuration of the technology for our project, and watch other teams struggle and succeed in using BCI technology in different ways; and through this I learned several things:

  • Current BCI technology is finicky and difficult to use: even with professional equipment and experts on-hand, teams of smart people were really struggling to get the technology to work.
  • It is also messy – having spent two days with a rubber cap covered in electroconductive gel on my head, carrying around a bundle of wires connected to the processing box, I can say that in its current form this is definitely not technology that people would choose to use unless they had a specific reason to do so – it is still a long way from consumer technology.
  • For Motor Intention control, the difficult part is getting one’s brain into the right state – quiet and relaxed, so that the only signals it is generating relate to the intention to move. For a day and a half I really struggled to train the system with my thoughts, until I was given advice on how best to think in a way that creates a clear signal for the system to work with. An added hurdle was trying to reach the required calm, semi-meditative state at a hackathon, surrounded by noise and conversations. By the end of the second day, though, our prototype was falteringly and inaccurately responding to my thoughts – but it was definitely responding to them. It was a small step; but it was still extremely exciting (and unnerving) to see a game world moving and responding to what I was thinking.

Conclusions…

Brain Control Interfaces are conceptually incredibly exciting; but it seems to me that currently, beyond medical use and some other very niche applications, there is no sign of a ‘killer’ consumer application; and the technology is sufficiently imprecise and difficult to use as an actual computer input device that unless you really had to, you would not choose it over more traditional interface devices such as keyboards or mice. As one delegate from a BCI company said to me over the weekend: if all you can do is blink your eyes, then even this is easier to use than a BCI device.
However, it feels like there are still many possibilities just over the horizon; and one of the big issues blocking advancement is that the field is so difficult to break into as an enthusiast. The EEG headsets are very expensive, and almost impossible to obtain unless you are a research lab or university. Beyond the lofty goal of true brain control of computers, there are many interesting lesser applications that could be useful, creative, or just fun; such as emotion-responsive clothing, or brainwave-controlled music and art. If the technology could become more easily available to hackers and enthusiasts for experimentation, in the way that Arduino and Raspberry Pi have opened up hacking possibilities in the area of general electronics, then I think we would see many more interesting brain-controlled ideas popping up. And within one of these there may be the beginnings of a real killer app that brings this technology to the mainstream…

If you’re interested, the code produced as part of the hackathon may be found here: https://github.com/HackTheBrain/MindAnamorphosis

Strange and beautiful neuro-technology at #hackthebrain


Stepping backwards and forwards in time…

Spent half the day at the #redshirtdublin Microsoft Azure event at UCD watching #ScottGuthrie talk about new features in Azure and Visual Studio. Some great features – especially Cosmos DB; Azure Logic Apps for easily harnessing machine learning functionality; and my personal favourite – the ability to build iOS apps in Visual Studio without the need for a Mac.
Although Microsoft is definitely moving in the right direction regarding support for industry standards (such as Git and Linux), it still very much feels that to get the most out of Azure you have to wholeheartedly embrace the entire ecosystem – options for efficiently plugging in third-party tools were not mentioned.
Great to bump into some old colleagues, and hang out on the UCD campus in the fabulous sunshine.