Listen to James Piercy talk with Szczepan Orlins, Founder of Animorph Co-op, a company that aims to push rehabilitation into the future and make it fun and accessible at home.
They discuss two new devices that support the rehabilitation of people with brain injury. EyeFocus, developed with Dr Stephanie Rossit, is a digitised, gamified version of a rehabilitation technique to improve spatial neglect. CrossSense is a pair of augmented reality glasses with an AI assistant designed to help people with dementia stay independent for longer.

(0:09) James | Welcome to another of these podcasts from the Health Tech Research Centre in Brain and Spinal Injury. I'm taking the opportunity to talk to some innovators and companies that we're working with on projects and today I'm talking to Szczepan from Animorph. And they're working on a number of different projects using digital technologies to support people after brain injuries. Szczepan, maybe you can just tell us a little bit about Animorph, who are you, what do you do? |
(0:35) Szczepan | Hi. Animorph was founded in 2016 with a mission of enhancing human potential, and we are a cooperative. We're based in North London, quite a small team, seven people. Since then we've worked on a variety of projects in the medical space, but also in training and education, primarily focusing on using technology either to make up for something that we might have lost or to regain something that we might have lost. When we read the Unmet Needs Directory that Brain MIC, at the time, published in, I think, 2019, we realised that there are plenty of unmet needs and we should focus on them. That's how we met a whole lot of other people and started working on interesting projects. |
(1:23) James | Yeah, it's interesting. So the Directory of Unmet Needs is something that's under review at the moment, and the idea is really to find gaps in provision, you know, where people need help and support. So it's fascinating that you used that as your way in. Perhaps you can give me an example of the kinds of needs that you've identified that these kinds of technologies might help with. |
(1:42) Szczepan | The main idea, I think, that came out of that conversation was with the University of East Anglia, and specifically with NeuroLab, run by Dr Stephanie Rossit. It was focused on spatial neglect, or spatial inattention, and how there is no effective evidence-based treatment for it. Spatial inattention affects about one third of stroke survivors and, effectively, stops people from perceiving and operating one side of their body. That significantly slows down recovery from stroke and makes it very difficult for people to function. There are some treatments, with some evidence, that had been delivered in the past, but they were paper-based, and they were frustrating for both therapists and stroke survivors. So we thought we could take it further: we could digitise it and make it more responsive, more personalised. That's been one of our main projects, and we've gone through several studies now. |
(2:54) James | So this spatial neglect is very weird. It's a lack of awareness or attention, isn't it? Often to half of something: you might eat half your dinner. You can see it; there's not a visual problem, but there's just no attention placed there. You rotate the plate, and they'll eat the other half, and so on. Clearly that's a big problem for lots of people. So how does this technology work? How do we direct attention where, at the moment, there is none? |
(3:19) Szczepan | There are several techniques that are used. The main one that is in the NICE guidelines is visual scanning training, but we decided to focus on another one that is perhaps more promising, which is called smooth pursuit training. That is, effectively, optokinetic stimulation from the so-called “healthy side” of perception towards the affected side. Multiple stimuli travel towards the affected side at different velocities, you follow them with your eyes, and as you do that, improvement is observed over a period of time. |
(4:08) James | Okay let me see if I can understand this. So you're going to hold a screen, or something, in front of somebody and something is going to move across, maybe from the left to the right and we see how well the eyes can track. So it moves into the space that they have no kind of awareness of, and then, hopefully in time, we start to pay attention to that thing which otherwise has been lost, is that right? |
(4:27) Szczepan | Yes. So gradually you would be able to follow (in our case maybe dots or stars or triangles or some other objects) further and further to the affected side. The solution that we built is on a tablet, but previously people would literally have sheets of paper that they would drive from one side to the other while looking at the eyes of a stroke survivor to see whether they're following the motion or not. The key there is having multiple different stimuli moving together in the same direction, and there are several other parameters that play an important part. But the core of the therapy is having a tablet with, let's say, a canvas across which the stimuli move, say from left to right or from right to left (depending on what the stroke survivor or carer selected). Their eyes are tracked to indicate whether they've actually looked towards the end of the screen, only towards the middle of the screen, or only at an earlier part of the screen, so that we can adjust “difficulty levels”, so to speak: how many stimuli are visible and how fast they move. |
(5:46) James | And how often do people need to do this? We know with a lot of rehabilitation techniques you've got to be consistent right? You've got to do a lot and you've got to keep working and increase the challenge level. So is this something that people will do every day, for sort of 20 minutes a day? Or a few times a week? How does it work in practice? |
(6:03) Szczepan | Yes, it's meant to be done twice a day, after breakfast and after lunch, for half an hour. We've noticed improvement after only a week of using it, but we are now going to enter another stage of the study, which will be our first clinical study. The need, I think, is to use it for at least two weeks to see a really tangible improvement. However, with this approach, which is entertaining, engaging and rewarding, because there's a whole range of user interface best practices that we used in order to encourage people, we expect adherence to improve. We've already had a whole lot of great feedback from early PPI activities that we did, and early designs that we improved based on people's feedback. So we hope that the burden, so to speak, of doing it a couple of times a day won't be significant. After all, it's a game, and it's only an hour a day. |
(7:08) James | Yeah, I was going to say this sort of gamification is something which is used quite a lot isn't it, just to kind of encourage and motivate people to do things. So how do you do that with this kind of application? How do you make it so that there's a kind of a game element, so you want to go back and you want to kind of practice and see that improvement? |
(7:26) Szczepan | I guess the PPI activities showed that it's not the most obvious kind of gamification, because people might want to be motivated to do better, but they also don't want to be treated negatively. If they're failing, they wouldn't necessarily want to know that upfront, or in a very, let's say, pervasive way, because that might actually be demotivating. So the general attitude of the app is to encourage further attempts, to be positive about all the attempts that people carried out, and especially to reward them through generated sounds and a voice that kind of accompanies you through the experience and rewards you for achieving the goals. But effectively we're hiding the actual result from the user, because we don't want them to focus on that. After all, it's those specific sequences, those repetitions of the training, that we want to encourage, and as long as they continue, we know that there will be improvement. |
(8:42) James | Yeah, sure. And is this something that people can do on their own? Would they do it with a therapist in the room? Or, now that we don't need somebody holding pieces of paper, can people do it in their own time and in their own house? |
(8:53) Szczepan | That's definitely the intention in the long run. At the moment we are looking at partnerships on the clinical side, in the acute stage, where we think we particularly need support from clinicians. Then there's early supported discharge, and at that point you can imagine that people could have carers helping them set it up the first time, or someone from the family who could be there to support them if they need it. But at present you still need some sort of stabilisation of your chin, more or less a stabilisation that prevents drifting of the face, and the tablet needs to attach to some sort of tablet holder at more or less your eye level, so there's some setup that is needed. We want to make sure that this is robust, because we're only using the front-facing camera of a tablet, so this is not, let's say, the most advanced eye tracker, even though we managed to get pretty good results compared to an industry-standard eye tracker. So, at the moment, I wouldn't say it's really feasible to expect this of people after stroke who suffer from spatial inattention. But in the long run, I think, and particularly as people improve, they should be able to continue the rehabilitation on their own. |
(10:14) James | So that's EyeFocus, an interesting project you're working on for that specific condition, which we know affects quite a lot of people after stroke. I wonder if you can give us an example of another project that Animorph is working on? You're doing quite a lot of work in this medical space at the moment, I think. |
(10:27) Szczepan | This one isn't strictly brain injury, though it may be caused by brain injury: we are also working on dementia, a cognitive aid for people with dementia. It's an application for smart glasses that uses cross-sensory encoding (it is actually called CrossSense), connecting senses: let's say a colour to a cup, or colours to letters, or a sound that you can assign to an object that matters to you, or to a picture of someone. You're then not only creating sensory cues in a space that can help you navigate it; there's also a potential benefit to your memory and your ability to recall, because of dual coding, or, let's say, synesthesia-like principles. Thanks to that, even if you do not have the glasses on, you will be able to recall, or more easily associate, information that otherwise might be difficult to access. |
(11:30) James | Interesting, so it's like an augmented reality thing. You wear the glasses and when I look at my cup the cup suddenly has a colour, or a sound, associated with it which will remind me. And then the idea is that when I take the glasses off, I'll still think of that noise or think of that colour when I see the cup and get that kind of reinforcement. |
(11:49) Szczepan | That's, of course, the longer perspective, because in an immediate sense you'd wear the glasses, let's say, for an hour a day, in the moments when a carer wants some respite, or when you want to make sure that you can carry out some daily activities on your own. Making a sandwich, getting dressed, making tea: a variety of supposedly easy daily activities that are multi-step processes you could get help with. There's also an AI agent that you can talk to about what you see, what the parts of the activities are, what things are, and how they relate to each other. And if you are consistently exposed to those cross-sensory bindings, the associations, you effectively build on principles that have been well studied in how you can acquire synesthesia through training. There are different approaches to training, like a battery-type approach, and there's also, let's say, passive acquisition, which is what we are looking at: people are consistently exposed to those connections. It's been shown, in a variety of research, that you can acquire those associations and perform with nearly the same consistency that synesthetes would. And synesthetes, of course, are known for great memory and ways of retrieving information that the rest of the population does not necessarily have. So we are building on some of those principles. About 4% of the population have synesthesia, and we are trying to recreate parts of it with augmented reality, where the glasses are just, you know, light, sunglasses-type equipment, but they can actually perceive information around you, visually and auditorily, and can also return information to you: be it displaying images, playing music or sounds, or voice recordings, perhaps voice recordings that you left for yourself. |
(13:54) James | Fascinating, yeah. So you could almost set yourself reminders of what things are, what the next steps are in that process, and then they're triggered by the glasses when you look at them. |
(14:05) Szczepan | Yes, you could. Let's say you cooked something, and the next day you wanted to recall how to make it, or maybe a week later you want to make it again. You could talk to the agent, which would store information about previous conversations and previous activities that you have done together, and follow up on it. That might also apply to setting reminders for, let's say, a time of day when you might need to take medicine, or for something in a particular room of your house that might be of importance. One other part that I'd like to highlight, when it comes to data processing, is that everything runs in your home. No information ever leaves your home and goes somewhere to the cloud. We are looking at a situation where everything you see and everything you hear is processed, so we want to make sure that this is treated with the utmost seriousness and respect. We've been on the Information Commissioner's Office regulatory sandbox programme, which is helping us ensure that we create the best standards when it comes to handling that data. |
(15:19) James | Yeah, that privacy and confidentiality aspect is huge, because so much of this new digital technology is about sharing that information and storing it somewhere in the cloud. But as you said, with this system you don't actually need to share it with loads of people, because you're just using it yourself, so you can keep it locally stored. |
(15:38) Szczepan | I guess you could use it to share, let's say, with a carer or a family member, and at this point we might think about how to do that within the home environment; if they're remote, maybe there's a way to do it too. But here the usual dynamics of dementia care are reversed, because most of the products for people living with dementia are really for the carers, to bring peace of mind, and frequently focus on monitoring the person who lives with dementia. We are focusing on supporting the autonomy and independence of the person living with dementia, so they are more in control and more empowered to carry out daily activities, and then feel confident to interact with other people as well. We also know that confidence in yourself and good mental health are a significant part of being able to perform cognitively, and we hope to maintain that ability to function effectively for as long as possible. But, of course, we want carers to be part of it, to be included, and to know what's going on, even if they don't wear the glasses. So that's the angle we are coming from, and we do want to make sure that everyone is in the experience; no one is left out. |
(16:57) James | Yeah, maintaining that independence, but supported by the friends and family who are around. I'm asking everybody in these little chats, Szczepan, about the role of the Health Tech Research Centre, or the MedTech Co-op that came before us. How have we as an organisation helped support the development of these kinds of technologies? |
(17:20) Szczepan | Well, I mentioned at the beginning the Directory of Unmet Needs, which was pivotal, but what followed from it was the collaboration with Dr Stephanie Rossit, and that was possible because of Brain MIC at the time, now the HRC. We received Seedcorn funding, which helped us build a prototype to demonstrate that this is possible. Actually, in parallel, we were working on a project that was a mixed-reality-based smooth pursuit app, and Brain MIC helped us coordinate the two and run them alongside each other. Unfortunately, COVID made it very difficult to test the headset-based technique, but that also helped us discover the potential of the tablet-based technique, which we then prioritised. So that was amazing: we could actually prototype two versions of the same app and gain evidence by completing both of those early stages. EyeFocus then progressed further to other NIHR grants, from i4i Connect and now i4i Fast. So we've been receiving that support throughout, really, as the HRC was a part of the i4i Connect project and is now also advising us on the current stage. And I hope we'll continue working together in the future. |
(18:49) James | Yeah we look forward to it too. Well thanks ever so much for your time, for chatting, and telling us a little bit about some of the work of Animorph. I know that people can find your website. |
(18:58) Szczepan | Animorph Co-operative. Because “Animorph” might lead you to Animorphs, which is a TV show, very amusing, but it's not us! |
(19:06) James | You can watch the TV show as well, and then go to Animorph Co-op to find out about the real meat! Okay, thanks so much for your time. |
(19:14) Szczepan | Thank you. |