CIONIC is Overcoming Disability through Precision Bionics: Sensors, Soft Robotics, and Machine Learning

LDV Capital invests in people building businesses powered by visual technologies. We thrive on collaborating with deep tech teams leveraging computer vision, machine learning, and artificial intelligence to analyze visual data. We are the only venture capital firm with this thesis. LDV Vision Summit is our annual signature gathering for people interested in all things visual tech. 

At our 6th annual LDV Vision Summit, Jeremiah Robison, Founder and CEO of CIONIC, shared how he builds bionic clothing that leverages visual tech, sensors, soft robotics, and machine learning to improve lives by helping those living with physical disabilities. Watch this inspiring 7-minute video or read the shortened transcript below.

“At CIONIC, we build robotic suits to help people move. We are on a mission to revolutionize the treatment of neuromuscular impairment and power people of all abilities to transcend physical limitations.

This is my daughter Sofia, a tremendously engaging, bright nine-year-old who plays soccer, dances ballet, and dreams of becoming a roboticist. She has cerebral palsy, and while that is the least interesting part of her, it is also the thing that sucks the air out of the room. In a world where we have technology that can drive cars down the road, my daughter has crutches, canes, walkers, and wheelchairs. We can do better! In fact, we have to do better, because the reality is that if we don't, by the year 2050 one in five people will be living with a mobility impairment. 

Increased lifespans and lower birth rates mean that if we don't find solutions, 1 in 5 people will be using walkers, crutches, canes, and wheelchairs.

It's tempting to call this a binary problem: one person is abled, and another person is disabled. Think of that label, "disabled": something less than, something taken away from them. In our household, we like to use the phrase "not yet able." I am not yet able to run as fast as Usain Bolt. I am not yet able to contemplate the cosmos like Stephen Hawking. These are both tremendous individuals with so much to give to society. Why do we call one of them disabled and the other able? We live on a multidimensional continuum of abilities. 

Claire Lomas, with the help of a robotic exoskeleton, completed the London marathon in 17 days. Viktoria Modesta challenges our beliefs about beauty and fashion and individualism. She's a singer-songwriter from Latvia who builds these amazing prosthetic legs that she demonstrates in her videos. Justin Gallegos was signed by Nike as their first athlete with cerebral palsy. He helps design shoes and athletic apparel to help people of all abilities move.

What does it take to help all people on this continuum live richer, fuller, and more independent lives? We believe it takes a platform for precision bionics. Precision bionics understands the needs and capabilities of people along this continuum and is thus able to help them in real time by augmenting their bodies to achieve full independence and mobility. 

The key to this platform is the ability to map intention to outcome. In a healthy person, the brain fires a signal and everything goes in motion: "Take a step, get up out of the chair, and walk down the street." For someone who has a neurological condition, that pathway is broken. 

How do we go about augmenting it and replacing it? We believe it takes three distinct steps:
– sensing: understanding what the body is doing;
– prediction: understanding what the body is capable of in real time and what will happen next, which helps us choose the best way to intervene;
– augmentation: actuating the body. How do you get the body to move?
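The three steps above can be sketched as a simple control loop. This is a minimal toy illustration, not CIONIC's actual software: every name here (`sense`, `predict`, `augment`, the threshold rule) is a hypothetical placeholder.

```python
# A toy sketch of the sense -> predict -> augment loop described above.
# All names and the threshold rule are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class BodyState:
    emg: list[float]   # muscle-firing amplitudes per channel (signal of intent)
    imu: list[float]   # limb-orientation readings (the outcome)

def sense(emg_sample: list[float], imu_sample: list[float]) -> BodyState:
    """Step 1: capture what the body is doing right now."""
    return BodyState(emg=emg_sample, imu=imu_sample)

def predict(state: BodyState) -> str:
    """Step 2: infer what is about to happen (toy threshold rule)."""
    return "step" if max(state.emg) > 0.5 else "rest"

def augment(intent: str) -> str:
    """Step 3: choose an intervention; here just a label for the actuator."""
    return "stimulate" if intent == "step" else "idle"

# One pass through the loop: strong firing on channel 2 predicts a step
state = sense([0.1, 0.7, 0.2], [0.0, 0.0, 9.8])
print(augment(predict(state)))  # -> stimulate
```

In a real system each step runs continuously and under tight latency budgets; the point here is only the shape of the loop.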

Jeremiah Robison speaking at our LDV Vision Summit (May 22, 2019) © Robert Wright

This is our sensing system in action. We're streaming real-time muscle-firing and positional data: the muscle firing is a signal of intent, and the positional information of the limb orientations captures the outcome. You can see the calibrated data of the limbs in motion up on the right, and we can start to see that the signal of intent appears milliseconds before the body actually moves. This is a key insight. 

If we can understand what is about to happen in the body, then we have an opportunity to intervene.
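The lead time between intent and movement can be illustrated with a toy onset detector. The signals, sample rate, and thresholds below are synthetic, assumed values for illustration only, not data from CIONIC's sensing system.

```python
# Hedged sketch: the EMG intent signal crosses its threshold milliseconds
# before the IMU registers motion, leaving a window in which to intervene.
# All signals and thresholds here are synthetic and illustrative.

def onset_index(signal, threshold):
    """Return the first sample index where |signal| exceeds threshold."""
    for i, x in enumerate(signal):
        if abs(x) > threshold:
            return i
    return None

SAMPLE_RATE_HZ = 1000  # assume 1 sample per millisecond

emg = [0.02] * 40 + [0.6] * 60    # muscle fires at t = 40 ms
imu = [0.0] * 120 + [1.2] * 30    # limb actually moves at t = 120 ms

lead_samples = onset_index(imu, 0.5) - onset_index(emg, 0.5)
lead_ms = lead_samples * 1000 // SAMPLE_RATE_HZ
print(lead_ms)  # -> 80: intent precedes movement by ~80 ms
```

That lead time, however short, is the window in which a prediction can trigger an intervention before the movement completes.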

When you take this signal and you start to look at it across multiple dimensions, some interesting things happen. 

What we see on the right is a stacked version of the EMG signal in the frequency domain, and it is a beautiful image of what is going on in the human body. By using traditional workflows pioneered in the computer vision community for image classification, we're able to classify movement – in this case, distinguishing between a front step, a back step, and a side step. Once you understand what is about to happen, you can feed that same data into physiological simulations like the OpenSim platform to understand how you can intervene in the body.
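The idea of treating stacked frequency-domain EMG windows as an image can be sketched with a tiny short-time DFT and a nearest-template classifier. Real pipelines use learned image classifiers such as CNNs; the synthetic signals, labels, and the nearest-template rule below are illustrative assumptions, not CIONIC's method.

```python
# Toy illustration: stack short-time DFT magnitudes of an EMG channel into a
# 2-D "image", then classify it by nearest template. Signals are synthetic.
import cmath
import math

def dft_mag(window):
    """Magnitudes of the first half of the DFT bins for one window."""
    n = len(window)
    return [abs(sum(window[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def spectrogram(signal, win=8):
    """Stack per-window frequency magnitudes: one row per time window."""
    return [dft_mag(signal[i:i + win]) for i in range(0, len(signal) - win + 1, win)]

def flatten(img):
    return [v for row in img for v in row]

def classify(img, templates):
    """Nearest template by squared Euclidean distance in 'image' space."""
    x = flatten(img)
    return min(templates,
               key=lambda t: sum((a - b) ** 2 for a, b in zip(x, flatten(t[1]))))[0]

def tone(freq, n=32):
    """Synthetic stand-in for an EMG burst with a dominant frequency."""
    return [math.sin(2 * math.pi * freq * t / 8) for t in range(n)]

templates = [(label, spectrogram(tone(f)))
             for label, f in [("front step", 1), ("back step", 2), ("side step", 3)]]
print(classify(spectrogram(tone(2)), templates))  # -> back step
```

The point is the representation: once the signal is an image, the mature tooling of image classification applies directly.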

For the past several years, the pharmaceutical industry has used simulation to accelerate the discovery of drugs that affect the body. We're using it to understand and rapidly discover new ways to augment and actuate the body.

With traditional robotics and motors, we have control systems – everything from haptic feedback to electrical stimulation, postural correction, and more. It takes a tremendous amount of computational power to do sensing, prediction, and augmentation in real time. The wonderful thing about CIONIC's workflows is that we can train these models in the cloud and then run them in real time on the device, completing that loop. We handle not just the understanding part of the equation; we help people move their bodies.
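The cloud-train / on-device-infer split can be sketched with a trivially small model: the expensive iterative training happens offline, the weights are serialized, and the device runs only a cheap forward pass inside its real-time loop. The perceptron, the JSON payload, and all names below are illustrative assumptions, not CIONIC's actual stack.

```python
# Hedged sketch of training in the cloud and running inference on the device.
# The model (a tiny perceptron) and the data are purely illustrative.
import json

def train(samples):
    """'Cloud' side: iterative perceptron updates (the expensive part)."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(20):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err
    return {"w": w, "b": b}

def infer(model, x):
    """'Device' side: a single dot product, cheap enough for real time."""
    return 1 if sum(wi * xi for wi, xi in zip(model["w"], x)) + model["b"] > 0 else 0

samples = [([0.9, 0.1], 1), ([0.1, 0.9], 0), ([0.8, 0.2], 1), ([0.2, 0.8], 0)]
payload = json.dumps(train(samples))   # weights shipped from cloud to device
model = json.loads(payload)            # loaded on the device
print(infer(model, [0.85, 0.15]))      # -> 1 (intervene)
```

The design point is the asymmetry: training can take hours on large machines, while the deployed forward pass must fit the device's latency budget.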

We're building this as a platform because, while we may be focused on a few specific conditions, the reality is that these problems of helping people overcome disabilities are common: it's about understanding the signal of the body, creating computational models, and intervening. In doing so, we believe we can create new types of prosthetics, bionics, drug dosing, and precision medicine, as well as novel interfaces for how a person with physical challenges interacts with computers. Thomas Reardon, a computational neuroscientist and co-founder & CEO of CTRL-labs, showed some great examples of how that is possible.

If we're able to do that, we can realize a world that is beyond disability. I'm not saying people won't be born with cerebral palsy. I'm not saying that an accident won't happen that will rob them of function. What I'm saying is the label is gone. They're no longer "not able." With technological advancements, we're allowing them to live a full, independent, and completely mobile life.”


Updated in June 2022:

The first wearable garment to combine movement analysis and augmentation

As a result of tremendously successful clinical trials, Cionic Neural Sleeve has been granted FDA clearance for functional electrical stimulation to assist in gait for people with foot drop and leg muscle weakness. Ushering in a new age of wearable technology, the Cionic Neural Sleeve is the first product to combine movement analysis and augmentation into a wearable garment.

In the most recent news, the company added a stellar advisory board and received amazing reviews from their first customers – see this blog post by Ben Lenail, watch a short video about Patricia, and meet Jim.

With further technological advances, this brilliant team will be able to tackle other neurological conditions to build toward the future of neurotherapeutics.

See more details in our post, “The FDA-Cleared Cionic Neural Sleeve Overcomes Disability in Real-Time”.

In 2018, we researched the future of healthcare and published our LDV Insights report. Jeremiah Robison was one of the experts we spoke to about precision medicine. Download the report for free and learn about Nine Sectors Where Visual Technologies Will Improve Healthcare by 2028.