Selling vision: Eyedaptic CEO discusses bringing medical vision-enhancing glasses to market

In this discussion with Eyedaptic CEO Jay Cormier, Imaging & Machine Vision Europe hears how the company’s EYE6 smart glasses, and the imaging and display technologies they use, help people to see again.

IMVE: Tell me about Eyedaptic. What is it you do?

Jay Cormier: Eyedaptic is a vision enhancement company, focused mainly on age-related macular degeneration (AMD) and other retinal diseases. Our main goal is to help people with AMD to see better and, in doing so, to remain independent.

“We asked ourselves the question: ‘How do you make something more powerful, without making it more complex?’”

My background is in technology, but my grandmother had AMD, so we always approached it from that angle, and that brought us firmly into the imaging and machine vision space. Since we started, we’ve always asked ourselves the question: “How do you make something more powerful, without making it more complex?” And that’s key for AMD patients – who are often older people for whom complexity is already their enemy, not to mention the fact they can’t see very well. That’s why we’ve always felt AI could be the key to unlocking this problem of increasing power, while reducing complexity.

IMVE: So the journey really started for you from the medical angle?

Jay Cormier: Absolutely. And that’s why we’re talking about imaging and machine vision because, I might be overstating it a little, but we’re really trying to help the blind see. Quite frankly, AMD doesn’t have a cure, so we asked ourselves: “How can we really optimise their remaining vision?” My background is in technology, as I mentioned, so although we approached it from the medical opportunity side, it was always with a technology mindset.

IMVE: Tell me about the product. Your latest release was the EYE6 smart glasses. Does that mean there have been five previous iterations?

Jay Cormier: Yes and no. The EYE1 was our prototype device; that was the one we first started trying out on patients, but it was never brought to market. The EYE2 was our pilot product, where we were just dipping our toe into the market, but it was really the EYE3, EYE4 and EYE5 that were the first ones to enter the market in any real form.

“We call her Ivy because she needed a name. It’s easier than saying ‘multimodal generative AI’ every time”

The way to think of the EYE6 is that it’s really a combination of the EYE5’s vision enhancement capabilities, plus a visual assistant, which is actually a multimodal generative AI agent. We call her Ivy, because she needed a name. It’s easier than saying “multimodal generative AI” every time. The EYE6 is really just the EYE5 – on the same hardware platform – plus Ivy.

IMVE: So what are some example use cases for the EYE5 and EYE6 models?

Jay Cormier: So the vision enhancement aspect (of both models) is all about how we can get those people to see better, so they can do all the tasks they need to and remain independent. We’ve done a couple of clinical studies on this, and we regularly get results of doubled or quadrupled visual acuity for the test patients. That means we can get someone who is legally blind (20/200 visual acuity), back down to around 20/50.

“We can get someone who is legally blind, back into the range of usable vision”

To give some more visual acuity reference points: someone who doesn’t need glasses would have 20/20 vision – in US measurements. Anything worse than 20/70 is considered ‘low vision’. When they get to 20/200, that’s legally blind, and 20/400 is beyond that, when they’re only able to count fingers in front of their face. We can get all those people, from 20/70 up to 20/400, back into the range of usable vision.
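To put numbers on those Snellen fractions: decimal acuity is the numerator divided by the denominator, so 20/200 is 0.1 and 20/50 is 0.4 – the quadrupling mentioned above. A minimal Python illustration (not Eyedaptic code):

def decimal_acuity(snellen: str) -> float:
    """Convert a US Snellen fraction like '20/200' to decimal acuity."""
    numerator, denominator = (float(x) for x in snellen.split("/"))
    return numerator / denominator

before, after = decimal_acuity("20/200"), decimal_acuity("20/50")
print(f"before: {before:.2f}, after: {after:.2f}, "
      f"improvement: {after / before:.0f}x")   # 0.10 -> 0.40, a 4x gain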

IMVE: And how is that done? What are the key technologies you use?

Jay Cormier: What we’re doing from a technology perspective is fundamentally bringing the image in from a high-resolution camera. We then process and enhance the image in a variety of different ways, using simple methods like magnification and contrast enhancement.

Keep in mind that for people with AMD, the rods and cones in their remaining peripheral vision are not as dense as those in their central vision, where the macula is. So what the glasses really try to do is get the most out of those remaining rods and cones where they’re at their least dense, and that involves enhancing the image. So we apply our enhancement, and we redisplay it on two high-definition 1080p microLED displays that sit on the inside of the glasses, in front of their eyes.
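To make the magnification-plus-contrast idea concrete, here is a minimal sketch in Python with OpenCV. It is an illustration only – Eyedaptic’s actual pipeline is proprietary – showing centre-crop magnification followed by CLAHE contrast enhancement on the luminance channel:

import cv2

def enhance(frame, zoom: float = 2.0):
    """Centre-crop to magnify, then boost local contrast with CLAHE."""
    h, w = frame.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    magnified = cv2.resize(frame[y0:y0 + ch, x0:x0 + cw], (w, h),
                           interpolation=cv2.INTER_LINEAR)
    # Apply CLAHE on the luminance channel so colours stay natural.
    lab = cv2.cvtColor(magnified, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

cap = cv2.VideoCapture(0)   # a webcam stands in for the head-mounted camera
ok, frame = cap.read()
if ok:
    cv2.imwrite("enhanced.png", enhance(frame))
cap.release()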

EYE6 glasses. Image: Eyedaptic

IMVE: How do the glasses compare to consumer smart glasses on the market?

Jay Cormier: The first question we need to ask is: “What are ‘smart glasses’?” Because I think they genuinely do come in different flavours. We tend to call everything ‘smart glasses’, but there are some that don’t have displays at all, and those wouldn’t work at all for our application. There are smart glasses that have displays but no camera, and that wouldn’t work either. And there are others that have some sort of a display, but maybe it’s monocular – only on one side – or it’s a very small display with a narrow field of view.

“You need a high-quality camera and you need high-quality displays with a good field of view”

So not just any smart glasses would work for vision enhancement. You need a high-quality camera and you need high-quality displays with a good field of view, so that’s the first thing I do: subdivide the smart glasses market. That doesn’t mean those other products aren’t good, they’re just for different use cases. The AMD market comes with a fairly demanding use case, so we need to use good cameras and good displays, and the design needs to be lightweight and easy to wear.

IMVE: On the design of the glasses then, how important is it to simplify the technology and make it easy to use?

Jay Cormier: That’s the key, exactly. And that’s where we start applying autonomous AI algorithms to make adjustments on behalf of the user. It’s a way for them to get more assistance with their vision, without having to figure out a complex technology.

IMVE: Can you give me some more examples of what these algorithms are able to do?

Jay Cormier: A good example is our Autozoom technology. It monitors sensors in the glasses to understand head motion. We want to know when the customer creates a focus of attention – meaning when they stop and look at something because they want to see it better – as opposed to when they’re just looking around.

“When it sees text during this period of focused attention, it will zoom in automatically so the user can see the text better”

Once the focus of attention comes into play, our algorithms then scan what’s coming in from the camera and look out for any text. When it sees text during this period of focused attention, it will zoom in automatically so the user can see the text better. Then, when they look away or at something else, it will zoom out automatically. We call that Autozoom. It’s all autonomous, and the user never has to touch anything.
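As a rough sketch of how such a loop could work – the read_head_motion() and frame_contains_text() helpers are hypothetical placeholders and the thresholds are invented, since the real algorithm is not public – the dwell-then-zoom logic might look like this:

import time

DWELL_SECONDS = 1.5      # how long the head must be still to count as "focus"
MOTION_THRESHOLD = 0.1   # head rotation rate above this means "looking around"

def autozoom_loop(read_head_motion, frame_contains_text, set_zoom):
    still_since = None
    zoomed = False
    while True:
        if read_head_motion() < MOTION_THRESHOLD:
            still_since = still_since or time.monotonic()
            focused = time.monotonic() - still_since >= DWELL_SECONDS
            # Focus of attention plus text in view: zoom in automatically.
            if focused and not zoomed and frame_contains_text():
                set_zoom(2.0)
                zoomed = True
        else:
            # The user looked away: reset the dwell timer and zoom back out.
            still_since = None
            if zoomed:
                set_zoom(1.0)
                zoomed = False
        time.sleep(0.05)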

IMVE: Are there any other generative AI applications performed by the Ivy assistant?

Jay Cormier: Yes, because Ivy is the generative AI agent itself. The beautiful thing for someone with a vision impairment is that we’ve given them access to a very powerful multimodal AI. They just snap a picture, and they can ask Ivy questions in natural language, like “What’s in the picture?”, “Read me that sign” or “Where are my socks? I lost them.” Ivy can do all of that, but the only user interface they need is the click of a button and their voice. It couldn’t be much easier.

Of course, even if you want to talk to Ivy without an image, you can do that as well. So if the vision enhancement isn’t enough – either because you have something besides macular degeneration or because your vision has got much worse – we can now help a whole new range of people on their visual journey.
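The interaction pattern described above – one button, a photo and a spoken question – boils down to a capture-ask-speak loop. In sketch form, where every helper is a hypothetical placeholder rather than Eyedaptic’s real API:

def on_button_click(capture_photo, listen, ask_multimodal_model, speak):
    # All four callables are hypothetical stand-ins; Ivy's real
    # interfaces are not public.
    image = capture_photo()        # snap a frame from the glasses' camera
    question = listen()            # e.g. "Read me that sign"
    # A multimodal model takes the image plus the natural-language
    # question and returns an answer suitable for text-to-speech.
    answer = ask_multimodal_model(image=image, prompt=question)
    speak(answer)                  # read the answer back to the wearer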

IMVE: How difficult was it to develop a product that needs to adjust to individual users?

Jay Cormier: Yes, developing a product for low-vision people is a challenge, but I think the customisation and adaptation of the product is key to its success. We do extensive beta testing because we’re always trying to design a flexible and customisable system, without making it more complicated, as I said, so that’s where the challenge lies. Some of that is pure technology, involving software, algorithms and AI, for example, but some of it is just good user interface (UI) and user experience (UX) design as well, and that’s just as important.

“The technologist in us always wants to put in more features, but we have a very tight focus on usability for our end user”

Every time we bring a product to market we recruit a new cohort of beta users to compare to our existing beta users’ experiences, and we go through extensive beta testing with those people to make sure the product features act how they expect them to. We can’t assume that we know what a visually impaired person is looking for. Through that beta testing process, which usually lasts around six, sometimes even 12 months, we refine the product before we bring it to market.

IMVE: What were some of the other product development challenges you faced? I presume there’s a constant battle between adding features and keeping the weight and bulk down?

Jay Cormier: You couldn’t be more right. The technologist in us always wants to put in more features, but we have a very tight focus on usability for our end user. That’s why we have those beta testers. We also collect quite extensive data on their usage patterns to help make sure that what we’re hearing from them is in line with what they’re actually doing, to make sure we don’t get any false positives.

Eyedaptic EYE6 glasses. Image: Eyedaptic

IMVE: EYE6 was introduced at the American Retina Forum’s national meeting this year. How did that go, and what are the next steps?

Jay Cormier: The launch went great. First of all, the American Retina Forum is a technology-forward group of early-adopting retina specialists, so they’re always interested in new technologies that can help their patients. I think it was a great place to announce the EYE6 because we’ve been busy trying to upgrade our customers and eye-care practices so we can share it with more people, and they got to witness it firsthand at the meeting.

IMVE: So where is your focus now? Will you continue to develop an EYE7, or scale up production to make the EYE6 available and accessible to more people?

Jay Cormier: As a technology company, we’re always looking to bring out new products and new technologies, but we are at the point now where we’re very happy with our position in the market, and with the customer reception we’ve received, so our main focus is on scaling, no doubt about it.

In some sense, however, we’ve only scratched the surface of the very rich intersection between augmented reality – on the imaging and machine vision side – and generative AI, so there’s much more we can do to use AI to provide better vision enhancement and control.

IMVE: With the Ivy assistant, it sounds like the EYE6 glasses have the capability to compete with other smart glasses manufacturers in the consumer electronics market. Is there a plan to move in that direction?

Jay Cormier: We do have that conversation quite a lot, but we are very focused on these people who need help with their vision. Worldwide, macular degeneration affects 180 million people, so it’s a massive market, but it’s also so underserved. Although it’s not easy to reach those people, we feel like someone needs to serve them because it’s far bigger than any other market.

“If someone wants a visual assistant, they can get that from Meta. I’m not going to take on Meta”

But it’s also where our patents stand alone. No one else really sits at that intersection between assistance and enhancement. If someone wants a visual assistant, they can get that from Meta, for example – I’m not going to take on Meta – but if someone needs enhancement, they don’t have a lot of choices.

IMVE: In terms of price positioning, how do you balance the continued development of product quality with making it accessible to people on low-income or reduced healthcare plans?

Jay Cormier: We have a strong belief that the best way to get this into patients’ hands is through eye-care practices – that’s where all these people with vision impairments already are, seeing some sort of eye doctor. As you might imagine, trying to serve visually impaired people with a direct-to-consumer approach is incredibly difficult, so we’re following something like the hearing-aid model, where hearing aids are sold through audiologists. Similarly, visual aids are sold through eye doctors, and we price them like hearing aids to make sure they are accessible.

“We’re following something like the hearing-aid model”

Over time, as we see the hardware continue to improve, we can make progress here as well. One way we’re making it even more accessible is that when we introduced Ivy, we had a decision to make on how to price the additional service and function. We decided that, as it’s fundamentally the same platform, users could just get Ivy on a subscription model, so the visual assistant is very cost effective.

IMVE: And that’s how it’s positioned? With the product available and a subscription service either bundled or available separately?

Jay Cormier: Yes, and you know, a lot of people value them both together. That’s good because that’s our target market. And we stand alone on that.

This post is courtesy of Imaging & Machine Vision Europe
