Join Dr. Daniel Neely and Dr. Nicolas Jaccard as we examine Artificial Intelligence (AI), its applications in ophthalmology, and related tools available on Cybersight. We begin with a non-technical overview of AI, including a brief history, the current state of the art, and considerations such as bias and ethics. We then demonstrate Cybersight AI and how its open-access tools can detect and visualize glaucoma, macular disease, and diabetic retinopathy.
Lecturers: Dr. Daniel Neely, Pediatric Ophthalmologist & Professor of Ophthalmology, Indiana University School of Medicine, USA & Dr. Nicolas Jaccard, Principal Architect, Artificial Intelligence, Orbis International
Transcript
DR NEELY: Thank you, Nicolas. So Nicolas is an AI engineer, or AI architect. And I'm talking to you as a clinician. As an ophthalmologist. And I think it's important that we always keep that loop intact. Because simply having a tool is different from being able to apply a tool correctly. As you can see, AI is just in its infancy. This is not the birth of AI, but in terms of AI and ophthalmology, we're talking the last five years. So this really is the ground floor. And it's something that is going to change so rapidly. Just like a young child growing from infancy. This is just gonna be an exponential change, every six months to a year. And here we are. We're offering this to you free. Which is… I think… Just an amazing benefit of Orbis/Cybersight. What I'm going to do is demonstrate to you, in real time, submitting a couple of AI consultations. And I just want you to keep in mind, as you look at all of this AI, and as you use the AI consultation feature: it's not going to be perfect. This is a supplemental tool. This is not designed to replace being a physician. But this is to guide you. And I see the potential of AI here to be most beneficial in a couple of areas. One is screening. So mass screening of fundus photographs, or other images, to determine who needs to see a physician. And then the other group is the people that you're already seeing: using AI to help guide your diagnosis, based on the information that's input. And then once you have been helped with your diagnosis, guiding your treatment plan. So things like preferred practice patterns. All right? If you now have a diagnosis of diabetic retinopathy, what are the preferred practice patterns for that? What should you be doing now? And again, always with the understanding that you're the doctor, you need to make the decision, or you need to ask your mentor to help you decipher this information, if it's not clear. What are we doing right now? Well, currently we offer AI services in just two categories. One is glaucoma: optic nerve head analysis and glaucoma screening. And the other category is diabetic retinopathy, for both adults and children. And so when you are submitting cases, you will see that those are the only areas you're able to submit a case from. And I'll share my screen and we'll just go to the home screen and share this. As we look at this home screen, this is the home screen you'll see if you have a Cybersight consultation account. If you just have library and course access, you will need to change your account to a full-access consultation account. And when you do, this is what will show up. You'll see a list of ongoing consultations, you'll see a list of cases you can search by category, and you'll see your own cases up here. So you need to be able to see this screen to submit a request. When you want to submit a request, you can either use the red "submit a new request" button, or down here, on the sidebar, you have the ability to go through this. When you open this up, there will be a couple of options. General question doesn't apply to this. Just patient cases. We have an AI-only case, where it is not automatically submitted to a teaching mentor. So this would be if you're screening for glaucoma, or just screening fundus photographs for diabetic retinopathy. Or you can use AI interpretation as part of a case that you are submitting to a mentor. So you have a diabetic patient, and you're submitting it to a mentor for advice.
While you're waiting for that result to come back, you will be getting an AI report, generated almost instantly. When I say instantly, I'm talking less than a minute, perhaps. So let's start with the AI-only case. This is a new feature, by the way. We've offered AI interpretation for maybe more than a year now, but the AI-only feature is just from the last few months. The AI-only case… Again, on the subspecialty dropdown box, you're only going to see these three categories. It's retina or it's glaucoma. So there's glaucoma, and then I have the option of choosing files. And I'm going to choose a normal one right here. And I'm going to choose one that is not normal. Upload those. So those are uploaded. I have my preview. Once I have my preview, then I submit. It's already in the works. I'll go back to my home screen. And this will appear in my in-progress cases. And while we're waiting for that, let's launch the first poll question here, Lawrence. So this is something important for us to know. Because here we talk about AI grading of fundus photographs. So step number one here, I would just like to hear from the audience: Do you have access to a fundus camera that can take a decent photograph? Yes or no? Obviously this is important, because if you have a bad image or no image, then the AI is not going to be of much use to you. The old saying: garbage in, garbage out. Most of us have access to a decent fundus camera. And the quality of the picture is important, as you submit these things. Here's that AI-only case that we just submitted. Got it opened up here. And I'm just hitting my screen refresh. Here's our report. In the span of about a minute, we have received our AI report. This is an AI-only case, so all I'm receiving is the AI report here. We have image 1 and image 2. And again, as Nicolas showed in that preview, the very first thing you get in the summary is: Was your image verified, and was it gradable? You have to have good images. Otherwise the system can't interpret them for you. And it's giving you a summary: disc anomalies and diabetic retinopathy. So I think that's another important point here. We're selecting categories of glaucoma or diabetic retinopathy, but the system is performing both screenings on every image that you submit. All right? So you don't have to be accurate on which of those dropdowns you use. But those are the categories we can use. It will run the analysis on all images, both for DR and for glaucoma. And then if you see that everything is green, you don't even need to look at the images. But if there are abnormalities, like we have here… And I'm gonna scroll down to the second image, which is the normal image, first. So here we have a normal image. The disc is highlighted, and you will get… I think this is a really nice feature… You will get a vertical cup-to-disc ratio estimated for you. So here, this one is being estimated at a 0.55 vertical cup-to-disc ratio. And so I think that's a nice tool, not just for screening, but something you can use in managing your glaucoma patients. You know, that's limited information, obviously. But it is one more tool, where you can monitor a consistent, objective AI cup-to-disc ratio in your patients over time. All right? Macula: no macular abnormalities were detected. No diabetic retinopathy was detected. No microaneurysms, exudates, or hemorrhages. Okay? So that's a normal report. I'll just go back to that. So why is that yellow? Well, this one is yellow because this cup-to-disc ratio is 0.55.
In this case, we know it's a physiological cup. But because it's greater than what we typically expect to see, this cautionary yellow flag is appearing. Again, you're the physician. You need to take all the information into account. This is simply highlighting something that maybe we should pay attention to, because it's a little bit outside normal limits. The other image that we submitted is actually one that we know is abnormal. And you'll get that same kind of assessment here. Disc anomalies: the machine is highlighting, in this purple, some areas that it detected were outside of normal limits. And then we have again that vertical cup-to-disc ratio. This time, much larger: 0.75. So this is well outside normal limits, and so we've got a red flag on that. If you scroll back, you can see the areas of the disc, in particular, that the AI program is highlighting. All right? So this is a glaucoma suspect at this point, one that would need further evaluation. The macula is normal. No evidence of diabetic retinopathy. You can see that the machine picked up on a couple of microaneurysms. And these are quite small. But when magnified, you can see that there's a microaneurysm right here, and this one is a little more difficult for me to find. So here you go. You can see how even subtle things that I think clinically — at least I would probably miss, at a cursory glance — it's highlighting those and bringing those to your attention. So really a super tool. And you can just imagine how this could be utilized for glaucoma screening administered by non-ophthalmic personnel, or even a camera kiosk set up in a pharmacy or grocery store. So that's an example of an AI-only glaucoma case. Let's do another one. I'll go to the sidebar this time. I will start it as an AI-only case, but I'll show you how that can convert to a full consult. In this case, we're going to go retina-vitreous. I'll choose my files. And I've got a couple here on my desktop. This diabetic and this diabetic. These are not the same patient. All right? So normally, if we were submitting photos, we'd use the same patient. But for example purposes here, I've got two separate photos, even though they're a right and a left eye, and they're taken on different camera systems as well. So that's been submitted. Back to my cases. My in-progress cases here. So my report is… I can see it confirmed that it's submitted and it's pending. Let's launch our second poll question. Our images are there. And we will be waiting for our report to come up. All right. So this is just to follow up on the fundus camera question. Because I'm curious as to what's out there. For the 60-some percent of you who do have a fundus camera: if you could, in the Q and A box, please go ahead and type in the type of camera, or the brand of camera, or the name of the camera. Otherwise, go ahead and answer the poll here. If you don't have one, of course, answer that you don't have one. If you have one but don't know the name, go ahead and respond to that. So we just have that information. And again, type in the kind of camera you have in the Q and A. So we'll take a look at that. It's not critical for what we're doing here. But I would like to look at that. All right. So we have… I need to move my bar here. It's hiding my refresh. Screen is refreshed, and I have my AI report. All right. So now… We can see the images are again verified and gradable. We put in decent images. We are getting an error on the disc for one side.
And then we're getting a nice normal report on the other disc. And then both of these images are showing up as abnormal for diabetic retinopathy. So we're gonna want to take a look at those images. And I'll just go to this first image. And showing our image here… You can see the system is highlighting these macular abnormalities. And then we're getting a diagnosis of severe diabetic retinopathy, based on the amount of retinopathy changes and their locations. Here it's highlighting, again, microaneurysms, and the boxes around those are where the machine has identified the microaneurysms, in addition to the exudates and cotton wool spots. There's highlighting of the exudates, and highlighting of the hemorrhages. Now let's go to our second image. There's image one… and then image two. So we've got… Our disc is coming out normal. And this would also pick up neovascularization of the disc, if we had that. We have a nice normal disc. We also have a nice normal cup-to-disc ratio here, being measured as 0.17. So totally within normal limits. But once again, our macular anomaly score — we're finding changes which are significantly anomalous. Those are being highlighted. And then the machine algorithm goes into the grading. And based on the number and extent of exudates and hemorrhages, we're getting a report of severe non-proliferative diabetic retinopathy. Now, I think one could look at these images and findings and say: Was this moderate or severe? And that's where the physician's input needs to come in. The machine is highlighting the changes. Based on that information, it's doing the best it can to grade it. But ultimately, the physician is the authority there. And when we look at correspondence for grading diabetic retinopathy with this program versus human graders, when the severity is at the moderate-and-above level, the correspondence with the human graders is about 90%, so that's not bad. And Nicolas can speak more to that if we have questions during the Q and A. Again, the exudates… And the hemorrhages… Here we have just one small dot hemorrhage. All right. So I looked at that and I'm like… Okay. Those seem like moderate diabetic retinopathy to me. I'm not sure if it's severe. I'm not sure if I need to treat this patient. Now that I've done that AI analysis, I think I'd like to get an opinion from one of my mentors at Orbis. So at the bottom — let me show you the original images that we used. Here's the original right eye. I'll just open it. If you go to the large view, you can just zoom in. So there's the original right eye. And here again, there's the original left eye. So you can open them full resolution or smaller, and zoom in. So these are our cotton wool spots, our hard exudates, and a few scattered hemorrhages. No neovascularization was highlighted; that's another part of the report, if it's present, and it will show you where it's located. So here you're like: I'm not sure if I need to treat this patient or just watch them. Down at the bottom: resubmit case for human consult. Okay? So I'm clicking on that. It's informing me that I'm now going to submit it for human review. But it's also telling me that I still have access to my report. Okay? So in my case files, I'll still be able to pull this up. I'm like… Yeah, cool. I want some feedback from one of my retina colleagues. Now, the one thing it doesn't do — this looks now just like a blank, empty new case. Just like if I was starting a case all over. It doesn't prepopulate anything at this time.
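To make that grading step concrete, here is a minimal, hypothetical sketch in Python of how detected lesions might be mapped to a severity label. It is a rule-based simplification loosely inspired by the international DR severity scale; the actual Cybersight/Pegasus grader is a learned model, and the counts and cutoffs below are illustrative only.

```python
# Hypothetical sketch: a rule-based DR severity grade from detected lesions.
# The real grader is a learned model; these thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class LesionCounts:
    microaneurysms: int
    hemorrhages: int
    exudates: int
    neovascularization: bool  # NVD/NVE detected anywhere in the image

def grade_dr(lesions: LesionCounts) -> str:
    """Map detected lesions to a coarse severity label."""
    if lesions.neovascularization:
        return "proliferative DR"
    total = lesions.microaneurysms + lesions.hemorrhages + lesions.exudates
    if total == 0:
        return "no apparent DR"
    if lesions.microaneurysms > 0 and lesions.hemorrhages == 0 and lesions.exudates == 0:
        return "mild non-proliferative DR"  # microaneurysms only
    if total < 20:  # illustrative cutoff, not a clinical threshold
        return "moderate non-proliferative DR"
    return "severe non-proliferative DR"

print(grade_dr(LesionCounts(microaneurysms=8, hemorrhages=12, exudates=6,
                            neovascularization=False)))
# -> severe non-proliferative DR
```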
So I'm back to a new patient case. And then I'll just go back again to retina-vitreous. I'm gonna send this to myself, so it doesn't go out live into the system. And… When you submit consults, there's a lot of stuff you can put in here. Right? But what you have to put in are the fields with the red asterisk. So we need a case category. We need an age. Pretty basic stuff. I'll just put in male. Put in insulin-dependent diabetic for the past 20 years. Right? You have to have some history to work on. I'm gonna put that in. All right. This is ophthalmology. We need a visual acuity. So let's put in a visual acuity. And you can put it in any form you want. Anywhere from logMAR to 20-foot Snellen to decimal. Many of you use decimal. So I'll put in two random decimal acuities. But you see we have other options up here, if you can't do that. So that's really all I have to put in to submit a consult. Other than… Diabetic… My typing is amazing. Diabetic retinopathy. Treatment: none. But… Does this warrant treatment? All right. So I have moderate to severe diabetic retinopathy. Do I need to be doing Avastin? Do I need to be doing focal laser? We don't see RP in this case. So choose files. In this first one, I'm going to go back to our same diabetic images, there and there. So I'm attaching these to the case. Keep in mind that if you want to run AI interpretation, you have to manually select that. In this case, we definitely do, because this is a follow-up to our AI-only consult. I'm gonna click yes. And go back to my images. You then have to choose which images you want the AI to run on. Right? So there are my two DR images. So now those are appearing there. Why do I have to do this twice? Well, because some of the other material you put in may not be appropriate for AI. I don't know if I have any other images here, but let's just say I had accessory images. So if this image was an OCT, or a photograph of the patient's chart or something, I don't want AI analysis running on that. So you're only gonna include the ones that are appropriate for AI, as Nicolas outlined when he was showing you those good sample images. And this is what we want. We want fundus images. Like this. Either centered on the macula or centered on the disc. At that point, you can also save it as a draft. I'll just submit, and here goes our submission. And now that is submitted… And what's happening now? Well, not only are you getting another version of that AI report right now (which we didn't have to run again, but I did), but now we also have a case that's being sent to the mentor. And this case will then come back to you for feedback. So you'll get a retina expert's opinion as to whether or not that was truly moderate or severe, and should you be doing anti-VEGF treatment? Should you be doing laser? And we'll kind of close the loop that way. So I think that's really the exciting point where we are right now. I do have one more poll question. Which we're going to launch. This is where we are now. This is what we can do right now. But what do you want to see next? All of this is going to be fine-tuned, of course. The glaucoma screening and mapping the contour of the nerve. And maybe some more education in with the diabetic retinopathy grading scale. That's all going to be fine-tuned. But what do you want to see us do next? Now, click on a couple of these. Try and prioritize what you find to be the most exciting. Macular degeneration?
Dry or wet exudative forms, and guidance on treatment for ARMD. ERG interpretation: if you have an ERG and you're not comfortable with generating reports, would you like to see that? This is an interesting one: glaucoma optic nerve monitoring over time. Where I talked about being able to look at a nerve and get a grading. What about being able to scroll through those nerves over time, or having an ongoing record where you can highlight changes over time? Pediatric refraction prescribing. You have a refraction. Now what do you give, based on the presence or absence of strabismus? ROP screening. I think this is an exciting one. We'll have the ability maybe to screen for plus disease. So a non-physician can take photographs, and patients in outlying areas can be triaged and seen by an ophthalmologist if they have changes. Changes in strabismus and motility. You can take a grid and the AI can come up with a diagnosis. And then visual field interpretation. All right. So a little bit all over the board. Yeah. All right. So that'll give us some guidance as to where we go in the future. At this point, I'm going to bring Nicolas back in, and have both of us handle the Q and A session here.
DR JACCARD: Yep, I’m back. Thanks, Dan.
DR NEELY: Nicolas is back. Let me open our Q and A. We’ll go through these. Any AI system tool to address dry eyes? I’m not aware of any, Nicolas. Are you?
DR JACCARD: There is research. I've certainly seen some papers on it, but I've never seen any kind of commercial product to do it. So my answer is: there's probably some research on it, but I'm not aware of any kind of system you can buy and use. Certainly something that needs more work as well.
DR NEELY: All right. Is there any AI available other than Pegasus for glaucoma diagnosis?
DR JACCARD: So for context, Pegasus is the name of some of the AI algorithms we're using. And the answer is yes, there are other companies that offer such services. There are two algorithms that are FDA approved, and maybe three or four that are approved for use in the European Union. I'm not going to name names, because I'm sure I'm going to forget some. But there are products you can license… As I showed during the presentation, mostly DR. Countless products for DR out there. And they're getting better all the time. But the performance for glaucoma detection is nowhere near as good as DR grading. This is certainly something else that exists, and it works both from fundus photographs and also from disc OCTs.
DR NEELY: So just for clarity, our system is the Pegasus system, formerly known as Visulytix, now part of the Orbis family. The Pegasus system we have for you is free. Always a nice selling point. And you can use it to your heart's content. Next question… We have: Is there any AI available… I'm sorry. Wrong one. When clinicians are critically evaluating AI systems, what are the key questions to achieve this scrutiny? So if we're looking at an AI system that's available to us, what do we need to be looking for?
DR JACCARD: It's very similar to, for example, reading papers about a new drug that comes out. I don't know… To treat AMD. I'm not an ophthalmologist, so I don't know. But let's say a drug to treat AMD or glaucoma. First of all, if it's too good to be true, it probably is. There are many papers, especially, that tend to be written by non-medical groups. So let's say a machine learning group that uses ophthalmic data as input for the work. Non-ophthalmologists tend to overestimate the actual performance of the algorithms, just because they look at the perfect dataset and they say: Oh yeah, we got perfect performance. This is what I mentioned with the Google case. When it was deployed in Thailand, it was terrible. Even though when they tested it in the lab, it was outperforming every single ophthalmologist they could get their hands on. So I think: be critical. As in, don't believe the hype too much. Which holds true for any academic paper or any product out there. But also always look at what was used as the baseline for evaluation. Make sure you're happy with the fact that what they used as what they call the ground truth, the gold standard against which we compare the AI, is reasonable. It's not just one random person. Usually you have a panel of expert ophthalmologists, or you use additional tests. For example, with glaucoma, you can use OCT. Let's say your application is detection of glaucoma on fundus photographs. You may want to use disc OCTs and visual fields and everything you can to ascertain the diagnosis, and then you compare the AI versus that. So it's about making sure that the data is right and that the evaluation seems reasonable. But also, as I mentioned in my talk, I would only really trust a system that's been deployed in the wild. As part, for example, of a multicenter study, where the company or the organization that created the system gave it away to various institutions, and they used it independently, and then they came to a consensus as to how it performed on the ground with real patients. And I think this is what you are after. There are tens of thousands of papers out there about how an algorithm is better than everyone else at diagnosing DR, for example. But as long as they are not deployed on the ground and used with real patients, I don't think… It's interesting from an academic standpoint. Let's say you do 1% better than the previous state of the art — it's interesting if you're pushing for machine learning improvement. But when it comes to patient benefits and satisfaction, and so on, only once it has been deployed and tested on the ground should you trust the system.
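A minimal sketch of the evaluation recipe Dr. Jaccard describes: build a "ground truth" from a panel of expert graders by majority vote, then score an algorithm's referable/non-referable calls against it. All the data below is made up for illustration.

```python
# Toy evaluation against a panel-derived reference standard.
from collections import Counter

def majority_vote(grades: list[int]) -> int:
    """Consensus label from a panel of graders (1 = referable, 0 = not)."""
    return Counter(grades).most_common(1)[0][0]

panel = [[1, 1, 0], [0, 0, 0], [1, 1, 1], [0, 1, 0], [1, 0, 1]]  # 3 graders x 5 images
ai    = [1, 0, 1, 1, 1]                                          # AI calls per image

truth = [majority_vote(g) for g in panel]  # [1, 0, 1, 0, 1]

tp = sum(a == 1 and t == 1 for a, t in zip(ai, truth))
tn = sum(a == 0 and t == 0 for a, t in zip(ai, truth))
fp = sum(a == 1 and t == 0 for a, t in zip(ai, truth))
fn = sum(a == 0 and t == 1 for a, t in zip(ai, truth))

print(f"sensitivity = {tp / (tp + fn):.2f}")  # 1.00 on this toy data
print(f"specificity = {tn / (tn + fp):.2f}")  # 0.50 on this toy data
```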
DR NEELY: That's important. A couple of things. One, you have to know what population the machine learning was based on: whether it's just a very small population, versus a diverse population. Certainly that's something that our users can help us with, as we deploy this over time. We should be able to build a diverse collection of real-world photographs that we can analyze. And so I think that's a good point. So you need to know what it's based on. And you mentioned earlier something about functioning in a laboratory setting versus functioning in the real world. And I think that's an important point. Because so often we find… All right, we've got the perfect image, and now we get these great reports. But when you just start taking normal images that we get from users, then we find that there's a lot of difficulty. And that's just how this works. You have to have a good image, and the machine learning has to be able to interpret that image. And so those are key parameters. There's another question about ophthalmology AI books. Are there any good AI books in ophthalmology? It's such a recent field. Has anything come out yet that you're aware of?
DR JACCARD: I think there's a useful answer somewhere from someone, recommending a couple of general books. When it comes to AI and ophthalmology, as I said, I don't know if there are many books out there. I know there are a couple in press, so there are a couple that will come out soon about the subject. When it comes to AI and machine learning in general, you have a few resources. We're in the process of updating our artificial intelligence page on Cybersight, and we're planning to put a lot of resources there. So be sure to check out Cybersight in maybe a month's time. There will be many more resources. But AI and ophthalmology, as you said, is probably a bit too specialized, as it is now, to have books that are non-technical and kind of for beginners. Every year, when you have ARVO and other conferences, there are always a couple of tracks about AI, and there are a few papers that came out of those. Again, we'll reference those in a future Cybersight update. They're very good explainers for newcomers when it comes to AI, going over what we just talked about: how to be critical about AI, but also giving you the tools to understand AI, and maybe do a bit of AI as well. And you have many, many tutorials out there, and courses. There are courses for AI and ophthalmology, but as far as I understand, there are no free courses for AI and ophthalmology. So I don't know if I should recommend them, because they tend to be very expensive. This is certainly something we are going to look at in the future as well. If there is interest in it, we can do it. But what I would recommend, if you're interested in machine learning in general: there are a million tutorials out there on how to get started on machine learning. If you go on Coursera, which is a massive online learning platform, there is a free machine learning course on there that is very good for getting started on machine learning. I would not start on machine learning by doing ophthalmology stuff, because you're going to struggle finding the data, and it's not going to be right. I would start with, as I showed, the cats and dogs photos that you find everywhere. There are datasets that already exist that you can very easily download and start experimenting with. So again, all of these resources will be on the Cybersight page at some point, when we get time to update it.
DR NEELY: Yeah, I think in terms of the medical practice of AI and ophthalmology, it's going to change so fast that by the time a book was published, it would be out of date already. So I think Nicolas is right. If you want to learn about machine learning and have a basic grasp of the concepts, that's pretty reasonable to get a book for. But if you say, "I'm gonna learn about AI and ophthalmology from a book," it's gonna be outdated. This is gonna change way faster than that. All right. Next question is… Considering how difficult, how challenging, it can be to make a diagnosis of early glaucoma, to what extent is AI currently useful in this regard? I'll answer this one. From a clinician standpoint, I would not rely on AI to make your diagnosis of glaucoma. Right? What's this going to do? It's going to be a screening or a monitoring tool. All right? So a diagnosis of early glaucoma is an elevated intraocular pressure with changes to the eye. You need those two things. So you can't make that diagnosis with AI. You can highlight disc abnormalities that bring the patient to your attention. But you can't make the diagnosis of glaucoma. So I think that's where the use is: at this point, largely as a screening tool for glaucoma. In the next phase, as a monitoring tool for glaucomatous changes to the optic nerve, or monitoring OCT changes over time, or visual field changes over time, et cetera. So that's the next evolution of a glaucoma AI package. All right? Next question. Are OCT images poor at detecting progression in glaucoma and AMD? This is more of a statement. I'm going to defer that one. Let's see. What else do we have here? Can AI replace a technician or an optometrist in the future? No. I don't think so. I don't think AI is gonna replace anything. You know, this is a supplement. There's been such an explosion of information in medicine that we all have information overload. The purpose of AI is to, one, bring patients to our attention, with screening, and two, make your job easier. But I don't think anyone in the near future expects AI to do anything in place of a technician or an optometrist. You need hands on. This is just a supplement, closing the loop.
DR JACCARD: Yes, and certainly at Cybersight and Orbis in general, I think we’re looking at AI, as I said, as a decision support tool rather than something that will replace humans at doing what they do. So we are making sure you have the best possible information when you have to make a decision regarding a patient. This is where AI shines, really.
DR NEELY: Here’s a question. How about AI for cataract staging?
DR JACCARD: I think that, again, I'm not aware of a commercial product, but I've definitely seen papers on that. And I think there's just been a release of a dataset on that, as part of one of the challenges I was talking about, the benchmarks and challenges. I think there is a challenge about cataract detection now. I'm not sure about staging, but certainly detection. And the typical trend is: when you start seeing academic papers on this, a few years later you will see products and features on platforms such as ours. But certainly, the answer to any of these questions is: Is there data out there for this particular condition in a sufficient amount? Let's say: is it relatively doable to collect, I don't know, 10,000 examples or something? Then I can guarantee you, somewhere, someone is working on an algorithm to solve that problem.
DR NEELY: Right. And here's another question related to glaucoma screening. This has to do with… I would like to see how it performs in small and large discs, where, if accurate, it would be helpful. So if the optic nerve is larger than normal or smaller than normal, does the machine learning take that into account? Or is it kind of limited to the cup-to-disc ratio at this point?
DR JACCARD: So the way we detect an abnormal disc is twofold. As you say, we have the VCDR computation, which is very explicit. It shows you how it computes it, so you can verify it or reject it very easily. Then you have the second bit, which is a classification algorithm, and that is much more black box-y. You get some kind of visualization, but quite often it's not that useful. And that black box-y algorithm will certainly take disc size into account, because it was trained, as I mentioned. When you train your algorithm with a bunch of data, it will learn implicitly to do this, to take into account the size of the disc. So it's not trivial. Because the size of the disc — how do you evaluate this? If you have non-calibrated cameras, where we don't know what a pixel is in absolute measurements, how do you even start? These types of Deep Learning algorithms will typically find a way, and maybe use the vessels as a reference to evaluate some kind of scale. That being said, there is ongoing research in academia, in commercial organizations, and also in what we do, as to how to improve that and make it much more granular and explicitly take into account the size of the disc. So right now, it's probably implicit. It does it somehow; we can't really verify it. But we're going to make it so that at some point we'll have something we can look at, and be told by the algorithm exactly how it came to a conclusion.
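For the explicit half of that pipeline, here is a minimal sketch of a VCDR computation, assuming binary disc and cup segmentation masks have already been produced by a model. The traffic-light thresholds are assumptions chosen to mimic the demo earlier (a 0.55 VCDR drew a yellow caution and 0.75 a red flag); they are not Cybersight's actual cutoffs.

```python
# Hypothetical sketch: vertical cup-to-disc ratio from segmentation masks.
import numpy as np

def vertical_extent(mask: np.ndarray) -> int:
    """Height in pixels of a binary mask's bounding box."""
    rows = np.any(mask, axis=1)          # which image rows contain the structure
    idx = np.where(rows)[0]
    return int(idx[-1] - idx[0] + 1) if idx.size else 0

def vcdr_flag(disc_mask: np.ndarray, cup_mask: np.ndarray) -> tuple[float, str]:
    vcdr = vertical_extent(cup_mask) / vertical_extent(disc_mask)
    if vcdr >= 0.7:
        return vcdr, "red"     # well outside normal limits (assumed cutoff)
    if vcdr >= 0.5:
        return vcdr, "yellow"  # cautionary (assumed cutoff)
    return vcdr, "green"

# Toy masks: a 100-px-tall disc containing a 55-px-tall cup.
disc = np.zeros((200, 200), dtype=bool); disc[50:150, 60:140] = True
cup  = np.zeros((200, 200), dtype=bool); cup[72:127, 80:120] = True
print(vcdr_flag(disc, cup))  # (0.55, 'yellow')
```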
DR NEELY: Right. And I think that's one of the near-future evolutions: not only does an AI system give you a report, but there's an educational component that goes along with it, that says: Okay. This is what we're reporting, and here's the reason why this report is abnormal. Not just highlighting it. So that's a great point and a great evolution that's in the near future. Someone had wondered in this question — they said: I see you submitted both a normal and a suspected abnormal image for AI analysis. Is this required? Could the normal image be supplied as a baseline by Cybersight? No, it's absolutely not required. I simply submitted a normal so that you would see what a normal report looks like. And again, that normal and that glaucoma suspect were not the same patient. I was simply taking two images to show you what the report is like, and to avoid submitting multiple reports for the sake of time during our presentation. So no, you don't need to have a normal in there. All right. Scrolling through more questions here. It says: What is the success rate of Cybersight AI when an image is of optimal quality?
DR JACCARD: So that will depend on which bit we're looking at, whether it's optic disc, DR grading, or just abnormality detection. That's something we want to be much more transparent about in the future. In the update I mentioned, of the artificial intelligence page of Cybersight, that's something we want to be transparent on, and provide figures. Generally speaking, the grading performance for DR, for example, is more or less similar to an expert ophthalmologist or grader. That being said, don't trust what I say, and don't trust people if they say it will exceed human performance. In a very controlled environment, this is what we see. For glaucoma, it's much more difficult to ascertain, because of the variability between experts. But we are within the range of experts, typically. So when we ask multiple experts, typically Cybersight AI is somewhere in the middle of all these experts. So it's within reason, but again, it's very difficult to say exactly how we're doing in comparison, because there's so much variability. We are pretty good for abnormality detection. For example, macular abnormalities: as soon as it deviates from what we expect a normal macula to look like, it's really good at highlighting this. Even very subtle cues as well, such as, for example, a macular hole. Even if it's a tiny macular hole, it will tend to be picked up by the AI. So yeah. All in all, we are pretty much similar to what human experts would do. Though again, take that with a grain of salt. It varies greatly with quality. Performance decreases quickly as quality drops.
DR NEELY: That's our next question, actually. How much does image quality affect the reliability of the results? Well, I mean, it affects it greatly. Right? If you put in a bad picture, you're going to get bad results. That's pretty much automatic. You give me a bad patient history, I'm probably going to give you bad feedback and a bad diagnosis and bad information as a teacher. So I think this is no different. You just have to have as good a quality of images as you can. And I think what's important is: if you put in a bad image, we're not gonna send you a report that has bad information. That bad image is going to be flagged, and you're going to be told: This is a bad image. This information is not reliable to our level of satisfaction. So keep that in mind. Don't over-interpret. We'll give you the report, but we're gonna tell you: this is not a reliable report, because the image is not sufficient. And you can resubmit if you have the ability to get a better image. All right? So it definitely affects it. It has to. All right. This is about artifacts. Humans need two images to identify a camera artifact. Can AI identify artifacts with only one image? So it will identify it, right? But it's not gonna tell you it's an artifact. Correct, Nicolas?
DR JACCARD: Yes. Some artifacts are quite obvious. Take lighting artifacts, where you have a region of your image oversaturated. That will be very easily identified. The edge cases come up with, for example, microaneurysm detection. Let's say you have dust on your lens. The only way for a human to make sure that this is dust and not a microaneurysm is to look at two images of two different patients. If it's in the exact same location, you say: yeah, this must be dust rather than a microaneurysm. We have some clever ways around it, but the answer is you cannot be 100% resilient to it. There are always some cases, dust for example, that are very, very similar in visual features. Even though we have a system in place to avoid this, if it's very similar at the pixel level to a microaneurysm, there's almost nothing you can do, except use two images. Which we can do, but not in the context of Cybersight Consult.
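The two-image heuristic described above can be sketched very simply: a candidate microaneurysm that recurs at (almost) the same pixel position in images of two different patients is probably dust on the lens. The detections, coordinates, and tolerance below are all hypothetical.

```python
# Hypothetical sketch of the two-image dust check.
def likely_dust(detections_a, detections_b, tol=5.0):
    """Return detections from image A that recur at ~the same (x, y) in image B."""
    dust = []
    for (xa, ya) in detections_a:
        for (xb, yb) in detections_b:
            if (xa - xb) ** 2 + (ya - yb) ** 2 <= tol ** 2:
                dust.append((xa, ya))  # same spot in a different patient's eye
                break
    return dust

img_a = [(120, 340), (512, 98), (700, 610)]   # candidate MAs, patient A
img_b = [(121, 338), (300, 400)]              # candidate MAs, patient B
print(likely_dust(img_a, img_b))              # [(120, 340)] -> flag as artifact
```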
DR NEELY: Right. And of course, again, this is the ground floor of this future ability. So you're gonna see the machine learning get refined: the ability to accept a wider variety of images, or to filter out things like artifacts. That's going to come with time. Here's another question. Can AI help in the differential diagnosis of retinal and optic nerve pathologies? Well, I think this is what I would like to see. I think that right now, we have a great screening tool. As a clinician, one of my goals with our future development is… Okay. We've screened and we've found this anomaly, and we've given you some baseline grading information. Now, Mr. AI: how about giving me a list of differential diagnoses I need to consider? And then once I've picked a diagnosis, based on these suggestions you're giving me, what is the best way for me to treat this? If any of you use the Wills Eye Manual, that's what the Wills Eye Manual does. You input a finding, and then it suggests possible diagnoses, and ways to rule in and out each of those diagnoses. And then once you've narrowed it down, you get a treatment algorithm. And that's where I see us going in the future: assisting your care of the patient, once the diagnosis is made. So I think that's gonna be terribly exciting. All right. You touched on this, Nicolas. Who can use this service, and how can I get access? So we have different kinds of accounts.
DR JACCARD: Yep. So when you go and sign up for Cybersight, first of all, you need a Cybersight account. When you sign up, you have the choice between different account types, and the base level is basically just accessing the learning material, which doesn't require any more validation steps. I think you just get the account created there and then. Then you can choose online courses plus consultation, which gives you access to Cybersight Consult, the telemedicine platform, and this will require an approval process. For various reasons, we need to check that you are a clinician or health care professional, or someone with the background to make the most of that information, and use that information in a way that will not be detrimental. So if you don't have a Cybersight account yet, create one, and take the option that you want access to the consult application as well. If you already have a Cybersight account that is only for learning materials, but you also want access to consults and the AI functionality, then just contact support, which I think is [email protected], and then we'll walk you through the different steps.
DR NEELY: Right. So we can help you get converted to a consult user. Now, keep in mind the consultation service that Orbis provides is restricted to certain countries. The purpose of what Orbis does with the consult service is to assist physicians without mentors in low to middle income countries. And so if you’re in the United States, you have access to all the learning materials. But you don’t have access to the consult system. Will it be possible… All right. Here’s a question from the Philippines. Will it be possible to use the AI-only feature by uploading sample photographs from the internet? So that I can practice? That’s kind of a mixed bag. That doesn’t usually work. Does it, Nicolas?
DR JACCARD: Well, I think that if you find images that are of high enough resolution… For example, there is this one image that everyone uses — I think it's from the Wikipedia article on fundus photography. Literally, the fundus article on Wikipedia has a normal eye, and then it has, I think, a glaucomatous eye, or a DR eye. Everyone uses these two images when they test an AI system. They are very high resolution and good quality. So although the system is mostly designed for you to upload your own data, so that you use the system, get the report, and then that report informs your decision when it comes to your patients, I think it's acceptable, because there's no mentor involved in the loop by default. It's acceptable to upload example data if you want, for example, to get used to the system. Though, in order to avoid maybe overloading the system with data that's not real patient data, you can also go to cybersight.org, and I think if you go to consult and artificial intelligence, it will show you how to use the system, at least for patient cases, and how to interpret the AI report as well. So if it's one or two images, go for it, and get a feel for the system. But please don't upload a thousand, you know, random images.
DR NEELY: Yeah, don’t crash our system. But feel free — I think it’s perfectly reasonable to play around with it, and do a couple samples, so you can see how it works. Depending on the images, it may or may not work. Depending on your resolution, et cetera. So…
DR JACCARD: Just to clarify, please don't do this on a consult — a patient case. Do it only on AI-only cases.
DR NEELY: That’s a good point, yeah. We don’t want it going to one of our mentors, and sending them distracting things. All right. Next question. Oh, and I just undid it. Okay. The question was: Is your AI system able to do OCT images or follow OCT images over time?
DR JACCARD: So we have OCT capabilities. As in, we can, for example, take an OCT cube. We can work with a single slice, but typically it will be a whole cube — macula-centered cubes, though not disc-centered cubes. We can detect stuff like AMD, wet and dry. We can quantify the thickness of layers. The caveat is that this is not available on Consult yet. Just because Consult is really, as Dan mentioned, for low to middle income countries, where OCT is not as prevalent. So we want to really focus on fundus photographs for now, and get the system working as well as possible with 2D images, before we start rolling out the full thing with OCT. Because that adds increased complexity and everything. So the answer is yes, it is possible, and there is a lot of work going on with OCT in addition to fundus. But for now, we're only supporting fundus photography.
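As a rough illustration of the layer-thickness capability mentioned here (a hypothetical sketch, not the actual Cybersight OCT pipeline): once a segmentation model has produced the upper and lower boundary row for a retinal layer in each A-scan, thickness is just their scaled difference.

```python
# Hypothetical sketch: retinal layer thickness from segmented boundaries.
import numpy as np

def layer_thickness_um(upper_px: np.ndarray, lower_px: np.ndarray,
                       axial_um_per_px: float) -> np.ndarray:
    """Thickness in microns for each A-scan in a B-scan."""
    return (lower_px - upper_px) * axial_um_per_px

upper = np.array([100, 102, 101, 99])    # e.g. ILM row index per A-scan (toy data)
lower = np.array([160, 162, 163, 158])   # e.g. RPE row index per A-scan (toy data)
print(layer_thickness_um(upper, lower, axial_um_per_px=3.9))
# -> [234.  234.  241.8 230.1]
```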
DR NEELY: Right. So that's to be determined. But obviously we all have that goal in mind. We all think that that would be useful. So… Yes. Expect changes constantly. And that will be one of them, I'm sure. Here's another question/comment: I think that combining smartphone fundoscopy with Cybersight AI would be a game changer for ophthalmology, and a cheap one too. And they also thank us for helping out. Well, we totally agree with you. And trust me, we realize that's what needs to happen. Right? I mean, you see it on these fundus camera questions — only about half of us have access to a good fundus camera. What we really need — and there are several variations out there — is the ability to take an easy, fast, accurate smartphone fundus photograph. Either a fundus or a slit lamp photograph. And once we get a product that's really good, that's consistent, and that we can all access, that is going to be a game changer for ophthalmology. And I think right now that's the major hurdle with a fair amount of telemedicine for ophthalmology: we need something with a smartphone to image the eye. And we're getting there. But I would say that a lot of the stuff is not quite ready for prime time yet. It's coming. Let's see. How do you think this will shape ophthalmic training for the next ten years? That's an interesting question. You see it already, with the simulation tools that are out there. Nicolas, do you have any thoughts on how it's gonna shape training?
DR JACCARD: So that's something we are working on and thinking very hard about. As I mentioned during my slides, I think diagnostics and prediction is one area where AI is useful, and this is kind of the low-hanging fruit. Which is why this is probably the first thing that people will think about, and productize, and sell. I do think there's a lot of potential for AI in mentoring. As we mentioned, if you have an algorithm that's granular enough to show you exactly why it came to a conclusion — for example, going back to that disc size for glaucoma detection — if your algorithm is so granular that it tells you exactly, step by step, how it came to that conclusion, rather than telling you yes or no, I think this could be a very powerful training tool. Because then you could imagine anyone in their spare time just uploading a bunch of images and learning from this. And even that is still kind of low-hanging fruit. I think there's much more that can be done, going from personalized courses that adapt to your needs and experience, to creating course material on the fly, based on exactly what you need to learn. There's a lot of AI work these days about generation of images. Can we generate, like, random images of DR patients? If you say: oh, I want to see what progression from moderate to severe to proliferative DR looks like, you will have a hard time, because finding these images in the wild is very difficult. But I'm sure there will be AI algorithms that will allow you to do that very easily, in a manner that is very realistic. So a long way to go. I think we haven't even started to explore what can be done for training. But I'm sure it will be a huge thing in the future. Not just diagnostics and clinical decision support.
DR NEELY: Right. And I think a lot of the AI training is going to be clinically oriented. So you're submitting a consult on Purtscher's retinopathy, and the AI system recognizes that you put in the word Purtscher's, and it brings up — here's a nice summary review article on Purtscher's retinopathy. So there are lots of ways to link information into the medical diagnosis process. And I think that's what AI is going to do. It's going to call up educational material and present it to you, based on what you're currently doing. This is just a general background question about how AI was developed. Nicolas, can you give a short summary of when this started, and how long it took to come up with a viable system?
DR JACCARD: So… Very briefly, skipping a lot of important steps here: machine learning has been a thing for a very long time. I mentioned Deep Learning and convolutional neural networks, which are used in modern machine vision, computer vision, AI systems. These artificial neural networks were described in the '80s and '90s. It's just that it turns out you needed a lot of data and computational power to make good use of them, which we didn't have back then. Which is why machine learning progressed until 2010 or so, and there was a lot of progress, but it was very slow progress, where every year you were chipping away at these benchmarks. Chipping 1% away every year. And then you have Deep Learning and convolutional neural networks that came around in 2012, and this is where everything changed. And AI — even though it's not true AI, it's a good approximation of it. I would say 2012 is when all these things I described in my slides, the key landmark moments — it all came from that point in time, in 2012, when Deep Learning was really introduced to the world. And nowadays, everything uses Deep Learning. Every single AI system. And it's part of our lives now. When you go to Google Assistant or Siri, or do a Google search, it's all Deep Learning-based nowadays. So it's a huge change. And this is definitely, I think, what we will see in the future, when we have true AI at some point: 2012 will probably be the date in the history books as when it all started. For better or worse. I hope the outcome will be good with AI in the long term. But that will probably be when it all started, yeah.
DR NEELY: Right. All right. I've scrolled through to the end of the questions. And I think at this point, we've answered most people's questions directly, or at least something related to them. So I'll start to bring us to a close here. I will end on this last question, though. Can we depend fully on this? Or do we need to recheck? Well… Nicolas, do we just take it at its word?
DR JACCARD: I would say no, and this is why, for example, to get access to it, you need some kind of clinical background. Because we want to make sure that you have the capability to be critical about it. No AI system is perfect. Certainly ours isn’t. So there’s always a chance of false positives, false negatives. So having this as an aid — this is not autonomous decision making.
DR NEELY: Right. You're still the doctor. You still have to take the information. You have to make a decision. This is another tool. And like any tool, it's not going to be perfect. You have to take it for what it is. And ultimately, take all the information, and the fact that the patient is there with you, and make your decision. Okay? And when you're not sure about that, that's where the mentorship side comes in, and the ability to kick it over to someone with more experience. And so we do want everyone to use this feature. We want you to have realistic expectations. We want you to take good fundus images, so that we can help you as much as possible. And we want you to stay tuned. Because this is just going to change month by month. So don't be frustrated that this is all we're offering now. Because this is just going to change, and it's just gonna be part of what we do. So… Nicolas, thank you for your great presentation there at the beginning, and certainly for all the hard work that you're doing for Orbis/Cybersight. This is going to be an amazing adventure. I appreciate it.
DR JACCARD: Thank you.
DR NEELY: With that, we’ll close out our webinar and I thank everyone for their attendance today.
April 5, 2021