News & Highlights
Topics: Diversity & Inclusion, Five Questions, Technology
AI and Robotics: Long-Distance Brain Surgery Within Reach
Five Questions with Future Remote Neurosurgeon Destiny Green.
Destiny Green envisions a day when a neurosurgeon in Boston could use remote-controlled intelligent robotics to remove a blood clot from the brain of a patient in, say, Africa, where few neurosurgeons practice. The goal is telesurgery on the brain, performed across continents and perfected by artificial intelligence.
Green’s research, positioned at the intersection of AI, robotics, and neurosurgery, aims squarely at making that vision a reality.
An MD/MS candidate at the Mayo Clinic’s Alix School of Medicine, Green traces the roots of her current research track to her 2023 summer internship at Dana-Farber Cancer Institute in the oncology lab of Rani George, MD, PhD, through our former Visiting Research Internship Program (VRIP). The experience helped Green more clearly understand her true calling.
We caught up with her three years after her summer at Harvard Medical School, as she prepares for the upcoming neurosurgery residency match following a competitive application cycle.
How has your experience as a VRIP intern in 2023 influenced what you’ve done since?
VRIP showed me some of the different areas within neurosurgery where I could take my research, and which of my skill sets had the most potential. My research that summer was focused in a basic science lab – and I’m not a basic science researcher, so the experience was very challenging. The learning curve was steep, but I continued to show up every day and didn’t give up.
Ultimately, the internship allowed me to self-reflect on the trajectory of my research career and actually pushed me away from basic science into computational research. It opened doors both to understanding where my interests truly lie and to a network of individuals within the Harvard Medical School and Mass General Brigham research and clinical communities that has been significantly beneficial to my academic career.
So, because of VRIP, you shifted your research focus to computational science and artificial intelligence as it applies to neurosurgery. What excites you about that?
This is going to be the next era not just in neurosurgery, but medicine as a whole. In just the past few years, we’ve seen a significant boom in the utility of artificial intelligence within healthcare for diagnostics, prediction tools, workflow assistance, and more. At my institution, the Mayo Clinic, we have robots running around assisting patients, delivering blankets or food, among other tasks.
“I think that’s the future of AI, and specifically robotics, in neurosurgery: to make it safer, more accurate, and more precise.”
What really excites me is the ability to augment the surgical and robotic tools that we already use in neurosurgery. Currently, many of these tools are not necessarily intelligent or smart. They don’t have the ability to learn, to take in data from the procedure and the surgical technique and adjust for future surgeries. I think that’s the future of AI, and specifically robotics, in neurosurgery: to make it safer, more accurate, and more precise.
That’s where I see this integration really taking off – not necessarily replacing surgeons, but augmenting what we do in the operating room to the best of our ability.
You spent three months in the Zurich laboratory of Bradley Nelson, PhD, working on robotics. What was your focus?
Bradley Nelson runs the Multi-Scale Robotics Lab at ETH Zurich, which is specifically aimed at coupling robotics with telesurgery technology. The goal is to build an avenue to perform thrombectomy across great geographic distances.
Low- and middle-income countries may not have the resources or surgeons with enough expertise for complicated cases, so being able to connect with surgeons from across the globe can allow local physicians to get advice or gain expertise on how to proceed with specific procedures.
Robotic endovascular neurosurgery is not necessarily new, but its application to telesurgery is definitely novel. To further advance this application, I worked with the Nelson lab to build an artificial intelligence model, trained on data from their robotic system, to find the path from the origin to the target. In this case, the target was a large vessel occlusion within the brain’s vascular system.
I was able to do this pretty much from scratch, using skills that I learned from my artificial intelligence degree here at Mayo Clinic. As a self-taught coder with no formal education in computer science, to be able to pull those skill sets together and apply them to a real-world project that could have major benefit for patients in the future was extremely fulfilling. And I was glad it worked – we were able to show the potential of this technology in this vascular model.
So the end product is remote brain surgery?
Absolutely. They’ve already started doing demos using vascular models in places [where neurosurgeons are scarce]. The summer I was there, [several lab members] took a trip to Ghana, where they have been doing community work and offering [to demonstrate how the system works]. They taught courses to introduce the technology to the community and teach them how to program and engineer the robotics systems needed for the telesurgery.
The impact of this work on global health and global neurosurgery can and will be significant, both directly from the advancement of this robotics system and from the work the Nelson lab has been doing within communities to bring this technology to their doorstep. What they’ve done in the past few years has been truly outstanding; I was glad to contribute in a small way.
The primary drivers of this work are the shortage of neurosurgeons worldwide and the need for affordability. The technology I worked with is low-cost and very affordable. Even a small community hospital in Ghana would be able to afford it.
You’ve earned a master’s in artificial intelligence during a time when AI elicits sharp controversy. How do you address the fears that people have?
“For many patients, AI is kind of a black box; we don’t really know what’s inside. I think the best way to combat that is to create a transparent box.”
I think the best way to address them is just to be open. One of the ethical lessons from my AI degree was that so many people hear the buzzword “AI” and may not understand what it means. For instance, if you are in the emergency room and the physician comes in and says, “We’re going to use this AI diagnostic tool to check if you’re having a heart attack,” as a patient, you may wonder what that means. What data is it taking from me? How is it using that data? Where is that data stored? And how is that data being used within the clinical system or the healthcare system?
For many patients, AI is kind of a black box; we don’t really know what’s inside. I think the best way to combat that is to create a transparent box. Physicians and clinicians can be more open about it. They can explain what an AI diagnostic tool is, that it will take your age, race, heart rate, EKG, chest x-ray results, and whatever else is available, compile that information, and give us a score indicating the likelihood that you’re having a heart attack, or whether other diagnoses should be considered.
That’s just a very simple example. The idea is to be more open about how the model works, what type of data is being taken, where that data is going, and how it’s being used. I think we can help clarify AI to patients and make them feel safer as these tools are utilized more in the healthcare system and our hospitals.
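The “transparent box” Green describes can be sketched in code: a risk score whose inputs and weights are fully visible, so a clinician can show a patient exactly which pieces of their data contributed and by how much. The features, thresholds, and weights below are entirely hypothetical, chosen only to illustrate the idea of transparency, not any real clinical tool.

```python
# Toy "transparent box" risk score, for illustration only.
# All features, thresholds, and point weights are hypothetical.

def heart_attack_risk_score(patient):
    """Return (total score, per-feature contributions) so every input's
    effect on the result is visible rather than hidden in a black box."""
    rules = [
        ("age over 60",          patient["age"] > 60,          2),
        ("resting HR over 100",  patient["heart_rate"] > 100,  1),
        ("ST elevation on EKG",  patient["ekg_st_elevation"],  3),
        ("abnormal chest x-ray", patient["cxr_abnormal"],      1),
    ]
    # Keep only the rules that fired, so we can explain the score.
    contributions = {name: weight for name, fired, weight in rules if fired}
    return sum(contributions.values()), contributions

patient = {"age": 67, "heart_rate": 110,
           "ekg_st_elevation": True, "cxr_abnormal": False}
score, why = heart_attack_risk_score(patient)
print(score)  # total points for this patient
print(why)    # which inputs contributed, and by how much
```

The point of the sketch is not the score itself but the second return value: the model can always answer “why did I get this number?”, which is the kind of openness Green argues clinicians should offer patients.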

