Visual Tech & AI Advancements in Materials Science Structuring New Material Discovery & Development
Historically, materials discovery has relied on slow, costly trial-and-error methods. Artificial intelligence and machine learning are now transforming this process by enabling faster, more creative solutions to long-standing challenges. Generative AI models are expanding beyond composition to include processing, form and real-world constraints – accelerating the creation of new molecules, chemicals and materials across industries.
At our 11th Annual LDV Vision Summit earlier this year, a panel of distinguished experts explored the future of materials science powered by visual tech and AI:
Dr. David Smith is the Associate Chair and James B. Duke Distinguished Professor of ECE at Duke University, where he also directs the Center for Metamaterials and Integrated Plasmonics. He holds adjunct positions at UC San Diego and Imperial College London.
Dr. Kristin Schmidt, Strategy Assistant for Accelerated Discovery & Vice-Chair of Physical Sciences Council at IBM, leads the Accelerated Materials Discovery group at IBM Research Almaden.
Dr. Chirranjeevi Gopal, Co-Founder & CTO at Mitra Chem, the first lithium-ion battery materials manufacturer focused on shortening the lab-to-production timeline by over 90%, addressing the largest barrier to innovation: R&D and scale-up speed.
Dr. Amanda Petford-Long, Director of the Materials Science Division at Argonne National Lab. As Argonne Distinguished Fellow in the Materials Science Division, she participates in a BES-funded research program, and is currently leading the Argonne Microelectronics Institute.
Moderator: Ash Cleary, Associate at LDV Capital.
Check out the recording or read our lightly edited transcript below.
Ash: Let’s start with quick introductions from everyone.
Amanda: I’ve been at Argonne National Lab for nearly 19 years. Visual tech is something we're using all the time. We rely heavily on visual data and we need AI to process it, understand it and then do something useful with it.
Chirranjeevi: Our key differentiator at Mitra Chem is using machine learning and other tools to accelerate the process from developing a material in the lab to scaling it for manufacturing. By training, I’m a chemist, but most of my career has been at the intersection of data science and building physical devices.
David: My area of research is metamaterials – artificially structured materials – so I’m a bit adjacent to traditional materials science. I focus on the confluence of advanced materials and artificial structures. Our work led to the development of metamaterials when we created a material with a negative index of refraction back at UCSD. It became a poster child for metamaterials – a material that can’t exist in nature, first predicted by the Russian physicist Victor Veselago in the 1960s. That breakthrough, followed by the invisibility cloak we developed at Duke, helped spark the field of metamaterials. Around 2012, I began focusing on practical applications and spinning off companies related to metamaterials. I’m particularly interested in how we replicate material functionality with artificial structures.
Kristin: I'm also a chemist by training but have ventured into the world of AI. My group develops AI for scientific applications, specifically identifying materials of concern and replacing them with safer alternatives. We also focus on building AI models that collaborate with human experts – because none of us can do it alone.
Ash: What present-day trends and opportunities in leveraging visual tech for new material discovery and development are exciting you the most?
Chirranjeevi: I can offer a unique perspective because we are both making physical materials and, as entrepreneurs who've taken investor money to build a revenue-generating business, we constantly balance taking as little risk as possible on the path to revenue – which typically means relying on tried-and-tested, human intuition–guided synthesis approaches – with doing something differentiated, like using AI-powered methods to speed up development cycles. Compared to five years ago, when techniques like Bayesian optimization were among the few ways to analyze data and accelerate development, I’m now especially excited about incorporating generative AI advances to sift through existing literature. This can potentially compress years of work by graduate students or scientists, helping to translate what's already been done into a tangible product.
The key is leveraging AI to augment human intuition – not just to discover new materials but to synthesize and make them real, whether at lab scale or pilot scale.
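To make the Bayesian optimization Chirranjeevi mentions concrete, here is a minimal sketch of the idea under invented assumptions: a single process knob (an annealing temperature we made up for illustration) and a noisy, synthetic quality score standing in for a real experiment. This illustrates the technique in general, not Mitra Chem’s actual pipeline.

```python
# Minimal Bayesian optimization loop for a materials process, assuming one
# hypothetical knob (annealing temperature) and a scalar quality metric.
# run_experiment() is a synthetic stand-in for a costly lab run.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def run_experiment(temp_c):
    """Stand-in for an expensive lab experiment: returns a noisy quality score."""
    return -((temp_c - 700.0) / 150.0) ** 2 + rng.normal(0, 0.05)

def expected_improvement(X_cand, gp, y_best):
    """EI acquisition: prefer candidates likely to beat the best result so far."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

X = rng.uniform(400, 1000, size=(4, 1))           # a few seed experiments
y = np.array([run_experiment(t) for t in X.ravel()])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
candidates = np.linspace(400, 1000, 500).reshape(-1, 1)

for _ in range(10):                               # 10 sequential "lab runs"
    gp.fit(X, y)
    ei = expected_improvement(candidates, gp, y.max())
    x_next = candidates[np.argmax(ei)]            # most promising next run
    y_next = run_experiment(x_next[0])
    X = np.vstack([X, [x_next]])
    y = np.append(y, y_next)

print(f"Best temperature so far: {X[np.argmax(y)][0]:.0f} °C")
```

The expected-improvement acquisition is what makes the loop data-efficient: it balances exploring untested temperatures against exploiting the best region found so far, which matters when every data point costs a real lab run.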
Amanda: At Argonne National Lab, we work on everything from basic science to applied research and industry collaboration. I resonated with something Akhila said earlier: the best path forward is combining humans and machine learning. We still need scientists to come up with the big questions and define the problems we want to solve. Experimental validation is also absolutely essential, as she mentioned.
We often start by using AI to down-select potential materials. Then, for the materials we want to use – for example, in a microelectronics component – we’ll model and simulate them. After that, we take those materials to tools like our synchrotron or an electron microscope to image them and understand their behavior through in situ experiments. We feed that data back into our models and refine them in a circular, iterative process. That’s proving to be the best approach. But you can’t do it without scientists – at least not yet – and I hope that remains the case for a while, or we’ll all be out of jobs.
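As a rough illustration of the down-select step Amanda describes, the sketch below trains a model on previously measured materials and uses it to pick the most promising handful from a large untested pool for follow-up simulation and imaging. The descriptors and property values are synthetic placeholders, not Argonne data.

```python
# Toy down-select: score a pool of candidate materials with a model trained
# on past measurements, then send only the top few to expensive follow-up.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)

# Past measurements: simple (made-up) composition descriptors -> property
X_known = rng.random((200, 5))
y_known = X_known @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(0, 0.1, 200)

model = GradientBoostingRegressor().fit(X_known, y_known)

# Large pool of untested candidates; keep only the most promising handful
X_pool = rng.random((10_000, 5))
scores = model.predict(X_pool)
top_k = np.argsort(scores)[-10:][::-1]   # indices of the 10 best predictions
print("Candidates to simulate and image next:", top_k)
```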
Ash: Are there any specific examples you can share from your microelectronics research?
Amanda: One key area we're focusing on is processing, which has traditionally been done relatively slowly. We're working on things like atomic layer deposition, where atoms are deposited one layer at a time. That used to be done on a very small scale, but now we can do it at wafer scale. This allows us to design materials in a much more controlled way. We study the fundamental processes through simulations and then conduct experiments to see what happens. From there, we can translate the findings into scalable techniques. One example is with 2D materials – for instance, a 2D semiconductor, which could be used for heat transfer. Understanding the underlying processes lets us use AI to design scalable fabrication methods. Patterning may also come into play.
Kristin: What excites me the most is developing systems that can work collaboratively with human experts. We're trying to build systems that can capture some of that intuition. Everyone who is an expert in the field knows that, over time, we develop a certain intuition about processes – sometimes understanding things we can’t even explain. The idea is to teach AI like we would teach a student: to work with us and learn from us. That, to me, is an exciting direction in AI right now – where AI could become a partner, not just a tool or something meant to replace us.
David: AI tends to be useful when you have copious amounts of data to sift through. One area where I could imagine it being useful is where a material’s performance and yield are sensitive to how it’s fabricated and manufactured. That process can generate a lot of data, and AI might be able to sort through it and identify optimal processes more quickly. I see it as more of an agent that supports the people doing the empirical and experimental work.
Ash Cleary, Associate at LDV Capital
At LDV Capital, we believe that visual technologies will continue to be the crux of advancements in materials science, across nanomaterials and metamaterials, resulting in the transformation of various industrial, medical and consumer applications. Embedded AI will enable materials to reconfigure themselves autonomously in response to their environment. As sensing abilities and algorithms improve, the full potential of navigating the compositional and configurational possibilities of materials will catalyze widespread transformation for optics, biomarker diagnostics, robotics, 3D printing and more.
Check out Ash’s article exploring the future of materials science developments powered by visual tech and AI.
Let us know if you’re building – or thinking of building – a startup leveraging visual technologies and AI in the materials discovery & development space.
Ash: I guess on the topic of large-scale data and models – it could be argued that recent AI and machine learning breakthroughs in materials science were kickstarted when Google DeepMind published their GNoME model back in November 2023, which uses deep learning to discover new materials faster than ever. Since then, more large-scale models and tools have emerged, like Microsoft’s unveiling of MatterGen and MatterSim late last year. Do these new large-scale models and tools excite you all in terms of pushing the field forward? Or should we, on the other hand, be cautious about how successful these large-scale or “one-model-fits-all” approaches can be?
Amanda: I’d be careful with a “one-model-fits-all” idea. I’ve had computer scientist colleagues say, “There must be a single equation that tells you everything about material science.” And sure, there is – there’s Schrödinger’s equation, the basis of quantum mechanics. But it’s practically useless for understanding the properties and behaviors of materials in the ways we need. We also need to be looking at all the data out there – in the massive number of scientific papers published over a century or more, depending on how far back you go and which materials you're looking at. It’s a question of how you mine all that information and extract the insights you need. It comes down to having someone who can interpret what you find – to ensure you don’t go down some rabbit hole that leads nowhere. There’s a ton of data out there, but the challenge is: can you make sense of it?
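For context, the equation Amanda is referring to is the time-independent Schrödinger equation, which in principle determines a material’s electronic structure exactly:

```latex
\hat{H}\,\Psi = E\,\Psi,
\qquad
\hat{H} = -\sum_{i}\frac{\hbar^{2}}{2m_{e}}\nabla_{i}^{2}
 + \sum_{i<j}\frac{e^{2}}{4\pi\varepsilon_{0}\,\lvert\mathbf{r}_{i}-\mathbf{r}_{j}\rvert}
 + V_{\mathrm{ext}}(\mathbf{r}_{1},\dots,\mathbf{r}_{N})
```

Her point is that solving it exactly for the roughly 10²³ interacting electrons in a real material is hopeless, which is why approximate and data-driven methods exist at all.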
Chirranjeevi: It’s important to remember that materials are not just one type. You’ve got biomaterials, molecular materials – and when you look at energy, you're talking about systems-level materials. If you peel open a battery, it's a big object, right? Inside, there are 10 active materials. If you peel open those electrodes and look under a microscope, you see tiny particles. Zoom in further, and you see the chemistry. So you’re dealing with a length scale that spans 10 orders of magnitude – and the timescales for processes span multiple orders too.
Discovering a material that works in isolation is great, but the real challenge lies in how it performs in a system. That’s the real bottleneck.
A-Lab combines automation and artificial intelligence to speed up materials science discovery. © Marilyn Sargent/Berkeley Lab
I commend the work happening at A-Lab. I know Gerbrand Ceder well – he’s the one who pushed forward the work on autonomous labs in collaboration with DeepMind. Those efforts, along with MatterGen and MatterSim from Microsoft, are great for reducing the cost of data points. As referenced in Ash’s article, true materials innovation can require 100,000 iterations. Each one costs around $1,000 – that’s a $100 million discovery effort. Where these A-Lab efforts shine is in lowering the cost per data point. But even then, there’s still a huge gap between identifying a promising molecule and making it work. That leap requires systems-level models – which we still don’t have. At companies like ours, we’re working with materials that were postulated 15 years ago, and they’re still not application-ready. There's a reason for that.
Ash: Chirru, you've co-founded the first lithium-ion battery materials manufacturer focused on dramatically shortening the lab-to-production timeline, and addressing R&D and scale-up speed as barriers to innovation. What future challenges do you anticipate needing to overcome in the next five years to continue scaling innovation? And what role will visual tech play in addressing those?
Chirranjeevi: In R&D, the real challenge is iteration speed. It’s about reducing the cost of acquiring each data point. We operate in a small data regime. In the last four years, we’ve collected maybe 3,000 to 4,000 samples as a company. It takes time and money to generate that data. So how do you learn as quickly as possible from a limited data set? The biggest dimensionality reducer is physics. Then comes data. That’s what we’ve been focusing on. The next frontier for us, in terms of scaling innovation, is: it works in the lab, but how do I make “chocolate chip cookies” for 100,000 people? That’s a completely different problem than baking at home.
You have to use machine learning and generative AI to sift through data quickly, spot outliers, and increase operational efficiency.
Process scale-up and optimization are how you reduce failures at scale and minimize costs. I’m excited – we have a small internal effort toward applying AI to reduce the translation gap from lab to scale.
David: An interesting challenge for AI that I’ve been thinking about is superconductors. For decades, the transition temperature was stuck at a certain level – until copper oxides were discovered. That discovery shocked everyone and led to a big jump in Tc. No one was thinking of looking at those materials. But once that breakthrough happened, there was steady, incremental improvement over time because researchers now knew what kinds of things to investigate. I see AI being useful for accelerating that incremental improvement – shortening the timeline drastically. But whether AI can predict the next big leap, the next dramatic jump in Tc? That’s where I draw the line between what AI can do and what it can’t do yet.
Kristin: Even with incremental progress, we often don’t know which parameters control the process – and we don’t always capture them. It goes back to sifting through literature from the past decades or even a century. Sometimes, the key variables weren’t documented – things like temperature, humidity, or environmental factors.
That’s why another exciting area is automated labs packed with IoT devices that capture everything. AI is good at finding correlations we don’t see. So it becomes more about the processing, the next incremental step.
It remains to be seen whether AI can generate the next big leap in discovery, because that’s truly outside the bounds of the data it has seen. Most breakthroughs have been accidental. But maybe, one day, AI will make some happy accidents too.
Ash: To that point, Kristin – would an AI vision system be sitting on top of what’s occurring in the lab and then interpreting results? Or where do you see vision playing a role?
Kristin: I don’t necessarily mean optical vision, but rather a network of IoT devices capturing all sorts of physical parameters – temperature, humidity, lighting. For example, in semiconductor labs, special lighting is needed because some materials are sensitive to specific wavelengths. Today, you can capture all that and build digital twins – essentially duplicating your fab or lab in an AI system. Then, if you change parameters, the system can predict how the real-world setup would react.
And if you combine that with materials models – because like Chirru said earlier, no material in material science exists as a single molecule – it’s all systems. It’s about formulations and how materials behave together, depending on the process.
AI needs to understand that entire ecosystem, not just one piece of the puzzle.
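To ground the digital-twin idea Kristin describes, here is a toy sketch: fit a surrogate model on logged process conditions, then query it with a changed parameter to predict how the real setup would respond. The conditions, the target property and the underlying relationship are all invented for illustration.

```python
# Toy "digital twin": a surrogate model fit on logged lab conditions that
# answers what-if questions about the real setup. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Logged conditions: temperature (°C), humidity (%), lighting wavelength (nm)
X_log = np.column_stack([
    rng.uniform(18, 26, 500),
    rng.uniform(20, 60, 500),
    rng.uniform(350, 450, 500),
])
# Hypothetical process outcome (e.g. a film thickness) with measurement noise
y_log = (0.4 * X_log[:, 0] - 0.05 * X_log[:, 1]
         + 0.01 * X_log[:, 2] + rng.normal(0, 0.2, 500))

twin = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_log, y_log)

# "What if we raise humidity to 55% under otherwise typical conditions?"
print(twin.predict([[22.0, 55.0, 405.0]]))
```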
Amanda: The systems part is important. Everything's happening at the interfaces between materials, and those can be quite unpredictable depending on how they’re processed. If you don’t understand that, you won’t understand how to assemble the different layers in your semiconductor device. You won’t understand how to manage heat. And you certainly won’t understand how batteries work, because it’s all about interfaces. It’s a matter of doing that – and it is iterative, in a way – but the question is: what information can you capture? What visual information? It could be hyperspectral data – similar to what Dr. Brandon Fields mentioned earlier about soil samples – we capture that same kind of information from materials. That way, we know how they behave spatially, and hopefully we’ve also recorded key parameters like temperature and humidity – things that probably control the process. Those are the details graduate students never write down because they don’t think they’re important, and then they can’t reproduce their work. Hopefully, AI can do better by capturing all that.
Chirranjeevi: In all these discovery efforts, materials are often thought of just in terms of chemistry – as a collection of A-B-C molecules. But the same A-B-C composition can behave differently depending on how it’s made. Morphology is important. I can take the same material and design it for a car battery, a watch battery, or a vacuum cleaner battery – different use cases – simply because of morphology. It’s a visual property. A lot of the work Argonne National Lab does is about how to use both X-rays – which aren’t in the visible range – and visible data to understand how morphology maps onto real-world applications. This was an expensive endeavor five years ago, but with advances in AI, that cost has dropped significantly.
I’m excited that five to ten years from now, we’ll be able to “see” how materials form as we process them – like when we cook them in the oven – and use that vision to optimize the process. What used to be an opaque, black-box process is now becoming something we can visualize and analyze in real time.
Amanda: Not just how it’s processed, but how it behaves, too.
David: What Kristin said made me think – it’d be great to have an agent looking over your shoulder while doing empirical work. Everything matters, but we only think to record certain things. If something was watching, we might discover hidden variables we didn’t realize were important. We used to write everything down in a lab notebook – that’s kind of old-fashioned now. But having that process observed and analyzed makes a lot of sense.
Kristin: We tried that. We called it “The Lab That Learns.” You wear glasses with a little camera that records what you’re seeing. Of course, you run into privacy issues pretty quickly – but it was a great demonstration.
The technology is already here to capture everything you do in a lab. We just need to bring it all together and build AI on top of it.
Amanda: The pandemic helped with that. Kristin mentioned X-ray sources – our big X-ray facilities and electron microscopes at Argonne had to be made more automated because no one could be onsite. So, people accessed them remotely. That’s when we started having the systems record every single piece of information as metadata with each collected image. Now we collect way more metadata than before. So if we’re doing real-time analysis – say, looking at how a chip works under an X-ray beam – we’ve got all that info stored and ready for AI to come in and tell us what to do next.
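A bare-bones version of that practice – writing every known condition out as metadata alongside each collected image – might look like the following sketch. The file layout, field names and conditions are hypothetical stand-ins, not Argonne’s actual data pipeline.

```python
# Minimal sketch: save each detector frame with a JSON sidecar recording
# every acquisition condition we know. Field names are illustrative.
import json
from datetime import datetime, timezone
from pathlib import Path

import numpy as np

def save_frame_with_metadata(frame: np.ndarray, out_dir: Path, **conditions):
    """Write the frame plus a JSON sidecar of everything we know about it."""
    out_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S%f")
    np.save(out_dir / f"frame_{stamp}.npy", frame)
    metadata = {
        "acquired_utc": stamp,
        "frame_shape": list(frame.shape),
        **conditions,  # beam energy, sample temperature, humidity, ...
    }
    (out_dir / f"frame_{stamp}.json").write_text(json.dumps(metadata, indent=2))

# Example: a blank 512x512 frame tagged with the conditions that are so
# often left out of lab notebooks.
save_frame_with_metadata(
    np.zeros((512, 512)), Path("run_042"),
    beam_energy_keV=12.0, sample_temp_K=295.0, humidity_pct=38.0,
)
```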
David: I was involved in a biotech spinout, and I still have nightmares from running reactions – getting a result one day, then nothing the next, even though you think you're doing the same thing. If we had that kind of detailed record, maybe we could trace the cause. It’s interesting.
One potential use for AI is in the manufacturing stage – not just generating the material, but figuring out how to make it manufacturable. That includes analyzing the processes involved. One reason we work on metamaterials is because you can take marginal materials – ones that are interesting but not very strong – and enhance their properties by creating specific structures around them. So, part of my motivation is to use metamaterials to make advanced and emerging materials more practical.
Ash: What are those leapfrog advancements that feel inevitable, just a question of when?
Kristin: For us, it’s the convergence of different kinds of computing. We started with high-performance computing – big CPU systems – to simulate materials. Then AI entered with GPUs. Quantum is still emerging, but it’s coming. Once we can bring all three together – classical, AI and quantum – and converge them into a single overarching compute system, that could be huge. Then AI agents and humans can use it to understand materials. AI doesn’t truly understand – it finds patterns. But with this convergence, we might get to understanding. That would be amazing, though I’m not sure when it’ll happen.
Ash: Are there any quantum computing innovations happening now that excite you, or is it still years away?
Kristin: We do have quantum computers, and there's a lot of research focused on finding the killer use case – proving that quantum can do something classical computers can't. Materials are usually the poster child because they’re inherently quantum mechanical systems. So, it's a great domain to demonstrate that value – it just hasn’t been shown yet. But we’ll get there.
David: When it comes to imaging and materials, I see a convergence between sensing – vision – and perception – AI processing. The better these two work together, the more efficient the system. You don’t want a traditional architecture that captures images and then offloads data to a processor that eats up tons of power. If we could shift some of that to hardware – using advanced materials to embed intelligence closer to the sensor – we could dramatically improve efficiency. Eventually, we might even develop biomimetic systems using these new approaches.
Chirranjeevi: We’ve focused a lot on new technologies, but I want to talk about a problem that I’d love to see solved – synthesis by design. Traditional materials fields are pretty old school, and it’s hard to convince a plant worker or lab tech to use an AI-generated material. So there’s a cultural barrier as well as a technical one. But if we can show a real use case – where AI not only designs a molecule but also synthesizes it under real-world constraints, and it works end-to-end – that would be a huge win. That’s possible within the next 10 to 20 years.
Ash: Can you briefly define “synthesis by design”?
Chirranjeevi: Synthesis by design means going beyond saying, “Here’s a good molecule for an application.” It’s about: how do you make it? As someone said earlier, manufacturability is a big deal. Things fail on the production line for very real reasons – reasons that are often hard to model in AI. So, what starting materials do I need? What are the exact steps to synthesize it? It’s like: can AI not just say that chocolate, butter and sugar make something tasty – but tell you how to make perfect chocolate chip cookies? That’s the challenge.
Amanda: We’ve been playing with large language models (LLMs) to do what you're describing. We asked a couple of LLMs to design a new ferroelectric material for memory applications. We told the model what we wanted, and it gave us a list of temperature regimes, pressure ranges – it even suggested cool stuff that we later validated. It was all solid information. Then we asked for an experimental workflow to image the material’s properties, and it did surprisingly well. I was skeptical at first, but it worked. The one thing it struggled with was citing sources – it hallucinated scientific papers. The content was accurate, but it would get either the title, the authors, or the journal right – but never all three. So there's still room for fact-checking.
Ash: Why LLMs, specifically?
Amanda: It was part of a challenge called "A Thousand Scientist AI Day" across the national lab complex. For that event, we got to experiment with several LLMs. I didn’t expect them to work, but I was intrigued and impressed.
Ash: There’s certainly a lot to look forward to! Thanks again to all our fantastic panelists for such an engaging conversation. At LDV, we’re excited about the future of materials science, discovery and development powered by visual tech!
Here’s what our panelists said about their experience of participating in our 11th Annual LDV Vision Summit:
“I was honored to participate in the 11th LDV Vision Summit and to meet some of the visionary entrepreneurs building businesses powered by visual tech & AI. The Materials Science Panel was great fun and I agree that visual technologies will have a significant impact on the future of materials discovery and development. The conversation with the other panel members and attendees on AI and computer vision advances that are driving our community was very inspiring.” — Dr. Amanda Petford-Long, Director of the Materials Science Division at Argonne National Lab
“As a newcomer to the LDV Vision Summit, it was refreshing to see the breadth of challenges being tackled by entrepreneurs using AI + visual technologies. Specifically, in the space of new materials innovation, there was clear resonance around how the next ten years will mark a change from AI-assisted discovery to AI-led synthesis by design.” — Dr. Chirranjeevi Gopal, Co-Founder & CTO at Mitra Chem
“It was a privilege to join the panel on Visual Tech Advancements in Materials Science Structuring New Material Discovery & Development. The discussion highlighted how AI is not only accelerating the discovery of new materials, but also transforming lab and production workflows—enabling the design of safer, more sustainable compounds that truly perform in end-use applications.” — Dr. Kristin Schmidt, Strategy Assistant for Accelerated Discovery & Vice-Chair of Physical Sciences Council at IBM
“Thank you again for the invitation to participate on this panel! The fusion of science, technology and business topics was a really unique aspect of the summit and something very much of value. On the specific topic of artificial intelligence and machine learning in materials research: AI/ML is a very interesting and emerging tool for materials research! For those areas of materials research that search across massive materials databases looking to find or synthesize specific properties, AI/ML provides a potentially unparalleled accelerator that could drive many other technologies that depend on advanced materials. But even more interesting is the prospect that AI/ML may one day have enough intelligence to predict new materials based on actual physical reasoning, now almost exclusively in the domain of physicists and chemists. The experts you brought together for this discussion were top notch and brought really thought-provoking perspectives to the table.” — Dr. David Smith, Associate Chair, James B. Duke Distinguished Professor of ECE at Duke University