Despite the ever-growing trove of human knowledge on which they’re trained, systems like ChatGPT seldom surprise you. They dutifully answer questions, mock up bland emails, even churn out paint-by-numbers fiction. What they don’t often supply are original ideas or boundary-pushing creativity.
But as AI companies talk a big game about supercharging scientific discovery, a growing discussion has focused on how the new generation of AI can be taught to “think” in novel ways.
That’s one of the ambitious goals of a startup called Lila Sciences, born out of Flagship Pioneering, an investor in and incubator of biotech companies that also produced Moderna. With a stated mission to “build scientific superintelligence,” Lila has raised $200 million in seed funding. It has created autonomous labs—AI Science Factories, as it calls them—that use foundation models and robotics to train on the scientific process, from hypothesis to experimentation.
Headquartered in the biotech hub of Boston, Lila Sciences is set apart both philosophically and geographically from the “current paradigm” in the AI industry, according to the company’s CTO of AI research, Andy Beam. That is, Beam said, the prevailing belief that superintelligence might be reachable by scaling up models trained on massive amounts of online data. Even breakthrough reasoning models that have shifted the focus from pure scale in recent months are concentrated on “easy-to-verify domains like math and coding,” he said.
“There’s this palpable hope, specifically in the Bay Area, that if you keep doing more of that, you’re going to get these generalized AGI superintelligence capabilities that will then result in scientific breakthroughs,” Beam said. “And we just have a pretty firm belief that you actually have to learn to do science. You actually have to do experiments…We have this pretty core belief that internet data is only going to get you so far.”
Instead, Lila’s models train through hands-on experimentation. The AI generates hypotheses, then tests them in automated labs. Robotic arms whizz around and samples speed down magnetic conveyor lanes as the AI tests synthesized materials or nucleic acids. It collects data and observes along the way.
“Even large language models that have read the entire scientific literature, what they could do is give you a set of possibly interesting questions, or possibly interesting hypotheses, most of which are going to be false,” Beam said. In Lila’s system, he added, “the model can ask a question, the experiment gets run, the model then observes…The model gets to decide what to do with the result of the experiment, updates its understanding of how the world works, and then, importantly, gets to ask the next set of questions.”
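To make that loop concrete, here is a rough, hypothetical sketch of what an ask-run-observe-update cycle could look like in code. Every name in it (ScientistAgent, run_automated_experiment, and so on) is an assumption made for illustration, not Lila’s actual system or API.

```python
# Hypothetical sketch of a closed-loop "AI scientist": the model proposes an
# experiment, an automated lab runs it, and the result is fed back so the
# model can update its beliefs and choose what to ask next.
# None of these names correspond to Lila's real system.

from dataclasses import dataclass, field

@dataclass
class LabRecord:
    hypothesis: str
    result: float

@dataclass
class ScientistAgent:
    history: list = field(default_factory=list)

    def propose_hypothesis(self) -> str:
        # In a real system this would be a foundation model conditioned on
        # the literature plus everything observed so far in self.history.
        return f"candidate-{len(self.history)}"

    def update(self, record: LabRecord) -> None:
        # Fold the new observation back into the agent's working model.
        self.history.append(record)

def run_automated_experiment(hypothesis: str) -> float:
    # Stand-in for the robotic lab: synthesize, assay, return a measurement.
    return float(hash(hypothesis) % 100) / 100.0

agent = ScientistAgent()
for _ in range(5):  # each cycle: ask -> run -> observe -> update -> ask again
    h = agent.propose_hypothesis()
    outcome = run_automated_experiment(h)
    agent.update(LabRecord(h, outcome))
```

The point of the sketch is the shape of the loop, not the contents: the model, rather than a human, decides which experiment the lab runs next based on everything it has already observed.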
An open end: At the center of this approach is a concept called open-endedness, a sub-field of artificial intelligence loosely concerned with devising ways to make AI and machine learning systems adapt, innovate, and chase novelty or the abstract quality of “interestingness,” as some research papers put it.
Kenneth Stanley, Lila’s SVP of open-endedness, has been studying these questions for close to two decades. Stanley wrote a book in 2015 called Why Greatness Cannot Be Planned that discussed his work on novelty search algorithms—systems that seek out a variety of outcomes as opposed to any particular objectives. In the book’s more fanciful portions, Stanley extends these findings into a life philosophy of sorts around the value of serendipity and following one’s interests. Stanley also previously led a team of open-endedness researchers at OpenAI until 2022.
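For readers who want a feel for the idea, here is a minimal, purely illustrative novelty-search loop. The specifics, a toy behavior descriptor and a nearest-neighbor novelty score, are generic assumptions, not code from Stanley’s papers or from Lila.

```python
# Toy illustration of novelty search: candidates are rewarded for producing
# behaviors far from anything in the archive, not for hitting a fixed target.
# Purely illustrative; not taken from Stanley's published implementations.

import random

def behavior(candidate):
    # Map a candidate solution to a simple "behavior descriptor" (a 2D point).
    return (sum(candidate) % 10, len(set(candidate)))

def novelty(descriptor, archive, k=3):
    # Novelty = mean distance to the k nearest previously seen behaviors.
    if not archive:
        return float("inf")
    dists = sorted(
        ((descriptor[0] - a[0]) ** 2 + (descriptor[1] - a[1]) ** 2) ** 0.5
        for a in archive
    )
    return sum(dists[:k]) / min(k, len(dists))

archive = []
population = [[random.randint(0, 9) for _ in range(5)] for _ in range(20)]

for generation in range(10):
    # Keep the most novel candidates rather than the "best" ones.
    scored = sorted(population, key=lambda c: novelty(behavior(c), archive), reverse=True)
    survivors = scored[:5]
    archive.extend(behavior(c) for c in survivors)
    # Mutate survivors to form the next generation.
    population = [
        [g if random.random() > 0.2 else random.randint(0, 9) for g in parent]
        for parent in survivors
        for _ in range(4)
    ]
```

The design choice that matters is the absence of an objective: the archive grows toward whatever hasn’t been seen before, which is the "variety of outcomes" Stanley’s book describes.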
“There’s open-endedness all around us as human beings. And I’ve been trying to understand, algorithmically, how do you build systems that have this property?” Stanley told Tech Brew. “Lila is placing open-endedness really at the heart of what it’s trying to do, but then combining it with…this opportunity to test it against the real world.”
Open-endedness has come to be seen by some researchers as a core tenet of the quest for artificial general intelligence—AI that can perform on par with or better than humans at most tasks. In a position paper last year, Google DeepMind’s open-endedness team claimed the concept would be “an essential property of any artificial superhuman intelligence.”
“This is at the absolute heart of what AI or AGI needs to be,” Stanley said. “If you can’t actually hack being genuinely creative, you’re not at the human level, no matter what else you can do. In some ways, the most salient thing that we leave behind is our legacy of ideas. We’re not remembered for our test scores. We’re remembered for the inventions we made.”
Lab grab: Lila is far from the only company attempting to turn generative AI into a tool for scientific discovery. OpenAI has framed its latest reasoning models as able to pull from various scientific disciplines and synthesize ideas, according to The Information. The company has been pitching them to the national labs through a partnership signed in January.
“I think that AI tools will mean we can accomplish, at some point, a decade’s worth of scientific progress in a year for the same cost or even less,” OpenAI CEO Sam Altman said during a recent Senate hearing.
Google recently debuted a multi-agent co-scientist platform meant to provide a resource throughout the entire scientific process. Anthropic announced an AI for Science program earlier this month. A host of startups are focused on building AI tools for drug discovery.
Lila is focused mostly on the life sciences and materials science right now, two fields where AI has guided major breakthroughs. Beam said Lila’s labs have already “created new molecules that have never existed before” and “exceed previous states of the art.”
On a bigger scale, open-endedness is still very much an open question, Stanley said; creativity is “not something that happens reliably right now” in AI.
“A lot of the focus on reasoning—which is that ‘Here’s a problem, now solve it’—is somewhat orthogonal to the issue of having an interesting idea without knowing what the problem is. Like, what should we even be studying? What should the hypothesis be? Where’s an interesting direction to go?” Stanley said. “I think in general that we have an opportunity, because of our focus on science and our ability to ground out the ideas that we hypothesize in real experiments, to make progress in this fundamental way, and to actually learn how to actually have interesting ideas.”