Despite the ever-growing trove of human knowledge on which they're trained, systems like ChatGPT seldom surprise you. They dutifully answer questions, mock up bland emails, even churn out paint-by-numbers fiction. What they don't often supply are original ideas or boundary-pushing creativity. But as AI companies talk a big game about supercharging scientific discovery, a growing discussion has focused on how the new generation of AI can be taught to "think" in novel ways.

That's one of the ambitious goals of a startup called Lila Sciences, born out of Flagship Pioneering, an investor in and incubator of biotech companies that also produced Moderna. With a stated mission to "build scientific superintelligence," Lila has raised $200 million in seed funding. It has created autonomous labs—AI Science Factories, as it calls them—that use foundation models and robotics to train on the scientific process, from hypothesis to experimentation.

Headquartered in the biotech hub of Boston, Lila Sciences is set apart both philosophically and geographically from the "current paradigm" in the AI industry, according to the company's CTO of AI research, Andy Beam. That is, Beam said, the prevailing belief that superintelligence might be reachable by scaling up models trained on massive amounts of online data. Even the breakthrough reasoning models that have shifted the focus from pure scale in recent months are concentrated on "easy-to-verify domains like math and coding," he said.

"There's this palpable hope, specifically in the Bay Area, that if you keep doing more of that, you're going to get these generalized AGI superintelligence capabilities that will then result in scientific breakthroughs," Beam said. "And we just have a pretty firm belief that you actually have to learn to do science. You actually have to do experiments… We have this pretty core belief that internet data is only going to get you so far."

Keep reading here.—PK