How seriously should we take tech bro manifestos?
Also, what do the tech bros — the billionaires and centimillionaires who made their fortunes founding, running, or investing in tech companies — have to say about the future of biology, and in particular about AI's role in regenerative medicine and stem cell research?
A fair bit, as it turns out, especially if you read between the lines. And in some cases, even if you don’t. Last week, news broke that OpenAI, the company behind ChatGPT, has worked with Retro Biosciences to make cellular reprogramming more efficient via AI. Details are sparse for now, but this is clearly just the start.
In parallel with the development of these increasingly advanced AI models, it’s recently become a rite of passage for billionaires with a tie to the AI space to publish expansive AI manifestos, outlining their visions for the utopia AI will usher in (and, yes, sometimes highlighting the potential downfalls too). Some of these touch on regeneration.
Disappointing as it may be, these tech bros and their manifestos will likely have far greater influence on the future than your visions or mine. So I believe their writing merits some attention — and lots of scrutiny.
Eyes closed, head first, can’t lose: the quintessential “Techno-Optimist Manifesto”
Written by Marc Andreessen, who co-founded Netscape and made much of his fortune investing in companies like Facebook, “The Techno-Optimist Manifesto” sets out to provide a utopian, exciting vision of what AI will bring but fails at almost every turn. There has been more than enough critical coverage of this piece, which started the long chain of manifestos (and I highly encourage you to read some of the manifesto critiques for a good laugh).
However, its most fatal flaw is Andreessen’s inability to see nuance. In his writing, problem A can be solved by solution B, with all of the struggles in getting there and all of the ensuing problems seemingly forgotten. “We had a problem of isolation, so we invented the Internet,” he writes — never mind that isolation has never been top of mind for tech founders, or that isolation has probably increased in part because of the Internet.
Marc Andreessen on AI and health
In his most explicit mention of the impacts of AI on health, Andreessen writes,
“[w]e believe Artificial Intelligence can save lives – if we let it. Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence working on new cures.”
So far so good. This claim is widely agreed upon, and there’s already good evidence for it. In clinical or near-clinical settings, AI models can help with problems ranging from helping clinicians make better decisions when selecting donor lungs for transplant to assigning nurses in the emergency department more efficiently. In the stem cell field specifically, AI models are being used to more easily devise ways to make pluripotent stem cells self-organize in specific patterns and to accurately determine whether a cell has differentiated, potentially enabling better in vitro tissue formation in the future.
But then things go off the rails in this manifesto: “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.” Here the lack of nuance appears again. It’s difficult to see the use of this frankly bizarre reasoning as anything more than a ploy to garner support for the development of AI without any guardrails.
Evaluating potential risks of AI
As highlighted above, there are countless positive use cases of AI. But that doesn’t mean the risks can be ignored. In the regenerative medicine space, unrestrained AI poses two distinct categories of risk: worsening the field’s misinformation crisis, and introducing or exacerbating problems in the research itself. The latter can include obscuring the reasoning behind a model’s decisions from scientists and introducing bias (though methods are being developed and applied to address both issues).
As the common saying goes, garbage in, garbage out. If companies train AI models on faulty or biased data without taking the time to correct this issue first, the model will accordingly be faulty or biased, a fact that may not become apparent until it’s too late. And as Google found out, there are no easy band-aid solutions to eliminating this bias. Ultimately, when this happens in a chatbot, the consequences are less severe. In a scientific context, it’s a different story.
Dario Amodei’s tech bro manifesto: Machines of Loving Grace
Lauded as one of the more reasonable manifestos, in the 13,000-word “Machines of Loving Grace” Dario Amodei — the CEO of Anthropic, the company behind the popular Claude chatbot — outlines what he thinks an AI-led future will look like. He begins by criticizing the “excessively ‘sci-fi’ tone” some of his peers adopt and promises to provide more concrete details in his essay.
Amodei dedicates an entire section to “biology and health.” Right off the bat, he disagrees with the notion that AI is only for data analysis and says that it can eventually act as a “virtual biologist who performs all the tasks biologists do.” He later states that the AI could be akin to a principal investigator. Here we’re starting to stray from the science itself and into the world of AI replacing jobs. But it’s worth taking a look, because it could have large impacts on how stem cell science is done in the future.
Ultimately, whether an AI model can function as a PI will depend in large part on AI’s ability to have truly “creative” thoughts. In many ways, science is iterative and builds on previous work, but there is clearly ingenuity involved in the greatest scientific discoveries. Could those be conceived of by AI models? A shrug is the best I can offer for now. In some creative settings AI does seem to excel. In a business class innovation contest, for instance, GPT-4 was able to beat students. On the other hand, in a short-story idea generation setting, humans using AI produced the best ideas, but the AI-generated ideas were found to be less diverse than the humans’.
Creativity, AIs, and PIs
A recent preprint on AI and innovation from the end of 2024 also shows interesting results — ones that are promising for worker productivity and slightly depressing for the humans doing the work. The study was conducted at a materials science industry R&D lab after an AI assistant was introduced to help ideate new material structures with desired features.
Scientists using the AI assistant discovered 44% more materials than those who weren’t, though the productivity boost was concentrated among scientists who were good at assessing the validity of the AI’s output; those who weren’t saw little benefit. More depressingly, the scientists reported lower job satisfaction when using the AI assistant, regardless of how well they were able to use it.
So yes, maybe AI can be “creative.” But perhaps at the cost of us enjoying the work we do. Plus, as a podcast interview with the paper’s author discusses, there’s no way to know what impact AI could have on those groundbreaking discoveries that come along once every few decades, or even once a century.
Would those still happen if AIs replaced PIs or will AI models stifle truly innovative work? And this is all without getting into other aspects of PIs’ day-to-day work like the vital in-person, mentor-mentee dynamics.
AI, clinical trials, and disease
Amodei also touches on clinical trials a few times, mentioning that while he thinks many of the legal constraints imposed by clinical trial rules are necessary, others make the system inefficient. Though Amodei holds a relatively moderate position, many in the Silicon Valley sphere reject health-related regulations altogether (read: biohacking). And in the regenerative medicine field, which is rife with attempts to prey on patients and expose them to risky procedures, regulation is critical (and still lacking in many ways).
Finally, Amodei begins to list off specific problems he thinks AI can help with including “very effective prevention and cures for genetic disease.” When it comes to preventing genetic diseases, he mentions improved embryo screening, but also acknowledges the ethical concerns that come with this. It’s not hard to imagine how AI could complicate already complex matters by introducing further biases into the process — biases that, as discussed above, may not come to light until the model has already been deployed.
Amodei specifically mentions stem cells in the footnotes too, writing that AI could lead to “[b]etter control over stem cells, cell differentiation, and de-differentiation, and a resulting ability to regrow or reshape tissue.” This prediction doesn’t seem far off with examples we’ve already talked about like AI models aiding in creating specific self-organization patterns for cells. Due to these models’ ability to take in vast amounts of data and patterns and make predictions, they could one day help realize decades-long dreams of in vitro organogenesis — ultimately addressing problems like the organ donor shortage.
Honorable mentions on tech bro manifestos
We’ve taken in-depth looks at two of the more interesting AI manifestos out there, but there are quite a few others, including ones by Sam Altman (OpenAI CEO) and Vinod Khosla (venture capitalist). We won’t get into those here, but know you’re not missing out on much.
Most recently, Altman reflected in a blog post on the immense growth OpenAI (and especially ChatGPT) has seen over the past few years. The most notable quote was likely the following: “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.” I tried my hardest to avoid using the word “dystopian” throughout this piece — it’s imprecise and largely unhelpful. But I’m not sure there are any other words I could use here.
Altman also briefly touches on science, writing that future AI “tools could massively accelerate scientific discovery and innovation.” This is a refrain we’ve seen many times by now, and one that most would agree with. The biggest question remains what that accelerated process will look like. To what extent will humans be in the driver’s seat? How meticulously will we address potential biases and harms before deployment? Who will get access to the benefits that come from these discoveries?
A roadmap for the future of AI, stem cells included
The answers to the previous questions are complex. But they in part rely on a better idea of what we want the future to look like. Despite the relatively frequent mentions of biology in these manifestos, I ended up feeling like there was a big gap worth filling by scientists themselves, a need highlighted by Amodei in his manifesto too.
Visions for the future don’t need to be full of hyperbole and grandeur as so many of the manifestos discussed above are. But there is value in bringing together subject matter experts to dream up big visions and collaborate on creating suggestions for what the stem cell field could look like in the next 25 to 50 years. Even if every prediction turns out to be wrong, thinking through the potential of AI to contribute to the field, alongside where things could go wrong, would be a useful thought exercise.
Several great reviews have been written about where AI could and should go in the regenerative medicine/stem cell field. Yet a larger-scale effort including scientists from across fields, AI experts, ethicists, and perhaps policymakers is lacking and could help the field zoom out and reflect on the core goals moving forward.
But my real takeaway? Someone, please teach these tech billionaires to write more concisely.
Editor’s notes
- While tech bro Bryan Johnson may not have written a manifesto, he’s producing a manifesto-sized collection of data on his own longevity efforts.
- It’s also worth looking into Harvard Professor David Sinclair and longevity. While he may not be a tech bro exactly, some of his claims about anti-aging resonate with the tech bro manifestos discussed above.