AI's Mythos Moment - And What It Signals
A recent set of headlines around Mythos, a model developed by Anthropic and covered in Scientific American, sparked reactions ranging from fascination to alarm.
At a high level, Anthropic has built a model that is extremely advanced at identifying and exploiting software vulnerabilities - so much so that they have chosen not to roll it out publicly. In testing, it demonstrated the ability to perform high-level hacking tasks that, until very recently, no AI system could achieve.
So the fear that has now risen to AI centre stage (temporarily replacing those about sentience, values alignment, or hallucination) is about cybersecurity.
To be honest, this is a long-overdue reckoning with the security side of the AI equation - one we’ve known was coming.
Since the explosion of GPT-3.5 onto the scene in late 2022, we have been in a deeply gung-ho phase of AI innovation. The race has been to capture market share and user adoption, even putting these aims above profitability.
Very few have paused to meaningfully consider guardrails, safety, security, data privacy, and broader system-level implications, even though these were well understood as necessary from the outset.
Adoption has blasted past the ability of these safeguards to catch up.
And so, of course AI is having a PR crisis.
Shadow usage abounds - cheating, plagiarism, cognitive offloading with little sense-checking, perverse retention incentives, sycophancy, hallucinations. These are early signals of a technology that has scaled faster than our ability to responsibly integrate it.
We forget that we are still in a very early moment in the technology cycle.
So where is the good in this?
Finally, issues of safety and security around AI will start to be taken more seriously.
This is typically how it works. It often takes a moment of crisis (real or perceived) for the response to catch up. We saw this with nuclear technology. We saw it with biotech. It is not ideal, but it is consistent with how we operate as a species.
The result is that the solutions layer begins to receive attention – and funding.
At Fifth Era, we have already had a lens on this within our AI investment strategy, and have been leaning into teams and managers operating at this intersection. For example, we’ve partnered with funds that focus explicitly on AI safety, security, and resilience, and are in touch with others who are taking a more humanist lens - backing technologies that are aligned with long-term human flourishing.
At the company level, this is translating into a new and increasingly critical layer of the stack. Goodfire, for example, is working on interpretability and control of AI systems, and has already reached unicorn status - an early signal of how important this category is becoming. Others like HiddenLayer and Robust Intelligence are focused on securing and stress-testing these systems before they scale. And there are many others.
As capital allocators, we should all ensure our investments are building toward a safe and flourishing human future - and that the teams and founders we back can articulate that case clearly and with conviction. In fact, it should be a prerequisite for unlocking funding. This is how we avoid looking back on this moment with the sense that we were not discerning enough.
More broadly…
It is easy to over-index on worst-case scenarios. It is equally easy to dismiss developments like Mythos as overblown.
The useful stance is somewhere in between.
There are real risks, uncertainties, and unknowns. And there are teams actively building to resolve them - if we can find and amplify their work.
AI is a powerful tool. And, like a hammer, it can be wielded to build a house or as a weapon. It depends on whose hands it is in.
Through conscious capital allocation, we need to accelerate the individuals and collectives using it for good faster than those who may use it for nefarious purposes or gain - whether that is political oppression and surveillance, economic monopoly, scaling the impact of warfare and weaponry, or something worse.
There is one other positive macro dynamic worth noting.
These risks are forcing unprecedented global cooperation at scale.
We have seen this before with nuclear and biotech. Rivals sitting across the table from one another. Policymakers, technologists, cybersecurity experts, social scientists, ethicists, and operators coming together to navigate shared risk.
We have always “othered” as a species - along religious, ethnic, socioeconomic, and gender lines. But in a way, with AI we have now created an existential “other” that makes humans the “self.” And that will require us to come together, more than ever, to shape the future we are moving into.
The Case for Hope
A large part of my work right now is making sense of developments like this - cutting through both fear-based narratives and overly simplistic optimism. That’s the intent behind The Case for Hope: grounded perspectives and lucid reframes on AI and the future we’re entering - what’s real, what’s noise, what the headlines consistently miss.
I’ve recently begun sharing more publicly to bridge the gap between those building at the frontier of innovation and the broader public, which too often encounters these developments through fragmented or sensationalised narratives. I’ll explore these ideas in greater depth through an upcoming podcast, ahead of a book later this year.
I’m also hosting another Fifth Era webinar at the end of this month on AI's security and safety. I hope you can join me as I expand on how Mythos could catalyze the growth of cybersecurity and AI.
This is an important moment to lean in.
We are deciding – collectively - what kind of future we build with the most powerful innovation we’ve ever created.
Let’s choose wisely.
Thank you for reading.
Tallulah Le Merle
AI Partner
About Fifth Era
We are entering a period of unprecedented innovation we call the Fifth Era, and every industry and business will be dramatically impacted. We focus on investing into these new innovations. Fifth Era specializes in investment strategies that construct portfolios of hard-to-access funds and direct investments - AI Access and Blockchain Coinvestors. Fifth Era's investment strategies are now in their 12th year, and to date we have invested in a combined portfolio of 1,500+ companies and projects, including 80+ unicorns. In the US we are an SEC-registered investment advisor, in the UK an FCA appointed representative, and our funds are registered in Switzerland. Visit us at www.FifthEra.com to learn more.
SEC Registration does not imply a certain level of skill or training.
“Focused on Innovation”