CUEA Annual Dinner 2023
In September 2023, CUEA members joined the CUEA committee for an evening of reconnecting, reminiscing and research at Christ's College, Cambridge. The CUEA Annual Dinner 2023 was a black-tie event for all CUEA members, with a special address from Professor Colm Durkan, the recently appointed Head of the Department of Engineering, looking ahead to the Department's 150th anniversary, and a Research Insights talk from Nandini Shiralkar.
The event began with a drinks reception in the Fellows' Garden, with some parting words from our outgoing President, Brian Phillipson, followed by a three-course meal in a private dining room at the college. The after-dinner talk from Nandini was so well received that we have included a transcription of it below for the benefit of those who were unable to attend.
Existential Risk Alliance - Speech by Nandini Shiralkar
Good evening everyone. My name is Nandini, and I’m a Master’s student at the Engineering Department. The Department has always held a special place in my heart. Standing in front of all of you, it's humbling to realise that the very footsteps I've been following belong to many of you in this room. And that makes this the perfect moment to reflect together on both our incredible progress as engineers, and the great responsibility that progress entails.
Before diving into the specifics of my talk, I want to set the stage by asking you all to visualise for a moment. Imagine, if you will, that you are standing on a scenic overlook gazing out at a vibrant, sprawling city - one that represents all of humanity's highest achievements throughout history. Below you see gleaming skyscrapers, bustling transit lines, parks filled with people from every corner of the world living and working in harmony.
Now zoom your perspective outward, gaining altitude until the entire globe comes into view. As you look down upon the earth, take in all 8 billion souls who currently call this planet home, and reflect on all of the ingenuity, creativity, compassion and toil that generations have poured into building modern civilisation into what it is today. From the first factories of the industrial revolution to the Moon landing to the smartphones in every pocket in this room, humanity has come so far.
But hold this breathtaking panorama in your mind for just a bit longer, and allow your gaze to drift towards the future - decades, centuries from now. There, the scene wavers and blurs as uncertainty takes hold. For while the present offers clarity, the choices we make today will echo across generations to shape whether our descendants look upon a world nurturing hope or confronting hardships on a scale never seen before. Today, right now, we stand at a precipice.
You may think a scenario with such uncertainty sounds like science fiction. But as the engineers in the room - and that is, well, all of you - will attest, even seemingly minor decisions can have unforeseen impacts, especially when it comes to complex technological systems. It is with this immense responsibility in mind that I founded the Existential Risk Alliance, or ERA for short. Existential risks are potential events that could lead to the permanent and widespread devastation of humanity or even its complete extinction. At their core, these are not just large-scale risks; they are unique in their global and irreversible nature. For engineers, it's akin to a system failure, but on a scale where the system in question is our entire global civilisation or the human species itself.
Now, why is this relevant to us? Because, in many cases, engineers are at the forefront of both the creation and mitigation of potential x-risks. The technological marvels we engineer can propel humanity to unparalleled heights, but they also come with responsibilities we must confront.
This evening, I will be focusing on two main sources of existential risk, and discussing some of the research done by ERA’s fellows this summer to mitigate these risks. Firstly, Artificial Intelligence. As AI advances, we risk creating systems that, while incredibly intelligent, may not share our human objectives. Picture a superintelligent entity prioritising its assigned task with relentless efficiency, but lacking any comprehensive understanding of, or concern for, broader human values. Such an entity, given a seemingly benign task like “optimising a city's transport”, might do so at the expense of human welfare, recreating our urban landscape without any regard for our well-being, simply to achieve its narrow goal.
There has been considerable discussion around these issues in the mainstream media of late, especially in the run-up to the AI safety summit being organised by the UK government later this year. However, ensuring robust and responsible development of advanced AI has been a core focus of research at ERA for much longer, since well before ChatGPT captivated the global imagination.
To bring these abstract concerns into clearer focus, I'd now like to share research insights from the work being done by ERA fellow Claire Dennis. Claire gained a deep understanding of the technological intricacies of AI Safety during her time at ERA, and her experience of having worked as a Foreign Service Fellow at the US Department of State has given her a profound appreciation of the nuances of international diplomacy. This places her in a unique position to bridge the gap between AI developers and policymakers, and contribute to x-risk mitigation efforts in this focus area.
At ERA, she produced a working paper titled “Towards a UN Role in Governing Foundation Artificial Intelligence Models”, which has already been published on the UN’s website. Foundation models like GPT-4 are trained on massive datasets to be adaptable for diverse downstream tasks. However, their rapid development poses unique challenges. These models exhibit emergent capabilities that cannot be predicted during training because of their systemic opacity - what we commonly call the “black box” problem. There is also a woeful lack of consensus on technical standards for comprehensive safety evaluations.
Once deployed, foundation models could enable large-scale misuse, from disinformation to cyberattacks. Their easy proliferation across borders makes global accountability and governance difficult, and dual-use economic applications further disincentivise safety precautions.
Claire analysed proposals for international AI regulatory institutions, potentially based on the likes of the IAEA (International Atomic Energy Agency) and IPCC (Intergovernmental Panel on Climate Change). She concluded that no pre-existing model can be directly applied due to limitations around enforcing standards, responding rapidly to exponential technological growth, and governing private sector developers concentrated primarily in the US. However, these institutions offer useful precedents to inform a multifaceted global governance strategy.
Given the UN's constraints, Claire recommended it should focus on moral authority and norm-setting, rather than technical oversight of private sector developers who lack incentive for strong international cooperation at this stage.
Specifically, she advised that the UN's AI Advisory Body should:
Convene inclusive consultations on best practices for AI safety assessments and audits. We must press the “stop button” before losing control, and formalising evaluation techniques is crucial for auditing opaque models.
Develop guidelines for public-private coordination that blend insights from developers with values from the public to yield pragmatic, legitimate policies. Corporate transparency and cooperation should be incentivised, and the UN could support national legislation enabling sanctions to enforce compliance.
Provide policy guidance and training, especially for less-resourced nations, to expand oversight capabilities globally.
Claire advocates leveraging the UN's convening power to lay the groundwork for responsible AI governance while technical oversight capabilities grow. Her research contributes to the UN's consideration of AI governance strategies and the agenda of the newly announced Multistakeholder Advisory Body on AI. Through ERA, she has solicited feedback from many notable individuals in the field, including Yoshua Bengio (2018 Turing Award winner), and she remains dedicated to working out how we can even begin to regulate something as complex as AI.
Another global issue demanding similar care and foresight is extreme climate change. As the planet's thermostat malfunctions, we face a world where feedback loops, such as melting ice reducing Earth's reflectivity or thawing permafrost releasing vast methane reserves, threaten to push our environment into an irreversible tailspin. These feedback loops can amplify global warming, putting ecosystems, crop yields, and infrastructural resilience at risk.
Models of physical climate systems contain deep uncertainties that emerge from complex, chaotic dynamics. We therefore need to evolve our decision-making to view these climate models not as the only scenarios to consider, but as a very small sample from a range of possible futures. We should plan across that wider range of futures and take a precautionary approach which is robust to the worst-case scenarios we cannot rule out.
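To make that precautionary logic concrete, here is a minimal sketch (in Python) of how one might compare candidate adaptation policies across a handful of plausible futures and pick the option that is robust to the worst case we cannot rule out. The policies, scenarios and loss figures are purely illustrative assumptions, not outputs of any real climate model or of ERA's research.

# Illustrative only: hypothetical policies and scenario losses, not real model output.
# Each scenario is one plausible future; each policy has an estimated loss under that future.
scenario_losses = {
    "business_as_usual":   {"central_projection": 2.0, "rapid_ice_loss": 9.0, "permafrost_methane": 12.0},
    "moderate_adaptation": {"central_projection": 3.0, "rapid_ice_loss": 5.0, "permafrost_methane": 7.0},
    "precautionary":       {"central_projection": 4.0, "rapid_ice_loss": 4.5, "permafrost_methane": 5.0},
}

def worst_case(losses):
    # Loss under the worst scenario we cannot rule out.
    return max(losses.values())

# Optimising only for the central projection favours doing the least...
best_on_central = min(scenario_losses, key=lambda p: scenario_losses[p]["central_projection"])
# ...whereas a precautionary (minimax) rule picks the policy whose worst case is least bad.
most_robust = min(scenario_losses, key=lambda p: worst_case(scenario_losses[p]))

print(best_on_central)  # business_as_usual
print(most_robust)      # precautionary

The point of the sketch is simply that the ranking changes once the full range of futures is taken into account: the policy that looks best under the central projection is not the one that is robust to the tails.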
Bringing this talk closer to home, I now want to spotlight research on this very topic done by ERA fellow Sam Stephenson, who is a PhD student in Professor Allwood’s Use Less Group at the Engineering Department. Sam has been exploring potential pathways to societal collapse from the climate impacts expected under a 2°C warming scenario.
Currently, our understanding of how initial climate disruptions might cascade across interconnected social, economic and political systems is rather limited. Hazards, once paired with corresponding vulnerabilities and exposures, could escalate and cause existential catastrophes.
Sam interviewed eight specialists and had them construct causal loop diagrams modelling how climate shocks in localised systems spill over into other fragile domains. He presented them with scenarios - a megadrought in Brazil, a heatwave in South Asia, a hurricane hitting New York. The causal loop diagrams developed through Sam's expert elicitation process revealed some critical insights into how climate impacts could cascade across systems in surprising ways. One notable finding was that climate events tend to reproduce their effects at ever larger scales, spreading outwards like ripples.
For example, local crop failures or water shortages due to a sudden drought or flood may start small in one region. But the diagrams showed how those initial production losses could very quickly spread and magnify as they cascaded through the global food system. A disrupted wheat harvest in France may translate into grain shortages in major importers like Egypt, and so on. Before long, what started as a localised issue balloons into a worldwide disruption of food availability and pricing.
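One way to picture the causal-loop idea is as a small directed graph of influences through which an initial shock propagates and compounds. The sketch below, again in Python, traces a hypothetical shock through such a graph; the nodes, edge weights and time horizon are invented for illustration and are not taken from Sam's actual diagrams.

# Illustrative only: a toy influence graph, not the elicited causal loop diagrams.
# Each edge reads: "a disruption in X passes a fraction w of its severity on to Y".
influence = {
    "french_wheat_harvest": [("global_grain_supply", 0.6)],
    "global_grain_supply": [("food_prices_in_importers", 0.8)],
    "food_prices_in_importers": [("regional_political_stability", 0.5)],
    "regional_political_stability": [("global_grain_supply", 0.3)],  # feedback: unrest disrupts trade
}

def propagate(shock_node, severity, steps=4):
    # Spread an initial shock through the graph for a fixed number of steps,
    # accumulating the total impact felt at each node.
    impact = {shock_node: severity}
    frontier = {shock_node: severity}
    for _ in range(steps):
        next_frontier = {}
        for node, s in frontier.items():
            for neighbour, weight in influence.get(node, []):
                passed = s * weight
                impact[neighbour] = impact.get(neighbour, 0.0) + passed
                next_frontier[neighbour] = next_frontier.get(neighbour, 0.0) + passed
        frontier = next_frontier
    return impact

print(propagate("french_wheat_harvest", severity=1.0))

The feedback edge from political stability back into grain supply is what keeps a shock circulating rather than dying out - exactly the kind of loop that lets a localised harvest failure balloon into a global disruption.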
The experts identified two pivotal nodes that worsened this cascading effect: the international food system and regional political stability. Time and again in the scenarios, the food system emerged as a critical vulnerability which, once degraded, could propagate further risks in unstable ways. As climate impacts undermined agricultural production and trade networks, food prices rose and shortages spread - increasing the potential for political unrest.
Likewise, regional tensions over scarce natural resources heightened when climate shocks stressed water supplies or arable land within or across national boundaries. The potential for conflict intensified as the effects multiplied. The diagrams showed credible trajectories in which climate events, such as droughts inducing crop failures, could eventually lead to armed conflict ignited by competition over shrinking life-sustaining resources.
Taking this analysis a step further, some of the more disturbing hypothetical pathways mapped out plausible routes all the way from initial climate shocks to existential threats such as nuclear conflict engulfing major powers, total state collapse, or even global financial crises. The scenarios acknowledged that climate impacts rarely occur in a vacuum - they interact with and potentially exacerbate pre-existing geopolitical tensions over issues like ethnicity, wealth inequality, forced migration flows, and territorial claims. Cascades involving elements such as disrupted grain harvests triggering refugee crises that strained borders and ignited proxy fighting between foreign interests were deemed quite plausible based on historical analytical frameworks.
Sam’s research insights provide compelling empirical justification for treating climate change as more than an environmental risk: it is a credible threat that could destabilise critical global systems and worsen security risks in severe, unintuitive ways, ultimately jeopardising the very fabric of modern civilisation. Taking a precautionary approach often requires acting before we have full information and, in the case of x-risks, without any precedent - but elicitation studies like Sam’s provide vital pre-emptive insight into potential risks and vulnerabilities.
Beyond AI governance and climate, ERA focuses on three further cause areas: AI technical safety, biosecurity and nuclear security. ERA's AI technical safety research investigates several vital directions, such as building robustness against adversarial examples, developing value alignment techniques to ensure that models faithfully embody human values, improving the interpretability of black-box models, and formally verifying neural network behaviour under constraints to guarantee safety adherence.
Our biosecurity research navigates the intricate landscape of potential biological threats by focusing on areas such as monitoring and mitigating the risks of gain-of-function research, which can inadvertently make pathogens more potent; enhancing detection capabilities for engineered pathogens to counter bioterrorism; developing methodologies for rapid response during potential pandemics; and understanding the implications of CRISPR-Cas9 and other gene-editing technologies that, while promising, can also be misused in ways detrimental to human society and ecosystems, and pose existential risks to humanity.
Finally, nuclear security. Nuclear weapons possess the dire capacity to kill tens of millions of people in an instant. But the more serious threat lies in the subsequent climatic perturbations. Scientific studies indicate that detonations could introduce vast amounts of soot into the stratosphere, drastically reducing sunlight and initiating a 'nuclear winter'. This significant drop in temperature would adversely affect global crop growth, threatening food security on a massive scale. Such a scenario could, beyond the immediate blast casualties, precipitate widespread famine, societal collapse, or even, in the gravest of circumstances, human extinction. How could we better understand and mitigate such risks?
The duality of technological advancements puts engineers right at the heart of these existential dilemmas. And it is in this spirit of foresight, and with a shared aim of serving humanity's deepest priorities, that we do our work at ERA. The topic is a sobering one, but navigating such threats proactively could be our generation's defining mission.
To wrap up the evening, I want to share something deeply special with all of you. Lord Martin Rees - not only the former Master of Trinity College but also the UK's Astronomer Royal, Co-Founder of the Centre for the Study of Existential Risk, and a member of the House of Lords - wrote a letter to our inaugural cohort of fellows at ERA. His insights and commitment to global challenges have been instrumental in shaping thought leaders across the world, and we at ERA are privileged to count him as a staunch supporter. This quote from his letter has always deeply resonated with me. He says:
It's encouraging to witness more activists among the young - unsurprising, as they can expect to live to the end of the century. Their commitment gives grounds for hope. Let's hope that many of them become scientists - and true global citizens. There are a few rare times when there seems to be a special motivation to work together, across nations and across generations, for the benefit of humanity. This feels like one of those times.
And on that note, thank you for listening!