They have some a-clock-alyptic forecasts.

Are the end times at hand? Scientists are preparing to update the Doomsday Clock on Tuesday, sparking speculation it will move ominously forward amid fears of nuclear war, the rise of AI, and other existential threats.

The official time for 2026 will be announced at 10 a.m. Eastern Standard Time during a live news conference featuring Nobel Peace Prize laureate Maria Ressa, along with experts on nuclear weapons, climate change, biological threats and disruptive technologies.

Created in 1947 when the fear of nuclear war was at an all-time high, the Doomsday Clock serves as a metaphor for how close humanity is to self-annihilation.

The Ragnarok Clock was created by the Bulletin of the Atomic Scientists, a nonprofit formed two years earlier by Albert Einstein, J. Robert Oppenheimer and University of Chicago scientists who helped develop the first atomic weapons in the Manhattan Project.

Each year, the clock is updated based on how close we are to a human-made catastrophe, represented by midnight; the closer the hands get to striking 12, the closer we are to the end.

The hope is that this “darkest” timepiece will motivate humanity to resolve “the world’s most urgent, man-made existential threats,” figuratively and literally turning back the clock on the apocalypse.

Unfortunately for doomsday watchers, time is ticking for humanity.

When it was created, the Doomsday Clock was set at seven minutes to midnight; now it’s 89 seconds from striking 12 — the closest we’ve ever been to self-destruction.

“In my opinion, the clock could be moved forward by at least one second,” Alicia Sanders-Zakre, head of policy at the International Campaign to Abolish Nuclear Weapons, told the Daily Mail.

The prognosis is indeed dire for 2026, according to experts.

Sanders-Zakre believes the Armageddon alarm could jump forward, given the global arsenal of 12,000 nuclear weapons and rising tensions between nuclear powers.

The skirmishes and escalating threats of full-scale war between India and Pakistan over the summer appeared to accentuate this risk.

“While the risk of nuclear use has been an existential threat for 80 years, it has increased in the last year, due to skyrocketing investments in nuclear arms, increasingly threatening nuclear rhetoric and actions,” Sanders-Zakre declared.

Meanwhile, Dr. SJ Beard, a researcher at the Centre for the Study of Existential Risk at the University of Cambridge, told the Daily Mail the clock should be moved forward a staggering nine seconds, citing concerns over direct nuclear conflict between “the world’s superpowers.”

“The multilateral world order is now totally collapsed, and we are already in a multi-polar reality, where all countries are having to pick a side between authoritarian strongmen,” the scientist said.

Beard said the risk of nuclear war wasn’t imminent due to the congenial relationship between Donald Trump and Vladimir Putin, but would become an issue in the long run since the two leaders are “unlikely to remain friends forever.”

Experts also pointed out that the New START Treaty, which restricts nations’ strategic nuclear arsenals, is slated to expire on Feb. 5.

Hamza Chaudhry, the AI and national security lead at the Future of Life Institute, claimed that the Doomsday Clock should be moved five to 10 seconds forward.

“For the first time since the early Cold War, there will be no bilateral arms control treaty limiting US-Russian strategic arsenals,” he said, adding that “this represents a fundamental breakdown in the nuclear arms control architecture.”

Nuclear war isn’t the only potential catastrophe on doomsdayers’ radar.

Beard also warned about the growing ubiquity of AI, which he deemed an “existential risk driver” on par with “nuclear weapons.”

In their new book “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All,” computer scientists Eliezer Yudkowsky and Nate Soares warned that the human race will be annihilated by synthetic viruses and other means if we don’t hit the kill switch.

“If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die,” the AI gurus, who work at Berkeley’s Machine Intelligence Research Institute (MIRI), warned in the intro to the none-too-subtle tome.
