
AI is supposedly the new nuclear weapons — but how similar are they, really?

If you read about artificial intelligence for long enough, you will inevitably come across the nuclear weapons analogy. Like nuclear weapons, the argument goes, AI is a cutting-edge technology that developed with alarming speed and carries significant risks that are hard to foresee and that society is unprepared to handle.

The analogy was made explicit in an open letter signed in May by researchers such as Geoffrey Hinton and Yoshua Bengio, the heads of the AI labs OpenAI, Anthropic, and Google DeepMind, and well-known figures such as Bill Gates: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Christopher Nolan, the director of Oppenheimer, does not think AI and nuclear weapons are very comparable. Richard Rhodes, author of The Making of the Atomic Bomb, argues there are significant similarities. The New York Times ran a quiz to see whether readers could distinguish quotes about AI from quotes about nuclear weapons. To make the comparison even more stark, some policy experts are advocating for an AI equivalent of the Manhattan Project. I personally know a large number of people working on AI policy who have been reading Rhodes’ book for inspiration; when I was recently at Anthropic’s offices on business, I noticed a copy on a coffee table.

It’s understandable why people reach for analogies like this. We need conceptual tools to help us make sense of a technology that is new, perplexing, and, many experts believe, very dangerous, and to think through its potential repercussions. But the comparison is flimsy at best, and there are significant technical differences between the two that must be taken into account when deciding how to govern AI: how to ensure it is deployed safely, does not discriminate against underrepresented groups, and includes safeguards against misuse by bad actors.

Similarity: exceedingly rapid scientific progress

In December 1938, chemists Otto Hahn and Fritz Strassmann discovered that bombarding the radioactive element uranium with neutrons produced what appeared to be barium, a considerably smaller element than uranium. It was a perplexing observation — radioactive elements had previously only been known to emit small particles and transmute to slightly smaller elements — but by Christmas Eve, their collaborators, physicists Lise Meitner and Otto Frisch, had devised an explanation: the neutrons had split the uranium atoms, resulting in solid barium and krypton gas. Frisch referred to the process as “fission.”

On July 16, 1945, the US military detonated the Trinity device, the first nuclear weapon ever tested, after billions of dollars of investment and the equivalent of 67 million hours of labour from workers and scientists including Frisch, using the process that Frisch and Meitner had theorised less than seven years earlier.

Few scientific fields have seen a theoretical breakthrough turn so quickly into a hugely consequential practical technology. But AI may come close. Artificial intelligence as a field was founded in the 1950s, but modern “deep learning” techniques, which pass data through several layers of “neurons” to form artificial “neural networks,” did not take off until around 2009, when researchers realised that specialised chips called graphics processing units (GPUs) could train such networks far more efficiently than standard central processing units (CPUs). Deep learning models soon began winning competitions in image classification. The same techniques were then used to defeat world champions at Go and StarCraft, and to build models like GPT-4 and Stable Diffusion, which produce strikingly convincing text and images.

Deep learning progress looks to be nearly exponential, as the processing power and data applied to it keep increasing. The study of model scaling asks what happens to AI models as their data, computational budget, and number of parameters grow. In an empirical paper published in 2017, a team at the Chinese tech giant Baidu showed that “loss” (a model’s measured error on various tasks, compared to known correct answers) falls off as a power law as the model’s size grows, and subsequent research from OpenAI and DeepMind has reached similar conclusions.
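The power-law relationship behind these scaling results is simple enough to sketch. The constants below are purely illustrative, not values fitted by any of the papers mentioned; the point is the shape of the curve, which a hypothetical loss(n_params) function makes concrete:

```python
import numpy as np

# Toy power-law scaling curve: loss(N) = (N_c / N) ** alpha.
# ALPHA and N_C are hypothetical constants chosen for illustration,
# not values taken from any real model family.
ALPHA = 0.076
N_C = 8.8e13

def loss(n_params):
    """Predicted loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA

# Evaluate across four decades of model size.
sizes = np.array([1e8, 1e9, 1e10, 1e11])
losses = loss(sizes)

# Loss falls monotonically, and each 10x increase in parameters
# shrinks it by a constant *factor* (a straight line on a log-log
# plot) -- which is what lets researchers fit small models and
# extrapolate to much larger ones.
assert np.all(np.diff(losses) < 0)
ratios = losses[1:] / losses[:-1]
assert np.allclose(ratios, ratios[0])
```

The diminishing-returns shape matters: doubling a model never halves the loss, but the predictability of the curve is what makes forecasting larger models possible at all.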

All of this is to say that, just as nuclear fission advanced at an incredible rate, deep learning models and their capabilities appear to be improving at an equally startling pace.

Similarity: the potential for widespread harm

I’m guessing I don’t have to explain how nuclear weapons, let alone the thermonuclear weapons that make up modern arsenals, can bring devastation on a scale we’ve never seen before. AI’s potential for harm requires a little more explanation.

Many researchers have shown that existing machine learning systems, used for purposes such as flagging families to Child Protective Services, frequently reproduce biases in their training data. Those biases will become more and more consequential as these models evolve, are adopted for more and more purposes, and as we become more reliant on them.

For sufficiently complex AI systems, there is also a significant misuse potential. In an April paper, Carnegie Mellon researchers were able to stitch together large language models to create a system that, when told to make chlorine gas, could figure out the correct chemical compound and instruct a “cloud laboratory” (an online service where chemists can conduct real, physical chemistry experiments remotely) to synthesise it. It appeared to be capable of producing VX or sarin gas (as well as methamphetamine) and was only limited by built-in safety measures that model developers could easily disable. Bioweapons could be created using similar methods.

Difference: one is a military technology, the other a general-purpose technology

I don’t use nuclear weapons in my daily life, and unless you work in one of a handful of militaries, neither do you. Nuclear fission does touch our daily lives through nuclear energy, which supplies about 4% of the world’s energy, but given its limited adoption, that technology has not transformed our lives either.

We don’t know in any detail how AI will affect the world, and anyone who tells you what’s going to happen in great detail and with great confidence is lying to you. But there are reasons to believe AI will be a general-purpose technology, like electricity, telegraphy, or the internet: one that fundamentally changes how businesses operate across sectors and nations, rather than an innovation with a contained impact (as nuclear fission had, in the energy sector and in military and geopolitical strategy).

Large language models produce text quickly, which is broadly useful for everything from marketing to technical writing to drafting internal memos to lawyering (if you know the technology’s limits) to, unfortunately, deception and propaganda. Using AI to upgrade services like Siri and Alexa into genuine personal assistants, intelligently planning your calendar and responding to emails, would be useful in a wide range of fields. According to McKinsey, the productivity impact of generative AI could add up to $4.4 trillion to the global economy – more than the UK’s annual GDP. Take such numbers with a grain of salt, but the point stands: the technology is likely to matter across a wide range of occupations and industries.
