Superintelligence

Intuitively, a superintelligence is an entity vastly superior to humans' individual and collective intelligence at solving information processing tasks.

Bostrom-16 Tegmark-17 Russell-19 ElmhamdiHoang-19FR and Ord-20 argue that superintelligences represent an existential risk.

It is controversial whether and when computing machines will become superintelligent GraceSDZE-17 RobertMiles-18 Science4All-18.

Existential risk

Because alignment is hard and undesired side effects are ubiquitous, a catastrophic outcome is often argued to be the default scenario should a superintelligence emerge. This is why researchers concerned about existential risks often call for massive investment in AI safety and AI ethics.
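
To make the notion of undesired side effects more concrete, here is a minimal Python sketch, not taken from the cited works, of how optimizing a proxy objective can damage the true objective. All names and numbers are hypothetical:

```python
import random

random.seed(0)

# Each strategy: (proxy_score, true_value). For ordinary strategies the two
# roughly agree; one "reward-hacking" strategy scores very high on the proxy
# while being bad for the true objective.
strategies = []
for _ in range(1000):
    quality = random.random()
    strategies.append((quality + random.gauss(0, 0.05), quality))
strategies.append((1.5, 0.05))  # hypothetical clickbait-like outlier

chosen = max(strategies, key=lambda s: s[0])  # the optimizer sees only the proxy
ideal = max(strategies, key=lambda s: s[1])   # what we actually wanted

print("selected by proxy: proxy=%.2f, true=%.2f" % chosen)
print("truly best option: proxy=%.2f, true=%.2f" % ideal)
```

The more relentlessly the proxy is optimized, the more likely such outliers are to be found, which is one way to phrase why stronger optimizers are argued to make the default scenario worse.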

The problem of consciousness is generally argued to be irrelevant to existential risks Bostrom-16 ElmhamdiHoang-19FR.

Strong versus weak AI

A distinction is often made between strong and weak artificial intelligence. However, PrunklWhittleston-20 argues that such a binary categorization is neither descriptive nor useful. Arguably, algorithms simply become increasingly capable at their respective tasks.

Perhaps more importantly, no matter how capable an algorithm is, alignment seems critical to avoid dangerous side effects.

Human-level AI

Human-level AI is often defined as an algorithm capable of solving any task that a human can solve, in less time and at lower cost Bostrom-16 GraceSDZE-17. It is, however, unclear how relevant this notion is to assessing AI risks ElmhamdiHoang-19FR.

ElmhamdiHoang-19FR also argues that the YouTube recommendation algorithm is already vastly superhuman at its task, partly because of its scale. It is indeed noteworthy that this algorithm reviews around 500 hours of newly uploaded video per minute and monitors the daily activities of billions of humans.
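
As a back-of-the-envelope check on this scale claim, using only the figure quoted above, the upload rate translates into the following daily volume:

```python
# Arithmetic on the upload figure quoted in the text (not independently verified).
hours_uploaded_per_minute = 500
minutes_per_day = 24 * 60

hours_per_day = hours_uploaded_per_minute * minutes_per_day
years_of_video_per_day = hours_per_day / (24 * 365)

print(f"{hours_per_day:,} hours of new video per day")            # 720,000
print(f"~{years_of_video_per_day:.0f} years of viewing per day")  # ~82
```

No human, and no team of humans, could review roughly 82 years' worth of video every day, which is the sense in which the algorithm is superhuman at its task.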

General versus narrow AI

Another common distinction made is between artificial general intelligence (AGI) and "narrow AI".

Arguably, AGI is not a capability but rather a framework. AGI models, such as AIXI, typically tackle reinforcement learning in a complex interactive environment. It is noteworthy that many algorithms, especially recommendation algorithms, already run in such settings at very large scale, and that their environments are extremely complex.
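
As a rough sketch of this framework, here is a toy agent-environment loop in the spirit of reinforcement learning. The environment, actions, and rewards below are entirely hypothetical and stand in for no actual system:

```python
import random

random.seed(0)

ACTIONS = ["a", "b", "c"]

def environment(action):
    """Hypothetical environment: returns an observation and a noisy reward."""
    base = {"a": 0.2, "b": 0.5, "c": 0.3}[action]
    return "observation", base + random.gauss(0, 0.1)

def policy(history):
    """Naive epsilon-greedy agent: mostly picks the best action seen so far."""
    if not history or random.random() < 0.1:
        return random.choice(ACTIONS)
    means = {}
    for a in ACTIONS:
        rewards = [r for act, r in history if act == a]
        means[a] = sum(rewards) / len(rewards) if rewards else 0.0
    return max(means, key=means.get)

history = []  # the agent's record of (action, reward) interactions
for _ in range(200):
    action = policy(history)
    _, reward = environment(action)
    history.append((action, reward))

print("most chosen action:",
      max(ACTIONS, key=lambda a: sum(1 for act, _ in history if act == a)))
```

What makes the framework general is the loop itself: the agent acts, the world responds, and the agent adapts, whatever the world happens to be.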

By contrast, "narrow AI" may refer to algorithms restricted to specific tasks, such as supervised learning.
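
For contrast, here is a minimal supervised-learning setup with illustrative numbers: the algorithm fits a fixed dataset against a fixed objective and never interacts with an environment:

```python
# Fit a least-squares line y = w * x to a fixed, hypothetical dataset.
data = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.1), (3.0, 2.9)]
w = sum(x * y for x, y in data) / sum(x * x for x, _ in data)
print(f"learned weight: {w:.3f}")  # the task begins and ends with the dataset
```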