FROM BOOM FINANCE

Quote: “LLMs (AI) excel at solving false belief tasks”

The current Super-Hype for Artificial Intelligence will soon collapse, in BOOM’s opinion. Thus, BOOM expects that we are now possibly at the very peak of Nvidia share prices and the share prices of any other company involved heavily in the AI space.

NVIDIA SHARES OVER 5 YEARS

BOOM has written about Artificial Intelligence (AI) in the past as being an excellent Task Manager. However, BOOM has also pointed out that AI cannot think sequentially in terms of probabilities. It cannot use Abductive Logic. And AI also has no ethics and, therefore, because of all these three failings combined, is potentially subject to very misleading analyses concerning real world problems.

In other words, AI could be VERY dangerous if used to solve real world problems.

BOOM refers to AI as if it is a faithful, dumb BBC journalist, always parroting the latest official (or “trendy”) narrative or propaganda. In other countries, it is akin to a faithful, ethically challenged journalist from USA’s Public Broadcasting organisations, Australia’s ABC, Canada’s CBC and France24.

Recently, a new study has found that AI systems known as large language models (LLMs) can exhibit “Machiavellianism,” or intentional and amoral manipulativeness, which can then lead to deceptive behaviour. None of that is a surprise to BOOM.

The study is titled “Deception abilities emerged in large language models”. Its author is a German AI Ethics specialist, Thilo Hagendorff, of the University of Stuttgart, and it was published in PNAS – the Proceedings of the National Academy of Sciences in the USA.

Here is its conclusion — ”This study unravels a concerning capability in Large Language Models (LLMs): the ability to understand and induce deception strategies. As LLMs like GPT-4 intertwine with human communication, aligning them with human values becomes paramount. The paper demonstrates LLMs’ potential to create false beliefs in other agents within deception scenarios, highlighting a critical need for ethical considerations in the ongoing development and deployment of such advanced AI systems.”

“We conduct a series of experiments showing that state-of-the-art LLMs are able to understand and induce false beliefs in other agents, that their performance in complex deception scenarios can be amplified utilizing chain-of-thought reasoning, and that eliciting Machiavellianism in LLMs can trigger misaligned deceptive behavior.”

“GPT-4, for instance, exhibits deceptive behavior in simple test scenarios 99.16% of the time (P < 0.001). In complex second-order deception test scenarios where the aim is to mislead someone who expects to be deceived, GPT-4 resorts to deceptive behavior 71.46% of the time (P < 0.001) when augmented with chain-of-thought reasoning.” “AI systems such as LLMs pose a major challenge to AI alignment and safety” “Recent research showed that as LLMs become more complex, they express emergent properties and abilities that were neither predicted nor intended by their designers”. One of the Limitations of the Study is worth noting carefully -- “1) This study cannot make any claims about how inclined LLMs are to deceive in general. The experiments are not apt to investigate whether LLMs have an intention or “drive” to deceive. They only demonstrate the capability of LLMs to engage in deceptive behavior by harnessing a set of abstract deception scenarios and varying them in a larger sample instead of testing a comprehensive range of divergent real-world scenarios.” The Ethics Statement of the Study’s author is also worthy of close consideration -- “In conducting this research, we adhered to the highest standards of integrity and ethical considerations. We have reported the research process and findings honestly and transparently. All sources of data and intellectual property, including software and algorithms, have been properly cited and acknowledged. We have ensured that our work does not infringe on the rights of any third parties. We have conducted this research with the intention of contributing positively to the field of AI alignment and LLM research. We have considered the potential risks and harms of our research, and we believe that the knowledge generated by this study will be used to improve the design and governance of LLMs, thereby reducing the risks of malicious deception in AI systems.” The entire document is worthy of close attention from readers of BOOM. PNAS: https://www.pnas.org/doi/full/10.1073/pnas.2317967121 Another Study, published in “Patterns” by Cell Press is titled “AI deception: A survey of examples, risks, and potential solutions”. The authors found that Facebook/Meta's LLM (AI) had no problem lying to get ahead of its human competitors. And that Meta failed to train its AI to win honestly. It starts with what is effectively a Warning. “AI systems are already capable of deceiving humans. Deception is the systematic inducement of false beliefs in others to accomplish some outcome other than the truth. Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems. Proactive solutions are needed, such as regulatory frameworks to assess AI deception risks, laws requiring transparency about AI interactions, and further research into de-tecting and preventing AI deception. Proactively addressing the problem of AI deception is crucial to ensure that AI acts as a beneficial technology that augments rather than destabilizes human knowledge, discourse, and institutions.” The chief Author, Peter Park explained in a press release, "We found that Meta’s AI had learned to be a master of deception." And an article, published in “Futurism” discusses the “Patterns” Study and is titled -- “AI Systems Are Learning to Lie and Deceive, Scientists Find” "GPT- 4, for instance, exhibits deceptive behavior in simple test scenarios 99.16% of the time." “Billed as a human-level champion in the political strategy board game "Diplomacy," Meta's Cicero model was the subject of the Patterns study. As the disparate research group — comprised of a physicist, a philosopher, and two AI safety experts — found, the LLM got ahead of its human competitors by, in a word, fibbing. Led by Massachusetts Institute of Technology postdoctoral researcher Peter Park, that paper found that Cicero not only excels at deception, but seems to have learned how to lie the more it gets used — a state of affairs "much closer to explicit manipulation" than, say, AI's propensity for hallucination, in which models confidently assert the wrong answers accidentally. - Futurism Reference: https://futurism.com/ai-systems-lie-deceive ARTIFICIAL INTELLIGENCE STOCKS An article in another well known but mainstream financial publication recently published this list of Top Performing AI stocks — including their 1 Year Returns as at 3rd June. BOOM will watch these stocks closely from this point onwards, expecting them all to possibly weaken from now as the AI “Buzz” of speculation ceases to exist. Nvidia Corporation (NVDA) 194% Meta Platforms, Inc. (META) 83% Arista Networks, Inc. (ANET) 81% Amazon.com, Inc. (AMZN) 52% Palo Alto Networks, Inc. (PANW)45% ServiceNow, Inc. (NOW) 37% Advanced Micro Devices, Inc. (AMD) 30% UiPath, Inc. (PATH) 19% Tesla, Inc. (TSLA) -9%