What Is an “Algorithm of Thoughts” (AoT)? Microsoft’s Human-like AI Training Method

Microsoft, OpenAI’s biggest backer, published a white paper jointly with Virginia Tech on August 20, 2023, introducing its “Algorithm of Thoughts” (AoT). This novel approach to AI aims to make large language models (LLMs) such as ChatGPT learn with a progression “akin to humans,” as the paper puts it.

AoT purports to go above and beyond previous methods of LLM instruction. The paper makes this daring claim: “our results suggest that instructing an LLM using an algorithm can lead to performance surpassing that of the algorithm itself.”

Does this mean that an algorithm can make itself smarter than… itself? Well, arguably, that’s the way the human mind works. That has been the holy grail of AI from the beginning.


Human Cognition

Microsoft claims that AoT fuses together the “nuances of human reasoning and the disciplined precision of algorithmic methodologies.”

A bold-sounding claim, but the aspiration itself is nothing new. “Machine learning,” which its pioneer Arthur Samuel defined as “the field of study that gives computers the ability to learn without being explicitly programmed,” goes as far back as the 1950s. Unlike traditional computer programming, in which a programmer must write a detailed list of instructions for the computer to follow in order to achieve a set task, machine learning uses data to teach the computer to find patterns and solve problems on its own. In other words, it operates in a manner vaguely resembling human cognition. OpenAI’s ChatGPT uses a category of machine learning called RLHF (reinforcement learning from human feedback), which gave it the back-and-forth nature of “conversations” with its human users.
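
To make that distinction concrete, here is a minimal, illustrative sketch (not from the paper): the first function follows a rule written out explicitly by a programmer, while the second recovers roughly the same rule by fitting a line to example data.

```python
import numpy as np

# Traditional programming: the rule is spelled out explicitly by a programmer.
def fahrenheit_explicit(celsius):
    return celsius * 9 / 5 + 32

# Machine learning (toy example): the "rule" is inferred from example data.
celsius_samples = np.array([0, 10, 20, 30, 40])
fahrenheit_samples = np.array([32, 50, 68, 86, 104])

# Fit a straight line to the examples; the computer finds the slope and
# intercept itself rather than being handed the conversion formula.
slope, intercept = np.polyfit(celsius_samples, fahrenheit_samples, 1)

def fahrenheit_learned(celsius):
    return slope * celsius + intercept

print(fahrenheit_explicit(25))  # 77.0
print(fahrenheit_learned(25))   # ~77.0, recovered from the data
```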

AoT goes beyond even that, claiming to surpass the so-called “Chain of Thought” (CoT) approach.

Chain of Thought: What problem is AoT aiming to solve?

If all inventions are an attempt to solve a problem with the status quo, one might say that AoT was created to address the shortcomings of the Chain-of-Thought approach. In CoT, LLMs arrive at a solution by breaking down a prompt or question into “simpler linear steps to arrive at the answer,” according to Microsoft. While a huge advancement over standard prompting, which involves a single step, it presents certain pitfalls, as the sketch below illustrates.
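
As a rough illustration of the difference (this is not code from Microsoft’s paper), a standard prompt asks for the answer directly, while a chain-of-thought prompt asks the model to write out intermediate steps first. The `call_llm` function is a hypothetical placeholder for whatever LLM API is in use.

```python
# Hypothetical helper standing in for any LLM API call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM provider of choice.")

question = "A shop sells pens in packs of 12. How many pens are in 7 packs?"

# Standard prompting: one step, straight to the answer.
standard_prompt = f"{question}\nAnswer:"

# Chain-of-thought prompting: ask the model to reason in simple linear steps
# before committing to a final answer.
cot_prompt = (
    f"{question}\n"
    "Let's think step by step, writing out each intermediate step, "
    "then give the final answer on its own line."
)

# answer = call_llm(cot_prompt)
```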

CoT sometimes presents incorrect intermediate steps on the way to an answer, because it bases its conclusions on precedent, and precedent drawn from a given data set is limited to the confines of that data set. This, Microsoft says, leads to “increased costs, memory, and computational overheads.”

AoT to the rescue. The algorithm evaluates whether the initial steps—”thoughts,” to use a word generally associated only with humans—are sound, thereby avoiding a situation where an early wrong “thought” snowballs into an absurd outcome.
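
The paper frames this as embedding an algorithmic search over candidate “thoughts” within the model’s reasoning. Below is a minimal sketch of that general idea, a depth-first search that scores partial lines of reasoning and abandons weak branches early; `generate_thoughts`, `score_thought`, and `is_complete` are hypothetical placeholders, not functions from Microsoft’s paper.

```python
def generate_thoughts(partial_solution):
    """Propose a few candidate next steps for the current partial solution."""
    return []  # in practice, produced by prompting the LLM

def score_thought(partial_solution):
    """Rate how sound/promising the partial solution looks (0.0 to 1.0)."""
    return 0.0  # in practice, produced by the LLM or a heuristic

def is_complete(partial_solution):
    """Check whether the partial solution already answers the question."""
    return False

def search(partial_solution, threshold=0.5):
    """Depth-first search over thoughts, pruning branches that score poorly."""
    if is_complete(partial_solution):
        return partial_solution
    for thought in generate_thoughts(partial_solution):
        candidate = partial_solution + [thought]
        if score_thought(candidate) < threshold:
            continue  # a weak early "thought" is dropped instead of snowballing
        result = search(candidate, threshold)
        if result is not None:
            return result
    return None  # no sound line of reasoning found from this branch
```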

What Will Microsoft Do With AoT?

Though Microsoft has not expressly said so, one can imagine that if AoT is everything it’s cracked up to be, it might help mitigate so-called AI “hallucinations,” the funny, alarming phenomenon whereby programs like ChatGPT spit out false information. In one of the more notorious examples, in May 2023, a lawyer named Steven A. Schwartz admitted to “consulting” ChatGPT as a source while researching a 10-page brief. The problem: the brief cited several court decisions as legal precedents… that never existed.

“Mitigating hallucinations is a critical step towards building aligned AGI,” OpenAI said in a post on its official site.
