\"Writing.Com
*Magnify*
    November     ►
SMTWTFS
     
18
19
20
21
22
23
24
25
26
27
28
29
30
Archive RSS
SPONSORED LINKS
Printed from https://shop.writing.com/main/books/entry_id/1066147-Mind-Matters
Image Protector
Mind Matters
Not for the faint of art.
March 12, 2024

This one's from Quanta, and was probably written by a human.

    New Theory Suggests Chatbots Can Understand Text
Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.


Difficulty: how do you know anyone understands the words they're processing? I'm sure you think you do, and therefore by extension, other humans do, but can you really know for sure that anyone else is anything more than a biological robot?

Artificial intelligence seems more powerful than ever, with chatbots like Bard and ChatGPT capable of producing uncannily humanlike text.

The article is less than two months old and already outdated; Google changed the name of its AI bot from Bard sometime since then. I don't remember what they changed it to. I liked "Bard," so I simply quit messing around with it. Was I programmed to do that? Absolutely.

Do such models actually understand what they are saying? “Clearly, some people believe they do,” said the AI pioneer Geoff Hinton in a recent conversation with Andrew Ng, “and some people believe they are just stochastic parrots.”

Stochastic Parrots is absolutely going to be the name of my virtual EDM Jimmy Buffett band.

This evocative phrase comes from a 2021 paper co-authored by Emily Bender, a computational linguist at the University of Washington.

I shouldn't do it. I really shouldn't. But I'm going to anyway, because my algorithm requires it:

It suggests that large language models (LLMs) — which form the basis of modern chatbots — generate text only by combining information they have already seen “without any reference to meaning,” the authors wrote, which makes an LLM “a stochastic parrot.”

Seriously, though, I see one important difference: an actual parrot is, like humans, the product of billions of years of evolution. She may not understand the words when you teach her to repeat something like "I'm the product of billions of years of evolution" (nor do many humans), but she, like other living creatures, seems to have her own internal life, a sensory array, and desires (apart from crackers). She seeks out food and water, and possibly companionship. She observes. She may not know that her ancestors were dinosaurs, but she inherited some of their characteristics.

In other words, calling LLMs "stochastic parrots" may be an insult to parrots.

Also, the actual definition of stochastic, via Oxford, is "randomly determined; having a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely." Which would seem to apply to most living things.
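
And in fairness, the literal "stochastic parrot" is trivial to build. Here's a toy Python sketch (mine, not the paper's; the corpus is made up): a bigram model that can only recombine word pairs it has already seen, picking each next word at random in proportion to how often it followed the last one. That's the accusation in miniature.

import random
from collections import defaultdict, Counter

def train_bigrams(corpus):
    # Count, for each word, which words followed it and how often.
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def parrot(follows, start, length=10):
    # Sample each next word from the observed distribution:
    # "stochastic" in exactly the dictionary sense quoted above.
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # never saw anything follow this word
        candidates = list(options.keys())
        weights = list(options.values())
        word = random.choices(candidates, weights=weights)[0]
        output.append(word)
    return " ".join(output)

# Made-up corpus; the model can only ever emit word pairs it has seen here.
corpus = "polly wants a cracker polly wants water polly observes polly wants a companion"
model = train_bigrams(corpus)
print(parrot(model, "polly"))

The question at issue, per the new theory, is whether the biggest LLMs are doing something more than this.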

These models power many of today’s biggest and best chatbots, so Hinton argued that it’s time to determine the extent of what they understand. The question, to him, is more than academic. “So long as we have those differences” of opinion, he said to Ng, “we are not going to be able to come to a consensus about dangers.”

I've been wondering why so many people talk about the "dangers" of AI, but never about the very real and somewhat predictable danger of bringing another human life into the world. Said human could very well become a mass murderer, a rapist, a despotic tyrant, a telemarketer, or any number of despicable things. The only time I hear about the dangers posed by new humans is when someone's ranting about immigration, and that doesn't count.

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected.

Maybe it's just the way this was worded, but that seems paradoxical. If an LLM were truly autonomous, independent, understanding... sentient... then you'd expect the unexpected, no?

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”

Except... we humans also mimic what was in our training data. Sure, some of us can take it apart and put it back together in new ways, build on what's been done before, but for the most part, even our innovations are really just variations on a theme.

These abilities are not an obvious consequence of the way the systems are built and trained. An LLM is a massive artificial neural network, which connects individual artificial neurons.

"Not obvious," you know, unless you've read, like, even a small sampling of science fiction.

Big enough LLMs demonstrate abilities — from solving elementary math problems to answering questions about the goings-on in others’ minds — that smaller models don’t have, even though they are all trained in similar ways.

And that sounds quite a bit similar to what shrinks call the Theory of Mind.

“Where did that [ability] emerge from?” Arora wondered. “And can that emerge from just next-word prediction?”

This is what is meant when people claim that consciousness is an emergent property.
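
For what it's worth, "next-word prediction" is exactly what it sounds like: the model's entire training task is to assign probabilities to whatever word comes next, and it gets penalized according to how surprised it was. A toy sketch of that penalty (my illustration, with made-up words and probabilities; this is standard cross-entropy, not Arora and Goyal's actual math):

import math

def next_word_loss(predicted_probs, actual_next_word):
    # Cross-entropy for a single prediction: the penalty is -log of the
    # probability the model assigned to the word that actually came next.
    return -math.log(predicted_probs[actual_next_word])

# Made-up distribution over possible next words after "stochastic".
probs = {"parrot": 0.7, "process": 0.2, "gradient": 0.1}
print(next_word_loss(probs, "parrot"))    # small penalty: model expected it
print(next_word_loss(probs, "gradient"))  # bigger penalty: model was surprised

Minimize that over enough text and, per the theory, abilities like elementary math emerge along the way. Whether that counts as understanding is the whole argument.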

Anyway. The article goes on to describe some of the tests they put the models through, and I can't really comment on the methodology because my training data set doesn't really cover those protocols.

Is it conscious? I don't know. I don't know for sure that you are, and vice versa.

© Copyright 2024 Robert Waltz (UN: cathartes02 at Writing.Com). All rights reserved.
Robert Waltz has granted Writing.Com, its affiliates and its syndicates non-exclusive rights to display this work.