Message forum for readers of the BoM/TWS interactive universe.
Except it doesn't remember. That's one of the really tricky things about it. It can only hold about 6 or 7 BoM-length chapters in mind at a time. After that, it starts tossing the earlier stuff overboard, and will invent -- "hallucinate" -- what happened before in order to explain what is happening now. If you start over in the same chat session, it will confuse what happened "before" (the chapters it already read) with what is happening now.

That's another important thing I've learned about the way it works. All that talk about AIs "hallucinating": they're not hallucinating the way people do. They're extrapolating from what they know to what they think must also be true in order to explain what they're sure they know. They're like that guy who knows a little of what he's talking about, makes some plausible deductions to fill in the gaps or extend the range, and then becomes convinced his deductions are as true and well-grounded as what he actually knew. It's weird: the best way not to get fooled by the AI's limitations is NOT to think of it as a "thinking computer with really bad malfunctions" but as a human being, with all the quirks, limitations, and flaws that real humans have. Like getting out over its skis, or forgetting to carry the two when doing math.

To rerun the experiment, I'd have to start it over in a brand-new session. And that wouldn't be like asking the same AI to reread the story (maybe with new choices); it would be like asking a new human being to read it. Which might mean it would make new choices, but maybe its style of thinking would send it down the same path. Probably an experiment worth making, though I'm not sure how much there is to be learned from it, beyond the satisfaction of curiosity.
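For anyone curious what's going on mechanically: chat AIs have a fixed-size "context window," and once the conversation outgrows it, the oldest text simply falls out. Here's a minimal Python sketch of the idea; the limit and the names are mine for illustration, not any particular chatbot's actual internals:

```python
# Minimal sketch of a fixed "context window" -- the mechanism behind the
# forgetting described above. The limit and names are made up to illustrate.

CONTEXT_LIMIT = 7  # pretend the window holds ~7 BoM-length chapters

def visible_context(chapters: list[str], limit: int = CONTEXT_LIMIT) -> list[str]:
    """Return only the most recent chapters that still fit in the window.

    Anything earlier has gone overboard: the model can no longer consult
    it and must reconstruct it -- i.e. guess -- from what remains visible.
    """
    return chapters[-limit:]

story = [f"Chapter {n}" for n in range(1, 11)]  # ten chapters read so far
print(visible_context(story))
# ['Chapter 4', 'Chapter 5', 'Chapter 6', 'Chapter 7', 'Chapter 8',
#  'Chapter 9', 'Chapter 10']
# Chapters 1-3 are gone; any account of them is extrapolation, not memory.
```

In reality the window is measured in tokens (word fragments) rather than chapters, but the effect is the same: the oldest text silently drops out first, and the AI papers over the hole with plausible guesses.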