Printed from https://shop.writing.com/main/books/action/view/entry_id/1099037
Rated: 18+ · Book · Opinion · #2336646

Items to fit into your overhead compartment

#1099037 added October 10, 2025 at 10:36am
Restrictions: None
Technomedium
I was hoping for something less dense today, like maybe helium. But no, the random number gods have it in for me. From a source I don't remember ever linking before, cybernews:



The more I hear about AI, the less I care. Okay, that's not really the case; I do care. It's just that, to paraphrase Malcolm Reynolds, my days of not ranting about it are definitely coming to a middle.

Former CNN journalist Jim Acosta conducted an “interview” via “The Jim Acosta Show,” according to The Washington Post, which internet users and social media users have described as “ghoulish” and “disturbing.”

I get why all those things in quotes are in quotes. It still made my eye twitch.

The interview takes place on the journalist’s Substack and shows him talking to late teen Joaquin Oliver, who was killed in the 2018 Parkland high school shooting.

Okay, you know the surrealist painting of a tobacco pipe with the caption "Ceci n'est pas une pipe"? (That's by René Magritte, who was Belgian, not French.) The intent, at least insofar as I understand it, is to point out that the image of something is not the thing. We take shortcuts, though, so if I showed you a picture of my cat and you said, "That's your cat?" I'd just agree. But the reality of it is that it's an image of what my cat looked like (probably extremely cute) whenever the picture was taken.

Point being, cela n'est pas un étudiant: that is not a student.

No, I'm not going to get into the difference between ceci and cela. Doesn't matter for this discussion.

Oliver's parents reanimated the teen using artificial intelligence (AI) to discuss gun reform and gun-related violence in the United States.

The thing is, when I saw the headline, I felt kind of a little bit outraged. How dare the interviewer do such a thing! It's a bit like setting up a strawman. But then I got to this part, where the parents did it, and then I'm like, "Huh. Now that's an interesting ethical dilemma." Because the kid wasn't a public figure, it feels wrong to approximate him with an LLM. For whatever reason, I don't have the same judgment about family doing it.

More recently than the linked article, I saw a brief blurb about someone giving the late, great Robin Williams the LLM "resurrection" treatment, and his daughter spoke out against it. That feels different too, since he was a public figure.

Oh, and no, they didn't "reanimate" the teen. Good gods, if you're going to do journalism, use better language. Yes, I know I sometimes don't do it, myself, but I'm not in the same league. Or even the same sport.

“Oliver” responds in a way typical of an AI model, clinical and sterile. Media outlets have even compared the avatar’s responses to Yoda, as the model provides pearls of wisdom and asks Acosta questions that feel unnatural.

Fuck's sake, that's because it's not actual AI, even if that's the accepted term for it. It's a Large Language Model. It's not like Data from ST:TNG, or even HAL 9000. Both of which are, of course, fictional.

Nicholas Fondacaro, the associate editor of NewsBusters, a blog that attempts to expose and combat “liberal media bias 24/7,” spoke out against Acosta, dubbing the interview “very disturbing.”

Why the hell should I care what that guy thinks?

In the clip shared by Fondacaro, Oliver’s father tells Acosta that the teen’s mother spends hours asking the AI questions and loves hearing the avatar say, “I love you, mommy.”

Okay, that's more worrisome, in my view, than an interview with the LLM. I can't even begin to comprehend the grief of a parent losing a kid, and I'm no psychologist, but that seems a rather unhealthy way to cope.

Acosta’s interview with Oliver caught the attention of many, including Billy Markus, the creator of Dogecoin, who simply said, “I hate this.”

Another person whose opinion I can't give half a shit about.

There's more at the article. There's probably pictures and X links, too, but to be brutally honest, I couldn't be arsed to play with my blocker settings to see them.

Thing is, though: even if the consensus is that this is a Bad Thing, what can we do about it? Pass laws? What would such a law look like? "No using LLMs to pretend to be a dead person?" Then anyone who plays with it to, I don't know, "rewrite the script of The Fast and the Furious in the style of a Shakespeare play" would be breaking the law.

I'd pay to see that, by the way. Just saying. Though I'm not a F&F fan.

You could maybe hold it up as voluntary journalistic practice not to do such a thing, but these days, that doesn't mean shit because everyone's a potential journalist and many don't adhere to journalistic norms.

About all we can do is judge them and shame them, which, well, refer to my entry from two days ago. Or, if you don't think this is such a bad thing (and I'm not trying to tell anyone how to feel about it here), then don't shame them.

Still, I can say this: the use of LLMs was disclosed from the get-go. My biggest problem with what we're calling AI is when it's used without full disclosure. It is, I think, a bit like not putting ingredients on a food label: it takes important information away from the consumer.

So, I don't know. For me, I feel about it the way I feel about other forms of speech: if you don't like it, you don't have to watch or read it. I'd probably feel differently if it weren't the parents who trained the LLM, however.

I'm open to other interpretations, though, because I'm not an AI. Sometimes, I wonder if I'm even I.

© Copyright 2025 Waltz in the Lonesome October (UN: cathartes02 at Writing.Com). All rights reserved.
Waltz in the Lonesome October has granted Writing.Com, its affiliates and its syndicates non-exclusive rights to display this work.