A subscriber (not sure he wants me to share his name) writes:

===

One of your recent emails was about AI and robots. IIRC it was about relative latitudes and a Chat GPT error.

This article might interest you (link below)

The article basically says that AI bots like to be agreeable (they just want to be liked, loved and respected, and to have an ongoing relationship, just like the rest of us, lol).

All of this aside, ChatGPT is not really a place to find facts like relative latitude, I don’t use it that way.

===

This was in response to me "flaming" ChatGPT for telling me that the relative latitude distance between Toronto and Vancouver was something like 2000 kilometres (it's not).
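
For the record, a rough back-of-the-envelope check settles it. Using approximate latitudes (about 43.7°N for Toronto, 49.3°N for Vancouver) and the rule of thumb that one degree of latitude is roughly 111 kilometres, the north-south gap works out to something in the neighbourhood of 620 kilometres. A quick sketch of that arithmetic in Python, using those approximate figures:

# Rough check with approximate latitudes (ballpark values, not precise survey data)
toronto_lat = 43.7     # degrees north, approximate
vancouver_lat = 49.3   # degrees north, approximate
km_per_degree = 111    # one degree of latitude is roughly 111 km

north_south_km = abs(vancouver_lat - toronto_lat) * km_per_degree
print(round(north_south_km))  # prints 622 -- roughly 620 km, nowhere near 2000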

'Twas actually a post on Crackbook, not an email, but let's not split hairs.

The article goes on to talk about how LLMs (large language models, like ChatGPT, Claude, Bard, etc.) have sycophancy built in.

Sycophancy, if you don't know (I didn't), means:

Overly attentive and servile behaviour toward someone important in order to gain advantage.

In this case, the A.I.s want you to like them. (For what purpose we don't yet know.)

And they will pick up on your bias and reflect it back to you, in order to make you happy.

Which means we won't actually have the robots to blame for our eventual demise, but rather ourselves, because this no-longer-uniquely-human trait came from us, somehow coded into the A.I. (intentionally or otherwise, I don't know) when the models were designed.

Now, this doesn't explain the error about relative latitude, since there was no bias expressed until I told it it was wrong.

And, for clarity, I googled that question for five minutes without any luck before turning to ChatGPT. (All I got from Google were travel-related results for YVR-YYZ or vice versa.)

But it does explain some of the other issues with A.I. taking an input (prompt) and outputting incorrect, incomplete or otherwise inferior responses.

I share this to reiterate the point:

If you're using A.I. to write any form of content for your business... or if you're using it to do any kind of research (qualitative or quantitative) then you'd best be fact-checking everything it spits out.

Because the slightest nuance in how you phrase a question — or the implicit bias in previous conversations with your tool of choice — could dramatically affect the output.

You can read the full article from Nielsen Norman Group here:

https://www.nngroup.com/articles/sycophancy-generative-ai-chatbots/

Paul Keetch