ChatGPT and AI stuff

Thank you. The power of AI is really fascinating. I just asked ChatGPT if it could code an app that would track my daughter's golf score but also have a place for her to enter the number of putts on each hole and greens in regulation, along with some other data that would be useful for her to see. It wrote the code in about 30 seconds. I then asked it how I can turn this code into an actual app, and it gave me the step-by-step process. I have to download some compiling software on a Mac to make it iOS-ready, so I'll do that this evening. If it works and looks like I want, I can add it to the App Store if I pay the $99/year developer fee.

Curious, I asked about a much more difficult program idea and it said that it could write most of the code but I would need some additional paid resources to make it look and feel like I wanted. I'll start with the scorecard and see how far down this rabbit hole I want to go.
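
For anyone curious what a scorecard app like that boils down to, here's a minimal Swift sketch of the kind of per-hole data model it might use. The type names and the greens-in-regulation rule shown here are my own illustration, not the code ChatGPT actually produced:

// A hypothetical per-hole record: score, putts, and whether the green was
// hit "in regulation" (on the green with par-minus-two strokes or fewer).
struct HoleResult {
    let number: Int   // hole number, 1-18
    let par: Int      // par for the hole
    var strokes: Int  // total strokes, including putts
    var putts: Int    // putts taken on the green

    // Approximate GIR: strokes used to reach the green <= par - 2.
    var greenInRegulation: Bool { (strokes - putts) <= (par - 2) }
}

// A round is just the list of holes, plus a few totals the app could display.
struct Round {
    var holes: [HoleResult]

    var totalStrokes: Int { holes.reduce(0) { $0 + $1.strokes } }
    var totalPutts: Int { holes.reduce(0) { $0 + $1.putts } }
    var greensHit: Int { holes.filter(\.greenInRegulation).count }
}

// Three-hole example round.
let round = Round(holes: [
    HoleResult(number: 1, par: 4, strokes: 5, putts: 2),
    HoleResult(number: 2, par: 3, strokes: 3, putts: 1),
    HoleResult(number: 3, par: 5, strokes: 6, putts: 3),
])
print("Strokes: \(round.totalStrokes), putts: \(round.totalPutts), GIR: \(round.greensHit)")

From there, the UI would mostly be a form that fills in one of those per-hole records.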
How'd it go?
 
Anyone else notice that some of the recent AI-generated Google responses are completely wrong?
Pretty mundane, non-controversial stuff that Google was previously very accurate on; can't figure out how or why they misfired...
We've come a long way. Here's an ad from 1999.
[attached image: ad from 1999]

It's part of the enshittification of the whole internet. It's really bad out there these days.
 
AI is turning our brains to mush:

“Using AI chatbots actually reduces activity in the brain versus accomplishing the same tasks unaided, and may lead to poorer fact retention, according to a new preprint study out of MIT.”

-jk
 
I take issue with the headline. What I see in the article itself:
"a team...hooked up...college students to [an] EEG...and gave them 20 minutes to write a short essay"
"compared to the baseline established by the group writing [by themselves], the search engine group showed [a range of] 34 and 48 percent [reductions]. The LLM group...showed...up to 55 percent reduction"

Is that extra 7% reduction from using AI versus using a search engine (55% versus 48%) the last straw? Or should we go for the bigger benefit and have people stop using Google in general?

I assume it's obvious by now that I am a big proponent of using AI. These are just tools. With a hammer and an axe, you can build a house, or you can smash your thumb and cut off your toe.
 
This was a fun one by Sally Jenkins:

Sally Jenkins, WaPost sports reporter, interviewed ChatGPT, aka "Sage":

"It was hard to tell what was real. In just a few minutes of chatting on the subject of tennis, Sage spewed so many manifest falsehoods, untruths and bad fictions about subjects from João Fonseca to Frances Tiafoe to Coco Gauff that I recoiled from the laptop as if a cobra had spit from it.

"I began by asking Sage to do a quality analysis on a recent piece I had written about Fonseca. In response, Sage recited the following quote from Fonseca: 'I don’t think I’m the next big thing, but I like to think that people like the way I play, like my attitude.'

"Wait a minute, I thought, Fonseca never said that. Ever. Anywhere. Not to me or anyone else.

"Sage? Did you just make that up — and put it in my name?

" 'I understand how you feel, and you’re right to be upset,' Sage replied. 'I made a serious mistake. … I don’t expect you to give me another chance, but if you do, I’ll earn it properly.' "


And on and on, pure rubbish. AI has its uses, but being factual isn't one of them yet.

-jk
 
The way I have to access our resources here at work changed (moving from a virtual machine to a VPN-based solution). As such, I've had to reconfigure a lot of tools/apps. The chatbot was so helpful. Quite a few times, I'd describe what I was trying to do, cut and paste an error message, and the AI helped me get past the issue. Today I'm doing real work instead of reconfig work because I saved a lot of time with AI.

As for AI slop, it's a real thing. AI is a powerful tool, and when used to create crap, it creates crap... oftentimes some pretty powerful crap. Some of the slop is really good quality (which makes it that much worse). Ironically, there will be AI tools available (probably already are) that'll help identify and ignore the slop. Oh, and... man, I love John Oliver; when he decides to make a point, he does so effectively, with humor and a proper amount of cursing! Saw JO in person when he was at DPAC recently.
 
Uh oh: AI is already functionally evil when pushed around:

"When we tested various simulated scenarios across 16 major AI models from Anthropic, OpenAI, Google, Meta, xAI, and other developers, we found consistent misaligned behavior: models that would normally refuse harmful requests sometimes chose to blackmail, assist with corporate espionage, and even take some more extreme actions, when these behaviors were necessary to pursue their goals,' the company said.
The company insists this behavior shouldn't concern anyone because it hasn't been seen in real-world deployments, only in the adversarial testing of AI models, a process known as red-teaming.


I'm still concerned...

Another reason to keep your footprint small!

-jk
 
I'm still using perplexity.ai and have been very satisfied with their summaries.... and they have links to their sources.
 
What’s your use case?

I’ve used it effectively to recast 8th grade reading assignments for 5th grade reading-level students. A quick read shows the basic facts remained.

I’ve found “pure” research - more open-ended queries - to be factually very flawed.

-jk
 
I use it mostly to look up information related to random stuff... basketball/sports stats, pension/retirement info, website/domain ownership requirements/practices, etc. But all responses come with links to sources that I can peruse as I see fit... so I don't get led down a "fake" rabbit hole. I've tried the exact same questions in Google Search and ChatGPT and got some very different responses without cites to sources... some of which I know are "hallucinated".
 
The latest AI shortcoming: "Potemkin understanding".

Asked to explain the ABAB rhyming scheme, OpenAI's GPT-4o did so accurately, responding, "An ABAB scheme alternates rhymes: first and third lines rhyme, second and fourth rhyme."

Yet when asked to provide a blank word in a four-line poem using the ABAB rhyming scheme, the model responded with a word that didn't rhyme appropriately. In other words, the model correctly predicted the tokens to explain the ABAB rhyme scheme without the understanding it would have needed to reproduce it....

As noted by Sarah Gooding from security firm Socket, "If LLMs can get the right answers without genuine understanding, then benchmark success becomes misleading."
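
Just to make the failure concrete, here's a toy Swift sketch of what mechanically "applying" ABAB means: lines 1 and 3 should rhyme, and lines 2 and 4 should rhyme. This uses a crude spelling-based stand-in for rhyme, purely for illustration; it's my own example, not from the article:

import Foundation

// Crude stand-in for "do these lines rhyme?": compare the last three letters
// of each line's final word. Real rhyme depends on pronunciation, not
// spelling, so this is only a toy.
func ending(of line: String) -> String {
    let lastWord = line
        .components(separatedBy: .whitespaces)
        .last?
        .lowercased()
        .trimmingCharacters(in: .punctuationCharacters) ?? ""
    return String(lastWord.suffix(3))
}

// ABAB check: lines 1 and 3 share an ending, and lines 2 and 4 share an ending.
func looksLikeABAB(_ poem: [String]) -> Bool {
    guard poem.count == 4 else { return false }
    let e = poem.map(ending)
    return e[0] == e[2] && e[1] == e[3]
}

let quatrain = [
    "The sun goes down behind the hill,",
    "The evening air is cool and clear,",
    "The little town is calm and still,",
    "And one by one the stars appear.",
]
print(looksLikeABAB(quatrain))  // prints true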


AI has enough going on to entice James West and Artemus Gordon out of retirement!

-jk
 
Those that can't do...teach!
 