ChatGPT and AI stuff

Those that can't do...teach!
Sure. I’m just trying to teach the DBR denizens that we can’t depend on ChatGPT and other AI systems as reliable sources here. (Yet. Some day...)

LLMs that are fed a diet rich in the abundant factual errors on the internet, that create “facts” on demand, and that have no meaningful grasp of what they’re regurgitating, should only be used as a starting point.

Too many people want to boldly trust AI when it’s clearly still in the “trust but verify” stage.

-jk
 
My Dr. Pedantic alter ego is boldly crying out to be unleashed right now. Trying my best to keep him contained.
 
I tried several times to get ChatGPT to give me a list of the historically lowest finish of each MLS team. It spewed out nonsense, saying things like Inter Miami ended up in last place in 2024 (they finished 1st). I even told it the website to find the answers (mlssoccer.com) and it still failed miserably. For 90% of the teams it gave the lowest finish as “last place in year 20xx,” which was seldom true. I finally just gave up.
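[Editor's note: one lightweight way to apply "trust but verify" to a query like this is to cross-check each claim the model makes against a reference table you trust. A minimal sketch; the team names and standings data below are made up for illustration, not real MLS results:]

```python
# Cross-check LLM-claimed finishes against a trusted reference table.
# All data here is hypothetical, for illustration only.

reference = {  # (team, season) -> actual final position
    ("Team A", 2024): 1,
    ("Team B", 2024): 14,
}

llm_claims = {  # (team, season) -> position the model asserted
    ("Team A", 2024): 14,  # model claimed last place
    ("Team B", 2024): 14,
}

def verify(claims, reference):
    """Return the subset of claims that contradict the reference data,
    as key -> (claimed, actual)."""
    return {
        key: (claimed, reference[key])
        for key, claimed in claims.items()
        if key in reference and reference[key] != claimed
    }

wrong = verify(llm_claims, reference)
print(wrong)  # {('Team A', 2024): (14, 1)}
```

Anything the check flags gets looked up by hand; anything it passes at least agrees with a source you chose yourself.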
 

I put the following prompt into perplexity.ai and got the response below… FYI, I have no idea if this is accurate; I'm just providing the info from perplexity.ai. Not sure the referenced link is really appropriate.

********************
list of the historically lowest finish of each MLS team


https://www.mlssoccer.com/news/mls-tiers-revisited-ranking-all-30-teams
Here is a list of the historically lowest finish (worst regular season performance) for each Major League Soccer (MLS) team, based on available historical records and notable "Wooden Spoon" (last place) seasons. For teams with multiple notably poor seasons, the absolute lowest is listed. Expansion teams with only recent entries are included with their worst finish to date. Some recent teams or seasons may not be fully represented if data is not yet available.
Team | Lowest Finish (Year, Place, Points/Record)
Atlanta United | 2022, 11th in East (missed playoffs)
Austin FC | 2021, 12th in West (expansion year, last in conference)
CF Montréal | 2014, 19th overall, 28 pts (Wooden Spoon)
Charlotte FC | 2022, 9th in East (inaugural season, missed playoffs)
Chicago Fire | 2016, 20th overall, 31 pts (Wooden Spoon)
Colorado Rapids | 2019, 9th in West, 16th overall (2 pts in first 11 games)
Columbus Crew | 2019, 10th in East, 20th overall
D.C. United | 2013, 10th in East, 19th overall, 16 pts (fewest ever in non-shortened year)
FC Cincinnati | 2021, 14th in East, 27th overall, 20 pts (Wooden Spoon, 3rd straight year)
FC Dallas | 2003, 10th overall, 23 pts (as Dallas Burn)
Houston Dynamo | 2020, 12th in West, 25th overall (shortened season)
Inter Miami CF | 2021, 11th in East, 20th overall
LA Galaxy | 2017, 11th in West, 22nd overall, 32 pts (club-worst)
Los Angeles FC | 2021, 9th in West, 17th overall
Minnesota United | 2017, 9th in West (inaugural season, missed playoffs)
Nashville SC | 2020, 7th in East (inaugural season, made playoffs)
New England Revolution | 2017, 7th in East, 15th overall
New York City FC | 2015, 8th in East, 17th overall (inaugural season)
New York Red Bulls | 1999, 12th overall, 15 pts (as MetroStars)
Orlando City SC | 2018, 11th in East, 22nd overall, 28 pts (club-worst)
Philadelphia Union | 2010, 7th in East, 14th overall (inaugural season)
Portland Timbers | 2012, 8th in West, 17th overall
Real Salt Lake | 2005, 5th in West, 12th overall (inaugural season)
San Jose Earthquakes | 2018, 12th in West, 23rd overall, 21 pts (Wooden Spoon)
Seattle Sounders | 2022, 11th in West, 21st overall (missed playoffs for first time)
Sporting Kansas City | 1999, 11th overall, 20 pts (as Kansas City Wizards)
St. Louis CITY SC | 2024, 10th in West (most recent, expansion in 2023)
Toronto FC | 2012, 10th in East, 19th overall, 23 pts (Wooden Spoon, 0-0-9 start)
Vancouver Whitecaps | 2011, 9th in West, 18th overall (inaugural season)

Notes:
  • The "Wooden Spoon" is the unofficial title for finishing last overall in MLS.
  • Some teams (e.g., Chivas USA, Tampa Bay Mutiny) are now defunct and not included here.
  • For expansion teams, the inaugural season is often their lowest finish.
  • The cited years and placements are based on available historical summaries and may be updated as new seasons conclude.
Key sources for these records include Reddit historical breakdowns and Wikipedia summaries of MLS seasons. If you need a more detailed breakdown for a specific team or want defunct teams included, let me know.
********************
 
Wow - that is 1000% better than ChatGPT. It went to the sources I would have used and they appear correct. Of course I would have to check these individually to be sure. Plus the context and caveats at the end are perfect. Is perplexity.ai free?
 
Yes… free. They ask you to sign up for a free account, but I always ignore that. There is a paid version (Pro?) that is apparently more detailed and extensive. You can get more detail on the sign-up “tiers”… just ask perplexity.ai that question :)
 
Seems that it all comes full circle to one of the earliest truths of computers/computing in general: garbage in, garbage out.
 
With AI, there's a bit of verification involved. The more serious the subject, the more verification is necessary. There's also a good bit of prompt engineering, and knowing that one of the strengths of a chatbot is that it's conversational: follow-up questions and prompts can be super helpful, and the answer you want is often at the end of the conversation, not what gets spit out with your first query.

Trust but verify is a good thing. That goes for both AI and any information you get from doing a general Google search.
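[Editor's note: the point about follow-ups can be made concrete. Chat APIs are typically stateless, so "the answer at the end of the conversation" works because the client resends the whole message history with each turn. A minimal sketch of how that history accumulates; the role/content message shape follows the common chat-completion convention, and `fake_model` is a stand-in, since no real API call is made:]

```python
# Sketch: accumulating a multi-turn chat history so each follow-up
# is answered in the context of everything said before.
history = [{"role": "system", "content": "You are a careful fact-checker."}]

def ask(history, question, model):
    """Append the user turn, get a reply, append it, return the reply."""
    history.append({"role": "user", "content": question})
    reply = model(history)  # a real client would send `history` to the API here
    history.append({"role": "assistant", "content": reply})
    return reply

# Stand-in for a model: just reports how much context it was given.
fake_model = lambda h: f"answer with {len(h)} messages of context"

ask(history, "First question", fake_model)
final = ask(history, "Follow-up that refines the first answer", fake_model)
print(final)  # the later answer sees the whole prior conversation
```

This is why a refined follow-up often beats a fresh query: the later turn carries all the earlier corrections along with it.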
 
Saw this today on FB and it reminded me of this discussion. ;)

[attached image]
 
There are many things I do not understand about AI's path forward, but the biggest one is the overarching belief that AI will ever escape the "trust but verify" state.

Today, AI is essentially one step beyond crowd-sourcing its learning from the internet. It is fed a mass of information (with the aforementioned factual errors), and then humans "train" the AI by telling it when it is wrong in specific cases (assuming the trainer actually spots the wrong output, recognizes that it is incorrect, and supplies accurate information in its place).

I see how that can improve AI, but how does that ever lead to a place where AI doesn't make mistakes? How do we ever get to a place where verification isn't necessary? Are we essentially going to crowd-source corrections to AI users and hope that eventually we will weed most of them out? And if AI is constantly scraping new information and that information will have a non-zero percentage of mistakes, won't that be a never-ending battle?

To be clear, I use AI. AI can do a number of things very well, including making connections on existing knowledge that previously had not been made. Even if AI never advances significantly further, it is an amazing tool when trust-but-verify is followed. However, in this moment, I am unconvinced that there isn't a ceiling on AI in its current incarnation, nor can I envision a future incarnation beyond where it currently is. I suppose if I could, I would be running an AI start-up.

Whenever so many people believe something that I cannot wrap my head around, I tend to assume that I am the one who is missing something. Could somebody help me understand what I am missing? (Happy to do my own reading on the topic.)

P.S. A shout-out to rsvman for using one of my favorite comp sci sayings: "garbage in, garbage out".
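[Editor's note: the "never-ending battle" intuition above can be put in toy-model terms. If a fraction p of incoming training data is wrong and human review catches a fraction q of those errors per pass, the residual error rate is p·(1−q)^rounds, which stays positive whenever q < 1. A back-of-the-envelope sketch; all the numbers are hypothetical:]

```python
# Toy model: residual error after imperfect human correction.
def residual_error(p, q, rounds=1):
    """p: fraction of source data that is wrong.
    q: fraction of remaining errors that reviewers catch per round.
    Returns the error rate left after `rounds` passes of review."""
    for _ in range(rounds):
        p = p * (1 - q)
    return p

print(residual_error(0.10, 0.90))              # ~0.01: 90% review still leaves 1% wrong
print(residual_error(0.10, 0.90, rounds=3) > 0)  # True: finite review never hits zero
```

And since fresh scraped data keeps arriving with its own error fraction p, the review loop restarts every time, which is exactly the treadmill the post describes.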
 


Oh my. One of the worst-case scenarios for AI is making truth impossible to tell from fiction. It looks like someone used it to pretend to be Marco Rubio.
In an age where no one agrees on facts and sources, it makes the waters even muddier. Especially frustrating when so many things that are actually happening are both absurd and politicized.
 
Thanks for sharing your concerns and skepticism, jafarr1 -- I'm right there with you, and we're not the only ones. The Journal of Broadcasting & Electronic Media (JOBEM) has issued a call for papers on the subject of 'AI, Misinformation, and the Future of Algorithmic Fact-Checking.' The "Topics of Interest" for this special issue are worth reviewing (see below).

Fingers crossed that they get some meaningful contributions from self-funded experts. (This is the type of research that I'd typically expect our government to fund, but I'm not holding my breath at this point.)

  • AI-driven fact-checking and AI hallucinations
  • Large Language Models (LLMs) in journalistic fact-checking
  • Human cognition and responses to AI-generated fact-checks
  • Case studies on AI-powered fact-checking in electronic media
  • Media-AI collaborations in misinformation detection
  • Ethical concerns in AI-driven fact-checking
  • Comparative effectiveness of human vs. AI fact-checking
  • Algorithmic transparency and accountability in media platforms
  • Real-time misinformation detection and verification models
  • Policy frameworks for AI-assisted fact-checking
  • User trust and engagement with AI-verified content
  • AI-based credibility scoring systems for news verification
  • Psychological and social impacts of AI-driven fact-checking
  • Strategies for inoculating audiences against misinformation
  • Biases in AI fact-checking models and training data
  • Evaluation metrics for AI-driven fact-checking accuracy
  • AI’s role in preventing information silos and echo chambers
  • Cross-platform misinformation tracking and countermeasures
  • Fact-checking across cultural and linguistic contexts
  • Interactive and visual tools for AI-verified information
 
In AI's corner is the fact that companies will be happy to offload jobs onto AI and won't necessarily care how well it performs, stuff like "customer service"...
 
I know that for serious stuff the verify is still needed but I will say that for Python coding, technical discussions and more trivial topics I find AI to be very useful.

In the last 24 hours I have inquired about TV panel types (OLED, Mini-LED, QLED, etc.), configuring my mouse with my MacBook, a phishing attempt where I cut and pasted the text, and troubleshooting a VSCode issue for a co-worker. The responses that came back were all useful; some covered things I already partly knew and wanted more information on, so in a sense I did verify.

One thing that will be interesting going forward is whether people will continue to be more tolerant of human error than of an error the AI makes (we see the same thing with self-driving cars, BTW). I mean, the number of things that people believe these days from reading crap on the internet is unreal, and the human responses I read in many discussions are so full of disinformation it's ridiculous.
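[Editor's note: for code specifically, "verify" has a cheap mechanical form: run the AI's suggestion against test cases whose answers you work out yourself before trusting it. A minimal sketch; the function body here simply stands in for whatever an assistant might produce:]

```python
# Suppose an AI assistant produced this helper; treat it as untrusted.
def ai_suggested_median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    # Odd count: middle element; even count: mean of the two middle elements.
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Verify with cases whose answers you know independently.
assert ai_suggested_median([3, 1, 2]) == 2
assert ai_suggested_median([4, 1, 3, 2]) == 2.5
assert ai_suggested_median([7]) == 7
print("all checks passed")
```

A few hand-picked assertions won't catch everything, but they turn "looks plausible" into "at least agrees with cases I checked myself," which is the coding version of trust-but-verify.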
 
Elvis, I like your answer to Jafar's question. I don't think we will get to a place where AI doesn't make mistakes, but I don't think that's necessary. As long as it does better than a human, that can be good enough. But you're right, today people are much more tolerant of a human's mistakes than AI's.
 
We recently bought a foreclosed house in a nice neighborhood. It was a long and difficult process. We’ve been in the house 6 months. We have some updates of the exterior that need to be done. I asked ChatGPT to help to change the color and make it more contemporary. I’m just going to show the photo to the contractor and tell them this is what I want.
 

Attachments: [two attached images]
In an age where no one agrees on facts and sources, it makes the waters even muddier. Especially frustrating when so many things that are actually happening are both absurd and politicized.
I’m starting to think that an AI disclosure law is necessary: if any content is created by AI, that must be disclosed when the content is published or used. That would hopefully inhibit deepfakes, the weird Rubio story, and numerous other nefarious uses of AI. Many online journalists already engage in this practice, e.g., "This summary was generated by AI, but the article was written by a human."
How to craft such a law I leave up to our highly competent legislators. 😉 And it may not pass First Amendment scrutiny. Still, I would like to see it tried and tested.
 