Tuesday, November 26, 2024

Hallucinations? No, Bullshit!

[Updated 2025-02-06]
I have broken this post out of my "Bullshit All The Way Down" "AI" rant. It was originally a 2024-11-26 update to that post, so I am back-dating this post to 2024-11-26. I will use the BATWD post as the home page for the topic "Bullshit".


This post today from the AWS (Amazon Web Services) team gave me LOL after LOL. I'm going to do some bolding of the best of the best.

Hallucinations in large language models (LLMs) refer to the phenomenon where the LLM generates an output that is plausible but factually incorrect or made-up. This can occur when the model’s training data lacks the necessary information or when the model attempts to generate coherent responses by making logical inferences beyond its actual knowledge. Hallucinations arise because of the inherent limitations of the language modeling approach, which aims to produce fluent and contextually appropriate text without necessarily ensuring factual accuracy.

Remediating hallucinations is crucial for production applications that use LLMs, particularly in domains where incorrect information can have serious consequences, such as healthcare, finance, or legal applications. Unchecked hallucinations can undermine the reliability and trustworthiness of the system, leading to potential harm or legal liabilities. Strategies to mitigate hallucinations can include rigorous fact-checking mechanisms, integrating external knowledge sources using Retrieval Augmented Generation (RAG), applying confidence thresholds, and implementing human oversight or verification processes for critical outputs.

Oh boy, so on top of LLM Bullshit Generators, we can add RAG Bullshit Generators! And, even worse, "human oversight": reverse centaurs! I cannot imagine a worse job than trying to fact-check machine-generated bullshit.
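For the record, here is roughly what that RAG "strategy" amounts to in code: fetch a few documents that look relevant, paste them into the prompt, and ask the model to please stick to them. This is a minimal sketch, not any vendor's actual pipeline; the keyword-overlap retriever and the stubbed-out call_llm function below are placeholders for the vector store and model API a real system would use.

# Minimal sketch of Retrieval Augmented Generation (RAG):
# retrieve documents, staple them into the prompt, generate.
from typing import List

DOCUMENTS = [
    "Amazon Bedrock is AWS's managed service for foundation models.",
    "Hallucinations are plausible but factually incorrect model outputs.",
    "RAG retrieves external documents and adds them to the model's prompt.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    # Toy keyword-overlap ranking; a real system would use embeddings
    # and a vector database here.
    q_words = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Placeholder for a real model API call; it just echoes the prompt.
    return "[model answer conditioned on]\n" + prompt

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question, DOCUMENTS))
    prompt = ("Answer using ONLY the context below. "
              "If the context is insufficient, say so.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return call_llm(prompt)

if __name__ == "__main__":
    print(rag_answer("What is RAG supposed to fix?"))

Whether the model actually confines itself to the retrieved context is, of course, exactly the part nobody can guarantee.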

Meanwhile, every corporation in the world is being pushed to implement this crap. Generate bullshit, or be left behind?!?!? I think I'll choose: be left behind. Sigh.
