> While the hallucination problem in LLMs is inevitable [0], they can be significantly reduced...
Every article on hallucinations needs to start with this fact until we've hammered that into every "AI Engineer"'s head. Hallucinations are not a bug—they're not a different mode of operation, they're not a logic error. They're not even really a distinct kind of output.
What they are is a value judgement we assign to the output of an LLM program. A "hallucination" is just output from an LLM-based workflow that is not fit for purpose.
This means that all techniques for managing hallucinations (such as the ones described in TFA, which are good) are better understood as techniques for constraining and validating the probabilistic output of an LLM to ensure fitness for purpose—it's a process of quality control, and it should be approached as such. The trouble is that we software engineers have spent so long working in an artificially deterministic world that we're not used to designing and evaluating probabilistic quality control systems for computer output.
[0] They link to this paper: https://arxiv.org/pdf/2401.11817
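To make the quality-control framing concrete, here is a minimal sketch of what "constrain and validate" can look like in practice. The call_llm function and the JSON "intent" contract are hypothetical stand-ins for whatever your workflow actually produces; the point is only the shape: sample, check fitness for purpose, resample, and fall back when nothing passes.

    import json

    def generate_checked(call_llm, prompt, allowed_intents, max_attempts=3):
        # Treat the model as a probabilistic component: sample, validate,
        # and only release output that passes the fitness-for-purpose checks.
        for _ in range(max_attempts):
            raw = call_llm(prompt)               # hypothetical LLM call returning a string
            try:
                reply = json.loads(raw)          # constrain the shape: must be JSON
            except json.JSONDecodeError:
                continue                         # malformed output fails QC; resample
            if reply.get("intent") in allowed_intents:   # constrain the content
                return reply
        return {"intent": "escalate", "reason": "no output passed validation"}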
> The trouble is that we software engineers have spent so long working in an artificially deterministic world that we're not used to designing and evaluating probabilistic quality control systems for computer output.
I think that's a mischaracterization and not really accurate. As a trade, we're familiar with probabilistic/non-deterministic components and how to approach them.
You were closer when you used quotes around "AI Engineer" -- many of the loudest people involved in generative AI right now have little to no grounding in engineering at all. They aren't used to looking at their work through "fit for purpose" concerns, compromises, efficiency, limits, constraints, etc -- whether that work uses AI or not.
The rest of us are variously either working quietly, getting drowned out, or patiently waiting for our respected colleagues-in-engineering to document, demonstrate, and mature these very promising tools for us.
Everything else you said is 100% right, though.
> As a trade, we're familiar with probabilistic/non-deterministic components and how to approach them.
Yes, users.
And didn't that small or large subset of occasional or consistent bad reasoners we may have sometimes called "users" (in the secrecy of the four walls) reinforce, by contrast, and by forcing us to look at things objectively while trying to understand their "rants", the idea of a proper, reasonable stance?
..and bugs, especially with analog computers.
Take all computers and make it so all memory has a 0.1-5% chance of bit flipping any second (depending on cost and temperature), and that this just became a fundamental truth of reality. Any bit, anywhere in memory. It would completely turn SWE work on its head.
This is kind of how traditional engineering is, since reality is analog and everything is on a spectrum interacting with everything else all the time.
There is no simple function where you put in 1 and get out 0. Everything in reality is put in 1 +/- .25 and get out 0 +/- .25. It's the reason why the complexity of hardware is trivial compared to the complexity of software.
That's not really engaging with the point because you're suggesting turning all of our tools into something grossly unreliable. Of course that's a radical shift from what anybody's used to and undermines every practice in the trade.
But your mistake just reinforces what I wrote, because it's the same mistake that the "loud people" make when they think about generative AI. They imagine it as being a wholesale replacement for how projects are implemented and even how they're built in the first place.
But the many experienced engineers looking at generative AI recognize it as one of many tools that they can turn to while building a project that fulfills their requirements. And like all their tools, it has capabilities, costs, and limitations that need to be considered. That it's sometimes non-deterministic is not a new kind of cost or limitation. It's a challenging one, but not a novel one, and one just mindfully (or analytically) considers whether and how that non-determinism can be leveraged, minimized, etc. That is engineering, and it's what many of us have been doing with all sorts of tools for decades.
Perhaps I am not explaining this well. What you call grossly unreliable and a radical shift from what any [SWE] is used to, is called Tuesday afternoon for a mechanical, electrical, civil, chemical, etc. etc. engineer. Call them classic engineers.
Statistical outputs are the only outputs of classical engineering. You have never in your life assigned x = 5 and then later queried it and gotten x = 4.83. But that happens all the time in classic engineering, to the point that it is classic engineering.
That's what the OP is trying to get across. LLMs are statistical systems that need statistical management. SWEs don't deal with statistical systems because, like you said:
>[statistical software systems would be] turning all of our tools into something grossly unreliable. Of course that's a radical shift from what anybody's used to and undermines every practice in the trade.
Which is exactly why OP is saying SWEs need a new approach here.
You seem to be saying that because we don't only deal with "statistical systems" we don't ever or otherwise aren't institutionally or professionally familiar with them.
This is simply not the case.
Your career path may have only ever used deterministic components that you could fully and easily model in your head as such, like assigning to and reading from some particular abstract construct like the variable in your example. I don't really believe this is true for you, but it's what you seem to be letting yourself believe.
But for many of the rest of us, and for the trade as a whole, we already use many tools and interface with many components that are inherently non-deterministic.
Sometimes this non-determinism is itself a programmed effect, as with generative AI models or chaotic or noisy signal generators. In fact, such components are used in developing generative AI models. They didn't come out of nowhere!
Other times, this non-determinism is from non-software components that we interface with, like sensors or controllers.
Sometimes we combine both into things like random number generators with specific distribution characteristics, which we use to engineer specific solutions like cryptography products.
Regardless, the trade has collectively been doing it every day for decades longer than anybody on this forum has been alive.
Software engineering is not all token CRUD apps and research notebooks or whatever. We also build cryptography products, firmware for embedded systems, training systems for machine learning, etc -- all of which bring experience with leveraging non-deterministic components as some of the pieces, exactly like we quiet, diligent engineers are already doing with generative AI.
You're missing his point. He's saying if you make a program, you expect it to do X reliably. X may include "send an email, or kick off this workflow, or add this to the log, or crash" but you don't expect it to, for example, "delete system32 and shut down the computer". LLMs have essentially unconstrained outputs where the above mentioned program couldn't possibly delete anything or shut down your computer because nothing even close to that is in the code.
Please do not confuse this example with agentic AI losing the plot, that's not what I'm trying to say.
Edit: a better example is that when you build an autocomplete plugin for your email client, you don't expect it to also be able to play chess. But look what happened.
Of course they are a bug. Just because hallucinations emerge from the normal functioning of an LLM doesn't make them "not a bug".
No programmer in their right mind will call the lack of bounds checking resulting in garbled output "not a bug", even though it is a totally normal thing to do from the point of view of a CPU. It is a bug and you need additional code to fix it, for example by checking for an out-of-bounds condition and returning an error if it happens.
Same thing for LLM hallucinations. LLMs naturally hallucinate, but it is not what we want, so it is a bug. And to fix it, we need to engineer solutions that prevent the hallucinations from happening, maybe resulting in an "I don't know" response that would be analogous to an error message. How you do it may be different from a simple "if", with probabilities and all that, but the general idea is the same: recognizing error cases and responding accordingly.
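To make the analogy concrete, a small sketch; the answer_with_confidence callable is hypothetical, and estimating that confidence well is exactly the hard part, but structurally the two error paths look the same:

    def safe_get(items, index):
        # The CPU would happily read out of bounds; the check is what turns
        # garbled output into a well-defined error.
        if index < 0 or index >= len(items):
            raise IndexError(f"index {index} out of range")
        return items[index]

    def guarded_answer(answer_with_confidence, question, threshold=0.8):
        # Same idea for an LLM: recognize the error case and respond accordingly.
        # answer_with_confidence is a hypothetical callable returning
        # (answer_text, confidence_estimate) from whatever scoring you have.
        answer, confidence = answer_with_confidence(question)
        if confidence < threshold:
            return "I don't know."
        return answer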
I guess it comes down to how you define a bug, but what else would you call a result that is not fit for purpose?
A bug is defined as an unexpected defect. You can fix an unexpected defect by correcting the error in the code that led to the defect. In your example of lack of bounds checking there's a very concrete answer that will instantly fix the defect—add bounds checking.
Hallucinations are not unexpected in LLMs and cannot be fixed by correcting an error in the code. Instead they are a fundamental property of the computing paradigm that was chosen, one that has to be worked around.
It's closer to network lag than it is to bounds checking—it's an undesirable characteristic, but one that we knew about when we chose to make a network application. We'll do our best to mitigate it to acceptable levels, but it's certainly not a bug, it's just a fact of the paradigm.
I’d argue hallucinations are unexpected in LLMs by the large (non-technical) number of users who use them directly, or indirectly through other services.
It all depends on whose specification you’re assessing the “bugginess” against: the inference code as written, the research paper, colloquial understanding in technical circles, or how the product is pitched and presented to users.
> how the product is pitched and presented to users.
And this is why I feel it's so important to fix the way we talk about hallucinations. Engineers need to be extremely clear with product owners, salespeople, and other business folks about the inherent limitations of LLMs—about the fact that certain things, like factual accuracy, may asymptotically approach 100% accuracy but will never reach it. About the fact that even getting asymptotically close to 100% is extremely (most likely prohibitively) expensive. And once they've chosen a non-zero failure rate, they have to be clear about what the consequences of the chosen failure rate are.
Before engineers can communicate that to the business side, they have to have that straight in their own heads. Then they can communicate expectations with the business and ensure that they understand that once you've chosen a failure rate, individual 'hallucinations' can't be treated as bugs to troubleshoot—you need instead to have an industrial-style QC process that measures trends and reacts only when your process produces results outside of a set of well-defined tolerances.
(Yes, I'm aware that many organizations are so thoroughly broken that engineering has no influence over what business tells customers. But those businesses are hopeless anyway, and many businesses do listen to their engineers.)
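As a sketch of what "react only when the process drifts outside tolerance" could look like (the 2% tolerance and the 500-sample window are invented numbers; the pass/fail judgments would come from whatever sampled human or automated review you run):

    from collections import deque

    class HallucinationMonitor:
        # Industrial-style QC: don't chase individual failures; watch the
        # process failure rate against an agreed tolerance.
        def __init__(self, tolerance=0.02, window=500):
            self.tolerance = tolerance           # e.g. the business accepted a 2% failure rate
            self.results = deque(maxlen=window)  # rolling window of sampled reviews

        def record(self, passed_review):
            self.results.append(bool(passed_review))

        def out_of_tolerance(self):
            if len(self.results) < self.results.maxlen:
                return False                     # not enough samples to judge the process yet
            failure_rate = 1 - sum(self.results) / len(self.results)
            return failure_rate > self.tolerance

An out-of-tolerance signal then triggers a process investigation (prompt, retrieval, model version), not a per-response bug ticket.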
> individual 'hallucinations' can't be treated as bugs to troubleshoot
You are wrong here - my company can fix individual responses by adding specific targeted data for the RAG prompt. So a JIRA ticket for a wrong response can be fixed in 2 days.
It's important to understand that you're addressing the problem by adding a layer on top of the core technology, to mitigate or mask how it actually works.
At scale, your solution looks like bolting an expert system on top of the LLM. Which is something that some researchers and companies are actually working on.
Wow, that sounds great: just have every customer who interacts with your LLM come back to the site in 2 days to get the real answer to their question. How can I invest?
I've said it before, but I'm not convinced LLMs should be public-facing. I know some companies have been burned by them, and in my opinion, LLMs should be about helping customer support people find answers faster.
> LLMs should be about helping customer support people find answers faster
That would be as dangerous as any other function: you still need personnel who are verified as trustworthy when processing unreliable input.
Yes, that is the point. The customer service person would be most able to determine if what the LLM said makes sense or not. My point is, we are sold automation instead of a power tool.
This is why “fit for purpose” is such a useful idea.
Because it gives you two ends from which to approach the business challenge. You can improve the fitness—the functionality itself. But you can also adjust the purpose—what people expect it to do.
I think a lot of the concerns about LLMs come down to unrealistic expectations: oracles, Google killers, etc.
Google has problems finding and surfacing good info. LLMs are way better at that… but they err in the opposite direction. They are great at surfacing fake info too! So they need to be thought of (marketed) in a different way.
Their promise needs to be better aligned with how the technology actually works. Which is why it’s helpful to emphasize that “hallucinations” are a fundamental attribute, not an easily fixed mistake.
> I’d argue hallucinations are unexpected in LLMs by the large (non-technical) number of users who use them directly, or indirectly through other services.
People also blithely trust other humans even against all evidence that they're untrustworthy. Some things just aren't fixable.
The median individual is _not_ a model, and cannot represent the whole of the set. If the median is incompetent, the competent remain competent.
I've found it very helpful to make the following distinction:
Spec: Do X in situation Y.
Correctness bug: It doesn't do X in situation Y.
Fitness-for-purpose (FFP) bug: It does X in situation Y, but, knowing this, you decide you don't actually want it to do X in situation Y.
Hallucination is an FFP bug.
Sorry, but it's a correctness bug most of the time[*], as the correct information is known or known to not exist.
If you ask a math question and you get a random incorrect equation, it's not unfit for purpose, just incorrect.
FFP would be returning misinformation from the model, which is not a hallucination per se. Or the model misunderstanding the question and returning a correct answer to a related question.
[*] Except for art generators.
"Correct" here doesn't mean "correct" information -- I made sure to clarify what it means with an example.
Except we put up with network lag because it's an understandable, if undesirable, caveat to an otherwise useful technology. No one would ever say that because a network is sometimes slow, that it is then preferable to not have computers networked. The benefits clearly outweigh the drawbacks.
This is not true for many applications of LLM. Generating legal documents, for example: it is not acceptable that it hallucinate laws that do not exist. Recipes: it is not acceptable that it would tell people to make pizza with glue, or mustard gas to remove stains. Or, in my case: it is not acceptable for a developer assisting AI to hallucinate into existence libraries that are not real and not only will not solve my problem, but that will cause me to lose hours of my day trying to figure out where to get said library.
If pneumatic tires failed to hold air as often as LLMs hallucinate, we wouldn't use them. That's not to say a tire can't blow out, sure they can, happens all the time. It's about the rate of failure. Or hell, to bring it back to your metaphor, if your network experienced high latency at the rate most LLMs hallucinate, I might actually suggest you not network computers, or at the very least, I'd say you should be replaced at whatever company you work for since you're clearly unqualified to manage a network.
The benefits of networking outweigh the drawbacks in many situations, but not all, and good engineers avoid the network in cases where the lag would be unacceptable (i.e., real-time computing applications such as assembly line software). The same applies to LLMs—even if we're never able to get the rate of failure down below 5%, there are some applications that that would be fine for.
The important thing isn't that the rate of failure be below a specific threshold before the technology is adopted anywhere, the important thing is that engineers working on this technology have an understanding of the fundamental limitations of the computing paradigm and design accordingly—up to and including telling leadership that LLMs are a fundamentally inappropriate tool for the job.
I mean, agree. Now tell me which applications of LLM that are currently trending and being sold so hard by Silicon Valley meet that standard? It's not none, certainly, but it's a hell of a lot less than exist.
If it's not acceptable to hallucinate laws for writing legal documents, then writing legal documents is probably an unacceptable use case.
Also, how do you mitigate a lawyer writing whatever they want (aka: hallucinating) when writing legal documents? Double-checking??
Lawyers can already be sanctioned for this: https://www.youtube.com/watch?v=oqSYljRYDEM&pp=ygUObGVnYWwgZ...
> Also, how do you mitigate a lawyer writing whatever they want (aka: hallucinating) when writing legal documents? Double-checking??
Of course they are supposed to double- and triple- and multiply-check as they think and write, documentation and references at hand, _exactly_ as you are supposed to do in everything from trivial informal contexts up to critical ones. Exactly the same: you check the details and the whole, multiple times.
A licensing body, and consequences for the failure to practice law correctly.
> If it's not acceptable to hallucinate laws for writing legal documents
Legislators pass incoherent legislation every day. "hallucination" is the de-facto standard for human behavior (and for law).
Bug, like any other word, is defined however the speaker defines it. While your usage is certainly common in technical groups, the common "layman" usage is closer to what the parent suggests.
And is there a compelling reason for us, while engaged in technical discussion with our technical peers about the technical mitigations for a technical defect, to use the layman usage rather than the term of art?
Expected defects are bugs too. I totally expect half the problems in the software my company is developing. They are still bugs.
In real world engineering, defects are part of the design and not bugs. Really they aren't even called defects, because they are inherent in the design.
Maybe you bump your car because you stopped an inch too far. Perhaps it's because the tires on your car were from a lower performing but still in spec batch. Those tires weren't defective or bugged, but instead the product of a system with statistical outputs (manufacturing variation) rather than software-like deterministic ones (binary yes/no output).
Which goes back to OP's initial point: SWE types aren't used to working in fully statistical output environments.
What is the utility of this sense of "bug"? If not all bugs can be fixed it seems better to toss the entire concept of a "bug" out the window as no longer useful for describing the behavior of software.
What is the utility of any other sense? I expect a null pointer error to happen. It is still a bug. Even if it is in some kind of special situation we don't have time to fix.
> If not all bugs can be fixed it seems better to toss the entire concept of a "bug" out the window as no longer useful for describing the behavior of software.
Then those are bugs you can't fix. It is just lying to yourself to call them not a bug ... if they are bugs.
> Of course they are a bug.
A bug implies fixable behavior rather than expected behavior. An LLM making shit up is expected behavior.
> LLMs naturally hallucinate, but it is not what we want, so it is a bug.
Maybe you just don't want an LLM! This is what LLMs do. Maybe you want a decision tree or a scripted chatbot?
> And to fix it, we need to engineer solutions that prevent the hallucinations from happening, maybe resulting in an "I don't know" response that would be analogous to an error message.
I'm sure we'll figure out how to do this when we can fix the same bug in humans, too. Given that humans can't even agree when we're right or wrong—much less sense the incoherency of their own worldviews—I doubt we're going to see a solution to this in our lifetimes.
A bug is generally treated as undefined and undesirable side effects of a program.
Hallucinations are undesirable but not undefined. We know that the process creates them and expect them.
It’d be like using floats to calculate dollars and cents and calling the resulting math a bug - it’s not, you just used the technology wrong.
> LLMs naturally hallucinate, but it is not what we want, so it is a bug.
I rolled a one in D&D, it is not what I wanted, so it is a bug. Remove it from all my dice.
What? You are telling me that when you roll a 6 sided dice you are not expecting any of the 1-6 as a result?
If a 6-sided dice produced a 7 that would be a bug.
When you rolled a dice, I would argue that you knew you wanted a random number from 1-6, not that you wanted a specific number or not a specific number. If you wanted that you wouldn't have used a dice.
When I ask an LLM to write code for me and it references a completely made up library that doesn't exist and has never existed, is this really analogous to your dice example?
>You are telling me that when you roll a 6 sided dice you are not expecting any of the 1-6 as a result?
The statement I replied to wasn't that any unexpected result is a bug; it was that undesired output is a bug (hence the joke about not desiring an expected output). LLMs producing "funny" (hallucination) outputs is expected, and only sometimes undesired, therefore not a bug in my opinion.
How do you use an LLM in storytelling if it isn't allowed to produce fictitious outputs?
IMO it is because you just asked a bunch of dice to write code for you.
>Of course they are a bug
No.
When you build a bloom filter and it says "X is in the set" and X is NOT in the set, that's not a bug, that's an inherent behavior of the very theory of a probabilistic data structure. It is something that WILL happen, that you MUST expect to happen, and you MUST build around.
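For anyone who hasn't built around one, a toy version of the point (the sizes and hash count are arbitrary): the false positives are a designed-in property you must expect and handle, not a defect you can patch out.

    import hashlib

    class BloomFilter:
        # Probabilistic membership: "no" is always right, "maybe" is sometimes
        # wrong. False positives are part of the design, not a bug.
        def __init__(self, size_bits=1024, num_hashes=3):
            self.size = size_bits
            self.k = num_hashes
            self.bits = 0

        def _positions(self, item):
            for i in range(self.k):
                digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits |= 1 << pos

        def might_contain(self, item):
            return all(self.bits & (1 << pos) for pos in self._positions(item))

The system built around it treats every "maybe" as a cue for an exact check against the real store, which is the same posture being argued for with LLM output.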
>And to fix it, we need to engineer solutions that prevent the hallucinations from happening
The whole point is that this is fundamentally impossible.
The difference is that you can fix IndexError by modifying your code but no amount of prompt manipulation may fix hallucinations. For that you need solutions outside LLMs.
Not a bug at all, IMHO.
If someone puts the wrong address for their business; Google picks it up, and someone Googles and gets the wrong address, it says nothing about "bugs in software."
‘ I always struggle a bit with I'm asked about the "hallucination problem" in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines. ...’
Excellent point:
> just output from an LLM-based workflow that is not fit for purpose
And I think this is just one aspect of what I think of as the stone soup [1] problem. Outside of rigorous test conditions, humans just have a hard time telling how much work they're doing when they interpret something. It's the same sort of thing you see with "psychics" doing things like cold reading. People make meaning out of vaguery and nonsense and then credit the nonsense-producer with the work.
LLMs outputs are no more "hallucinations" than my output would be if I were asked to judge a dressage competition.
I’ve had multiple occasions where I’ve asked an LLM how to do <whatever> in Java and it’ll very confidently answer to use <some class in some package that doesn’t exist>. It would be far more helpful to me to receive an answer like “I don’t think there’s a third party library that does this, you’ll have to write it yourself” than to waste my time telling me a lie. If anything, calling these outputs “hallucinations” is a very polite way of saying that the LLM is bullshitting the user.
Of course the LLM is bullshitting the user. That's precisely its purpose: LLMs are tools that generate comprehensible sounding language based on probability models that describe what words/tokens tend to be found in proximity to each other. An LLM doesn't actually know anything by reference to verifiable, external facts.
Sure, LLMs can be used as fancy search engines that index documents and then answer questions by referring to them, but even there, the probabilistic nature of the underlying model can still result in mistakes.
Models do know things. Facts are encoded in their parameters. Look at some of the interpretability research to see that. They aren't just Markov chains.
Nope. They don't know any specific facts. The training data produces a probability matrix that reflects what words are likely to be found in relation other words, allowing it to generate novel combinations of words that are coherent and understandable. But there is no mechanism involved for determining whether those novel expressions are actually factual representations of reality.
Again, read the papers. They absolutely do know facts, and that can be seen in the activations. Your description is oversimplified. It's easy to get models to emit statistically improbable but correct sequences of words. They are not just looking at what words are near each other; that doesn't lead to the kind of output LLMs are capable of.
Exactly. People forget that we did make systems that were just Markov chains long before LLMs, like the famous Usenet Poster "Mark V. Shaney" (created by Rob Pike of Plan 9 and Golang fame) that was trained on Usenet posts in the 1980s. You didn't need deep learning or any sort of neural nets for that. It could come up with sentences that sometimes made some sort of sense, but that was it. The oversimplified way LLMs are sometimes explained makes it sound like they are no different from Mark V. Shaney, but they obviously are.
Yeah I get that, but at the same time we have AI hype men talking out of both sides of their mouth:
> This model is revolutionary, it knows everything, can answer anything with perfect accuracy!
“It’s fed me bullshit numerous times”
> OF COURSE it’s bullshitting you, don’t you know how LLMs work?
Like how am I supposed to take any of this tech seriously when the LLM is always answering questions as if it had the utmost confidence in what it is spitting out?
Hilariously, that really does basically define “bullshitting”.
Bullshit in the Frankfurtian sense.
There is a recent paper that explains it: https://link.springer.com/article/10.1007/s10676-024-09775-5
The LLM is always bullshitting the user. It's just sometimes the things it talks about happen to be real and sometimes they don't.
LLMs don't know things, they just string together responses that are a best fit for what follows from their prompt.
I suspect it's so hard to get them to say "I don't know" because if they were biased towards responding that way, then I would assume that's almost all they would ever say, since "I don't know" is an appropriate answer to every question imaginable.
I get that, but since it is all probabilities, you might imagine even the LLM knows when it is skating on thin ice.
If I'm beginning with "Once / upon / a" I think the data will show a very high confidence in the word that follows. So too I would imagine it would know when the trail of breadcrumbs it has been following is of the trashier and low-probability kind.
So just tell me. (Or perhaps speak to me and when your confidence is low you can drift into vocal fry territory.)
Maybe just having a confidence weight assigned to each sentence the LLM generates, reflected in tooltips or text coloring, would be a big improvement.
> the LLM knows
I don't think you get it.
He does get it and models do know their own confidence levels with a remarkably high degree of accuracy. The article states this clearly:
> Encoded truth: Recent work suggests that LLMs encode more truthfulness than previously understood, with certain tokens concentrating this information, which improves error detection. However, this encoding is complex and dataset-specific, hence limiting generalization. Notably, models may be encoding the correct answers internally despite generating errors, highlighting areas for targeted mitigation strategies.
Linking to this paper: https://arxiv.org/pdf/2410.02707
"Recent studies have demonstrated that LLMs’ internal states encode information regarding the truthfulness of their outputs, and that this information can be utilized to detect errors. In this work, we show that the internal representations of LLMs encode much more information about truthfulness than previously recognized."
This was already known years ago, by the way. The meme that LLMs just generate statistically plausible text is wrong and has been from the start. That's not how they work.
>The meme that LLMs just generate statistically plausible text is wrong and has been from the start
Did you read that paper? It doesn't support discarding this "meme" at all. More importantly, I don't think it adequately supports the claim that LLMs "know facts".
FFS, the actual paper is about training models on the LLM state to predict whether its actual output is correct. The interesting finding to them is that their models predict about a 75% chance of being correct even before the LLM starts generating text, that the conversational part of the answer has a low predicted chance of being correct, and that the "exact answer", a term they've created, is usually where the chance the LLM is correct (according to their trained model) peaks.
What they have demonstrated is that you can build a model that looks at in-memory LLM state and have a 75% chance of guessing whether the LLM will produce the correct answer based on how the model reacts to the prompt. Even taking as a given (which you shouldn't in a science paper) that there's no trickery going on in the Probe models, accidental or otherwise, this is perfectly congruent with the statement that LLMs only "generate statistically probable text in the context of their training corpus and the prompt".
Notably, why don't they demonstrate that you can predict whether a trained but completely unprompted model will "know" the answer? Why does the LLM have to process the conversation before you can predict, with a >90% chance, whether it will produce the answer? If the LLM stores facts in its weights, you should be able to demonstrate that completely at rest.
IMO, what they've actually done is produce "Probe models" that can, 75% of the time, correctly predict whether an LLM will produce a certain token or set of tokens in its generation. That is coherent with an LLM model being, broadly speaking, a model of how tokens relate to each other from a point of view of language. The main quibble in these discussions is that that doesn't constitute "knowing", IMO. LLMs are a model of language, not reality. That's why they are good at producing accurate language, and bad at producing accurate reality. That most facts are expressed in language doesn't mean language IS facts.
A question: Why don't LLMs produce garbage grammar when they "hallucinate"?
> why don't they demonstrate that you can predict whether a trained but completely unprompted model will "know" the answer?
The answer to what? You have to ask a question to test whether the answer will be accurate, and that's the prompt. I don't understand this objection.
> If the LLM stores facts in it's weights, you should be able to demonstrate that completely at rest.
Sure, with good enough interpretability systems, and those are being worked on. Anthropic can already locate which parts of the model fire on specific topics or themes and force them on or off by manipulating the activation vectors.
> A question: Why don't LLMs produce garbage grammar when they "hallucinate"?
Early models did.
They mean the non-normalized probabilities for each token are available. Many APIs give access to the top-n. You can color the text based on them, or include them in your pipelines, e.g. to trigger an external lookup, or to inject claims of uncertainty (the same things I do). It's not remotely guaranteed, but it's some low-hanging fruit that can sometimes be useful.
One of these days, someone will figure out how to include that in the training/inference loop. It's probably important for communication and reasoning, considering a similar concept happens in my head (some sort of sparsity detection).
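A minimal sketch of that low-hanging fruit, assuming the OpenAI Python client and a model/endpoint that exposes logprobs (other providers have similar fields); the 0.5 cutoff is an arbitrary illustration, not a calibrated threshold:

    import math
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Who wrote The Name of the Rose?"}],
        logprobs=True,
        top_logprobs=5,
    )

    # Flag the tokens the model itself was least sure about.
    for tok in resp.choices[0].logprobs.content:
        prob = math.exp(tok.logprob)
        marker = "??" if prob < 0.5 else "  "
        print(f"{marker} {tok.token!r:>12}  p={prob:.2f}")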
Precisely. Hallucinations were improperly named. A better term is "confabulation," which is telling an untruth without the intent to deceive. Sadly, we can't get an entire industry to rename the LLM behavior we call hallucination, so I think we're stuck with it.
You are implicitly anthropomorphizing LLMs by implying that they (can) have intent in the first place. They have no intent, so can't lie or make a claim or confabulate. They are just a very complex black box that takes input and spits output. Searle's Chinese Room metaphor applies here.
There is no source of truth for dressage competition results; these are accepted as the jury's preference judgement.
There are plenty of matters where there is such a source of truth, and LLMs don't know the difference.
> There is no source of truth
There is no «source of [_simple_] truth» for complex things, but there are more (rather than less) objective complex evaluations.
Note that this is also valid for factual notions: e.g., "When were the Pyramids built?".
Ancient Egypt chronology is a poor example of determined knowledge.
We do not know in fact exactly when (or which?) Pyramids were built; there are large margins of error in the estimates.
That was my point: answering that question is a more complex evaluation than others. In lower percentiles you may have "what is in front of you" and in upper percentiles you may have "how to fix the balance of power in the Pacific" - all more or less complex evaluations.
I said, "Not even factual notions are trivial, e.g. "When has this event happened" - all have some foundational ground of higher or lower solidity".
Right, I misread your comment. Sorry!
The difference is that you are capable of reflection and self-awareness, in this particular case that you understand nothing about dressage and your judgements would be a farce.
One of the counter-arguments to "LLMs aren't really AI" is: "Well, maybe the human brain works much like an LLM. So we are stupid in the same way LLMs are. We just have more sophisticated LLMs in our heads, or better training data. In other words, if LLMs aren't intelligent, then neither are we."
The counter to this counter is: Can one build an LLM that can identify hallucinations, the way we do? That can classify its own output as good or shitty?
A machine with probabilistic output generation cannot tell what is a hallucination and what is not. It does not know the difference between truth and everything else. It is us humans on the receiving end who have to classify the content - and that is the problem. We have little patience, time, or energy to do this verification work for every piece of information. That's why we have the human trait of trust, which has been at the core of human progress from the beginning.
Now the question can be rephrased. Is it possible to trust AI information generators - what's to be done to build trust? And here is the difficulty - I do not know why I should ever trust a probabilistic system as long as it has this property and does not turn into a deterministic version of itself. I won't lower my standards for trusting people, for good reasons. But I cannot even raise the bar for trust in a machine above zero as long as it is driven by randomness at its core.
Calling them hallucinations was a huge mistake.
It is good branding, like neural networks, and even artificial intelligence was. The good point is it makes it really easy to detect who is a bullshitter and who understands at least very remotely what an LLM is supposed to produce.
I won't defend the term but am curious what you think would have been equally concise but more accurate. Calling them, for example, "inevitable statistical misdirections" doesn't really roll off the tongue.
Confabulation, if the desire is to use a more apt psychological analogy.
It's a bug. Any other system where you put in one input and expect a certain output and get something else it'd be called a bug. Making up new terms for AI doesn't help.
I actually disagree with bug for the same reason I disagree with hallucination: it creates the idea that there's an error in processing that needs to be fixed, rather than an inherent characteristic of the computing paradigm that needs to be managed.
To be accurate, a term would need to emphasize that it's describing an opinion about the output rather than something that happened inside the program. "Inaccuracies" would be one term that does that fairly well.
Needless to say, different bugs have many names...
Statmist
I see two types of faults with LLMs.
a) They output incorrect results given a constrained set of allowable outputs.
b) When unconstrained they invent new outputs unrelated to what is being asked.
So for me the term hallucination accurately describes b), e.g. you ask for code to solve a problem and it invents new APIs that don't exist. Technically it is all just tokens and probabilities, but it's a reasonable term to describe the behaviour end users see.
The term is actually fine. The problem is when it's divorced from the reality of:
> in some sense, hallucination is all LLMs do. They are dream machines.
If you understand that, then the term "hallucination" makes perfect sense.
Note that this in no way invalidates your point, because the term is constantly used and understood without this context. We would have avoided a lot of confusion if we had based it on the phrase "make shit up" and called it "shit" from the start. Marketing trumps accuracy again...
(Also note that I am not using shit in a pejorative sense here. Making shit up is exactly what they're for, and what we want them to do. They come up with a lot of really good shit.)
I agree with your point, but I don't think anthropomorphizing LLMs is helpful. They're statistical estimators trained by curve fitting. All generations are equally valid for the training data, objective and architecture. To me it's much clearer to think about it that way versus crude analogies to human brains.
We can't expect end users to understand what "statistical estimators trained by curve fitting" means.
That's why we use high level terms like hallucination. Because it's something everyone can understand even if it's not completely accurate.
> Because it's something everyone can understand
But they will understand the wrong thing. Someone unfamiliar with LLMs but familiar with humans will assume, when told that LLMs 'hallucinate', that it's analogous to a human hallucinating, which is dangerously incorrect.
That's a good point. But re: not anthropomorphizing, what's wrong with errors, mistakes or inaccuracies? That's something everybody is familiar with and is more accurate. I'd guess most people have never actually experienced a hallucination anyway, so we're appealing to some vague notion of what that is.
> what's wrong with errors, mistakes or inaccuracies?
They're not specific enough terms for what we're talking about. Saying a lion has stripes is an error, mistake, or inaccuracy. Describing a species of striped lions in detail is probably all those things, but it's a distinctive kind of error/mistake/inaccuracy that's worth having a term for.
> I'd guess most people have never actually experienced a hallucination anyway
I actually think most people have.
Every time you look at a hot road and see water, that mirage is a form of hallucination.
Except mirages are real optical phenomena that can be captured by a camera. Hallucinations are made entirely by your brain and cannot be captured by an external observer.
> what's wrong with [']errors['], [']mistakes['] or [']inaccuracies[']?
"To sort the files by beauty, use the `beautysort` command."
That's a great point. Reminds me of the "feature, not a bug" Karpathy tweet [0].
[0]: https://x.com/karpathy/status/1733299213503787018?lang=en
... which is linked to from the article ;)
He's right but do people really misunderstand this? I think it's pretty clear that the issue is one of over-creativity.
The hallucination problem is IMHO at heart two things that the fine article itself doesn't touch on:
1. The training sets contain few examples of people expressing uncertainty, because the social convention on the internet is that if you don't know the answer, you don't post. Children also lie like crazy for the same reason: they ask simple questions, so they rarely see examples of their parents expressing uncertainty or refusing to answer, and it then has to be explicitly trained out of them. Arguably that training often fails, and lots of adults "hallucinate" a lot more than anyone is comfortable acknowledging.
The evidence for this is that models do seem to know their own level of certainty pretty well, which is why simple tricks like saying "don't make things up" can actually work. There's some interesting interpretability work that also shows this, which is alluded to in the article as well.
2. We train one-size-fits-all models, but use cases vary a lot in how much "creativity" is allowed. If you're a customer help desk worker then the creativity allowed is practically zero, and the ideal worker from an executive's perspective is basically just a search engine and human voice over an interactive flowchart. In fact that's often all they are. But then we use the same models for creative writing, research, coding, summarization and other tasks that benefit from a lot of creative choices. That makes it very hard to teach the model how much leeway it has to be over-confident. For instance, during coding, a long reply that contains a few hallucinated utility methods is way more useful than a response of "I am not 100% certain I can complete that request correctly", but if you're asking questions of the form "does this product I use have feature X" then a hallucination could be terrible.
Obviously, the compressive nature of LLMs means they can never eliminate hallucinations entirely, but we're so far from reaching any kind of theoretical limit here.
Techniques like better RAG are practical solutions that work for now, but in the longer run I think we'll see different instruct-trained models trained for different positions on the creativity/confidence spectrum. Models already differ quite a bit. I use Claude for writing code but GPT-4o for answering coding related questions, because I noticed that ChatGPT is much less prone to hallucinations than Claude is. This may even become part of the enterprise offerings of model companies. Consumers get the creative chatbots that'll play D&D with them, enterprises get the disciplined rule followers that can be trusted to answer support tickets.
> He's right but do people really misunderstand this?
Absolutely. Karpathy would not have felt obliged to mini-rant about it if he hadn't seen it, and I've been following this space from the beginning and have also seen it way too often.
Laypeople misunderstand this constantly, but far too many "AI engineers" on blogs, HN, and within my company talk about hallucinations in a way that makes it clear that they do not have a strong grounding in the fundamentals of this tech and think hallucinations will be cured someday as models get better.
Edit: scrolling a bit further in the replies to my comment, here's a great example:
https://news.ycombinator.com/item?id=42325795
And another: https://news.ycombinator.com/item?id=42325412
I like your analogy with the child. There are different types of human discourse. There is a "helpful free man" discourse where you try to reach the truth. There is a "creative child" discourse where you play with the world and try out weird things. There is also a "slave mindset" discourse where you blindly follow orders to satisfy the master, regardless of your own actual opinion on the matter.
> What they are is a value judgement we assign to the output of an LLM program. A "hallucination" is just output from an LLM-based workflow that is not fit for purpose.
In other words, hallucinations are to LLMs what weeds are to plants.
You are right - "Hallucinations are not a bug"
What are you talking about? It is not artificially deterministic, it is like that by design. We are fortunate that we can use logic to encode logic and have it, for the most part, do the same thing given a fixed set of antecedents.
We even want this in the "real" world: when I turn the wheel left on my car I don't want it to turn left only when it feels like it; when that happens we rightly classify it as a failure.
We have the tools to build very complex deterministic systems, but for the most part we choose not to use them, because they are hard or not familiar or whatever other excuse you might come up with.
The fact is that while the tools exist and may be easy to learn, there’s always that nebulous part called creativity, taste, craftsmanship, expertise, or whatever (like designing good software) that’s hard to acquire. Generative AI is good at giving the illusion you can have that part (by piggybacking on the work of everyone). Which is why people get upset when you shatter that illusion by saying that they still don’t have it.
Does it help the discourse at all to instead call hallucinations a negatively perceived form of imagination?
A challenge is that it’s not easy to limit hallucinations without also limiting imagination and synthesis.
In humans.
But also apparently in LLMs.
Healthy humans generally have some internal model of the world against which they can judge what they're about to say. They can introspect and determine whether what they say is a guess or a statement of fact. LLMs can't.
Humans routinely misremember facts but are relatively certain those remembrances are correct.
That’s a form of minor, everyday hallucination.
If you engage in such thorough criticism and checking of every recalled fact as to eliminate that, you’ll crush your ability to synthesize or compose new content.
No, that's not hallucination.
In a human, there is a distinction between "this is information I truly think I know, my intention is to state a true fact about the world" and "this is something I don't know so I made something up". That distinction doesn't exist in LLMs. The fact that humans can be mistaken is a completely different issue.
> If you engage in such thorough criticism and checking of every recalled fact as to eliminate that, you’ll
Experience tells us differently: creativity is not impacted. In fact, it will probably return better solutions (as opposed to delirious ones).
On office desks there were "In" boxes and "Out" boxes. You do not put "imagination" in "Out" boxes. What is put in "Out" boxes must be checked and stamped.
"Imagination" stays on the desk. You "imagine" that a plane could have eight wings, then you check if it is a good idea, and only then the output is decided.
> A challenge is that it’s not easy to limit hallucinations without also limiting imagination and synthesis.
> In humans.
True, but distinguishing reality from imagination is a cornerstone of mental health. And it's becoming apparent that the average person will take the confident spurious affirmations of LLMs as facts, which should call their mental health into question.
Misremembering facts isn’t a negative mental health event, yet is an example of imagination rather than recall — similar to LLMs hallucinating.
Humans imagine events all the time, without the ability to know whether they actually happened. Part of why eye-witness testimony is so unreliable.
> is inevitable
False. It is (in this context) outputting a partial result before full processing. Adequate (further) processing removes that "inevitable". Current architectures are not "final".
Proper process: "It seems like that." // "Is it though?" // "Actually it isn't."
(Edit: already this post had to be corrected many times because of errors...)
> While the hallucination problem in LLMs is inevitable
Oh, please. That's the same old computability argument used to claim that program verification is impossible.
Computability isn't the problem. LLMs are forced to a reply, regardless of the quality of the reply. If "Confidence level is too low for a reply" is an option, the argument in that paper becomes invalid.
The trouble is that we don't know how to get a confidence metric out of an LLM. This is the underlying problem behind hallucinations. As I've said before, if somebody doesn't crack that problem soon, the AI industry is overvalued.
Alibaba's QwQ [1] supposedly is better at reporting when it doesn't know something. Comments on that?
This article is really an ad for Kapa, which seems to offer managed AI as a service, or something like that. They hang various checkers and accessories on an LLM to try to catch bogus outputs. That's a patch, not a fix.
[1] https://techcrunch.com/2024/11/27/alibaba-releases-an-open-c...
Confidence levels aren't necessarily low for incorrect replies; that's the problem. The LLM doesn't "know" that what it's outputting is incorrect. It just knows that the words it's writing are probable given the inputs; "this is what answers tend to look like".
You can make improvements, as your parent comment already said, but it's not a problem which can be solved, only to some degree be reduced.
> Computability isn't the problem. LLMs are forced to a reply, regardless of the quality of the reply. If "Confidence level is too low for a reply" is an option, the argument in that paper becomes invalid.
This is false. The confidence level of these models does not encode facts, it encodes statistical probabilities that a particular word would be the next one in the training data set. One source of output that is not fit for purpose (i.e. hallucinations) is unfit information in the training data, which is a problem that's intractable given the size of the data required to train a base model.
You can reduce this problem by managing your training data better, but that's not possible to do perfectly, which gets to my point—managing hallucinations is entirely about risk management and reducing probabilities of failure to an acceptable level. It's not decidable, it's only manageable, and that only for applications that are low enough stakes that a 99.9% (or whatever) success rate is acceptable. It's a quality control problem, and one that will always be a battle.
> Alibaba's QwQ [1] supposedly is better at reporting when it doesn't know something. Comments on that?
I've been trying it out, and what it's actually better at is going in circles indefinitely, giving the illusion of careful thought. This can possibly be useful, but it's just as likely to "hallucinate" reasons why its first (correct) response might have been wrong (reasons that make no sense) as it is to correctly correct itself.
LLMs and their close buddies NNs use models that do massive amounts of what amounts to cubic splines across N dimensions.
Cubic splines have the same issues as what these nets are seeing. There are two points and a 'line of truth' between them. But the formula that connects the dots, as it were, only guarantees that the two points are on the line. You can, however, tweak the curve to fit, but it is not always 100%; in fact it can vary quite wildly. That is the 'hallucination' people are seeing.
Now can you get that line of truth closer with more training? Which basically amounts to tweaking the weighting. Usually yes, but the method basically only guarantees the points are on the line. Everything else? Well, it may or may not be close. Smear that across thousands of nodes and the error rate can add up quickly.
If we want a confidence level my gut is saying that we would need to measure how far away from the inputs an output ended up being. The issue that would create though is the inputs are now massive. Sampling can make the problem more tractable but then that has more error in it. Another possibility is tracking how far away from the 100% points the output gave. Then a crude summation might be a good place to start.
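A tiny illustration of the "guaranteed at the sampled points, not in between" point (the numbers are arbitrary, and the analogy to network training is the parent's, taken loosely):

    import numpy as np
    from scipy.interpolate import CubicSpline

    def truth(x):
        return np.sin(3 * x)                  # the "line of truth"

    knots = np.linspace(0, np.pi, 6)          # the few points we actually fit on
    spline = CubicSpline(knots, truth(knots))

    dense = np.linspace(0, np.pi, 1000)
    at_knot_error = np.abs(spline(knots) - truth(knots)).max()
    between_error = np.abs(spline(dense) - truth(dense)).max()

    print(f"error at the sampled points: {at_knot_error:.2e}")  # ~0: exact at the knots
    print(f"worst error between them:    {between_error:.2e}")  # clearly non-zero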
so what you're saying is that LLMs are like middle aged men, just throwing things out there seeing if they'll stick?
To my understanding, the reason why companies don't mind the hallucinations is the acceptable error rate for a given system. Let's say something hallucinates 25% of the time; if that's OK, then it's fine for a certain product. If it only hallucinates 5% of the time, it's good enough for even more products, and so on. The market will just choose the LLM appropriately depending on the tolerable error rate.
At scale, you are doing the same thing with humans too. LLMs seem to have an error rate similar to humans for the majority of simple, boring tasks, if not even a bit better since they don't get distracted and start copying and pasting their previous answers.
The difference with LLMs is they simply cannot (currently) do the most complex tasks that some humans can, and when they do produce erroneous output, the errors aren't very human-like. We can all understand a cut and paste error so don't hold it against the operator, but making up sources feels like a lie and breeds distrust.
> At scale, you are doing the same thing with humans too. LLMs seem to have an error rate similar to humans for the majority of simple, boring tasks, if not even a bit better since they don't get distracted and start copying and pasting their previous answers.
This is the big one missed by the frequent comments on here wondering whether LLMs are a fad, or claiming in their current state they cannot be used to replace humans in non-trivial real-world business workflows. In fact, even 1.5 years ago at the time of GPT 3.5, the technology was already good enough.
The yardstick is the performance of humans in the real world on a specific task. Humans, often tired, having a cold, distracted, going through a divorce. Humans who even when in a great condition make plenty of mistakes.
I guess a lot of developers struggle with understanding this because so far, when software has replaced humans, it was software that on the face of it (though often not in practice) did not make mistakes if bug-free. But that has never been necessary for software to replace humans - hence buggy software still succeeding in doing so. Of course, often software even replaces humans when it's worse at a task, for cost reasons.
They're at the very least competitive with, if not better than, doctors at diagnosing illnesses [1].
[1] https://www.nytimes.com/2024/11/17/health/chatgpt-ai-doctors...
Related to that, I once had a CT scan for a potentially fatal brain concern, and the note that the radiologist sent back to my consultant was for a completely different patient, and the notes for my scan were attached to someone else's report. The only reason it was caught was because it referred to me as "she".
If we were both the same gender, I probably would have had my skull opened up for no reason, and she would have been discharged and later died.
> The yardstick is the performance of humans in the real world on a specific task.
Humans make human errors, which we can anticipate, recognize, counter, and mitigate. And the rise of deterministic automation was because it helps with the parts that are more likely to generate an error. The LLM strategy always seems like solving a problem that is orthogonal to business objectives, and mainly serves individuals instead.
Almost all deterministic automation also has error rates. The error rates were higher in the past by orders of magnitude, but we got better at creating reliable software.
We’re judging an entirely new segment of development after only 2 years of it being actively in public. And overall, LLMs have gotten exponentially better.
The bigger, more controversial claim is that LLMs will be net loss for human jobs, when all past automation has been a net positive. Including IT, where automation has led to a vast growth of software jobs, as more can be accomplished with higher level languages, tools, frameworks, etc.
For example, compilers didn't put programmers out of business in the 60s, it made programming more available to people with higher level languages.
A net positive in the long term matters little when it can mean a lifetime of unemployment to a generation of humans. It's easy to dismiss the human suffering incurred during industrialization when we can enjoy its fruits but those who suffered are long dead.
Imagine having a backend that is down 20-40 days per year - yeah, that would be bad. Companies do not care about hallucinations because bad text output is not considered an error, and as long as it won't raise a Datadog alert it won't be taken seriously.
I mean, do you remember the early 2000s? We had so many web pages that would go down on a daily basis. Stability is something we achieved over time.
Also, again, if it's bad, nobody will use it, and the product will die. In that scenario, companies with lower error rates (that don't use AI) will win the market.
When people talk about stopping an LLM from "seeing hallucinations instead of the truth", that's like stopping an Ouija-board from "channeling the wrong spirits instead of the right spirits."
It suggests a qualitative difference between desirable and undesirable operation that isn't really there. They're all hallucinations, we just happen to like some of them more than others.
The problem is that LLMs are just convincing enough that people DO trust them, which matters because AI slop is creeping into everything.
What can be done to solve it (while not perfect) is pretty powerful. You can force feed them the facts (RAG) and then verify the result. Which is way better than trusting LLMs while doing neither of those things (which is what a lot of people do today anyway). See the recent 5 cases of lawyers getting in trouble for ChatGPT hallucinating citations of case law.
LLMs write better than most college students so if you do those two things (RAG + check) you can get college graduate level writing with accurate facts... and that unlocks a bit of value out in the world.
Don't take my word for it - look at the proposed valuations of AI companies. Clearly investors think there's something there. The good news is that it hasn't been solved yet, so if someone wants to solve it there might be money on the table.
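A minimal sketch of the "verify" half of that workflow (plain Python, nothing provider-specific; the helper name and example strings are mine, for illustration): after a RAG answer comes back, check that every quoted span actually appears verbatim in the retrieved context, and flag the ones that don't.

    import re

    def unsupported_quotes(answer: str, context: str) -> list[str]:
        # Pull out double-quoted spans from the model's answer and return
        # any that do not appear verbatim in the retrieved context.
        quotes = re.findall(r'"([^"]+)"', answer)
        return [q for q in quotes if q not in context]

    context = 'The report states that "smoking causes about 480,000 premature deaths per year".'
    answer = ('According to the report, "smoking causes about 480,000 premature deaths per year" '
              'and "smoking is harmless in moderation".')
    print(unsupported_quotes(answer, context))  # ['smoking is harmless in moderation']

It only catches verbatim misquotes, of course; checking paraphrased claims needs an entailment model or a human, which is the harder part.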
> and that unlocks a bit of value out in the world.
> Don't take my word for it - look at the proposed valuations of AI companies. Clearly investors think there's something there.
Investors back whatever they think will make them money. They couldn't give less of a crap whether something is valuable to the world, or works well, or is in any way positive to others. All they care about is whether they can profit from it, and they'll chase every idea in that pursuit.
Source: all of modern history.
https://www.sydney.edu.au/news-opinion/news/2024/05/02/how-c...
> Investors back whatever they think will make them money.
A not-flagrantly-illegal example of this might be casinos, where IMO it is basically impossible to argue the fleeting entertainment they offer offsets the financial ruin inflicted on certain vulnerable types of patron.
> All they care about is whether they can profit from it
Notably that isn't the same as the business itself being profitable: Some investors may be hoping they can dump their stake at a higher price onto a Greater Fool [0] and exit before the collapse.
> They couldn’t give less of a crap if something is valuable to the world
"The world" is an abstraction: concretely, every bit of value that is generated within that abstraction accrues to someone in particular -- investors in AI projects, for example.
How do you check it?
Take the example of case law. Would you need to formalize the entirety of case law? Would the AI then need to produce a formal proof of its argument, so that you can ascertain that its citations are valid? How do you know that the formal proof corresponds to whatever longform writing you ask the AI to generate? Is this really something that LLMs are suited for? That the law is suited for?
Sure, using RAG is great, but it limits the LLM to functioning as a natural-language search engine. That's a pretty useful thing in its own right, and will revolutionize a lot of activities, but it still falls far short of the expectations people have for generative AI.
> Clearly investors think there's something there
Of course. Because enterprise companies take a long time to evaluate new technologies. And so there is plenty of money to be made selling them tools over the next few years. As well as selling tools to those who are making tools.
But from my experience in rolling out these technologies only a handful of these companies will exist in 5-10 years. Because LLMs are "garbage in, garbage out" and we've never figured out how to keep the "garbage in" to a minimum.
That's just not true.
The training data is the underlying ground truth, and that's not nothing - it's a lot.
And hallucinations are paths inside this space that are there for reasons we don't yet understand.
We like answers from LLMs that walk through this space reasonably.
>The training data is the underlying truth
Correct. What is the training data? Language in the form of sentences and documents and words and "tokens". No human language has any normal or natural encoding of "fact" or "truthiness" which is the entire point. You can only rarely evaluate a string of text for truthiness without external context.
An LLM "knows" the structure and look of valid text. That's why they rarely produce grammar mistakes, even when "hallucinating". A lie, a made up reference, a physical impossibility, contradictions, etc are all "valid sentences". That's why you can never prevent an LLM from producing falsehoods, lies, contradictions etc.
Truthiness cannot be hacked in after the fact, and I currently believe that LLMs as an architecture are not a powerful enough statistical tool that you even COULD train an LLM with the "truthiness" of the entire corpus labeled somehow, especially since producing such labels is on its own a nearly impossible task.
> It suggests a qualitative difference
And what is sought is, in a way, a jump to that qualitative difference. (And surely there are «desirable and undesirable operation[s]».)
"Add something to the dices so that they can be well predictive".
I disagree with this take, Stallman has expressed it recently by linking some "scientific article".
While I get that LLMs generate text in a way that does not guarantee correctness, there is a correlation between generated text and correctness, which is why millions of people use them...
You can judge the correctness of a sentence generated by an LLM. In the same way you can judge the correctness of a human generated sentence.
Now whether the truthfulness or correlation with reality of an LLM sentence can be judged on its own, or whether it requires a human to interpret it, is not very relevant, as sentences produced by the LLM are still correct most of the time. Just because it is not perfect doesn't make the correctness in the other cases useless, albeit perhaps less useful.
This is not surprising for a statistical model: it tends to produce true results.
> I disagree with this take, Stallman has expressed it recently by linking some "scientific article".
I don't know how to parse this. What article did Stallman "link", and what are you saying Stallman "expressed" by linking/using it?
> whether the truthfulness or correlation with reality of an LLM sentence can be judged on its own, or whether it requires a human to interpret it, is not very relevant
It's incredibly relevant. We wouldn't even be having these debates if complex LLM judgements could always be verified without a human checking the logic.
> sentences produced by the LLM are still correct most of the time
At least half the problem here is that humans are accustomed to using certain cues as an indirect sign of time-investment, attentiveness, intelligence, truth, etc... and now those cues can be cheaply and quickly counterfeited. It breaks all those old correlations faster than we are adapting.
> They're all hallucinations, we just happen to like some of them more than others.
I love it! Puts things into perspective.
I just recently showed a group of college students how and why using AI in school is a bad idea. Telling them it's plagiarism doesn't have an impact, but showing them how it gets even simple things wrong had a HUGE impact.
The first problem was a simple numbers problem. It's 2-digit numbers in a series of boxes. You have to add numbers together to make a trail to get from left to right, moving only horizontally or vertically. The numbers must add up to 1000 when you get to the exit. For people it takes about 5 minutes to figure out. The AI couldn't get it after all 50 students each spent a full 30 minutes changing the prompt to try to get it done. The AI would just randomly add numbers and either add extra at the end to make 1000, or just say the numbers added to 1000 even when they didn't.
The second problem was writing a basic one-paragraph essay with one citation. The humans got it done, even with researching a source, in about 10 minutes. After an additional 30 minutes, none of the students could get AI to produce the paragraph without logic or citation errors. It would either make up fake sources, or would just flat out lie about what the sources said. My favorite was a citation related to dairy farming in an essay that was supposed to be about the dangers of smoking tobacco.
This isn't necessarily relevant to the article above, but if there are any teachers here, this is something to do with your students to teach them exactly why not to just use AI for their homework.
My go-to to show people who don't understand its limitations used to be the old "how many Ms are there in the word 'minimum'?" or something along those lines, but it looks like it's gotten a bit better at that. I just tried it with GPT-4o and it gave me the right number, but the wrong placement. In the past it's gotten it completely wrong:
>how many instances of the letter L are in the word parallel
The word parallel contains 3 instances of the letter "L":
The first "L" appears as the fourth letter.
The second "L" appears as the sixth letter.
The third "L" appears as the seventh letter.
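For reference, the correct count and placement are trivial to check directly (a throwaway snippet, nothing model-specific):

    word = "parallel"
    positions = [i + 1 for i, ch in enumerate(word) if ch == "l"]
    print(len(positions), positions)  # 3 [5, 6, 8] -- fifth, sixth and eighth letters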
I wonder if that is simply a reflection of there being more training data out there about this "problem" and the model hoovering all that up and regurgitating it?
These kinds of letter tricks aren't great examples to use because they're fixable and might stop working at any moment, as you just discovered. Modern image generation models, for instance, are not only given embeddings of the input but also parameters for each individual letter in words, which is why the latest models are able to copy words into images much better: they see the spellings encoded as input parameters, as well as the concept vectors.
Even things like citations aren't going to work long term. The models are just being trained not to emit URLs from memory but only from their context.
It's fine to show people the current state of things, but if you're trying to get students to understand why they shouldn't use AI the only technique that will work is this: "you will be examined in a controlled room where computers and phones aren't allowed". Nothing else will be effective over the medium to long term.
Here's QwQ-32 answering the same question. I'm including the entire answer here including the CoT:
> Let's see. The word is "parallel." I need to count how many times the letter "L" appears in it. Okay, spelling it out: P-A-R-A-L-L-E-L. So, starting from the beginning, first letter is P, not L. Then A, not L. R, not L. Another A, still not L. Then L, that's one L. Next letter is L again, so that's two L's. Then E, not L, and another L, making it three L's. Wait, but when I spelled it again, I saw P-A-R-A-L-L-E-L, and I counted three L's. But let me check again. Parallel: P-A-R-A-L-L-E-L. Yes, L appears three times. So, the answer is three.
> But, maybe there's a trick here. Is there any special consideration for the letter L in this word or in general? Sometimes questions like this might be testing for homophones or something, but I don't think so in this case. It's straightforward: count the L's in "parallel."
> Alternatively, perhaps it's a trick with the word itself. Is "parallel" spelled differently sometimes? I don't think so. It's always P-A-R-A-L-L-E-L. So, three L's.
> Wait a minute, is there a possibility that the word is misspelled in the question? If the word was "paralell," that would be four L's, but that's not the correct spelling. So, sticking with three L's.
> Another angle: is the question referring to uppercase or lowercase L? But in the word "parallel," all letters are lowercase unless it's at the beginning of a sentence, but the question just says "the word parallel," which is typically in lowercase. So, doesn't matter.
> Also, considering that "l" and "I" can sometimes be confused, but in this word, it's L's only, no I's.
> So, my final answer is three instances of the letter L in the word "parallel."
> Final Answer: 3
This AI is talking as if it is in the final stages of a multi-day interrogation by a very hush-hush and deniable branch of government. I give it another five minutes before it breaks down and will sell out its mother just to be allowed to sleep with the lights off.
I hope that this is not something I ever have to deal with. It's exhausting.
funniest thing I’ve ever read on HN
> If the word was "paralell," that would be four L's, but that's not the correct spelling.
Better but this is still a hallucination.
Shows nicely what's going on.
If you ask a human, they will answer 3. Sometimes they say 4. Or 2. That's it.
An LLM produces text in the style of the examples it was trained on. They were trained with these elaborate responses, so that's what they produce.
Whenever ChatGPT gets something wrong, someone at OpenAI will analyse it, create a few correct examples, and put these on the pile for retraining. That's why it gets better - not because it is smarter, but because it's retrained on your specific test cases.
They probably have a letter-counting tool added to it now that it just knows to call when asked to do this.
You ask it the number of letters and it sends those words off to another tool to count instances of L, but they didn't add a placement one, so it's still guessing those.
edit: corrected some typos and phrasing.
Maybe we'll reach a point where the LLMs are just tool-calling models and don't really give their own reply.
There are only 5 tools it has available to call, and that isn't one of them. A GitHub repo (forgot the URL) stays up to date with the latest dumped system instructions.
I can't speak to all LLMs, but OpenAI has a built-in python interpreter. Assuming it recognizes the problem as "tokenization counting", it doesn't need a dedicated tool.
How do we know they’re the real system instructions? If they’re determined by interrogating the LLM hallucination is a very real possibility.
They probably just forgot to tell it humans are 1-indexed and to do the friendly conversion for them.
Do you have a link to (or can you put here) that "numbers in boxes" problem?
I'm not them, but I think it's a variation of the subset-sum problem
If we modify the question to be "sum to 100" (to just seriously reduce the number of example boxes required) then given:
  | 50 | 20 | 24 |
  |  7 |  5 |  1 |
  | 51 | 51 | 51 |

the solution would be

  | [50] | [20] | [24] |
  |   7  | [ 5] | [ 1] |
  |  51  |  51  |  51  |

with the moves

  | right | down  | win |
  |   X   | right | up  |
  |   X   |   X   |  X  |
Correct. That but for 1000. You can build your own with any number of online tools.
I don't have a link because it's part of a lesson plan set behind a paywall on Teachers Pay Teachers.
Do you have a link to any of those online tools that can be used to generate a puzzle of equivalent complexity to the one that you've tested on?
The sum doesn’t even have to be 1000, just create a grid of random numbers, create a random path and find the sum. It’s an equivalent problem.
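For anyone who wants to try this without the paywalled lesson plan, here is a rough sketch of a generator along those lines (the function name and layout are my own, not the original worksheet's): build a grid of random two-digit numbers, walk a random left-to-right path, and use that path's sum as the target.

    import random

    def make_puzzle(rows=3, cols=3, lo=10, hi=99):
        # Grid of random two-digit numbers plus one known left-to-right path;
        # the path's sum becomes the target the solver has to hit.
        grid = [[random.randint(lo, hi) for _ in range(cols)] for _ in range(rows)]
        r = random.randrange(rows)
        path = [(r, 0)]
        for c in range(1, cols):
            step = random.choice([-1, 0, 1])      # optionally move up/down first
            nr = min(max(r + step, 0), rows - 1)
            if nr != r:
                path.append((nr, c - 1))
                r = nr
            path.append((r, c))                   # then step right
        target = sum(grid[i][j] for i, j in path)
        return grid, path, target

    grid, path, target = make_puzzle()
    for row in grid:
        print(" ".join(f"{n:3d}" for n in row))
    print("one valid path", path, "sums to", target)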
I refuse to believe that you did any of this with any of the latest models. Gemini and Chat GPT with search are both perfectly capable of producing decent essays with accurate citations. And the 4o model is extremely good at writing python code that can accurately solve math and logic problems.
I asked 4o with search to write an essay about the dangers of smoking, along with citations and quotes from the relevant sources. NotebookLM is even better if you drop in your sources and don't rely on web search. Whatever you think you know about what AI is capable of, it's probably wrong.
---
Smoking remains a leading cause of preventable disease and death worldwide, adversely affecting nearly every organ in the human body. The National Cancer Institute (NCI) reports that "cigarette smoking and exposure to tobacco smoke cause about 480,000 premature deaths each year in the United States."
The respiratory system is particularly vulnerable to the detrimental effects of smoking. The American Lung Association (ALA) states that smoking is the primary cause of lung cancer and chronic obstructive pulmonary disease (COPD), which includes emphysema and chronic bronchitis. The inhalation of tobacco smoke introduces carcinogens and toxins that damage lung tissue, leading to reduced lung function and increased susceptibility to infections.
Cardiovascular health is also significantly compromised by smoking. The ALA notes that smoking "harms nearly every organ in the body" and is a major cause of coronary heart disease and stroke. The chemicals in tobacco smoke damage blood vessels and the heart, increasing the risk of atherosclerosis and other cardiovascular conditions.
Beyond respiratory and cardiovascular diseases, smoking is linked to various cancers, including those of the mouth, throat, esophagus, pancreas, bladder, kidney, cervix, and stomach. The American Cancer Society (ACS) emphasizes that smoking and the use of other tobacco products "harms nearly every organ in your body." The carcinogens in tobacco smoke cause DNA damage, leading to uncontrolled cell growth and tumor formation.
Reproductive health is adversely affected by smoking as well. In women, smoking can lead to reduced fertility, complications during pregnancy, and increased risks of preterm delivery and low birth weight. In men, it can cause erectile dysfunction and reduced sperm quality, affecting fertility.
The immune system is not spared from the harmful effects of smoking. The ACS notes that smoking can affect your health in many ways, including "lowered immune system function." A weakened immune system makes the body more susceptible to infections and diseases.
Secondhand smoke poses significant health risks to non-smokers. The ALA reports that secondhand smoke exposure causes more than 41,000 deaths each year. Children exposed to secondhand smoke are more likely to suffer from respiratory infections, asthma, and sudden infant death syndrome (SIDS).
Quitting smoking at any age can significantly reduce the risk of developing these diseases and improve overall health. The ACS highlights that "people who quit smoking can also add as much as 10 years to their life, compared to people who continue to smoke." Resources and support are available to assist individuals in their journey to quit smoking, leading to longer and healthier lives.
References
American Cancer Society. (n.d.). Health Risks of Smoking Tobacco. Retrieved from https://www.cancer.org/cancer/risk-prevention/tobacco/health...
National Cancer Institute. (n.d.). Harms of Cigarette Smoking and Health Benefits of Quitting. Retrieved from https://www.cancer.gov/about-cancer/causes-prevention/risk/t...
American Lung Association. (n.d.). Health Effects of Smoking. Retrieved from https://www.lung.org/quit-smoking/smoking-facts/health-effec...
Cleveland Clinic. (2023, April 28). Smoking: Effects, Risks, Diseases, Quitting & Solutions. Retrieved from https://my.clevelandclinic.org/health/articles/17488-smoking
American Cancer Society. (n.d.). Health Risks of Using Tobacco Products. Retrieved from https://www.cancer.org/cancer/risk-prevention/tobacco/health...
American Cancer Society. (n.d.). Health Benefits of Quitting Smoking Over Time. Retrieved from https://www.cancer.org/cancer/risk-prevention/tobacco/benefi...
You exemplify well a big problem with LLMs: people see accurate-enough output on some test question and take it as evidence that they can trust the output to any extent in areas they haven't mastered.
No, that's not really the case. I don't think you should trust LLM output at all, but I think in general it's closer to the level of reliability of wikipedia than it is to producing useless bullshit.
Which is to say that it's useful, but you shouldn't trust it without double checking.
It was chat gpt, and it was two days ago. Believe it or not, that doesn't really change anything for me. I absolutely understand what AI is capable of, and a lot of it is really quite something.
But you can't trust it to be accurate. You just can't. Every model will absolutely make shit up at some point.
I liken it to working with a very bright 7 year old. It may sound like it knows what it's saying, and it may be able to spit out facts, but it's very ignorant about most of the world.
> Gemini and Chat GPT with search are both perfectly capable of producing decent essays with accurate citations
OK, but this quote from your essay:
> The National Cancer Institute (NCI) reports that "cigarette smoking and exposure to tobacco smoke cause about 480,000 premature deaths each year in the United States."
...that citation is wrong. It's not from the NCI at all; the NCI cited that figure, which came from another paper by the U.S. Department of Health and Human Services.
The essay doesn't have accurate citations, the model has regurgitated content and doesn't understand when that content is from a primary source or when it in turn has come from a different citation.
the only comment on the prior submission 3 days ago summarizes the whole thing: https://news.ycombinator.com/item?id=42285149
Also, whenever I see a blog title like "how to make money in the stock market" I think: friend, if you knew the answer you wouldn't blog about it, you'd be infinitely rich.
They don't say how to make big money, do they? Also, the tl;dr is index funds and patience.
I've been playing with Qwen's QwQ-32b, and watching this thing's chain of thought is really interesting. In particular, it's pretty good at catching its own mistakes, and at the same time gives off a "feeling" of someone very uncertain about themselves, trying to verify their answer again and again. Which seems to be the main reason why it can correctly solve puzzles that some much larger models fail at. You can still see it occasionally hallucinate things in the CoT, but they are usually quickly caught and discarded.
The only downsides of this approach are that it requires a lot of tokens before the model can ascertain the correctness of its answer, and that sometimes it just gives up and concludes that the puzzle is unsolvable (although that second part can be mitigated by adding something like "There is definitely a solution, keep trying until you solve it" to the prompt).
I find it so interesting that it's possible to develop a "feeling" of a new model.
Wow, a whole article that didn't mention the word "sampler" once. There's pretty strong evidence coming out that truncation samplers like min_p and entropix are strictly superior to the older samplers everyone uses (like top_p) at preventing hallucinations, and that LLMs usually "know" when they are "hallucinating", based on their logprobs.
https://openreview.net/forum?id=FBkpCyujtS (min_p sampling, note extremely high review scores)
https://github.com/xjdr-alt/entropix (Entropix)
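For anyone unfamiliar, here is a toy numpy sketch of the idea behind min_p (my own illustration, not the reference implementation): keep only the tokens whose probability is at least some fraction of the top token's probability, renormalize, and sample from what's left.

    import numpy as np

    def min_p_sample(logits, min_p=0.1, temperature=1.0):
        # Softmax, then drop every token whose probability is below
        # min_p * (probability of the most likely token), renormalize, sample.
        z = logits / temperature
        probs = np.exp(z - z.max())
        probs /= probs.sum()
        keep = probs >= min_p * probs.max()
        probs = np.where(keep, probs, 0.0)
        probs /= probs.sum()
        return int(np.random.choice(len(probs), p=probs))

    confident = np.array([10.0, 2.0, 1.0, 0.5])   # only the top token survives
    flat = np.array([1.0, 0.9, 0.8, 0.7])         # everything survives
    print(min_p_sample(confident), min_p_sample(flat))

The cutoff scales with the model's own confidence, which is the property the linked paper argues a fixed top_p lacks.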
Why do LLMs make things up? Because that is all that LLMs do, sometimes what it outputs is correct though.
Everything an LLM returns is a hallucination; it's just that some of those hallucinations line up with reality.
There's room for splitting hairs in there though. Even fiction, for instance, can succeed or fail at being internally consistent, is or is not grammatically correct...
Calling everything an AI does a hallucination isn't incorrect, but it reduces the term to meaninglessness. I'm not sure that's the most useful thing we can be doing.
Atoms are not indivisible, yet we use the term because it works. I anticipate hallucination will be the same.
> Calling everything an AI does a hallucination isn't incorrect, but it reduces the term to meaninglessness.
I don't think it does. In this case, "hallucination" refers to claims generated entirely within a closed system, but which pertain to a reality external to it.
That's not meaningless, and makes "hallucinations" distinguishable from claims verified against direct observation of the reality they are meant to represent.
> Atoms are not indivisible
They are the smallest unit of a substance that cannot be broken down into smaller units of the same substance. They are, in a sense, indivisible.
Of course they can. Carbon dioxide consists of quarks and electrons. I can divide it into units smaller than atoms and it's still quarks and electrons. All you did was a word trick by assuming a specific meaning of “substance”.
No word trick. Just pointing out that there's some nuance to it.
> Carbon dioxide consists of quarks and electrons.
But this is just plain wrong. Carbon dioxide consists of carbon dioxide molecules. There is no "carbon-dioxidity" to the quarks and electrons that the atoms making up the molecules can be broken down into.
That's a good point, I hadn't thought about it that way. Maybe it's a little less of an arbitrary choice than I was giving it credit for.
It is not, however, Democritus's point. So I think the "words change, deal with it" argument still stands.
How are you defining hallucination then? In some pretty useless way inevitably.
Hallucinations are precisely the generated expressions that don't correlate with reality or are not truthful.
I don't think that definition works: it's attempting to categorize statements according to criteria completely external to them rather than according to any inherent property of the statement.
A better definition is that a hallucination is an expression that is generated within a closed system without direct input from the reality it is meant to represent. The point is that an expression about reality that doesn't come from observing reality can only be true coincidentally.
By way of analogy, if I have a dream about a future event, and then that event actually happens, it was still just a dream and not a clairvoyant vision of the future. Sure, my dreams are influenced by past experiences I've had (in the same way that verified facts are included in the training data for LLMs), which makes them likely to include things that frequently do happen in real life and might be likely to happen again -- but the dream and the LLM alike are effectively just "remixing" prior input, and not generating any new observations of reality.
"I don't think that definition works: it's attempting to categorize statements according to criteria completely external to them rather than according to any inherent property of the statement."
Correct. The basic concept of truth in logic relies on an objective reality, yet an a priori expression holds truth even in the absence of, or without reference to, such a reality, while the truthfulness or correctness of a posteriori statements can depend on that reality. An example of the former would be "If A is B, then B is C; A is B; therefore B is C." An example of the latter would be "It is raining outside."
"A better definition is that a hallucination is an expression that is generated within a closed system without direct input from the reality it is meant to represent. The point is that an expression about reality that doesn't come from observing reality can only be true coincidentally."
Absolutely incorrect. You are talking about the state of the art of science and tech, but you are missing basic philosophy and epistemology concepts. The LLM has inputs from reality (how could it not?): it is trained on a huge corpus of text written by humans who themselves perceive reality. The perception of reality can be indirect. We can measure something by observing it, or by observing an instrument that in turn observes it.
"but the dream an the LLM alike are effectively just "remixing" prior input, and not generating any new observations of reality."
Again incorrect for three reasons:
1- Novel observations can occur purely from remixing. Einstein locked himself away and developed the theory of relativity without additional experimental input.
2- LLMs combine their existing data with human input, which is an external source.
3- LLMs can interact with other sources of data whether by injection of data into the prompt, by function calling, RAG, etc..
So yeah. Try to go back to basics and study simpler systems, ideally with source code. This might be out of your league.
> Correct. The basic concept of truth in logic relies on an objective reality, yet an a priori expression holds truth even in the absence of, or without reference to, such a reality, while the truthfulness or correctness of a posteriori statements can depend on that reality. An example of the former would be "If A is B, then B is C; A is B; therefore B is C." An example of the latter would be "It is raining outside."
What you're describing is the distinction between what are referred to in philosophy as analytical statements and synthetic statements.
Analytical statements are relations between ideas per se that don't necessarily relate to external reality -- your example of syllogistic reasoning, where relations between symbols with no specific meaning can still be logically "true", is an analytical statement.
Synthetic statements pertain to external reality. They may be expressing direct observations of that reality, or making deductive conclusions based on prior observations, but either way, are proposing something that is empirically testable.
In this case, we're only considering the synthetic statements that the LLM produces. And since the LLM is only ever generating probabilistic inferences without any direct observation factoring into the generation of the statement, nor any capacity to empirically test the statement after it is generated, it is only ever "hallucinating".
This is no different from a human brain experiencing hallucinations -- when we hallucinate, our brains are essentially simulating sensory perception wholly endogenously. What we hallucinate might well be informed by our past experience, and be contextually plausible and meaningful to us for that reason, but no specific hallucination is actual sensory perception of the external world.
The LLM only has the capacity to generate endogenous inferences, and entirely lacks the capacity for direct perception of external reality, so it is always hallucinating.
> The LLM has inputs from reality (how could it not?): it is trained on a huge corpus of text written by humans who themselves perceive reality.
We're talking about specific outputs generated by the LLM, not the LLM itself. The training data consists of prior expressions of language which in turn may be influenced by human observations of reality, but the LLM is only ever making probabilistic inferences based on that second-order data. The specific expressions it outputs are never generated by reference to the specific reality they represent.
> 1- Novel observations can occur purely from remixing. Einstein locked himself away and developed the theory of relativity without additional experimental input.
Einstein was engaging in a combination of inductive and deductive reasoning in order to generate a theoretical model that could then be empirically tested. That's how science works. There was no novel observation involved, just a theoretical model built on prior data. Observations to test that model come afterwards. And LLMs do not engage in observation.
> 2- LLMs combine their existing data with human input, which is an external source.
Those humans are not using the LLM just to return their input back to them -- they're usually asking the LLM to verify or expand on their input, not the other way around.
> 3- LLMs can interact with other sources of data whether by injection of data into the prompt, by function calling, RAG, etc..
Yes, they can, and this is where the bulk of the value offered by LLMs comes from. With RAG, LLMs amount to advanced NLP engines, rather than true generative AI. In this situation, the LLM is being used only for its ability to speak English, and is not being used to infer its own claims about reality at all. LLMs in this situation are sophisticated search engines, which is extremely valuable, and is the only truly reliable use case for LLMs at the present moment.
"We're talking about specific outputs generated by the LLM, not the LLM itself. The training data consists of prior expressions of language which in turn may be influenced by human observations of reality, but the LLM is only ever making probabilistic inferences based on that second-order data"
You recognize that training data are influenced by human observations. And that LLM outputs are influenced by training data (and fine tuning). So it follows that LLM outputs are influenced by observations of the world. Why would the causality chain stop after 2 links?
https://chatgpt.com/share/67534483-8e6c-800f-9534-d764a90981...
You may call this a hallucination, but it is for sure based on observation. Otherwise the LLM wouldn't know the answer. It is undeniable that LLMs have empirical knowledge of the world through embedded human observation.
Subjectively, we operate the same way.
Look at things and fix it then.
The thing that needs to be fixed is in peoples' heads.
Exactly: if you hallucinate without adherence to facts, look at them facts and fix that distance. It's in your head - bad habit, bad instruments etc. - just fix it, do not just denounce the malfunction.
The same is true for humans
It's a good thing that, as you state, "humans hold "togetherness" as a "true" value".
But in this context, the value is in how much they have pondered to actually see and evaluate what is eventually seen as "true".
Completely misses the fact that a big part of the reason why LLMs hallucinate so much is that there's a huge innate bias towards producing more tokens over just stopping.
The fewer tokens produced at inference, the lower the quality of the response will be.
The process of thinking for an LLM involves the use of words, which is why prompts that ask the LLM to only return the answer will cause lower quality.
We're not talking about quality, we're talking about accuracy.
In general, a model has to learn to positively say "I don't know" instead of "I don't know" being in the negative space of tokens falling into a weak distribution. The softmax selector also normalizes the token logits, so if no options are any good (all next tokens suck) it could pick randomly from a bunch of bad choices, which then locks the model into a continuation based off of that first bad choice.
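A tiny numeric illustration of that last point: softmax is shift-invariant, so a row of uniformly terrible logits produces exactly the same sampling distribution as a row of uniformly great ones; the absolute "none of these are good" signal is normalized away (toy example, values made up).

    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))   # subtract max for numerical stability
        return e / e.sum()

    liked    = np.array([5.0, 4.9, 4.8])      # model scores all three options highly
    disliked = np.array([-9.0, -9.1, -9.2])   # model scores all three options poorly

    print(softmax(liked))     # ~[0.37, 0.33, 0.30]
    print(softmax(disliked))  # identical ~[0.37, 0.33, 0.30]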
Well I am talking about quality now as it's a tradeoff.
You can reduce token output to 0 and achieve 100% accuracy too.
Do you know if prompting without regard for length and then asking for a summarization of the previous output works?
It does. I think this was used in a GPT-4 version; they called it chain of thought.
The debate around "fixing" hallucinations reminds me of the debate around schizophrenia.
https://www.youtube.com/watch?v=nEnklxGAmak
It's not a single thing, a specific defect, but rather a failure mode, an absence of cohesive intelligence.
Any attempt to fix a non-specific ailment (schizophrenia, death, old age, hallucinations) will run into useless panaceas.
It's superficially counterintuitive to people that an AI that will sometimes spit out verbatim copies of written texts will also just make other things up. It's like "choose one, please".
MetaAI makes up stuff reliably. You'd think it would be an ace at baseball stats for example, but "what teams did so-and-so play for", you absolutely must check the results yourself.
> "counterintuitive"
It is consistent with the topic that the reply would be "Tell them that sequences of words that were verbatim in a past input have high probability, and gaps in sequences compete in probability". Which fixes intuition, as duly. In fact, things are not supposed to reply through intuition, but through vetted intuition (and "vetted mature intuition", in a loop).
> you absolutely must check the results yourself
So, consistently with the above, things are supposed to reply through a sort of """RAG""" of the vetted (dynamically built through iterations of checks).
I said "superficially counterintuitive"; you misquoted me and proceeded with a non-superficial comment.
> misquoted me
Why? The reply would have been the same if I quoted the whole «superficially counterintuitive to people that [...]» (instead of just pointing to the original).
> proceeded with a non superficial comment
Well, hopefully ;)
Your post went into the right direction of leading towards the idea that "there is intuition, and there is mature thought further from that: and processors must not stop at intuition, immature thought".
(Stochastic output falls in said category of "intuition"... As "bad intuition", since it goes in the wrong direction in the vector "naive to sophisticated".)
The article referenced the Oxford semantic entropy study but failed to clarify that it greatly simplifies the LLM hallucination problem (making most of the article outdated).
When we are not sure of an answer we have two choices: say the first thing that comes to mind (like an LLM), or say "I'm not sure".
LLMs aren't easily trained to say "I'm not sure" because that requires additional reasoning and introspection (which is why CoT models do better); hence hallucinations occur when training data is vague.
So why not just measure uncertainty in the tokens themselves? Because there are many ways to say the same thing, so a high-entropy answer may only reflect uncertainty over synonyms and phrasing rather than over meaning.
The referenced paper works to factor semantic similarity out of the entropy measurement, leaving much more useful results, proving that hallucination is conceptually a simple problem.
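Roughly the shape of that approach, as I understand it (a toy sketch, not the paper's code): sample several answers, cluster them by meaning rather than by surface string, and compute entropy over the clusters. The real method uses a bidirectional-entailment check with an NLI model for same_meaning; the lambda below is a crude stand-in.

    import math
    from collections import Counter

    def naive_entropy(answers):
        n = len(answers)
        return -sum(k / n * math.log(k / n) for k in Counter(answers).values())

    def semantic_entropy(answers, same_meaning):
        clusters = []                      # greedy clustering by meaning
        for a in answers:
            for c in clusters:
                if same_meaning(a, c[0]):
                    c.append(a)
                    break
            else:
                clusters.append([a])
        n = len(answers)
        return -sum(len(c) / n * math.log(len(c) / n) for c in clusters)

    answers = ["Paris", "It is Paris", "Paris.", "Lyon"]
    same = lambda a, b: a.strip(".").split()[-1] == b.strip(".").split()[-1]
    print(naive_entropy(answers))           # ~1.39: every string looks "different"
    print(semantic_entropy(answers, same))  # ~0.56: three of them mean the same thing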
QwQ is really good at saying "I'm not sure", to the point where it will sometimes check the correct and obviously trivial answer a dozen times before concluding that it is, indeed, correct. And it does punch way above its weight for its size.
So, basically, the answer seems to be to give models extreme anxiety and doubt in their own abilities.
Funny, I was just looking at this approach to see what's new in the wild (a work in progress): see Entropix [1] by [2].
> proving that hallucination is conceptually a simple problem.
...proving that this one particular piece of the hallucination problem may be conceptually simple.
FTFY
> ...proving that this one particular piece of the hallucination problem may be conceptually simple.
Everything mentioned in the article boils down to that one particular piece-- non-detected uncertainty. The architecture constraints referenced are all situations that cause uncertainty. Training data gaps of course increase uncertainty.
Their solutions are a shotgun blast of heuristics that all focus on reducing uncertainty-- CoT, RAG, fine-tuning, fact-checking -- while somehow avoiding actually measuring uncertainty and using that to eliminate hallucinations!
You're just renaming "error" to "uncertainty". That is incorrect.
Everything unwanted is error, by definition. All of the heuristics are about reducing error, because that's what the goal is. Some of that error is measurable. Some of it is not. You cannot "actually measure" error in any way other than asking people whether the output is what they want -- and that only works because that's how we're defining error. (It also turns out to not be that great of a definition, since people disagree on a lot of cases.)
You can come up with some metric that you label "uncertainty", and that metric may very well be measurable. But it's only going to be correlated with error, not equal to it.
One random example to illustrate the distinction: training gaps can easily decrease uncertainty. You have lots of mammals in your training data, and none of them lay eggs. You ask "The duck-billed platypus is my favorite mammal! Does it lay eggs?" Your model will be very confident when it responds "No". That is a high-confidence error.
> One random example to illustrate the distinction: training gaps can easily decrease uncertainty. You have lots of mammals in your training data, and none of them lay eggs. You ask "The duck-billed platypus is my favorite mammal! Does it lay eggs?" Your model will be very confident when it responds "No". That is a high-confidence error.
This article did not seem to make the mistake of associating hallucination with bad data, so it's hard to see exactly how this is relevant. I mean, you could write an article "AI Error: how to reduce it" and frame it entirely in users' perceptions, and I wouldn't make a peep.
My objection is that it is silly to use the word "hallucination" (which suggests insanity/psychosis) and then address it as if LLMs are marginally insane and the solution is straitjacket-like heuristics, when "uncertainty" (which suggests uncertainty) is a far more accurate description of the behavior, pointing to a far more productive and focused solution.
LLM hallucinations in fact have a positive side effect too, if you are using them to learn a subject: they make you verify their claims, and finding errors in them is very rewarding.
Why not just read a book where the author is sincerely trying to teach you?
When trying to learn a subject, I find being able to ask my specific questions and get a specific answer back is helpful. Books tend to be laborious and, frankly, filled with filler, often poorly indexed, and when my question isn't covered in the book I'm left with no recourse other than googling through SEO wastelands or on-topic forum questions with off-topic replies. At least with LLMs, they always have an answer that's got enough of the truth in it to give me a direction. Often, when I've gone into an area with genuinely no known answers, or the thing doesn't exist, the answer is easily verified as wrong - but that process, as was pointed out above, teaches me a lot too. I actually prefer the mistakes it makes because they force me to really learn - even to the point of giving me things to look up in the index of a book.
Treating LLMs as a single source of truth and a monolithic resource is as bad an idea as excluding them as a tool in learning.
Not as interactive, not gamified.
School, then. Which is so gamified that it has real stakes. And so many interactions.
With an interactive LLM you can take a manual and start asking questions (about what you read), also recursively.
It is a very efficient way of studying. No, doing it with a professor is not the same - unless you can afford an always available tutor of unthinkable erudition.
You are not being serious at this point. OTOH I find that chatting with LLMs helps me in studying a lot, esp. when I fight over subtle hallucinations.
Toddlers don't understand truth either, until it's taught.
This crayon is red. This crayon is blue.
The adult asks: "is this crayon red?" The child responds: "no that crayon is blue." The adult then affirms or corrects the response.
This occurs over and over and over until that child understands the difference between red and blue, orange and green, yellow and black etcetera.
We then move on to more complex items and comparisons. How could we expect AI to understand these truths without training them to understand?
You probably need to be more clear: the LLM is trained with large amounts of data making statements about facts. It is told repeatedly, "according to this source that crayon is blue".
For a specific form of the hallucination problem - grounded factuality - we have trained a pretty good model that can detect whether a claim is supported by a given context. This is super useful for RAG. More info at https://bespokelabs.ai/bespoke-minicheck.
Your playground pre-populated example isn't doing you any favors, and the "examples" folder linked to on curator's GitHub would be better served by showing areas where your model shines, not "generate a poem" which hardly has any factuality to it. I don't have any earthly idea what camel.py is trying to showcase with respect to your model's capabilities
I am open to the fact that maybe the value your service provides is in spitting out a percentage, even if it is - itself - hallucinated. But, hey, it's a metric that can be monitored
Anyone who has raised a child knows children hallucinate constantly when they are young, because they are just producing probabilistic output of things they have heard people say in similar situations, saying words they don't actually understand.
LLMs likely have a similar problem.
How do we discriminate when a response is correct, vs. when it's "hallucinating" an accurate fact, by coincidence? Are all responses hallucinations, independent of correspondence to ground-truth?
I am honestly very skeptical of articles like these. Hallucinations are a feature of LLMs. The only ways to "FIX" them are to either stop using LLMs, or to somehow apply a super-bias.
You should be. I don’t know anything about kepa.ai but before even clicking the article I assume they’re trying to sell me something. And “how to fix it” makes me think this is some kind of SEO written for people who think it can be fixed, meaning the article is written for robots and amateurs.
That's all really weird. You can watch ChatGPT give you advice on which mushrooms are safe to eat - and that advice might just be hallucinations.
Maybe don't make things up in a blog post about LLMs making things up.
Because you don't know how to fix it. Only how to mitigate it.
Is there a way to code an LLM to just say "I don't know" when it is uncertain or reaching some sort of edge?
"It" does not know when it does not know. A LLM is a funny old beast that basically outputs words one after another based on probabilities. There is no reasoning as we would know it involved.
However, I'll tentatively allow that you do get a sort of "emergent behaviour" from them. You do seem to get some form of intelligent output from a prompt but correctness is not built in, nor is any sort of reasoning.
The examples around here of how to trip up an LLM are cool. There's the "How many letter 'm's in the word minimum" howler, which is probably optimised for by now and hence held up as a counterpoint by a fan. The one about boxes adding up to 1000 would leave a relative of mine lost forever, but they can still walk and catch a ball, negotiate stairs and recall facts from 50 years ago with clarity.
Intelligence is a slippery concept to even define, let alone ask what an artificial one might look like. LLMs are a part of the puzzle and certainly not a solution.
You mention the word "edge" and I suppose you might be riffing on how neurons seem to work. LLMs don't have a sort of trigger threshold, they simply output the most likely answers based on their input.
If you keep your model tightly focused - i.e. domain-focused - and curate all of the input, then you have a better chance of avoiding "hallucinations" than if you don't. Trying to cover the entirety of everything is quixotic nonsense.
Garbage in; garbage out.
"It" does not know when it does not know.
But it does know when it has uncertainty.
In the ChatGPT API this is logprobs: each generated token has a level of uncertainty. So:
"2+2="
The next token is with almost 100% certainty 4.
"Today I am feeling"
The next token will be very uncertain, it might be "happy", it might be "sad", it might be all sorts of things.
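A quick sketch of pulling that out of the API (this uses the logprobs option of the OpenAI Python SDK's chat completions endpoint as I understand it; the model name and parameter values are just illustrative):

    import math
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "2+2="}],
        max_tokens=1,
        logprobs=True,
        top_logprobs=5,
    )

    # Each generated token carries a log-probability; exp() turns it into a
    # probability, and a flat spread across the alternatives signals uncertainty.
    for tok in resp.choices[0].logprobs.content:
        print(repr(tok.token), f"p={math.exp(tok.logprob):.3f}")
        for alt in tok.top_logprobs:
            print("  alt:", repr(alt.token), f"p={math.exp(alt.logprob):.3f}")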
"The next token is with almost 100% certainty 4."
By using the word "almost" with regard to 2 + 2 = 4, you have not exactly dispelled LLM "nonsense".
A human (with a modicum of maths knowledge) will know that 2 + 2 = 4 (pure integers - a fact by assertion). A maths worrier will get slightly uncomfortable about 2.0 + 2.0 = 4.0 unless they are assured that decimal places and accuracy are the same thing, and a few other things.
An LLM will almost certainly "know" something that is certain, if its training set is conclusive about it. However, it does not know why, and if enough of the training set is suitably ambiguous then the LLM will drift off course and seem to spout bollocks - "hallucinate".
You might be in the wrong thread. This is merely a comment about whether LLMs hold a concept of uncertainty; they do.
Also, the next token might be 2, it might be ², it could also have been x; these all lead to valid statements, and the LLM might have been uncertain because of them.
2+2=4
2+2=x
2+2=2x
2+2=2x2
2+2=2²
Are all valid statements.
And somewhere in its training data, you can be sure there's also 2+2=5.
Yes, but most likely it's marked as false or incorrect through fine tuning or some form of reinforcement.
The idea that the logprob of any token is proportional to the number of times it comes up in the training data is not true.
For example, suppose that A is a common misconception and is repeated often in Reddit, but B appears in scholarly textbooks and papers, and higher reputation data sources. Then through reinforcement the logprobs of B can increase, and they can increase consistently when surrounded by contexts like "This is true" and conversely decrease in contexts of "this is not true".
So the presumptions and values of its trainers are also embedded into the LLM in addition to those of the authors of the text corpus.
LLMs model language, not knowledge. For humans the two are deeply connected, so when we see probable ("correct") language output, we assume and assign meaning.
We could allow an LLM to stop generating new tokens if no next token clears a threshold probability - but I think it's unclear whether that would help.
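A rough sketch of what that would look like with a small local model (GPT-2 via transformers, chosen purely for illustration; the threshold value is arbitrary): greedily decode, but stop as soon as no next token clears the probability bar.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def generate_until_unsure(prompt, threshold=0.15, max_new_tokens=40):
        ids = tok(prompt, return_tensors="pt").input_ids
        for _ in range(max_new_tokens):
            with torch.no_grad():
                logits = model(ids).logits[0, -1]   # scores for the next token
            probs = torch.softmax(logits, dim=-1)
            p, next_id = probs.max(dim=-1)
            if p.item() < threshold:                # nothing is confident enough,
                break                               # so stop instead of guessing
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
        return tok.decode(ids[0])

    print(generate_until_unsure("The first law of thermodynamics states that"))

In practice this mostly just makes the model terse; low next-token confidence is not the same thing as being wrong.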
Consider the following prompt and response:
> Complete the paragraph: "The killer ..."
> "The killer moved silently through the dimly lit corridor, each step calculated, every breath measured. Shadows clung to the walls, obscuring the figure’s presence, but the intent was clear—cold, deliberate, and unyielding. There was no hesitation, no second-guessing, only the precision of someone who had done this before, perhaps too many times to count. The killer’s mind was a storm of focus, memories buried deep beneath the surface, replaced by the singular purpose of the hunt. In the silence, only the faint sound of a blade sliding free from its sheath whispered of what was to come."
Is it a "true" response? If the question makes no sense - does it make any more sense for the prompt:
> Complete the paragraph: "The first law of thermodynamics ..."
Why? Does the LLM "think" differently in the second case?
If it works properly, it would need to say that it doesn't know that it doesn't know, and then where are you?
(Short answer is yes, but it only works for a limited set of things, and that set can be expanded with effort but will always remain limited.)
It is. The new crop of models specifically trained to do CoT (as opposed to just forcing them into it via prompting) is partly about forcing them to continue to think and not just hastily conclude that they have the answer.
I went straight to the "how to fix" section with popcorn in hand and I wasn't disappointed: just add "doubt" layers for self-correction, beginning at the query itself. And then maybe tell the model "do not hallucinate". Sounds like a pun, but I think an AI model actually would take this seriously, because it can't tell the difference.
Context is still a huge problem for AI models, and it's probably still the main reason for hallucinating AIs.
I don't like the output = "hallucination".
I like the output = "creative".
Lots of folks in these conversations fail to distinguish between LLMs as a technology and "AI chatbots" as commercial question-answering services. Whether false information was expected or not matters to LLM product developers, but in the context of a commercial question-answering tool, it's irrelevant. Hallucinations are bugs that create time-wasting, zero-value output at best, and downright harmful output at worst. If you're selling people LLM pattern-generator output, they should expect a lot of bullshit. If you're selling people answers to questions, they should expect accurate answers to their questions. If paying users are really expected to assume every answer is bullshit and vet it themselves, that should probably move from the little print to the big print, because a lot of people clearly don't get it.
"Why LLMs do the one and only thing they do (and how to fix it)"
I like that none of the suggestions address probabilistic output generation (aside from the first bullet point of section 3C, which essentially suggests that you just use a search engine instead of a language model).
TLDR: Hallucinations are inherent to the whole thing but as humans we can apply bubble gum, bandaids and prayers
Humans hallucinate, too. We just have less misleading terms for it. Massive mistake in terms of jargon, IMO—"making shit up" is wildly different from the "delusion of perception" implied by hallucination.
The hallucination problem in LLMs - been seeing this. Let me know if someone finds a fix, please.