ChatGPT/LLMs' Improvement Ceiling

Ten years ago, Siri was quite an impressive artificial intelligence/personal assistant. It was amazing. But it is no longer impressive or amazing. Why not?

Well, the approach they took, the basic architecture and technology behind Siri (and the others in its cohort), turns out to be kind of a dead end. It has intrinsic issues that cannot be overcome, and those issues make improvement elsewhere difficult, when not outright impossible. That approach could only take us so far, in spite of the fact that our phones and their servers are astronomically faster than they were ten years ago. This is not a hardware issue. It's baked into the basic architecture. And the idea that a technological approach is limited and will hit a ceiling is not unique to that kind of AI. It is true of every technology, be it hardware or conceptual.

So now we have the hardware to do this LLM (large language model) approach, best known in the form of ChatGPT. A different approach.

So much of the hype around these LLMs is really centered on their improvement and the idea that they will get much better quite quickly. Don’t think about the limitations today, we are told. Instead, we are supposed to imagine improvement that overcomes those limitations.

Why should we imagine that? Are they suggesting there really aren't any real limitations?

I don’t buy that for a second. I really don’t. No one should. Of course there are limitations! Of course there are diminishing returns! Of course this LLM approach has intrinsic strengths and intrinsic weaknesses. LLMs are not the route to an all-powerful supreme being.

So, what do we know about all LLM instances? They "hallucinate," as it's called. They do not understand or care about what is true. They make up shit. I'm sure that there are other issues, too. But this is the one that really gets to me.

Is this "hallucination" problem intrinsic to LLM? Is this a problem that can be overcome? At this point, the burden is on the hypesters to explain clearly why it is not. 

And let me ask you: would you ever rely on an AI personal assistant who could not be trusted to give you true answers? Who might make up directions or make up books? Who cannot be trusted to be honest and accurate? Who does not even care about being accurate? What useful role could such an AI play in your life?

LLMs are a neat trick. I am sure they are useful for a wide range of things. I might even find some uses (perhaps mostly professional, though not entirely), but I do not see that this approach is going to get us to where so many people so fervently want it to go.

Yeah, there exists an improvement ceiling, and I think we already know what it is.