

Does anyone else believe 'Large Language Models are not The Next Big Thing Now And Forever'?


proof

But first, will anyone even believe me when I say "Computers could write programs long before LLMs"?

If I had my way, I could just write a proof, a little something like this:

  1. The transformer architecture, which underlies ChatGPT-like large language models, was introduced in 2017.

  2. Here is a paper that shows program synthesis: https://dl.acm.org/doi/abs/10.1145/2814270.2814310

  3. It could not have used transformers, because it was published in 2015, before transformers were invented.

  4. Therefore, computers could write programs before LLMs. So let’s think about maybe not using LLMs for everything.

But it’s "The Internet (TM)", so I’m told I have to be more sensational for anyone to click on my article, let alone care about what this is or what it means.

Maybe I can oblige but I won’t try too hard. This is a personal essay, nothing exceptional. It’s not here to change the world. If you want an interesting perspective, feel free to keep reading.

That 2015 work didn’t "end software engineering" then, and LLMs won’t "end the developer as we know it" now, because software engineering has always been a climb through increasing levels of abstraction: Assembly, C, Java, Python. But anyone can tell the difference between a Python expert and a novice. Making things more accessible has never led to the outright extinction of the experts.

In the far future, will we know what kicked off the "let’s automate this whole ‘writing code’ thing"? I bet someone will argue that the life-like feel of language models made it happen. Before, people said, "Oh, computers can’t do this thing. It requires Intelligence (TM), and we don’t have those computers yet."

And today people say, "Ah, these LLMs are great stuff. If they can do XYZ, then surely they can do ABC," and we often get hyperbolic claims like "Software Development won’t exist in 10 years". I know I even believed that once myself. But what will exist, then?

I can’t predict the future, and no one can, but one of those maybe-silly, maybe-naive quotes always ringing around in my head is "The best way to predict the future is to create it." So here is where I think we actually stand:

  1. Human intelligence is not the bottleneck in software engineering. Indexing and code understanding feel like the biggest hurdles to conquer next. Open-source code already automates so much for us. But most of that code is hard to find, and it’s hard to tell whether it will work for our use cases.

  2. We are understaffed. GPT-likes are undoubtedly going to convince a few people to study law or medicine instead of computer science. And that’s a tragedy, because there’s so much code to be written, and the industry as a whole is just scratching the surface of what software could be.

  3. GPT-likes are infuriating to work with, even today, more than six months after their release. I feel qualified to say this because I have used them extensively on my side project, and I have repeatedly given up and written the code myself.

On point 3, someone might object, "You must not be prompting the model correctly." Why would I invest time in learning to prompt a model when I could actually be learning? The fact of the matter is that every time we learn to prompt these sorts of models, a new update comes out and makes all of that knowledge obsolete. Why wouldn’t I just learn to do it myself? After using GPT-like models, I have simply reverted to Stack Overflow.

Great, I tried writing something coherent, and out came this rant. Maybe that was inevitable…

At this point, I am waiting for something else. ChatGPT isn’t it.

What can we learn?

  1. If you are a professional developer, ChatGPT will eventually hold you back. Maybe you can use it to learn something, or to learn how to search for something. But GPT-likes will not be your teacher forever. You will surpass them one day and find them much less useful.

  2. People seem to be convinced by LLMs and willing to invest in them, and who can blame anyone for that? They seem like a bright new toy, and we want to solve every problem with them. I study NLP at Columbia; I was convinced they were interesting long before they went viral. But it’s not about me. The point is that people from all walks of life are captivated by this.

But don’t we need a control group to do science? If we put hundreds of thousands of dollars of compute into classical graph search for program synthesis, with some function-similarity heuristics, who’s to say we wouldn’t find a more compute-efficient way to generate code? Code that is type-aware and constantly up to date?
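To make that concrete, here is a toy sketch of the kind of search I mean: breadth-first search over a tiny library of typed components, checking each candidate pipeline against input-output examples. Everything in it is invented for illustration, and it leaves out the function-similarity heuristics and richer type system a real synthesizer would need.

```python
# A toy, hypothetical sketch of enumerative program synthesis as
# breadth-first graph search over typed components.
from collections import deque

# name -> (input type, output type, implementation)
COMPONENTS = {
    "double":    (int, int, lambda x: x * 2),
    "increment": (int, int, lambda x: x + 1),
    "square":    (int, int, lambda x: x * x),
    "to_str":    (int, str, str),
}

def synthesize(in_type, out_type, examples, max_depth=4):
    """Return the first chain of components whose types line up and
    whose composition is consistent with every (input, output) pair."""
    queue = deque([(in_type, [])])  # (current type, pipeline so far)
    while queue:
        cur_type, pipeline = queue.popleft()
        # A candidate is any non-empty pipeline reaching the goal type.
        if cur_type == out_type and pipeline:
            def run(x):
                for name in pipeline:
                    x = COMPONENTS[name][2](x)
                return x
            if all(run(x) == y for x, y in examples):
                return pipeline
        if len(pipeline) == max_depth:
            continue
        # Expand the graph: append any component whose input type matches.
        for name, (t_in, t_out, _) in COMPONENTS.items():
            if t_in == cur_type:
                queue.append((t_out, pipeline + [name]))
    return None

# Learn f(x) = str(2 * x + 1) from two examples.
print(synthesize(int, str, [(1, "3"), (5, "11")]))
# -> ['double', 'increment', 'to_str']
```

A search like this is type-aware by construction, and keeping the component library current is a data update, not a retraining run.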

And don’t even get me started on "ChatGPT for system architecture." This one just infuriates me. We have modeling languages; we have constraint optimizers and satisfiers. We can model a system and compute the optimal transition between that system and the system-prime that contains a new feature, while satisfying all of the constraints. This isn’t new or remarkable or anything.
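For a flavor of what I mean, here is a minimal, made-up sketch using the Z3 solver (pip install z3-solver). The components, the dependency rule, and the migration costs are all hypothetical; the shape is what matters: model the current system, constrain the target system, and minimize the cost of the transition.

```python
# A hypothetical sketch: pick the cheapest "system prime" that supports
# a new feature while satisfying architectural constraints.
from z3 import Bool, If, Implies, Optimize, Sum, is_true, sat

components = ["cache", "queue", "second_db"]
has = {name: Bool(name) for name in components}      # the target system
current = {"cache": False, "queue": False, "second_db": False}
add_cost = {"cache": 3, "queue": 2, "second_db": 5}  # invented migration costs

opt = Optimize()
# The new feature needs asynchronous processing: a queue is required.
opt.add(has["queue"])
# Invented architectural rule: a second database requires a cache.
opt.add(Implies(has["second_db"], has["cache"]))

# Transition cost: pay only for components the current system lacks.
opt.minimize(Sum([
    If(has[name], add_cost[name], 0)
    for name in components if not current[name]
]))

if opt.check() == sat:
    model = opt.model()
    plan = {name: is_true(model.evaluate(has[name], model_completion=True))
            for name in components}
    print(plan)  # {'cache': False, 'queue': True, 'second_db': False}
```

Whatever the solver returns is guaranteed to satisfy every constraint, which is exactly the property the hand-wavy version can’t promise.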

Why are GPT-likes so exciting and classical AI so boring? I don’t want to change it, I just want to understand it. Classical AI is so beautiful and interpretable; that’s how I fell in love with computer science. I want so desperately to be open-minded, but some part deep inside me says people are falling in love with a simulacrum.

I feel alone. So alone. Large Language Models are not The Next Big Thing Now And Forever. Large Language Models are a beautiful step in our species’ journey to discover and understand the universe. We also have our search algorithms, we have our constraint solvers. There’s a whole Brooklyn Botanical Garden of algorithms, our flowers in problem solving and discovery.

I guess I’m just sad that LLMs get so much attention while so many of our problems are in dire need of the brightest minds, and that those minds may take longer to fall in love with computer science, all the sciences, and the thrill of discovery, because the LLM gold rush saturates the internet, reduces intellectual diversity, and slows the rate of science.

I sincerely hope that in a few months I come back and laugh at all of this. I keep waiting for the hype to die out and for things to "return to normal", but here I am writing this, and it’s starting to feel dire.

If you’re out there and you think LLMs are not The Messiah, you're not alone.
