Links #3: Large language models
A few interesting reads on the topic of generative AI and large language models:
- Op-ed by Naomi Klein: AI machines aren’t ‘hallucinating’. But their makers are (2023-05-08). I think Naomi Klein makes some excellent points in this article and, as she might put it, "cuts the crap".
> According to [the logic of Google CEO Eric Schmidt], the failure to “solve” big problems like climate change is due to a deficit of smarts. Never mind that smart people, heavy with PhDs and Nobel prizes, have been telling our governments for decades what needs to happen to get out of this mess: slash our emissions, leave carbon in the ground, tackle the overconsumption of the rich and the underconsumption of the poor because no energy source is free of ecological costs.
- Blog post by Kevin Lin: Lessons from Creating a VSCode Extension with GPT-4 (2023-05-25). The most important takeaway from this blog post is its accurate description of the current state of using GPT models for programming. Here's the relevant quote from the introduction:
> Lately, I've been playing around with LLMs to write code. I find that they're great at generating small self-contained snippets. Unfortunately, anything more than that requires a human to evaluate LLM output and come up with suitable follow-up prompts. Most examples of "GPT wrote X" are this - a human serves as a REPL for the LLM, carefully coaxing it to a functional result.
- Article by Baldur Bjarnason: Modern software quality, or why I think using language models for programming is a bad idea. A longer piece, but well worth the read. It contains a lot of good observations that are important to bear in mind.
- Op-ed by Zainab Choudhry: AI tools like ChatGPT are built on mass copyright infringement (2023-05-25).