Adrien Foucart's newsletter

#3 - The winter of large language models

ChatGPT, Bing AI, Bard... Artificial Intelligence is in the news, and it's not looking so good.

Adrien Foucart
Mar 8

I like “AI” (even though I dislike the term).

It may not seem like it given everything I’ve written these past few months, but I do think there is a lot of interesting, innovative research with useful applications in the domain. Generative models — and large language models in particular — have unfortunately taken the spotlight, and the funding, and I fear many interesting and useful projects will not be made because all the money and energy went into the training of GPT-X.

It seems like every day there is a new super impressive demonstration of the power of generative models, and every time the demonstration turns to ridicule as soon as anyone actually looks at the produced output.


In “Can ChatGPT write an academic paper?”, I examined a paper written by ChatGPT in an experiment by Mashrin Srivastava (now working at Microsoft). The answer, by the way, is no, certainly not. More recently, I looked at the few examples given by Microsoft to demonstrate Bing AI’s capabilities: “Bing AI: has Microsoft lost its mind?”. As the title suggests, I wasn’t impressed.

Over on my French-language opinion blog, I also tried to find the source of a famous (false) rumour claiming that GPT-4 will have 500x the capabilities of GPT-3 (“Traquer une rumeur: GPT-4 et les 100.000 milliards de paramètres”, i.e. “Tracking down a rumour: GPT-4 and the 100 trillion parameters”). TL;DR: it seems to come from a single Wired interview of someone who “talked to OpenAI”, whatever that means, and possibly from some speculation by Lex Fridman back in 2020. But Sam Altman has repeatedly denied the rumour (and even if it were true that GPT-4 had 500x the number of parameters, that wouldn’t make it 500x more powerful: the law of diminishing returns very much applies to large neural networks, and GPT-3 is probably close to the saturation point).
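To put a rough number on that intuition: the empirical scaling laws of Kaplan et al. (“Scaling Laws for Neural Language Models”, 2020) found that test loss falls off as a power law in parameter count, with a small exponent. Here is a back-of-the-envelope sketch of what 500x more parameters would buy under that power law; the exponent is the one reported in that paper, and nothing here reflects any actual OpenAI model:

```python
# Back-of-the-envelope: what does 500x more parameters buy you
# under the power-law scaling of Kaplan et al. (2020)?
# L(N) ~ (N_c / N)^alpha_N, with alpha_N ~= 0.076 (their reported fit).
# Purely illustrative: real gains depend on data, compute and architecture.

alpha_N = 0.076  # scaling exponent for parameter count (Kaplan et al., 2020)
scale = 500      # the rumoured parameter-count multiplier

loss_ratio = scale ** (-alpha_N)  # L(500 * N) / L(N) under the power law
print(f"Loss shrinks by a factor of {1 / loss_ratio:.2f}, not {scale}")
# -> Loss shrinks by a factor of 1.60, not 500
```

In other words, under that (admittedly crude) model, a network 500 times bigger improves the loss by well under a factor of two.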

I did write a bit about my own research, somewhere in the middle of all of that, with a “devlog” post on “Getting oriented in anatomical space”: how to find a good reference point in 3D CT images to get “anatomical coordinates” for any given voxel.
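The core idea can be sketched in a few lines: once you have a reference landmark somewhere in the scan, any voxel index can be mapped to millimetre offsets from that landmark using the scanner’s voxel spacing. This is a minimal, hypothetical sketch of that general idea, not the method from the devlog post, and all names in it are made up:

```python
import numpy as np

def anatomical_coords(voxel_idx, reference_idx, spacing_mm):
    """Millimetre offset of a voxel from a reference landmark.

    voxel_idx, reference_idx: (z, y, x) integer indices in the CT volume.
    spacing_mm: (z, y, x) voxel spacing in millimetres (from the image header).
    """
    return (np.asarray(voxel_idx) - np.asarray(reference_idx)) * np.asarray(spacing_mm)

# Hypothetical example: landmark at voxel (120, 256, 256), 1.0 x 0.7 x 0.7 mm voxels.
print(anatomical_coords((150, 300, 200), (120, 256, 256), (1.0, 0.7, 0.7)))
# -> [ 30.   30.8 -39.2]  (mm offsets along z, y, x)
```

The hard part, of course, is finding a reference point that is robust across patients and acquisition protocols, which is what the devlog post is actually about.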

Next week I’ll be giving a lecture to ULB law students about generative language models and why using ChatGPT is probably not a great idea for a lawyer. It should be recorded, so I’ll post a link. It will be in French (which shouldn’t be a problem for most of my subscribers, from what I can tell!). For a lawyer’s perspective on the topic, I’d recommend Devin Stone’s (aka LegalEagle) video, “Don’t Hire a Robot Lawyer”.

Other interesting related reads that I recommend:

  • “ChatGPT is a blurry JPEG of the web” by Ted Chiang for The New Yorker

  • “Effective Altruism is Pushing a Dangerous Brand of ‘AI Safety’” by Timnit Gebru for Wired

Until next time, have a nice day/morning/evening/night/afternoon!

Adrien.
