Hello everyone,
The big AI hype-train may be slowing down. After all the promises of life-changing, super-intelligent AI, the reality of generative AI’s very limited capabilities is becoming too obvious to ignore. This is not to say that generative AI is useless, or that no real advances were made in the past few years. But the quasi-AGI sold by OpenAI and Silicon Valley venture capitalists was always a fantasy, and now, of course, people are getting disappointed. Right now, the main use case for ChatGPT and the like is spam [amycastor.com], which was always the obvious real use case.
There does seem to be less AI hype spam on social media, so the obvious question is: where will all the hypers go? Before ChatGPT, most of them were peddling NFTs, but they can’t really go back there: NFTs are dead. No one cares about web3 or the metaverse anymore, and crypto has mostly run its course as well. We may see a new “x with blockchain” application making the rounds as the Next Big Thing soon, but I’m not sure that would get much traction now. I’m not too worried, however: the past decades have shown that Silicon Valley venture capitalists always find something to sell, and that the media never fail to cover all of their promises.
Meanwhile, big money is still flowing into generative AI. Amazon just invested 4 billion dollars in Anthropic [The Verge], and OpenAI is trying to keep ChatGPT in the news by adding sound and image processing capabilities [OpenAI]. They provide even less information on how they evaluated this new system’s capabilities than they did for GPT-4, so I’m not expecting much real-world use.
On the blog: new preprint, authors v OpenAI
I have a new preprint for a paper that was accepted to the SIPAIM 2023 conference: “Finding the best channel for tissue segmentation in whole-slide images”. I explain what it’s about on the blog, and the preprint itself is available on my website as a PDF or directly as an HTML page.
Also on the blog, I wrote a bit (back in July) about how some authors are suing OpenAI and Meta for copyright infringement: why I wasn’t initially convinced by their argument, but found the full complaint reasonably well thought out.
I don’t like PDF (for science communication)
This may be a weirdly specific rant, but at least I’m not the only one: back in 2009, Peter Murray-Rust was already deploring the use of PDF as the common format for distributing scientific contributions [petrmr’s blog]. The core of the issue is that PDF is a format fundamentally designed for printing, not for sharing online. As such, it is very limited in the types of information it can present, the interactions we can have with it, and the accessibility of that information (for instance, screen readers for people with low vision often struggle with PDF files [Wang et al., 2021, arXiv]).
Having articles published natively in HTML — with an optional PDF version for those who really need to print — would make a lot more sense today. To try it out for myself, I have converted my PhD dissertation into web pages [https://research.adfoucart.be/thesis/index.html], and I’m starting to convert all of my preprints as well (at the moment, in addition to the new tissue segmentation preprint, I have converted my 2018 paper on artifact detection and my 2020 unpublished report on imperfect annotations). This makes those science communications a lot more accessible, including on mobile devices and smaller screens, where PDFs tend to be particularly annoying to read. Arthur Perret has some interesting thoughts on the topic as well (in French).
Until next time, have a nice day!
Adrien.