#1. November 2022
Hello!
Welcome to the first real issue of this newsletter. I’m still figuring out exactly what shape it’s going to take, but at the moment my goal is to make a monthly recap of what I’ve published on the research blog, plus potentially some links to other interesting things that caught my eye and aren’t directly related to my research. If you have any feedback on what you’d like to see here, feel free to share it.
So, without further ado, let’s get into the recap:
On the blog
November 7th: A new project - where I explain the main goals of my new research.
November 10th: [Reading] Reviews of medical image registration - summarizing three big reviews of medical image registration, to get a sense of what’s going on in the field.
November 16th: [Devlog] Coordinates and scale - experiments on managing the awkward changes in coordinate systems that tend to introduce lots of bugs into my code.
November 19th: [Opinion] The Galactica debacle - thoughts on the short-lived “Galactica” experiment from Meta AI, which didn’t quite live up to their expectations.
November 24th: [Reading] Top ACROBATs - pathology registration - looking at the pipelines of the two top methods of the ACROBAT registration challenge.
November 27th: [Preprint] Panoptic Quality: not always a good metric - presenting the latest preprint to come out of my thesis work, which joins our review of digital pathology segmentation challenges and our analysis of the MoNuSAC challenge results in reviewer purgatory.
On the web
In case you’re interested in following the inevitable and long overdue collapse of the crypto space and the blockchain bullshit factory, my favorite sources at this point are Molly White (both her newsletter Whitespace and the web3isgoinggreat website) and the David Gerard / Amy Castor duo. I’m not going to link to their social media accounts, as those are changing too quickly at the moment!
Speaking of collapse, the Twitter meltdown has certainly made me rethink how I use social media. Right now I’m mostly active on Mastodon, in French and in English, which I actually find quite enjoyable.
I may write a post about it at some point, but a very interesting study came out in Nature Communications from Simon Ott and a University of Vienna team. I’ve been quite critical of the over-reliance on benchmark results for some time, so it’s good to see a serious study on the topic. Quoting the abstract: “We curate data for 3765 benchmarks covering the entire domains of computer vision and natural language processing, and show that a large fraction of benchmarks quickly trends towards near-saturation, that many benchmarks fail to find widespread utilization, and that benchmark performance gains for different AI tasks are prone to unforeseen bursts”. I haven’t gone through their whole methods and results yet, but I don’t think I’m going to disagree with them there…
That’s it for the first issue of this newsletter. Until the next one, have a nice end of autumn!
Adrien.