There's essentially a push to relabel/rename many "unsupervised" methods as "self-supervised". Yann LeCun is one of the more famous proponents of this (https://www.facebook.com/yann.lecun/posts/10155934004262143), but I've been seeing the term gain traction elsewhere too.
The reason for this is that people felt "unsupervised learning" was a misleading name for many of the so-called unsupervised methods, such as language modeling. They argue that these methods do have a supervised training signal; the only difference is that the signal comes from the model's input itself rather than from an external label.
Ultimately, I'm not entirely sure there is really a distinction between the two if you argue it all the way down to the details (is PCA unsupervised, or self-supervised, since it constructs a model with respect to its own inputs?), but I think it's generally intuitive what self-supervised methods refer to, and I'm on board with the renaming.
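To make the "signal comes from the input itself" point concrete, here's a minimal sketch (not any particular library's API) of how a language model gets its labels for free, just by shifting the text by one token:

```python
# Next-token prediction: the "labels" are just the input shifted by one.
# No human annotation is involved -- the supervision is derived from the text.
tokens = "the cat sat on the mat".split()

# Each training pair is (context token, target token).
pairs = [(tokens[i], tokens[i + 1]) for i in range(len(tokens) - 1)]

print(pairs)
# First pair: ("the", "cat") -- the target is simply the next word.
```

Real language models use longer contexts and learned embeddings, but the self-supervised part is exactly this: the dataset of (input, label) pairs is manufactured from raw text alone.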
If you want to be really strict about the definitions, there is a difference. There isn't really unsupervised learning as such, but there are unsupervised techniques: clustering, etc.
In self-supervised training you use some kind of measurable structure to build a loss function against.
But in common usage people say "unsupervised" to mean "self-supervised". For example, Word2Vec is usually referred to as unsupervised when it is technically self-supervised.
I think this is really because the self-supervised name was invented well after the techniques became commonplace.
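As an illustration of why Word2Vec counts as self-supervised, here's a sketch of skip-gram training-pair generation (a simplification of the real training pipeline, which also does subsampling and negative sampling): the (center, context) pairs that serve as labeled examples are derived entirely from the raw text.

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) pairs from a token list.

    These pairs act as the "labeled" training data for skip-gram
    Word2Vec, even though no external labels were ever provided.
    """
    pairs = []
    for i, center in enumerate(tokens):
        # Look at neighbors within `window` positions on either side.
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs("the quick brown fox".split(), window=1)
print(pairs)
# With window=1, each word is paired with its immediate neighbors,
# e.g. ("quick", "the") and ("quick", "brown").
```

The supervision signal (predict a context word given a center word) is measurable structure in the input itself, which is exactly the distinction the parent comment draws.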
That reminds me of those neural network models that would learn to modify themselves, a little like an ML algorithm that learns which neural network architecture works best.
I think google did something like this some years ago?
Neural Architecture Search with Reinforcement Learning[1]
This isn't really an unsupervised or self-supervised technique at all. It's a combination of supervised learning and reinforcement learning (which is a whole other thing too).