(Overview) A Survey on Deep Semi-supervised Learning
Divide semi-supervised learning methods into four categories and introduce representative methods from each category (2022/10/12)
Propose a unified sampling framework that significantly improves the balance and accuracy of contrastive learning by strategically sampling additional data (2022/10/19)
Use unsupervised learning methods to select an effective set of samples to label for semi-supervised learning (2022/11/01)
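As a minimal sketch of this idea, one common unsupervised selection strategy is to cluster the unlabeled pool and annotate the sample nearest to each centroid; the `KMeans` clustering, the `encoder` features, and the budget parameter below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_samples_to_label(features: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of `budget` representative samples to annotate."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(features)
    chosen = []
    for c in range(budget):
        members = np.where(km.labels_ == c)[0]
        # pick the member closest to the cluster centroid as its representative
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        chosen.append(members[np.argmin(dists)])
    return np.array(chosen)

# e.g. features = encoder(unlabeled_images); idx = select_samples_to_label(features, 40)
```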
Enforce a reciprocal alignment between the prediction distributions of two classifiers, one predicting pseudo-labels and the other complementary labels, on the unlabeled data (2022/11/17)
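A hedged sketch of what such a reciprocal alignment could look like: one head predicts class membership, the other predicts complementary ("is not") labels, and a symmetric KL term pulls the two implied distributions together. This loss form is an assumption, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def reciprocal_alignment_loss(logits_pos: torch.Tensor,
                              logits_comp: torch.Tensor) -> torch.Tensor:
    p_pos = F.softmax(logits_pos, dim=1)                  # head A: "is class c"
    p_comp = F.softmax(logits_comp, dim=1)                # head B: "is not class c"
    implied = 1.0 - p_comp                                # invert the complement...
    implied = implied / implied.sum(dim=1, keepdim=True)  # ...and renormalize
    # symmetric KL keeps the alignment reciprocal: each head pulls on the other
    kl = lambda p, q: (p * (p.clamp_min(1e-8) / q.clamp_min(1e-8)).log()).sum(dim=1)
    return (kl(p_pos, implied) + kl(implied, p_pos)).mean()
```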
Supplement the infrequent classes with more pseudo-labels and the frequent classes with fewer pseudo-labels after each training epoch (2022/12/06)
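A minimal sketch of such an epoch-wise rebalancing, assuming per-class quotas inversely proportional to labeled-class counts (the paper's actual reallocation rule may differ):

```python
import numpy as np

def rebalance_pseudo_labels(probs, class_counts, total_budget):
    """probs: (N, C) predictions on unlabeled data; class_counts: labeled-set sizes.
    Returns {class: indices pseudo-labeled as that class this epoch}."""
    inv = 1.0 / np.maximum(class_counts, 1)
    quota = np.floor(total_budget * inv / inv.sum()).astype(int)  # rare classes get more
    preds, conf = probs.argmax(axis=1), probs.max(axis=1)
    selected = {}
    for c, k in enumerate(quota):
        idx = np.where(preds == c)[0]
        selected[c] = idx[np.argsort(-conf[idx])[:k]]  # keep the k most confident
    return selected
```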
Conduct a series of studies on the performance of self-supervised contrastive learning and supervised learning methods over multiple datasets whose training instance distributions range from balanced to long-tailed (2022/12/27)
Propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training, effectively exploiting the unlabeled data (2023/02/14)
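The core of SoftMatch is replacing the hard confidence threshold with a soft, Gaussian-shaped sample weight whose parameters track the confidence distribution; the EMA bookkeeping below is a simplified sketch, not the official implementation.

```python
import torch

class SoftWeighter:
    """Weight each pseudo-label by a truncated-Gaussian function of its confidence."""
    def __init__(self, momentum: float = 0.999):
        self.m, self.mu, self.var = momentum, 0.5, 1.0

    def __call__(self, probs: torch.Tensor) -> torch.Tensor:
        conf = probs.max(dim=1).values
        # track the running mean/variance of confidence on unlabeled batches
        self.mu = self.m * self.mu + (1 - self.m) * conf.mean().item()
        self.var = self.m * self.var + (1 - self.m) * conf.var().item()
        # full weight above the running mean, Gaussian decay below it
        w = torch.exp(-(conf - self.mu) ** 2 / (2 * max(self.var, 1e-8)))
        return torch.where(conf >= self.mu, torch.ones_like(w), w)

# weights = weighter(probs_weak); loss_u = (weights * per_sample_ce).mean()
```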
Improve the recently proposed “MixMatch” semi-supervised learning algorithm by introducing two new techniques: distribution alignment and augmentation anchoring (2023/02/21)
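Distribution alignment itself is simple to sketch: scale each guessed label by the ratio of the labeled-class marginal to a running average of the model's predictions, then renormalize. The buffer-based running average below is one plausible realization of that bookkeeping.

```python
import torch
from collections import deque

class DistributionAlignment:
    def __init__(self, class_marginal: torch.Tensor, history: int = 128):
        self.p_target = class_marginal        # p(y) estimated from the labeled set
        self.buffer = deque(maxlen=history)   # recent batch-mean predictions

    def __call__(self, q: torch.Tensor) -> torch.Tensor:
        self.buffer.append(q.mean(dim=0))
        p_model = torch.stack(list(self.buffer)).mean(dim=0)
        q_aligned = q * (self.p_target / p_model.clamp_min(1e-8))
        return q_aligned / q_aligned.sum(dim=1, keepdim=True)
```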
Design two semantics-aware pseudo-labeling algorithms, prototype pseudo-labeling and reliable pseudo-labeling, which enable accurate and reliable self-supervision over clustering (2023/03/06)
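A hedged sketch of the prototype half of this idea: assign each sample the label of its nearest prototype and keep only assignments with a clear winner. The cosine-margin reliability test is an assumption standing in for the paper's reliability criterion.

```python
import torch
import torch.nn.functional as F

def prototype_pseudo_labels(feats: torch.Tensor, protos: torch.Tensor,
                            margin: float = 0.1):
    """feats: (N, D) embeddings; protos: (K, D) cluster prototypes."""
    sim = F.normalize(feats, dim=1) @ F.normalize(protos, dim=1).T  # (N, K) cosine
    labels = sim.argmax(dim=1)
    top2 = sim.topk(2, dim=1).values
    # reliable = the best prototype beats the runner-up by a margin
    reliable = (top2[:, 0] - top2[:, 1]) > margin
    return labels, reliable
```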
Use multiple subclusters to represent each cluster with an automatic adjustment of the number of subclusters (2023/06/26)
Propose two novel techniques: Entropy Meaning Loss (EML) and Adaptive Negative Learning (ANL) to better leverage all unlabeled data (2023/07/31)
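The negative-learning side of this is sketched below: classes the model ranks very low become negative pseudo-labels and are pushed down with -log(1 - p). The fixed rank cutoff is a simplification here; ANL adjusts which classes count as negatives adaptively.

```python
import torch

def negative_learning_loss(probs: torch.Tensor, keep_top: int = 5) -> torch.Tensor:
    """Penalize probability mass on classes outside the top-`keep_top` ranks."""
    ranks = probs.argsort(dim=1, descending=True)
    neg_mask = torch.ones_like(probs, dtype=torch.bool)
    neg_mask.scatter_(1, ranks[:, :keep_top], False)  # top ranks are not negatives
    neg_probs = probs[neg_mask].clamp(max=1 - 1e-7)
    return -torch.log1p(-neg_probs).mean()            # mean of -log(1 - p)
```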
Propose an auxiliary feature perturbation stream as a supplement, leading to an expanded perturbation space. In addition, to sufficiently probe original image-level augmentations, this paper presents a dual-stream perturbation technique (2023/08/06)
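A minimal sketch of that layout, assuming the feature perturbation is dropout on encoder features and the shared pseudo-label comes from the weakly augmented view (both assumptions):

```python
import torch
import torch.nn.functional as F

def dual_stream_step(encoder, head, x_weak, x_strong1, x_strong2):
    with torch.no_grad():
        pseudo = head(encoder(x_weak)).argmax(dim=1)      # shared pseudo-label
    pred_s1 = head(encoder(x_strong1))                    # image-level stream 1
    pred_s2 = head(encoder(x_strong2))                    # image-level stream 2
    pred_fp = head(F.dropout(encoder(x_weak), p=0.5))     # feature-level stream
    # all three perturbation streams are supervised by the same pseudo-label
    return sum(F.cross_entropy(p, pseudo) for p in (pred_s1, pred_s2, pred_fp)) / 3
```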
Develop a data selection scheme to split off a high-quality pseudo-labeled set. For low-quality pseudo-labels, this paper presents a regularization approach that learns discriminative information from them by injecting adversarial noise at the feature level (2023/08/14)
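The feature-level adversarial injection might look like the following FGSM-style sketch, where the step size `eps` and the use of hard pseudo-labels are assumptions:

```python
import torch
import torch.nn.functional as F

def adversarial_feature_loss(head, feats: torch.Tensor,
                             pseudo: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    feats = feats.detach().requires_grad_(True)
    loss = F.cross_entropy(head(feats), pseudo)
    grad, = torch.autograd.grad(loss, feats)
    feats_adv = feats + eps * grad.sign()  # perturb along the worst-case direction
    return F.cross_entropy(head(feats_adv), pseudo)
```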
Propose a propagation regularizer which can achieve efficient and effective learning with extremely scarce labeled samples by suppressing confirmation bias (2023/08/19)
Propose Class-Aware Propensity (CAP) score that exploits the unlabeled data to train an improved classifier using the biased labeled data. Furthermore, this paper proposes Class-Aware Imputation (CAI) that dynamically decreases (or increases) the pseudo-label assignment threshold for rare (or frequent) classes (2023/09/01)
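The CAI half of this can be sketched as a per-class threshold schedule; the linear scaling rule below is an assumption, not the paper's exact formula.

```python
import numpy as np

def class_aware_thresholds(class_freq: np.ndarray, base_tau: float = 0.95,
                           floor: float = 0.5) -> np.ndarray:
    """class_freq: estimated class frequencies (sums to 1)."""
    rel = class_freq / class_freq.max()      # 1.0 for the most frequent class
    return floor + (base_tau - floor) * rel  # rare classes get lower thresholds

# tau = class_aware_thresholds(freq); accept = conf > tau[pred_class]
```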
Select informative unlabelled samples to improve training balance and allow the model to handle both multi-label and multi-class problems, and estimate pseudo-labels with an accurate ensemble of classifiers (2023/09/04)
Explore the class-level guidance information obtained by the Markov random walk, which is modeled on a dynamically created graph built over the class tracking matrix (2023/09/16)
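A rough sketch of the random-walk machinery, assuming the graph is built by row-normalizing an affinity derived from the class tracking matrix (the actual graph construction in the paper is richer):

```python
import numpy as np

def class_random_walk(track: np.ndarray, steps: int = 3) -> np.ndarray:
    """track: (C, C) class tracking matrix; returns t-step walk over class relations."""
    affinity = track @ track.T                                         # co-tracked classes relate
    P = affinity / affinity.sum(axis=1, keepdims=True).clip(min=1e-8)  # transition matrix
    return np.linalg.matrix_power(P, steps)                            # t-step Markov random walk
```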
Adjust the confidence threshold in a self-adaptive manner according to the model’s learning status. Further, this paper introduces a self-adaptive class fairness regularization penalty to encourage the model to make diverse predictions during the early training stage (2023/09/26)
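A FreeMatch-style sketch of the self-adaptive threshold, with EMA estimates of global confidence and per-class tendencies (the constants and exact modulation are assumptions):

```python
import torch

class SelfAdaptiveThreshold:
    def __init__(self, num_classes: int, momentum: float = 0.999):
        self.m = momentum
        self.tau_g = torch.tensor(1.0 / num_classes)              # global confidence EMA
        self.p_c = torch.full((num_classes,), 1.0 / num_classes)  # per-class tendency EMA

    def update(self, probs: torch.Tensor) -> torch.Tensor:
        conf = probs.max(dim=1).values
        self.tau_g = self.m * self.tau_g + (1 - self.m) * conf.mean()
        self.p_c = self.m * self.p_c + (1 - self.m) * probs.mean(dim=0)
        # per-class threshold: global level modulated by relative class confidence
        return self.tau_g * (self.p_c / self.p_c.max())

# tau = sat.update(probs_weak); mask = probs_weak.max(1).values > tau[probs_weak.argmax(1)]
```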
A review of information theory and its various uses in loss functions (2023/10/15)