Abstract

Active learning aims to sample the most informative data from an unlabeled pool, and a variety of clustering methods have been applied to it. However, distance-based clustering methods often perform poorly, or fail outright, in high dimensions. In this paper, we propose a new active learning method that combines a variational autoencoder (VAE) with density-based spatial clustering of applications with noise (DBSCAN). It overcomes the difficulty of representing distances in high dimensions and avoids the distance concentration phenomenon that the computational learning literature associates with high-dimensional p-norms. Finally, we compare our method with four common active learning methods and with two other clustering algorithms combined with a VAE on three datasets. The results demonstrate that our approach achieves competitive performance, and it constitutes a new batch-mode active learning algorithm designed for neural networks with a relatively small query batch size.

1. Introduction

In practical settings, the amount of labeled data is relatively small and the vast majority of data is unlabeled. Because the annotation budget is limited, it is often unrealistic to spend large amounts of human effort and money annotating unlabeled data. Active learning [1] grew out of this problem and is one field that tries to address the difficulties of data labeling. It assumes that collecting data is relatively easy but that the labeling process is costly, and it tackles the question of which samples should be labeled to yield the largest improvement in test accuracy under a fixed labeling budget. Existing active learning algorithms fall mainly into two categories: query-synthesizing [2–4] and query-acquiring [5]. Query-synthesizing approaches use generative models to generate informative samples, whereas query-acquiring algorithms use different sampling strategies to select the most informative samples.

In this paper, we mainly pay attention to query-acquiring methods. One family of query-acquiring algorithms is uncertainty-based methods [6, 7]. Settles [8] noted that the classifier assigns every unlabeled sample a probability score representing the uncertainty of its class, and the samples with the highest uncertainty are then chosen. Lewis and Gale [9] argued that uncertainty-based methods perform well across a large and diverse set of datasets. Gal et al. [10] proposed a Bayesian active learning framework in which Bayesian neural networks [11] are used to estimate uncertainty. Later, Gissin and Shalev-Shwartz proposed a discriminative active learning method [12] that also uses an uncertainty idea, selecting the unlabeled samples with the top-K highest scores when the batch size is relatively large. The other type of query-acquiring method is the representation-based approach, which selects a few examples by increasing diversity within a given batch; Sener and Savarese [13] and Jain and Grauman [14] adopted this idea in their experiments. However, distance-based representation methods such as Core Set appear to be ineffective for high-dimensional data because of the distance concentration phenomenon observed in the computational learning literature for p-norms in high dimensions [15].
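To make the uncertainty idea concrete, the following minimal sketch (Python with NumPy) selects a top-K query batch by least confidence; the array name probs and the helper name are illustrative and not taken from the cited works.

import numpy as np

def least_confidence_query(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k most uncertain unlabeled samples.

    probs: (n_unlabeled, n_classes) array of softmax outputs from the
    current classifier (an assumption of this sketch).
    """
    top_confidence = probs.max(axis=1)      # classifier's confidence per sample
    return np.argsort(top_confidence)[:k]   # least confident first

# Toy usage: 5 unlabeled samples, 3 classes, query the 2 most uncertain.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=5)
print(least_confidence_query(probs, k=2))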

To address the difficulty of high-dimensional distance representation, Sinha et al. proposed a variational adversarial method [5] that learns a latent space using a VAE [16, 17]. The VAE is an effective representation learning method for high-dimensional data and has proved particularly useful for digital images [18]; we therefore adopt a VAE in our approach to address the difficulty of distance representation. The VAE plays a central role in our method: it maps high-dimensional data to a low-dimensional representation in latent space, preventing the p-norm distance concentration phenomenon of high-dimensional data. Among common clustering approaches [19–22], DBSCAN [23] is good at identifying noise and discovering arbitrarily shaped clusters without knowing the number of clusters in advance. We therefore propose a new active learning strategy that combines the VAE with DBSCAN clustering to exploit both of these advantages. Moreover, strong active learning models are usually evaluated with large query batch sizes and may ignore performance with a relatively small query batch size on small-sample problems, so we conduct our experiments with a relatively small query batch size.

The rest of this paper is organized as follows. Section 2 describes the problem setting and briefly defines the related methods. Section 3 introduces the proposed model and its algorithms. Section 4 reports the experimental results obtained with a relatively small query batch size. Finally, Section 5 concludes the paper.

2.1. Problem Definition

We define the active learning problem formally as follows. Given a labeled pool and a much larger unlabeled pool, we aim to sample the most informative unlabeled data from the unlabeled pool by iteratively querying within a fixed sampling budget, in order to train the most label-efficient model. During this process, a number of unlabeled samples are selected by an acquisition function and annotated by the oracle. The process is repeated until a stopping criterion is reached, such as a desired number of labeled samples or a target test accuracy.
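The iterative querying process described above can be summarized by the following schematic loop (Python); the function names train, acquire, and annotate are placeholders for illustration and not part of any specific method, and the pools are represented by index lists.

def active_learning_loop(train, acquire, annotate,
                         labeled_idx, unlabeled_idx, budget, query_size):
    model = train(labeled_idx)
    spent = 0
    while spent < budget and unlabeled_idx:
        # acquisition function picks the indices of the most informative samples
        batch = acquire(model, unlabeled_idx, query_size)
        for i in batch:
            annotate(i)                                   # oracle provides the label
        labeled_idx = labeled_idx + batch
        unlabeled_idx = [i for i in unlabeled_idx if i not in batch]
        model = train(labeled_idx)                        # retrain on the enlarged pool
        spent += len(batch)
    return model, labeled_idx, unlabeled_idx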

2.2. Related Methods

With respect to the VAE, its parameters are trained with two loss functions, given N training samples. The first is the reconstruction loss $\mathcal{L}_{\text{rec}}$, which forces the reconstructed sample $\hat{x}_i$ to match the original input sample $x_i$. Here, we use the cross entropy for measurement:

$$\mathcal{L}_{\text{rec}} = -\frac{1}{N}\sum_{i=1}^{N}\left[x_i \log \hat{x}_i + (1 - x_i)\log\left(1 - \hat{x}_i\right)\right]. \qquad (1)$$

In addition, we want $\hat{x}_i$ to be as close as possible to $x_i$; that is, decoding should recover as much of the original information as possible. The second term is the regularization loss $\mathcal{L}_{\text{KL}}$, which helps learn a well-structured latent space and reduces overfitting on the training data. It is the Kullback–Leibler divergence between the approximate posterior and the Gaussian prior:

$$\mathcal{L}_{\text{KL}} = \frac{1}{2}\sum_{j}\left(\mu_j^{2} + \sigma_j^{2} - \log \sigma_j^{2} - 1\right). \qquad (2)$$

In the formula above, the mean $\mu$ and the variance $\sigma^{2}$ of every input sample are computed by the encoder, which learns a low-dimensional latent space for the underlying distribution under a Gaussian prior. The total objective function of the VAE is therefore

$$\mathcal{L}_{\text{VAE}} = \mathcal{L}_{\text{rec}} + \mathcal{L}_{\text{KL}}. \qquad (3)$$

Then, we minimize the total loss, and an efficient variational autoencoder is obtained after several rounds of optimization. It is worth mentioning that the reparameterization trick is adopted: a random tensor $\epsilon$ is sampled from a standard normal distribution $\mathcal{N}(0, I)$, and a low-dimensional point $z$ can then be sampled in the latent space as

$$z = \mu + \sigma \odot \epsilon, \qquad (4)$$

in which $\epsilon$ is a small random tensor and $\odot$ denotes the element-wise product.
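As a concrete illustration, the two loss terms and the reparameterization trick in equations (1)–(4) can be written in PyTorch roughly as follows; this is a minimal sketch, and the variable names are ours rather than the paper's.

import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # z = mu + sigma * eps, with eps ~ N(0, I)  -- equation (4)
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term: cross entropy between input and reconstruction, equation (1)
    rec = F.binary_cross_entropy(x_hat, x, reduction="sum")
    # Regularization term: KL divergence to the standard Gaussian prior, equation (2)
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0)
    return rec + kl  # total objective, equation (3)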

In our approach, a cluster in DBSCAN is defined as a maximal set of density-connected points. A region of sufficient density forms a cluster, and points that are not assigned to any cluster are treated as noise. The basic process of DBSCAN is shown in Figure 1.
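The behaviour just described, in which dense regions become clusters and sparse points become noise, can be reproduced with scikit-learn's DBSCAN implementation, where Eps and MinPts correspond to the eps and min_samples parameters; the toy data below are purely illustrative.

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
dense_a = rng.normal(loc=(0, 0), scale=0.1, size=(40, 2))   # one dense region
dense_b = rng.normal(loc=(2, 2), scale=0.1, size=(40, 2))   # a second dense region
sparse = rng.uniform(low=-1, high=3, size=(10, 2))          # scattered points
points = np.vstack([dense_a, dense_b, sparse])

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(points)
print("clusters found:", set(labels) - {-1})                # cluster ids
print("noise points:", int(np.sum(labels == -1)))           # label -1 marks noise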

3. Proposed Model

3.1. The Combination of VAE and DBSCAN Clustering

In this paper, we propose a new active learning method based on a VAE and DBSCAN clustering with a relatively small query batch size. The method is motivated by a simple idea. First, the VAE learns a valid low-dimensional latent feature space for the underlying distribution, under a Gaussian prior, from the labeled and unlabeled pools; this space mixes the latent features of both pools. Then we apply the density-based clustering algorithm DBSCAN to remove noise from the initial clusters, and we sample the most valuable unlabeled data from high-density regions of different clusters in the latent space. The framework of our model is shown in Figure 2.

In detail, the VAE model in our experiment combines a convolutional neural network with a deconvolutional neural network. The convolutional network, the encoder, consists of four convolutional layers, one flatten layer, and three fully connected layers, in that order. The deconvolutional network, the decoder, is relatively simple, consisting of one fully connected layer and two convolutional layers. We update the VAE by stochastic gradient descent. After iterative updates, we obtain the trained parameters and thus the trained VAE, which learns an efficient two-dimensional latent feature space.
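For illustration, a PyTorch sketch of this encoder/decoder layout is given below. The channel counts, kernel sizes, the 28×28 single-channel input, and the use of transposed convolutions in the decoder are our assumptions rather than the exact configuration used in the experiments; the latent dimension is two, as stated above.

import torch.nn as nn

class Encoder(nn.Module):
    # four convolutional layers, a flatten layer, and three fully connected layers
    def __init__(self, latent_dim=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.Sequential(nn.Linear(64 * 4 * 4, 128), nn.ReLU())
        self.fc_mu = nn.Linear(128, latent_dim)       # mean of q(z|x)
        self.fc_logvar = nn.Linear(128, latent_dim)   # log-variance of q(z|x)

    def forward(self, x):
        h = self.fc(self.conv(x))
        return self.fc_mu(h), self.fc_logvar(h)

class Decoder(nn.Module):
    # one fully connected layer followed by two (transposed) convolutional layers
    def __init__(self, latent_dim=2):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 7 * 7)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 7, 7)
        return self.deconv(h)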

The low-dimensional feature vectors learned by the VAE serve as the input to the subsequent clustering method. In DBSCAN, we start sampling unlabeled points within the clustered classes after an initial unlabeled core object has been found; note that noise is removed first. Suppose the required number of unlabeled samples is C; we tune DBSCAN so that the total number of unlabeled density-reachable points across all clusters is as close to C as possible. The purpose is to ensure that as many different types of high-density unlabeled data as possible are retrieved. After that, the C corresponding original unlabeled samples are selected by the acquisition function and annotated by the oracle. They are then added to the labeled pool, and the task learner is used to measure the mean test accuracy of the proposed model.
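A sketch of this selection step is given below: after DBSCAN labels the latent points, noise (label -1) is discarded and the required number of unlabeled, density-reachable points is drawn and mapped back to the original high-dimensional samples by index. The function and variable names are illustrative and not taken from the paper's code.

import numpy as np
from sklearn.cluster import DBSCAN

def select_for_annotation(latent, unlabeled_idx, n_query, eps, min_pts, seed=0):
    """Return indices of unlabeled samples to send to the oracle.

    latent: (n, 2) array of latent points; unlabeled_idx: indices of the
    unlabeled points within `latent` (assumptions of this sketch).
    """
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(latent)
    # keep unlabeled points that fall inside some cluster (label != -1 means not noise)
    candidates = [i for i in unlabeled_idx if labels[i] != -1]
    rng = np.random.default_rng(seed)
    n_query = min(n_query, len(candidates))
    return rng.choice(candidates, size=n_query, replace=False)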

3.2. Algorithms

For clarity, we describe our method in Algorithms 1 and 2.

 Input: labeled pool, unlabeled pool, and an initialized VAE
 Input: hyperparameters: epochs and step size
(1) for e = 1 to epochs do
(2)  Sample a batch of data from the labeled pool
(3)  Sample a batch of data from the unlabeled pool
(4)  Compute the reconstruction loss of the VAE using equation (1)
(5)  Compute the regularization loss of the VAE using equation (2)
(6)  Compute the total loss of the VAE using equation (3)
(7)  Update the VAE by descending its stochastic gradient
(8) end for
(9) return the trained VAE.
 Input: labeled pool, unlabeled pool, query batch size, DBSCAN parameters Eps and MinPts, total budget N, and mini-query amount n
(1) Sample data from the labeled and unlabeled pools
(2) Sample latent points from the underlying distribution using equation (4)
(3) Randomly sample a subset P of the latent points and shuffle it
(4) Cluster P by adjusting Eps and MinPts
(5) Remove the noise
(6) Collect all density-reachable unlabeled points in all clusters as the candidate set C
(7) for each mini-query of amount n, until the total budget N is reached, do
(8)  Sample the needed amount of points from C randomly, and find the corresponding original high-dimensional samples
(9)  Query their labels from the oracle
(10)  Add the newly labeled samples to the labeled pool
(11)  Remove them from the unlabeled pool
(12) end for
 Output: the updated labeled and unlabeled pools

4. Experiments

4.1. Dataset and Task Module

We evaluated our method on three typical image datasets: MNIST [24, 25], Fashion-MNIST, and CIFAR-10. MNIST contains 60,000/10,000 (train/test) handwritten digit images (0–9) with a resolution of 28×28. Fashion-MNIST is an alternative to MNIST of the same size, containing frontal images of items from 10 categories such as shirts, pants, sandals, and bags; its images are also single-channel with the same 28×28 resolution. CIFAR-10 contains 10 categories of images such as airplanes, automobiles, birds, and cats. Unlike the two datasets above, CIFAR-10 images are 3-channel RGB color images of size 32×32; they are noisier, and the scale and appearance of the objects vary considerably, which makes recognition much harder.

We used the classic LeNet architecture [26] as the task module for MNIST and Fashion-MNIST, and VGG-16 (Simonyan and Zisserman, 2014) for CIFAR-10. When comparing the different clustering-based active learning algorithms, we used a second task module, a simple convolutional network similar to LeNet, consisting of three convolutional layers, two max-pooling layers, a flatten layer, and two fully connected layers, in that order.

4.2. Baseline Algorithms

To demonstrate the effectiveness of our method, we took the following four common active learning methods, along with random sampling, as baselines. They are briefly described as follows:

(i) Random: the query batch is chosen uniformly at random.
(ii) Uncertainty: uncertainty sampling with minimal top confidence.
(iii) Core Set: in every query round, select the unlabeled data farthest from the labeled set and add them to the labeled pool (a greedy sketch of this rule is given after the list).
(iv) EGL: expected gradient length.
(v) Bayesian: Bayesian uncertainty sampling with minimal top confidence.
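For reference, the Core Set baseline is commonly implemented as the greedy k-center rule sketched below, which repeatedly picks the unlabeled point farthest from the current labeled set; this is one standard formulation and not necessarily the exact variant used in our experiments.

import numpy as np

def core_set_greedy(features, labeled_idx, k):
    """features: (n, d) array; returns indices of k new points to label."""
    centers = features[np.asarray(labeled_idx)]
    # distance from every point to its nearest labeled point
    dist = np.min(
        np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2), axis=1
    )
    selected = []
    for _ in range(k):
        idx = int(np.argmax(dist))                 # farthest from the labeled set
        selected.append(idx)
        new_dist = np.linalg.norm(features - features[idx], axis=1)
        dist = np.minimum(dist, new_dist)          # update nearest-center distances
    return selected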

Furthermore, in order to compare the performance of different clustering algorithms [27] in the active learning task, we replaced DBSCAN with two classic and widely used clustering algorithms, K-means [28] and Mean shift [29], and compared them with random sampling as well. All clustering methods in our experiment operate on the two-dimensional feature sets learned by the VAE from the three datasets listed above. They are briefly described as follows:

(i) K-means with VAE: after the VAE, we sample the two-dimensional data around the cluster centers found by K-means, within a given radius.
(ii) Mean shift with VAE: after the VAE, we iteratively shift a window center in the direction of highest local density, taking the mean position of the points inside the window as the new center at each iteration; we then select points around the converged centers.
(iii) DBSCAN with VAE: our method, detailed above.

4.3. Implementation Details

In our VAE model, the learning rate was kept at its default value, and we chose Sigmoid as our optimizer. With a relatively small query batch size, we observed that after about 400 epochs the training loss hardly decreases and the latent feature distribution learned by the VAE becomes stable; we therefore set the number of training epochs to 400 to prevent overfitting. Figure 3 shows the two-dimensional feature points learned by the VAE at different epochs. After the VAE was trained, a certain number of low-dimensional samples were extracted by its well-trained encoder, and we inspected some of the corresponding original images, as shown in Figure 4 (taking Fashion-MNIST as an example).

To observe the performance of active learning algorithms with a relatively small query batch size, we randomly sampled 100 low-dimensional data points from the latent space learned by the VAE encoder as the clustering dataset for DBSCAN, comprising 10 labeled and 90 unlabeled points, and set the query batch size to 5, which is relatively small. Finally, we obtained the required number of unlabeled density-reachable points across the clusters after removing the noise. Note in particular that we adjust the DBSCAN parameters so that the total number of unlabeled points in all density-reachable clusters is as close as possible to the number of samples required for annotation.

4.4. Results

In our experiment, we used test accuracy as the evaluation metric. The results are averaged over 20 runs to ensure statistical validity. Using the results in Table 1, we plot the test accuracy of four different active learning methods on MNIST in Figure 5. The results demonstrate that, on MNIST, our method, DBSCAN with VAE, performs better at small query batch sizes than the active learning methods listed above. Figure 5 also shows that some methods perform on par with or worse than random sampling. One possible reason is that the sample size is small and the randomness strong, which narrows the gap between each sampling method and random sampling; it may also be that some methods are better suited to large query batch sizes.

We also notice that, except for EGL performing better than random sampling at very small batch sizes below 20, most of the algorithms above, including random sampling, consistently outperform EGL. A possible explanation for this discrepancy is the architecture used for the different tasks, because EGL uses the gradient with respect to the model parameters as its score function [12]. Among all the methods above, Bayesian performs the worst, perhaps because it is better suited to large training datasets.

Next, we did a comparison test on CIFAR-10. The result is in Table 2 and we plot it in Figure 6.

As can be seen from Figure 6, on CIFAR-10 Core Set often performs worse than random sampling, possibly again because random sampling has the advantage when the sample size is small. When the labeled size is below 25, our approach shows little advantage over the other methods, but it clearly leads once the number of labeled samples exceeds 25.

Furthermore, we replaced DBSCAN with K-means and Mean shift to study the effect of different clustering methods. We also combined them with the VAE and ran experiments on MNIST and Fashion-MNIST. Detailed results are listed in Tables 3 and 4, respectively, and their performance is plotted in Figures 7 and 8.

We notice in Figure 7, however, that K-means with VAE outperforms our method when the number of labeled data reaches 30 on MNIST; DBSCAN and K-means perform similarly up to size 35, and DBSCAN performs clearly better after size 40. One possible reason is that, at that quantity, a large share of the samples selected by K-means lie closer to the latent characteristics of the labeled data than the samples selected by DBSCAN. Alternatively, this situation may be caused by an uneven distribution of samples when mapping the high-dimensional image data into the two-dimensional latent space. After size 40, DBSCAN may be more sensitive in detecting outlier points in the latent space and preventing them from being clustered, so it performs better than K-means when the size is greater than 40. Of course, this is a hypothetical explanation, and it is difficult to know for certain why K-means starts performing worse after size 40. On Fashion-MNIST in Figure 8, compared with random sampling and the other two clustering methods, DBSCAN with VAE behaves better and its accuracy increases more stably. Overall, the results suggest that our proposed method performs well on MNIST and Fashion-MNIST.

To present our experimental results more clearly, the improvement percentages of the different clustering methods over random sampling are listed in Tables 5 and 6 and plotted in Figures 9 and 10. On MNIST, our method, DBSCAN with VAE, achieves its best result when the number of labeled samples reaches 20, improving accuracy by 11.30% over random sampling; generally speaking, it also performs well at the other labeled-data quantities. On Fashion-MNIST, our approach performs clearly better than the other two clustering methods at every labeled size in our experiment, achieving its largest improvement of 6.32% when the number of labeled samples reaches 45.

Finally, we computed the average accuracy improvement of each clustering method combined with the VAE over random sampling, plotted in Figure 11. On MNIST, K-means and Mean shift achieve improvements of 3.26% and 1.86%, respectively, while our method achieves the best performance with 4.48%. On Fashion-MNIST, the three clustering-based active learning methods achieve improvements over random sampling of 3.23%, 1.36%, and 4.23%, in the same order. In short, our proposed algorithm, DBSCAN with VAE, is superior to K-means and Mean shift combined with the VAE at a relatively small query batch size on MNIST and Fashion-MNIST.

5. Conclusion and Discussion

In this paper, we proposed a new active learning method based on a VAE and DBSCAN, designed for neural networks with a relatively small query batch size. It overcomes the difficulty of distance representation in high dimensions and avoids the distance concentration phenomenon that the computational learning literature associates with high-dimensional p-norms. Based on the results on MNIST, Fashion-MNIST, and CIFAR-10, we empirically show that our method achieves competitive performance compared with four common active learning methods and is also superior to the other two clustering-based active learning methods for image classification when the query batch size is relatively small. Since active learning models are usually evaluated with relatively large query batch sizes, our small-batch setting can be regarded as a supplement to previous studies. In addition, our method is simple to implement and can conceptually be extended to other domains, and we see it as a welcome addition to the arsenal of methods in use today. A limitation of our method is that manual parameter tuning is required in DBSCAN to make the total number of unlabeled points in all density-reachable clusters as close as possible to the number of samples required for annotation. We will continue to study the issues raised in this paper in future work.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Science Foundation of China (61673006).