Research Article

Hybrid Fine-Tuning Strategy for Few-Shot Classification

Table 2

Comparison of results under different pretraining methods on mini-ImageNet. "Pre-tra" and "Lay-fro" are short for the pretraining method and the layer-freezing policy, respectively. We report the mean accuracy over 600 episodes with 95% confidence intervals. The per-pair "Average gain" is the mean improvement of an HFT policy over its FT counterpart across the five shot settings.

Pre-tra      Lay-fro     1-Shot          5-Shot          10-Shot         20-Shot         30-Shot         Average gain

R2D2         FT-Last1    50.58 ± 0.74    66.15 ± 0.36    71.07 ± 0.71    75.63 ± 0.87    76.56 ± 0.96
             HFT-Last1   53.47 ± 0.61    70.13 ± 0.50    74.72 ± 0.45    79.67 ± 0.45    81.16 ± 0.47    3.83
             FT-all      51.39 ± 0.81    68.63 ± 0.40    73.38 ± 0.61    79.43 ± 0.66    80.84 ± 0.99
             HFT-all     53.47 ± 0.61    70.13 ± 0.50    75.49 ± 0.59    81.41 ± 0.70    82.66 ± 0.38    1.90

SKD-GEN0     FT-Last1    57.83 ± 0.53    73.91 ± 0.53    78.19 ± 1.03    85.03 ± 0.76    87.01 ± 0.45
             HFT-Last1   60.74 ± 0.68    77.45 ± 0.49    81.30 ± 0.38    86.31 ± 0.42    87.96 ± 0.39    2.36
             FT-all      59.94 ± 0.77    74.34 ± 0.56    81.69 ± 1.09    86.96 ± 0.32    87.43 ± 0.74
             HFT-all     60.74 ± 0.68    77.45 ± 0.49    82.34 ± 1.01    87.15 ± 0.38    88.63 ± 0.52    1.19

RFS-simple   FT-Last1    56.99 ± 0.60    72.43 ± 0.32    76.27 ± 0.29    82.97 ± 1.29    84.02 ± 0.94
             HFT-Last1   58.41 ± 0.71    73.66 ± 0.51    78.85 ± 0.45    83.58 ± 0.56    85.10 ± 0.55    1.38
             FT-all      57.10 ± 0.21    72.79 ± 0.59    79.31 ± 0.28    83.01 ± 0.51    85.23 ± 0.91
             HFT-all     58.41 ± 0.71    73.66 ± 0.51    79.69 ± 0.75    83.81 ± 0.73    86.16 ± 0.42    0.86

Average gain             2.29            2.85            2.50            1.78            2.12            2.30
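For readers implementing the two baseline layer-freezing policies compared in Table 2, the sketch below shows one way to realize "FT-Last1" (fine-tune only the final layer) and "FT-all" (fine-tune every layer) in PyTorch. This is a minimal sketch under stated assumptions, not the paper's code: the attribute name `classifier` and the helper `apply_freezing_policy` are hypothetical, and the HFT schedule itself (how the hybrid strategy combines fine-tuning stages) is defined in the main text rather than reproduced here.

```python
# Minimal PyTorch sketch of the two baseline layer-freezing policies in
# Table 2. Assumption: the model exposes its final linear head as
# `model.classifier`; all names here are illustrative, not the paper's API.
import torch.nn as nn

def apply_freezing_policy(model: nn.Module, policy: str) -> None:
    if policy == "FT-all":
        # Fine-tune every parameter of the pretrained backbone.
        for p in model.parameters():
            p.requires_grad = True
    elif policy == "FT-Last1":
        # Freeze the whole backbone, then unfreeze only the last layer.
        for p in model.parameters():
            p.requires_grad = False
        for p in model.classifier.parameters():
            p.requires_grad = True
    else:
        raise ValueError(f"unknown policy: {policy!r}")
```

As a sanity check on the table, the per-pair "Average gain" can be reproduced from the reported accuracies; using the R2D2 FT-Last1 and HFT-Last1 rows:

```python
# Mean per-shot improvement of HFT-Last1 over FT-Last1 (R2D2 rows above).
ft_last1  = [50.58, 66.15, 71.07, 75.63, 76.56]
hft_last1 = [53.47, 70.13, 74.72, 79.67, 81.16]

avg_gain = sum(h - f for h, f in zip(hft_last1, ft_last1)) / len(ft_last1)
print(f"{avg_gain:.2f}")  # 3.83, matching the first "Average gain" entry
```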