Research Article

Balanced Adversarial Tight Matching for Cross-Project Defect Prediction

Table 5

Balanced accuracy of the proposed method ("Ours") and nine baseline methods on eleven target projects.

| Target project | LR | NNFilter | TCA | TCA+ | DBN | DPCNN | TCNN | MANN | ADA | Ours |
| -------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| Ant | 0.547 | 0.589 | 0.590 | 0.562 | 0.525 | 0.558 | 0.543 | 0.620 | 0.772 | 0.873 |
| Camel | 0.507 | 0.531 | 0.512 | 0.516 | 0.509 | 0.515 | 0.511 | 0.606 | 0.692 | 0.739 |
| Forrest | 0.516 | 0.484 | 0.454 | 0.399 | 0.469 | 0.485 | 0.449 | 0.561 | 0.445 | 0.403 |
| Ivy | 0.511 | 0.562 | 0.533 | 0.544 | 0.508 | 0.540 | 0.537 | 0.586 | 0.743 | 0.896 |
| Log4j | 0.481 | 0.486 | 0.507 | 0.501 | 0.503 | 0.506 | 0.536 | 0.461 | 0.440 | 0.380 |
| Lucene | 0.571 | 0.564 | 0.575 | 0.544 | 0.536 | 0.591 | 0.582 | 0.536 | 0.682 | 0.711 |
| Poi | 0.521 | 0.595 | 0.554 | 0.543 | 0.555 | 0.604 | 0.602 | 0.600 | 0.704 | 0.745 |
| Synapse | 0.524 | 0.598 | 0.584 | 0.620 | 0.531 | 0.574 | 0.553 | 0.640 | 0.753 | 0.818 |
| Velocity | 0.523 | 0.585 | 0.560 | 0.548 | 0.529 | 0.571 | 0.549 | 0.603 | 0.626 | 0.780 |
| Xalan | 0.438 | 0.439 | 0.481 | 0.493 | 0.488 | 0.514 | 0.521 | 0.504 | 0.651 | 0.796 |
| Xerces | 0.539 | 0.581 | 0.563 | 0.540 | 0.533 | 0.581 | 0.553 | 0.506 | 0.634 | 0.712 |
| Average | 0.516 | 0.547 | 0.538 | 0.528 | 0.517 | 0.549 | 0.540 | 0.566 | 0.649 | 0.714 |
| W/T/L (win/tie/loss) | 9/0/2 | 9/0/2 | 9/0/2 | 10/0/1 | 9/0/2 | 9/0/2 | 9/0/2 | 9/0/2 | 9/0/2 | — |
| Improvement | 38.3% | 30.6% | 32.8% | 35.2% | 38.1% | 30.1% | 32.3% | 26.2% | 10.1% | — |
| p-Value | 0.008 | 0.008 | 0.008 | 0.006 | 0.008 | 0.010 | 0.013 | 0.013 | 0.026 | — |
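For reference, balanced accuracy is the mean of the true-positive rate and the true-negative rate, which compensates for the class imbalance typical of defect datasets (few defective modules, many clean ones). The sketch below shows this computation on hypothetical toy labels (the labels and values are illustrative, not drawn from the experiments above); it is equivalent to `sklearn.metrics.balanced_accuracy_score` for binary labels. The second print illustrates how a "Improvement" entry is plausibly derived from the "Average" row, as the relative gain of Ours over a baseline; small discrepancies against the table can arise from rounding of the averages.

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy: mean of true-positive rate and true-negative rate."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    tpr = tp / pos if pos else 0.0  # recall on the defective class
    tnr = tn / neg if neg else 0.0  # recall on the clean class
    return (tpr + tnr) / 2

# Hypothetical toy labels: defective = 1, clean = 0 (illustrative only).
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1]
print(round(balanced_accuracy(y_true, y_pred), 3))  # 0.667

# Relative improvement of Ours over LR, from the "Average" row
# (0.714 vs. 0.516); the table reports 38.3%, the rounded averages give:
print(round((0.714 - 0.516) / 0.516 * 100, 1))  # 38.4
```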