Research Article

Balanced Adversarial Tight Matching for Cross-Project Defect Prediction

Table 6

AUC comparison between our method and nine baseline methods on each target project.

| Target project | LR | NN Filter | TCA | TCA+ | DBN | DPCNN | TCNN | MANN | ADA | Ours |
|----------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Ant | 0.608 | 0.632 | 0.635 | 0.612 | 0.556 | 0.616 | 0.600 | 0.632 | 0.742 | 0.800 |
| Camel | 0.528 | 0.545 | 0.540 | 0.543 | 0.524 | 0.548 | 0.549 | 0.616 | 0.632 | 0.678 |
| Forrest | 0.457 | 0.627 | 0.499 | 0.540 | 0.544 | 0.604 | 0.552 | 0.579 | 0.487 | 0.480 |
| Ivy | 0.619 | 0.620 | 0.615 | 0.611 | 0.563 | 0.620 | 0.621 | 0.635 | 0.817 | 0.869 |
| Log4j | 0.422 | 0.493 | 0.424 | 0.450 | 0.475 | 0.477 | 0.449 | 0.512 | 0.518 | 0.526 |
| Lucene | 0.477 | 0.570 | 0.574 | 0.560 | 0.533 | 0.587 | 0.579 | 0.490 | 0.631 | 0.672 |
| Poi | 0.601 | 0.599 | 0.563 | 0.551 | 0.553 | 0.589 | 0.590 | 0.616 | 0.671 | 0.705 |
| Synapse | 0.523 | 0.610 | 0.606 | 0.636 | 0.543 | 0.602 | 0.578 | 0.630 | 0.702 | 0.775 |
| Velocity | 0.546 | 0.591 | 0.577 | 0.496 | 0.541 | 0.595 | 0.576 | 0.607 | 0.655 | 0.736 |
| Xalan | 0.609 | 0.669 | 0.640 | 0.676 | 0.654 | 0.707 | 0.660 | 0.624 | 0.687 | 0.743 |
| Xerces | 0.534 | 0.594 | 0.573 | 0.561 | 0.531 | 0.576 | 0.541 | 0.631 | 0.634 | 0.689 |
| Average | 0.539 | 0.595 | 0.568 | 0.567 | 0.547 | 0.593 | 0.572 | 0.597 | 0.652 | 0.698 |
| W/T/L (win/tie/loss) | 11/0/0 | 10/0/1 | 10/0/1 | 10/0/1 | 10/0/1 | 10/0/1 | 10/0/1 | 10/0/1 | 10/0/1 | — |
| Improvement | 29.5% | 17.3% | 22.9% | 23.1% | 27.6% | 17.8% | 21.9% | 16.8% | 7.1% | — |
| p-value | 0.003 | 0.026 | 0.004 | 0.004 | 0.006 | 0.016 | 0.004 | 0.013 | 0.004 | — |
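The summary rows follow directly from the per-project AUCs: W/T/L counts the target projects on which our method wins, ties, or loses against each baseline, and Improvement is the relative gain of our average AUC over the baseline's average (e.g., versus ADA: (0.698 − 0.652)/0.652 ≈ 7.1%). The p-values are consistent with a paired significance test over the eleven projects; the sketch below is a minimal illustration that assumes a Wilcoxon signed-rank test, a common choice for such comparisons, applied to the ADA column — the paper's exact test procedure is not restated here.

```python
# Minimal sketch reproducing Table 6's summary rows from the per-project AUCs.
# The Wilcoxon signed-rank test is an assumption (a common choice for paired
# per-project comparisons); the paper's exact procedure may differ.
from scipy.stats import wilcoxon

ours = [0.800, 0.678, 0.480, 0.869, 0.526, 0.672, 0.705, 0.775, 0.736, 0.743, 0.689]
ada  = [0.742, 0.632, 0.487, 0.817, 0.518, 0.631, 0.671, 0.702, 0.655, 0.687, 0.634]

# Win/Tie/Loss of our method against the baseline, per target project.
w = sum(o > b for o, b in zip(ours, ada))
t = sum(o == b for o, b in zip(ours, ada))
l = sum(o < b for o, b in zip(ours, ada))
print(f"W/T/L: {w}/{t}/{l}")  # 10/0/1 (the single loss is Forrest)

# Relative improvement of the average AUC. Full precision gives ~6.9%;
# the table's 7.1% follows from the rounded averages (0.698 vs. 0.652).
avg_ours, avg_ada = sum(ours) / len(ours), sum(ada) / len(ada)
print(f"Improvement: {(avg_ours - avg_ada) / avg_ada:.1%}")

# Paired two-sided test over the 11 projects; the table reports p = 0.004,
# and the exact variant/correction behind that figure is not stated here.
stat, p = wilcoxon(ours, ada)
print(f"p-value: {p:.3f}")
```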