Research Article
Leveraging Pretrained Language Models for Enhanced Entity Matching: A Comprehensive Study of Fine-Tuning and Prompt Learning Paradigms
Table 3
F1 scores of different EM models on structured datasets. ΔF1 is the absolute improvement of our model over the best-performing baseline on each dataset.
| Datasets | DeepER | Magellan | DeepMatcher | MCA | Our model | ΔF1 |
| --- | --- | --- | --- | --- | --- | --- |
| BeerAdvo-RateBeer | 72.7 | 78.8 | 72.7 | 80.0 | 86.7 | +6.7 |
| iTunes-Amazon1 | — | 91.2 | 88.0 | — | 92.9 | +1.7 |
| DBLP-ACM1 | 97.6 | 98.4 | 98.4 | 98.9 | 99.2 | +0.3 |
| DBLP-Scholar1 | 92.3 | 92.3 | 94.7 | 95.2 | 95.7 | +0.4 |
| Amazon-Google | 62.1 | 49.1 | 69.3 | 71.4 | 76.2 | +4.8 |
| Walmart-Amazon1 | 39.0 | 71.9 | 66.9 | 74.7 | 84.1 | +9.4 |
| Abt-Buy | 36.1 | 43.6 | 62.8 | 69.3 | 80.0 | +10.7 |
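For reference, the F1 metric reported in Table 3 treats entity matching as binary classification over candidate record pairs (1 = match, 0 = non-match) and takes the harmonic mean of precision and recall on the predicted matches. The sketch below shows the standard computation; the sample labels are illustrative and not drawn from any dataset in the table.

```python
def f1_score(gold, pred):
    """F1 over binary match labels (1 = match, 0 = non-match)."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative labels: 3 true matches, 2 found, 1 spurious prediction.
gold = [1, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 1, 0]
print(round(f1_score(gold, pred) * 100, 1))  # precision 2/3, recall 2/3 → 66.7
```

Scores in the table are F1 × 100, so e.g. a reported 86.7 corresponds to an F1 of 0.867 on that dataset's test pairs.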