Research Article
Leveraging Pretrained Language Models for Enhanced Entity Matching: A Comprehensive Study of Fine-Tuning and Prompt Learning Paradigms
Table 4
F1 scores (%) of different EM models on the dirty datasets.
| Datasets | DeepER | Magellan | DeepMatcher | MCA | Our model | ΔF1 |
| --- | --- | --- | --- | --- | --- | --- |
| iTunes-Amazon2 | — | 46.8 | 79.4 | — | 92.6 | +13.4 |
| DBLP-ACM2 | 94.9 | 91.9 | 98.1 | 98.5 | 99.4 | +0.9 |
| DBLP-Scholar2 | 92.3 | 92.3 | 94.7 | 95.2 | 95.3 | +0.1 |
| Walmart-Amazon2 | 33.6 | 37.4 | 59.2 | 74.7 | 84.6 | +9.9 |
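For reference, the F1 score reported in Table 4 is the harmonic mean of precision and recall over predicted matches. The sketch below (hypothetical helper names, not from the paper) shows how such a score would be computed from a model's match/non-match predictions:

```python
def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """Compute F1 from match-prediction counts.

    precision = TP / (TP + FP), recall = TP / (TP + FN),
    F1 = 2 * precision * recall / (precision + recall).
    """
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only, not taken from any dataset in the table:
# 90 correct matches, 10 spurious matches, 20 missed matches.
score = f1_score(true_positives=90, false_positives=10, false_negatives=20)
print(f"F1 = {score * 100:.1f}")  # reported as a percentage, as in Table 4
```

A model that misses many true matches (high false negatives, as is common on dirty data with misplaced attribute values) is penalized through the recall term, which is why F1 rather than accuracy is the standard EM metric.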