Research Article
Research on Cross-Platform Image Recommendation Model Fusing Text Information
Table 2
Benchmarking checklist for methods of image recommendation.
| References | Text information | Deep feature information | Image information |
| --- | --- | --- | --- |
| Sejal et al. [18] | (1) Recommendation is based on matching the queried text against image description text. (2) A synonym dictionary is used to expand text queries and improve recommendation. | — | — |
| Sejal et al. [19] | Recommendation is based on matching the queried text against image annotation text. | — | — |
| Zhu et al. [20] | — | (1) Recommendation is based on matching deep features of the queried image against those of candidate images. (2) Deep features are generated by a convolutional neural network. | — |
| Bo and Peng [21] | — | — | Recommendation is based on matching color histogram features or Gabor texture features of the queried image against those of candidate images. |
| Widisinghe et al. [22] | User-generated tag information is applied to collaborative filtering for top-N recommendation. | — | — |
| Our method | (1) Single image features are the image class labels generated by a convolutional neural network. (2) Single text features are keyword vectors generated by a keyword extraction method. (3) Single image and text features are fused into a fusion matrix whose elements are "image class label-keyword" pairs. | (1) Recommendation is based on matching word embeddings of the queried text against the fusion matrix. (2) The fusion matrix and the queried text are transformed into word embedding matrices using Word2Vec. | — |
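The "Our method" row of Table 2 can be read as a small pipeline: CNN-generated class labels and extracted keywords are fused into "image class label-keyword" pairs, both the pairs and the query are embedded with Word2Vec, and images are ranked by embedding similarity. The following is a minimal sketch of that idea, not the authors' implementation: the image labels, keywords, corpus, and query are hypothetical stand-ins for the paper's CNN classifier, keyword extractor, and training data, and a toy Word2Vec model is trained only for illustration.

```python
# Sketch of the fusion-and-matching step summarized in Table 2 ("Our method" row).
# All concrete values (labels, keywords, corpus, query) are hypothetical.
from itertools import product

import numpy as np
from gensim.models import Word2Vec

# (1) Image class labels, assumed to come from a CNN classifier (hypothetical values).
image_labels = {"img_001": ["dog", "grass"], "img_002": ["car", "street"]}

# (2) Text keywords, assumed to come from a keyword-extraction step (hypothetical values).
image_keywords = {"img_001": ["puppy", "park"], "img_002": ["traffic", "city"]}

# (3) Fuse single features into "image class label-keyword" pairs per image.
fusion = {
    img: list(product(image_labels[img], image_keywords[img]))
    for img in image_labels
}

# Toy Word2Vec model; the paper presumably trains on a much larger corpus.
corpus = [
    ["dog", "puppy", "park", "grass"],
    ["car", "traffic", "city", "street"],
    ["dog", "grass", "park"],
    ["car", "street", "city"],
]
w2v = Word2Vec(sentences=corpus, vector_size=32, window=3, min_count=1, seed=1, epochs=50)

def embed(tokens):
    """Average the word embeddings of the tokens known to the model."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.wv.vector_size)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(query_tokens, top_n=1):
    """Rank images by similarity between the query embedding and each image's
    fused (label, keyword) pair embeddings; keep the best-matching pair per image."""
    q = embed(query_tokens)
    scores = {
        img: max(cosine(q, embed(pair)) for pair in pairs)
        for img, pairs in fusion.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(recommend(["puppy", "park"]))  # expected to rank img_001 first
```

In this sketch the query is matched against each fused pair separately and the best pair score is kept per image; the paper's exact aggregation over the fusion matrix may differ.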