Research Article

EAWNet: An Edge Attention-Wise Objector for Real-Time Visual Internet of Things

Table 1: Average precision (%) per category on the DOTA dataset.

Our model is attention-wise. LRF also uses learnable strategies and is lighter and faster; however, its accuracy is much lower than EAWNet's. CenterMask learns efficiently by searching from the object center and achieves balanced performance. EAWNet, however, shows a larger improvement in its learning strategy (adding the attention module and rotated tight bounding boxes yields significant progress over the reconstructed deep learning models) and achieves results that compare favorably with similar algorithms such as LRF, RFBNet, CenterMask, EfficientDet, and YOLOv4. We conclude that EAWNet outperforms most existing methods in terms of both accuracy and speed. The per-category average precision on DOTA shows that our model also performs well on unbalanced and anomalous data categories.
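The paper does not spell out the internals of the attention module here, but the general mechanism it refers to can be sketched as a squeeze-and-excitation-style channel attention block. The sketch below is purely illustrative: the function name, the reduction ratio, and the random weights are stand-ins for EAWNet's learned parameters, not the authors' implementation.

```python
import numpy as np

def channel_attention(feat, reduction=4, seed=0):
    """Illustrative channel attention over a (C, H, W) feature map.

    NOTE: weights are random placeholders for learned parameters; this is a
    sketch of the generic attention mechanism, not EAWNet's exact module.
    """
    rng = np.random.default_rng(seed)
    c, h, w = feat.shape
    # Squeeze: global average pooling collapses each channel to one scalar.
    z = feat.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: bottleneck MLP produces one score per channel.
    w1 = rng.standard_normal((c, c // reduction))
    w2 = rng.standard_normal((c // reduction, c))
    s = np.maximum(z @ w1, 0.0) @ w2                # ReLU, then projection back
    gate = 1.0 / (1.0 + np.exp(-s))                 # sigmoid gate in (0, 1)
    # Reweight: scale each channel map by its attention score.
    return feat * gate[:, None, None]
```

The gate lies in (0, 1), so the block can only suppress or pass channels; a residual connection is typically added around it in practice.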

Method             PL     BD     BR     GTF    SV     LV     SH     TC     BC     ST     SBF    RA     HA
RFBNet [20]        40.57  10.21   1.68  14.12   1.32   1.43   2.19  17.22  28.57  10.34  28.26  10.11   4.12
LRF [15]           40.59  21.29  37.74  24.20   9.93   2.19   5.86  45.44  39.45  35.72  17.22  38.73  48.34
CenterMask [21]    90.60  81.97   6.57  67.08  71.12  79.66  79.16  91.81  86.26  85.42  62.91  64.77  69.12
EfficientDet [22]  90.02  82.31  47.11  72.86  72.96  78.34  80.54  91.96  85.14  85.62  57.69  62.13  65.25
YOLOv4 [19]        91.13  82.13  50.28  72.64  72.78  80.43  80.47  91.89  85.76  85.73  60.12  62.64  68.09
EAWNet             90.08  86.56  54.01  74.94  76.75  82.52  81.32  91.83  87.96  86.34  65.14  61.85  70.17

(PL: plane, BD: baseball diamond, BR: bridge, GTF: ground track field, SV: small vehicle, LV: large vehicle, SH: ship, TC: tennis court, BC: basketball court, ST: storage tank, SBF: soccer-ball field, RA: roundabout, HA: harbor.)
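The table reports average precision per category; the overall mAP is simply the unweighted mean of those values. As a quick sanity check, the snippet below computes the mean of the EAWNet row above (the variable name and the rounding to two decimals are our choices, not the paper's).

```python
# Per-category AP values (%) for EAWNet, read from Table 1 (columns PL..HA).
eawnet_ap = [90.08, 86.56, 54.01, 74.94, 76.75, 82.52, 81.32,
             91.83, 87.96, 86.34, 65.14, 61.85, 70.17]

def mean_ap(per_category_ap):
    """Mean average precision: the unweighted mean over category APs."""
    return sum(per_category_ap) / len(per_category_ap)

print(round(mean_ap(eawnet_ap), 2))  # mean of the 13 category APs
```

Categories with low AP across all methods (e.g. BR and SBF) pull this mean down, which is why the text highlights per-category behavior on unbalanced classes rather than the aggregate alone.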