| Quality assessment criteria | "Yes" scores (%) |
|---|---|
| Computational approaches' bias | |
| 1. Was the method adequately used/developed and described for the tissue behavior involved? | 82 |
| 2. Was verification of the used/developed method well performed? | 76 |
| 3. Was validation of the used/developed method performed systematically? | 89 |
| 4. Did the method actually satisfy the real-time constraints? | 65 |
| Interaction devices' bias | |
| 5. Were the devices well selected for the system? | 49 |
| 6. Was the device accuracy adequate for the real-time constraints? | 47 |
| 7. Was the device easy enough to use in routine clinical practice? | 47 |
| 8. Was the device price suitable for a clinical setting? | 47 |
| System architectures' bias | |
| 9. Was the system adequately described? | 65 |
| 10. Was the system developed with the participation of the end users? | 15 |
| 11. Was the system scalable? | 53 |
| 12. Were the system frameworks adequately selected for implementing the system of interest? | 45 |
| Clinical applications' bias | |
| 13. Was the study adequately validated with in vitro data? | 33 |
| 14. Was the study adequately validated with in vivo data? | 13 |
| 15. Was the study adequately validated with patient data? | 18 |
| 16. Was the level of validation suitable for translating the outcomes into routine clinical practice? | 29 |
| 17. Was user acceptability assessed for patients? | 4 |
| 18. Was user acceptability assessed for clinical experts? | 7 |