Performance evaluation for learning systems
If you have a question about this talk, please contact Pietro Lio.

In this talk we review the need to revisit performance evaluation in machine learning, since the existing mainstream options (accuracy, F-measure, MSE/MAE) provide too narrow an insight into method performance. Topics discussed include the interpretation and modelling of data in a dataset, covering multiple ground truths, equivalence classes, and an area-based interpretation of the input population. A later part of the talk reviews alternatives to accuracy and the F-measure in the literature, mostly leaning towards the inclusion of explainability or trustworthiness.

This talk is part of the Foundation AI series.
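Although not part of the talk itself, a minimal sketch (with hypothetical data) illustrates the narrowness the abstract points to: on imbalanced data, a degenerate classifier can score high accuracy while the F-measure exposes its failure.

```python
# Illustrative only: accuracy vs. F1 on imbalanced data, in plain Python.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0  # no true positives: precision/recall are both zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical dataset: 95 negatives, 5 positives,
# and a classifier that always predicts the majority class.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy(y_true, y_pred))  # 0.95 -- looks strong
print(f1(y_true, y_pred))        # 0.0  -- reveals the failure
```

The gap between the two numbers is the kind of "narrow insight" the talk sets out to address: a single scalar can hide that the classifier never detects the minority class.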