Quantitative Analysis of Machine Learning Model Performance and the Need to Consider Explainability
December 30 @ 9:00 pm - 11:00 pm CST
Free Registration (with a Zoom account; you can get one for free if you don't already have it):
https://sjsu.zoom.us/meeting/register/tZcsc-CoqjwpG9aPDHfg6Axqvn90i4uQRmqr
Synopsis:
For a long time, the AI/ML community relied on traditional evaluation metrics such as the confusion matrix, accuracy, precision, and recall for assessing the performance of machine learning models. However, the rapidly evolving field has been raising several ethical concerns, which call for a more comprehensive evaluation scheme. In easy-to-understand language, this talk will delve into the quantitative analysis of model performance, emphasizing the critical importance of explainability. As ML models become increasingly complex and pervasive, understanding their decision-making processes is paramount. We'll explore various performance metrics, their limitations, and the growing need for transparency. Topics covered include Cohen's kappa statistic, the Matthews correlation coefficient (MCC), the confusion matrix, precision, recall, G-measure, the ROC curve, Youden's J statistic, Type II adversarial attacks, R-squared, LIME, SHAP, and more.
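
As a minimal illustrative sketch (not part of the talk material), the snippet below uses scikit-learn and a small made-up imbalanced label set to show how some of the listed metrics can disagree: plain accuracy looks strong while recall, Cohen's kappa, and MCC expose a weak classifier. The labels and predictions are hypothetical, chosen only to make the contrast visible.

# Sketch: accuracy vs. kappa/MCC on a hypothetical imbalanced problem.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             cohen_kappa_score, matthews_corrcoef,
                             confusion_matrix)

# Hypothetical data: 90 negatives, 10 positives; the "model" predicts
# negative almost always and catches only 2 of the 10 positives.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 90 + [0] * 8 + [1] * 2

print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
print("Accuracy :", accuracy_score(y_true, y_pred))     # 0.92, looks fine
print("Precision:", precision_score(y_true, y_pred))    # 1.00
print("Recall   :", recall_score(y_true, y_pred))       # 0.20, poor
print("Kappa    :", cohen_kappa_score(y_true, y_pred))  # ~0.31
print("MCC      :", matthews_corrcoef(y_true, y_pred))  # ~0.43

Metrics such as kappa and MCC account for chance agreement and all four cells of the confusion matrix, which is one reason they are often preferred to raw accuracy on imbalanced data.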
Speaker(s): Dr. Vishnu S. Pendyala
Virtual: https://events.vtools.ieee.org/m/442073