Saturday, November 26, 2022 | 02:44 pm

Bhusan Chettri Has Just Released a Series of Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, and Their Interpretability


Dr. Bhusan Chettri, who earned his Ph.D. from Queen Mary University of London, aims to provide an overview of machine learning and AI interpretability. To that end, he has launched a tutorial series on AI, Machine Learning, Deep Learning, and Their Interpretability.

In his first tutorial, Bhusan Chettri focuses on providing an in-depth understanding of interpretable machine learning (IML) from multiple standpoints, considering different use cases and application domains and emphasizing why it is important to understand how a machine learning model that demonstrates impressive results reaches its decisions. The tutorial also discusses whether such results can be trusted for adoption in safety-critical domains such as medicine, finance, and security. Visiting the first part of this tutorial series on AI, Machine Learning, Deep Learning, and their Interpretability on his official website gives a fuller picture.

Bhusan recently published his second tutorial, which provides an overview of Interpretable Machine Learning (IML), also known as Explainable AI (XAI), in the context of safety-critical application domains such as medicine, finance, and security. The tutorial motivates the need for explanations from AI and machine learning (ML) models through two examples, and then describes some of the important criteria that any ML/AI model in a safety-critical application must satisfy for successful adoption in a real-world setting. Before getting deeper into this edition, it is worth briefly revisiting the first part of the tutorial series on AI, Machine Learning, Deep Learning, and their Interpretability.

Part 1 mainly focused on providing an overview of various aspects of AI, machine learning, data, big data, and interpretability. Data is the driving fuel behind the success of every machine learning and AI application. The first part described how vast amounts of data are generated and recorded every minute from different sources such as online transactions, sensors, video surveillance, and social media platforms like Twitter, Instagram, and Facebook. This fast-growing digital age, which produces such massive data, commonly referred to as Big Data, has been one of the key factors behind the apparent success of current AI systems across different sectors.

The tutorial also provided a brief overview of AI, machine learning, and deep learning and highlighted their relationship. Deep learning is a form of machine learning that uses an artificial neural network with more than one hidden layer to solve a problem by learning patterns from training data. Machine learning, more broadly, involves solving a problem by discovering patterns in training data without necessarily using neural networks (machine learning with neural networks is simply referred to as deep learning). AI is a general term that encompasses both. For example, a simple chess program built from a sequence of hard-coded if-else rules defined by a programmer can be regarded as AI that involves no data, i.e., there is no data-driven learning. In short, deep learning is a subset of machine learning, and machine learning is a subset of AI, as sketched below.
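To make that distinction concrete, here is a minimal, hypothetical Python sketch. It is not taken from the tutorials; the toy dataset, the hand-picked threshold, and the model choices are assumptions for illustration only. It contrasts a hard-coded rule-based "AI", a classic machine learning model, and a small deep learning model:

```python
# Illustrative sketch of the AI / machine learning / deep learning hierarchy.
# Dataset, threshold, and model choices are assumptions, not from the tutorials.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# 1. "Classic" AI: hard-coded if-else rules written by a programmer, no data involved.
def rule_based_ai(feature_value):
    if feature_value > 0.5:   # threshold chosen by hand, not learned
        return 1
    return 0

# Toy labelled data for the learning-based approaches.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# 2. Machine learning: a model (here logistic regression) that learns patterns from data.
ml_model = LogisticRegression().fit(X, y)

# 3. Deep learning: machine learning with a neural network that has hidden layers.
dl_model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                         random_state=0).fit(X, y)

print("rule-based prediction:", rule_based_ai(0.7))
print("ML training accuracy:", ml_model.score(X, y))
print("deep model training accuracy:", dl_model.score(X, y))
```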

The tutorial also briefly talked about the back-propagation algorithm, the engine behind training neural networks and deep learning models. Finally, it provided a basic overview of IML, stressing its importance for understanding how a model arrives at a particular outcome. It also briefly discussed a post-hoc IML framework (one that takes a pre-trained model and explains its behavior), showcasing an ideal scenario with a human in the loop who makes the final decision about whether to accept or reject a model's prediction.
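As a rough illustration of the idea (this is not code from the tutorial; the data, network shapes, and learning rate are arbitrary assumptions), the sketch below trains a tiny one-hidden-layer network with NumPy, where the backward pass shows what back-propagation computes:

```python
# Minimal NumPy sketch of back-propagation for a one-hidden-layer network.
# All sizes, data, and the learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))             # 8 toy examples, 3 input features
y = rng.integers(0, 2, size=(8, 1))     # binary targets

W1 = rng.normal(scale=0.1, size=(3, 4)) # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(4, 1)) # hidden -> output weights
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: hidden activations, then the prediction.
    h = sigmoid(X @ W1)
    y_hat = sigmoid(h @ W2)

    # Backward pass: propagate the error gradient from the output back to the input layer.
    d_out = (y_hat - y) * y_hat * (1 - y_hat)   # gradient at the output (squared-error loss)
    d_hid = (d_out @ W2.T) * h * (1 - h)        # gradient pushed back through the hidden layer

    # Gradient-descent update of both weight matrices.
    W2 -= lr * (h.T @ d_out)
    W1 -= lr * (X.T @ d_hid)
```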

In a recent tutorial, Bhusan Chettri provided insight on XAI and IML in safety-critical application domains such as medicine, finance, and security, where the deployment of ML or AI requires satisfying certain criteria (such as fairness, trustworthiness, and reliability). To that end, Dr. Bhusan Chettri, who earned his Ph.D. in Machine Learning and AI for Voice Technology from QMUL, London, described why interpretability is needed for today's state-of-the-art ML models whose impressive results are governed by a single evaluation metric (for example, classification accuracy). Bhusan Chettri elaborates on this with two simple use cases of AI systems: wildlife monitoring (a dog vs. wolf detector) and automatic tuberculosis detection. He further detailed how biases in training data can prevent models from being adopted in real-world scenarios, and why understanding the training data and performing initial exploratory data analysis is equally crucial to ensure that models behave reliably when deployed. The next edition of this series will discuss different taxonomies of interpretable machine learning, as well as various methods of opening the black box to explain the behavior of ML models. Stay tuned to his website for more updates on explainable AI.
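As a rough illustration of the kind of exploratory data check described above (this snippet is hypothetical and not part of the tutorial; the column names and values are invented), one can cross-tabulate labels against a background attribute to see whether a dog-vs-wolf classifier could succeed simply by recognising snow:

```python
# Hypothetical exploratory check for a spurious label/background correlation.
# The metadata below is invented for illustration only.
import pandas as pd

df = pd.DataFrame({
    "label":      ["wolf", "wolf", "wolf", "dog", "dog", "dog", "dog", "wolf"],
    "background": ["snow", "snow", "snow", "grass", "grass", "grass", "snow", "snow"],
})

# Class balance: a heavily skewed label distribution is itself a red flag.
print(df["label"].value_counts(normalize=True))

# Cross-tabulate label against background: if almost every wolf photo is on snow,
# a classifier can score well by recognising snow rather than wolves.
print(pd.crosstab(df["label"], df["background"], normalize="index"))
```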
