

Interpretable machine learning applications: Part 3

Coursera Project Network via Coursera

Overview

In this 50-minute, project-based course, you will learn how to apply a specific explanation technique and algorithm to predictions (classifications) made by inherently complex machine learning models such as artificial neural networks. The technique explains a prediction by retrieving cases similar to the individual for whom the explanation is sought. Since it is model-agnostic and treats the prediction model as a 'black box', the guided project can be useful for decision makers in business environments, e.g., loan officers at a bank, and for public organizations interested in using trusted machine learning applications to automate, or inform, decision-making processes.

The main learning objectives are as follows:

Learning objective 1: You will be able to define, train, and evaluate an artificial neural network classifier (a Sequential model), using Keras as an API for TensorFlow. The prediction model will be trained and tested on the HELOC dataset of approved and rejected mortgage applications. (A minimal code sketch follows these objectives.)

Learning objective 2: You will be able to generate explanations based on similar profiles for a mortgage applicant predicted as either "Good" or "Bad" risk performance.

Learning objective 3: You will be able to generate contrastive explanations based on feature values and pertinent negatives, i.e., what an applicant should change in order to turn a "rejected" application into an "approved" one. (A simplified illustration of this idea appears below.)
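
To make objective 1 concrete, here is a minimal sketch of the kind of Sequential classifier the project builds. It is illustrative only: the random arrays merely stand in for the preprocessed HELOC features (23 numeric predictors, with "Good"/"Bad" risk performance as the label), and the layer sizes and hyperparameters are assumptions rather than the course's exact settings.

    # Illustrative sketch only: random data stands in for the HELOC features,
    # and the architecture/hyperparameters are assumptions, not the course's.
    import numpy as np
    from tensorflow import keras

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 23)).astype("float32")   # 23 HELOC predictors
    y_train = keras.utils.to_categorical(rng.integers(0, 2, size=1000), 2)

    model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(23,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(2, activation="softmax"),   # "Good" vs "Bad" risk performance
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=10, batch_size=64, validation_split=0.1)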
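As for objective 3, the snippet below is not the course's contrastive-explanation algorithm: it is a deliberately simplified, from-scratch search for the smallest single-feature change that flips a toy model's decision from "rejected" to "approved", which is the intuition behind a pertinent negative. The linear scoring model and all numbers are invented for the demonstration.

    # Simplified illustration of the "pertinent negative" idea, NOT the
    # course's algorithm: find the smallest single-feature change that
    # flips a toy linear model's decision from rejected to approved.
    import numpy as np

    rng = np.random.default_rng(0)
    w, b = rng.normal(size=5), -0.2               # toy scoring model (invented)

    def approve(x):
        return float(x @ w + b) > 0               # True means "approved"

    # Construct an applicant that is guaranteed to be rejected.
    applicant = -np.sign(w) * np.abs(rng.normal(size=5))
    assert not approve(applicant)

    best = None
    for j in range(5):                            # nudge one feature at a time
        for step in np.linspace(0.1, 5.0, 50):    # increasing change magnitudes
            for sign in (+1, -1):
                candidate = applicant.copy()
                candidate[j] += sign * step
                if approve(candidate) and (best is None or step < best[2]):
                    best = (j, sign * step, step)

    if best is not None:
        j, delta, _ = best
        print(f"Smallest flip found: change feature {j} by {delta:+.2f}")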

Syllabus

  • Interpretable/Explainable machine learning applications: Part 3
• By the end of this project, you will be able to generate prototypical explanations, in the form of similar user profiles, for predictions made by an artificial neural network (ANN) as the machine learning (ML) model. Given that ANNs add considerable complexity, which makes their predictions even more difficult to explain or interpret, you will also learn how to work around this challenge and still provide explanations to end users. As a use case, we will be using mortgage applications, on the basis of the HELOC data of applicants who were accepted or rejected. We will also be using IBM’s AIX360 Protodash algorithm for this purpose (a brief code sketch follows below). For example, to explain why a loan application was rejected, a bank loan officer may argue that the number of satisfactory trades the applicant performed was low, similar to one rejected user, or that his/her debts were too high, similar to another rejected user. In this sense, the project will boost your career not only as an ML developer and modeler who can explain and justify the behaviour of complex ML models such as ANNs, but also as a decision-maker in a business environment.
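
For a sense of what the Protodash step looks like in code, here is a hedged sketch using AIX360's ProtodashExplainer. The random arrays are placeholders for the applicant's feature row and a pool of profiles sharing the predicted label, and the argument order follows the AIX360 documentation (data to explain first, prototype pool second); it is worth verifying against the installed version's docstring.

    # Hedged sketch of prototype retrieval with AIX360's Protodash.
    # Placeholder data; verify explain()'s argument order in your version.
    import numpy as np
    from aix360.algorithms.protodash import ProtodashExplainer

    rng = np.random.default_rng(0)
    X_pool = rng.normal(size=(500, 23))   # placeholder: profiles with the same predicted label
    X_query = X_pool[:1]                  # placeholder: the applicant to explain

    explainer = ProtodashExplainer()
    # Per the AIX360 docs: data to explain, dataset to select prototypes from,
    # and m, the number of prototypes to return.
    (weights, indices, _) = explainer.explain(X_query, X_pool, m=5)

    # The selected rows are the "similar profiles"; the (unnormalized) weights
    # indicate how representative each prototype is of the applicant.
    for i in np.argsort(weights)[::-1]:
        print(f"prototype row {indices[i]}, weight {weights[i]:.3f}")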

Taught by

Epaminondas Kapetanios

Reviews

4.6 rating at Coursera based on 12 ratings
