Ideas on Machine Learning Interpretability
Post on 06-Oct-2020
TRANSCRIPT
Ideas on Machine Learning Interpretability
Patrick Hall, Wen Phan, Sri Satish Ambati, and the H2O.ai team
Big Ideas
Learning from data …
Adapted from: Learning from Data, https://work.caltech.edu/textbook.html
EXPLAIN HYPOTHESIS
h ≈ g,   β_j g(x^(i)_j),   g(x^(i)_(-j))
(explain predictions with reason codes)
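The reason-code idea above can be sketched in its simplest setting, a linear model, where each feature's contribution β_j x^(i)_j is a candidate reason code for prediction i. The coefficients, feature names, and values below are invented for illustration:

```python
import numpy as np

# Invented coefficients and feature values for one customer.
features = ["age", "tenure", "visits"]
beta = np.array([0.8, -0.3, 0.1])   # fitted linear coefficients (assumed)
x_i = np.array([40.0, 2.0, 15.0])   # this customer's feature vector

contrib = beta * x_i                # per-feature contribution to g(x_i)
order = np.argsort(-np.abs(contrib))
reason_codes = [(features[j], float(contrib[j])) for j in order]
print(reason_codes)  # strongest reasons first
```

Sorting by absolute contribution yields the ranked reasons behind the prediction, in the spirit of credit-scoring reason codes.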
Learning from data … transparently.
Adapted from: Learning from Data, https://work.caltech.edu/textbook.html
Increasing fairness, accountability, and trust by decreasing unwanted sociological biases
Source: http://money.cnn.com/, Apple Computers
Increasing trust by quantifying prediction variance
Source: http://www.vias.org/tmdatanaleng/
A framework for interpretability
Complexity of learned functions (~ number of parameters / VC dimension):
• Linear, monotonic
• Nonlinear, monotonic
• Nonlinear, non-monotonic
Enhancing trust and understanding: the mechanisms and results of an interpretable model should be both transparent AND dependable.
Understanding ~ transparency; trust ~ fairness and accountability.
Scope of interpretability: global vs. local
Application domain: model-agnostic vs. model-specific
Big Challenges
Linear Models: strong model locality; usually stable models and explanations.
Machine Learning: weak model locality; sometimes unstable models and explanations (a.k.a. the multiplicity of good models).
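The multiplicity of good models is easy to see in a toy setting (the data and models below are invented): when features are strongly correlated, very different coefficient vectors can fit the data equally well, so their explanations disagree even though their predictions agree.

```python
import numpy as np

# Invented data with a duplicated (perfectly correlated) feature.
x1 = np.linspace(0.0, 1.0, 50)
x2 = x1.copy()
y = x1 + x2

pred_a = 2.0 * x1 + 0.0 * x2   # model A: all weight on x1
pred_b = 0.0 * x1 + 2.0 * x2   # model B: all weight on x2

# Both "good models" fit perfectly, yet explain the outcome differently.
print(np.allclose(pred_a, y), np.allclose(pred_b, y))
```

Either model is a defensible fit; which feature gets the credit is arbitrary, which is exactly why explanations from flexible learners can be unstable.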
[Figure: number of purchases vs. age, with a fitted line g(x) = 0.8x; off-line regions are annotated “Lost profits.” and “Wasted marketing.”]
“For a one-unit increase in age, the number of purchases increases by 0.8 on average.”
g(x) = 0.8x
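A minimal sketch of where such a statement comes from, using invented data: fit a one-coefficient least-squares line and read the slope as "purchases per year of age".

```python
import numpy as np

# Invented data: purchases grow by roughly 0.8 per year of age.
rng = np.random.default_rng(0)
age = np.linspace(20.0, 60.0, 50)
purchases = 0.8 * age + rng.normal(0.0, 0.5, size=age.size)

# One-coefficient least squares through the origin: g(x) = beta * x
beta = float(age @ purchases / (age @ age))
print(f"one more year of age -> about {beta:.2f} more purchases on average")
```

The single global slope is exactly what makes the linear model transparent, and also what makes it miss the lost profits and wasted marketing in the figure.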
Linear Models: exact explanations for approximate models.
Machine Learning: approximate explanations for exact models.
[Figure: number of purchases vs. age for a nonlinear model, annotated “Slope begins to decrease here. Act to optimize savings.” and “Slope begins to increase sharply here. Act to optimize profits.”]
g(x) ≈ f(x)
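One way to produce approximate, local explanations of an exact (possibly nonlinear) model is a finite-difference slope at the point of interest, which is what the slope annotations above describe. The logistic-shaped model below is an assumed stand-in:

```python
import numpy as np

# Assumed logistic-shaped response of purchases to age (a stand-in model).
def g(age):
    return 10.0 / (1.0 + np.exp(-(age - 35.0) / 5.0))

def local_slope(f, x, h=1e-4):
    """Approximate df/dx at x with a central finite difference."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Local slopes are approximate explanations of the exact model g.
print(local_slope(g, 25.0), local_slope(g, 35.0))
```

Comparing slopes at different ages shows where the response starts to accelerate, i.e., where to act to optimize profits.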
A Few of Our Favorite Things
Partial dependence plots
Source: http://statweb.stanford.edu/~tibs/ElemStatLearn/printings/ESLII_print10.pdf
HomeValue ~ MedInc + AveOccup + HouseAge + AveRooms
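Partial dependence can be computed model-agnostically by forcing one feature to each value on a grid and averaging the model's predictions over the data. A minimal sketch, with an assumed stand-in `predict` function and invented data:

```python
import numpy as np

def predict(X):
    # Stand-in black box: nonlinear in column 0, linear in column 1.
    return np.sin(X[:, 0]) + 0.5 * X[:, 1]

def partial_dependence(predict, X, j, grid):
    """Average prediction over the data with feature j forced to each grid value."""
    pd_vals = []
    for v in grid:
        Xv = X.copy()
        Xv[:, j] = v
        pd_vals.append(predict(Xv).mean())
    return np.array(pd_vals)

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(200, 2))
grid = np.linspace(-2.0, 2.0, 5)
print(partial_dependence(predict, X, 0, grid))  # traces the sine shape over the grid
```

Plotting the returned values against the grid gives the one-dimensional partial dependence curve for that feature.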
Surrogate models
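A global surrogate is a simple, interpretable model trained on a complex model's predictions rather than on the original labels. A minimal least-squares sketch, with an assumed stand-in black box:

```python
import numpy as np

def black_box(X):
    # Stand-in complex model whose behavior we want to summarize.
    return np.tanh(X[:, 0]) + 0.3 * X[:, 1] ** 2

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 2))
y_hat = black_box(X)              # surrogate trains on model output, not labels

# Linear surrogate with intercept, fit by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y_hat, rcond=None)
r2 = 1.0 - np.sum((A @ coef - y_hat) ** 2) / np.sum((y_hat - y_hat.mean()) ** 2)
print(coef, r2)  # coefficients are the explanation; R^2 measures surrogate fidelity
```

The surrogate's coefficients explain the black box only to the extent the fidelity (R²) is high, which is why fidelity should always be reported alongside the surrogate.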
Local interpretable model-agnostic explanations (LIME)
Source: https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime
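A LIME-style local explanation can be sketched without the `lime` package: perturb one instance, weight the perturbed samples by proximity, and fit a weighted linear model that is valid only near that instance. The kernel, bandwidth, and black-box model below are assumptions:

```python
import numpy as np

def black_box(X):
    # Stand-in model to be explained locally.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(3)
x0 = np.array([1.0, 0.5])                     # instance to explain
Z = x0 + rng.normal(0.0, 0.1, size=(300, 2))  # local perturbations
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.02)  # proximity kernel

# Weighted least squares: a linear model valid only near x0.
A = np.column_stack([np.ones(len(Z)), Z - x0])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], black_box(Z) * sw, rcond=None)
print(coef[1:])  # local slopes, near [cos(1.0), 2 * 0.5]
```

The fitted slopes recover the black box's local gradient at x0, which is the explanation LIME reports for that single prediction.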
Variable importance measures
Global variable importance indicates the impact of a variable on the model over the entire training dataset.
Local variable importance can indicate the impact of a variable on each decision a model makes – similar to reason codes.
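Global variable importance can be sketched with permutation importance: shuffle one feature and measure how much the model's error grows. The model and data below are invented:

```python
import numpy as np

def model(X):
    # Stand-in model: feature 0 matters far more than feature 1.
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 2))
y = model(X)

def permutation_importance(model, X, y, j, rng):
    """Increase in mean squared error after shuffling feature j."""
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    return float(np.mean((model(Xp) - y) ** 2) - np.mean((model(X) - y) ** 2))

imp = [permutation_importance(model, X, y, j, rng) for j in range(2)]
print(imp)  # global importance: feature 0 dominates
```

A local analogue would score one feature's contribution for a single row, which is how per-decision importances connect back to reason codes.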
Resources
Machine Learning Interpretability with H2O Driverless AI: https://www.h2o.ai/wp-content/uploads/2017/09/MLI.pdf (or come by the booth!)
Ideas on Interpreting Machine Learning: https://www.oreilly.com/ideas/ideas-on-interpreting-machine-learning
FAT/ML: http://www.fatml.org/
Questions?