Person: Krakovna, Viktoriya
Last Name: Krakovna
First Name: Viktoriya
Name: Krakovna, Viktoriya
Search Results
Publication
Building Interpretable Models: From Bayesian Networks to Neural Networks (2016-09-13)
Krakovna, Viktoriya; Liu, Jun; Doshi-Velez, Finale; Harrington, David

This dissertation explores the design of interpretable models based on Bayesian networks, sum-product networks and neural networks. As briefly discussed in Chapter 1, it is becoming increasingly important for machine learning methods to make predictions that are interpretable as well as accurate. In many practical applications, it is of interest which features and feature interactions are relevant to the prediction task.

In Chapter 2, we develop a novel method, Selective Bayesian Forest Classifier (SBFC), that strikes a balance between predictive power and interpretability by simultaneously performing classification, feature selection, feature interaction detection and visualization. It builds parsimonious yet flexible models using tree-structured Bayesian networks, and samples an ensemble of such models using Markov chain Monte Carlo. We build in feature selection by dividing the trees into two groups according to their relevance to the outcome of interest. In Chapter 3, we show that SBFC performs competitively on classification and feature selection benchmarks in low and high dimensions, and includes a visualization tool that provides insight into relevant features and interactions. This is joint work with Prof. Jun Liu.

Sum-Product Networks (SPNs) are a class of expressive and interpretable hierarchical graphical models. In Chapter 4, we improve on LearnSPN, a standard structure learning algorithm for SPNs that uses hierarchical co-clustering to simultaneously identify similar entities and similar features. The original LearnSPN algorithm assumes that all the variables are discrete and that there is no missing data. We introduce a practical, simplified version of LearnSPN, MiniSPN, that runs faster and can handle the missing data and heterogeneous features common in real applications. We demonstrate the performance of MiniSPN on standard benchmark datasets and on two datasets from Google's Knowledge Graph that exhibit high missingness rates and a mix of discrete and continuous features. This is joint work with Moshe Looks (Google).

In Chapter 5, we turn our efforts from building interpretable models from the ground up to making neural networks more interpretable. As deep neural networks continue to revolutionize various application domains, there is increasing interest in making these powerful models more understandable and in narrowing down the causes of good and bad predictions. We focus on recurrent neural networks (RNNs), state-of-the-art models in speech recognition and translation. Our approach to increasing interpretability is to combine an RNN with a hidden Markov model (HMM), a simpler and more transparent model. We explore different combinations of RNNs and HMMs: an HMM trained on LSTM states, and a hybrid model where an HMM is trained first, then a small LSTM is given the HMM's state distributions and trained to fill in gaps in the HMM's performance. We find that the LSTM and HMM learn complementary information about the features in the text, and we also apply the hybrid models to medical time series. This is joint work with Prof. Finale Doshi-Velez.
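To give a flavor of the SBFC idea from Chapter 2, the following is a heavily simplified Python sketch, not the dissertation's implementation: it keeps only the division of features into a relevant (signal) group and an irrelevant (noise) group, scores groups with a naive Bayes likelihood instead of tree-structured Bayesian networks, and samples group assignments with Metropolis-Hastings. All function and variable names here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def group_log_lik(X, y, signal_mask):
    """Log-likelihood of binary data: signal features depend on the class
    label; noise features are modeled marginally (class-independent)."""
    ll = 0.0
    for j in range(X.shape[1]):
        if signal_mask[j]:
            for c in (0, 1):
                xc = X[y == c, j]
                p = (xc.sum() + 1) / (len(xc) + 2)  # smoothed Bernoulli
                ll += xc.sum() * np.log(p) + (len(xc) - xc.sum()) * np.log(1 - p)
        else:
            p = (X[:, j].sum() + 1) / (len(X) + 2)
            ll += X[:, j].sum() * np.log(p) + (len(X) - X[:, j].sum()) * np.log(1 - p)
    return ll

def mcmc_feature_groups(X, y, n_iter=2000):
    """Metropolis-Hastings over signal/noise assignments: propose flipping
    one feature's group, accept with probability min(1, likelihood ratio)."""
    d = X.shape[1]
    mask = rng.random(d) < 0.5
    ll = group_log_lik(X, y, mask)
    counts = np.zeros(d)                 # posterior inclusion frequencies
    for _ in range(n_iter):
        j = rng.integers(d)
        mask[j] = ~mask[j]
        ll_new = group_log_lik(X, y, mask)
        if np.log(rng.random()) < ll_new - ll:
            ll = ll_new                  # accept the flip
        else:
            mask[j] = ~mask[j]           # reject: undo
        counts += mask
    return counts / n_iter               # feature relevance estimates

# Toy data: features 0-1 carry signal, features 2-4 are noise.
y = rng.integers(0, 2, 500)
X = rng.random((500, 5)) < 0.5
X[:, 0] = rng.random(500) < np.where(y == 1, 0.8, 0.2)
X[:, 1] = rng.random(500) < np.where(y == 1, 0.7, 0.3)
print(mcmc_feature_groups(X.astype(int), y))

The inclusion frequencies returned at the end play the role of the relevance information that SBFC visualizes; the real method additionally averages over sampled tree structures, which this sketch omits entirely.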
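The LearnSPN-style recursion behind the Chapter 4 work alternates between two moves: split the variables into (approximately) independent groups to form a product node, or cluster the rows to form a sum node, bottoming out in univariate leaves. The sketch below is a minimal rendering of that recursion under stated assumptions: a crude pairwise-correlation threshold stands in for the independence test, scikit-learn's KMeans stands in for the instance clustering, and the node representation is an ad-hoc nested tuple. It is not the MiniSPN code.

import numpy as np
from sklearn.cluster import KMeans

def variable_components(X, thresh=0.1):
    """Connected components of the 'dependent' graph: variables i and j
    are linked when |corr(i, j)| exceeds thresh."""
    d = X.shape[1]
    adj = np.abs(np.corrcoef(X, rowvar=False)) > thresh
    seen, comps = set(), []
    for s in range(d):
        if s in seen:
            continue
        stack, comp = [s], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(np.flatnonzero(adj[v]))
        comps.append(sorted(comp))
    return comps

def learn_spn(X, scope, min_rows=30):
    """Recursive structure learning: product node over independent variable
    groups, else sum node over row clusters, else a Gaussian leaf."""
    if len(scope) == 1:
        return ("leaf", scope[0], X[:, 0].mean(), X[:, 0].std() + 1e-6)
    comps = variable_components(X)
    if len(comps) > 1:
        return ("product", [learn_spn(X[:, c], [scope[i] for i in c], min_rows)
                            for c in comps])
    labels = (KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
              if len(X) >= min_rows else None)
    if labels is None or len(set(labels)) < 2:
        # too few rows (or degenerate clustering): factorize fully
        return ("product", [learn_spn(X[:, [i]], [scope[i]], min_rows)
                            for i in range(X.shape[1])])
    children = [(np.mean(labels == k), learn_spn(X[labels == k], scope, min_rows))
                for k in (0, 1)]
    return ("sum", children)

# Columns 0 and 1 are coupled through z; column 2 is independent noise.
rng = np.random.default_rng(1)
z = rng.integers(0, 2, 400)
X = np.column_stack([z + rng.normal(0, 0.3, 400),
                     z + rng.normal(0, 0.3, 400),
                     rng.normal(size=400)])
spn = learn_spn(X, scope=[0, 1, 2])
print(spn[0])   # expect a product node: {0, 1} separates from {2}

Handling the missing data and heterogeneous feature types that motivate MiniSPN would require replacing the correlation test and the Gaussian leaves with type-appropriate, missingness-aware alternatives; those refinements are deliberately left out of this sketch.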
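The first RNN+HMM combination in Chapter 5, an HMM trained on LSTM states, can be sketched as follows, assuming PyTorch for the LSTM and the third-party hmmlearn library for the HMM; neither is claimed to be the dissertation's toolchain. In practice the LSTM would first be trained (e.g., as a character-level language model); here it is left untrained purely to keep the sketch self-contained.

import torch
from hmmlearn.hmm import GaussianHMM

torch.manual_seed(0)

vocab, hidden = 27, 32                      # e.g. lowercase letters + space
embed = torch.nn.Embedding(vocab, 16)
lstm = torch.nn.LSTM(16, hidden, batch_first=True)

# Toy character sequence (in practice: real text, and a trained LSTM).
tokens = torch.randint(0, vocab, (1, 500))
with torch.no_grad():
    states, _ = lstm(embed(tokens))         # shape (1, 500, hidden)
states = states.squeeze(0).numpy()

# Fit a small Gaussian HMM on the hidden-state trajectory; its discrete
# state path is a coarse, inspectable summary of the LSTM's dynamics.
hmm = GaussianHMM(n_components=5, covariance_type="diag", n_iter=25)
hmm.fit(states)
discrete_path = hmm.predict(states)         # one HMM state per time step
print(discrete_path[:40])

The second combination described in the abstract reverses the roles: the HMM is trained on the data first, and its per-step state distributions are fed as extra inputs to a small LSTM, which then only needs to model what the HMM misses.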