TRANSCRIPT
Interacting with an Inferred World: The Challenge of Machine Learning for Humane Computer Interaction (Aarhus 2015)
- Alan F. Blackwell
Presented by Minjun Kim, Fall 2016
Alan Blackwell
• Visual Representation • End-User Development • Interdisciplinary Design • Tangible, Augmented and Embodied Interaction • Psychology of Programming • Computer Music • Critical Theory
1975, 1985, 1995, 2005: the decennial Aarhus conferences have traditionally been instrumental in setting new agendas for critically engaged thinking about information technology. The conference series is fundamentally interdisciplinary and emphasizes thinking that is firmly anchored in action, intervention, and scholarly critical practice.
Aarhus Conference
Summary
1. Classic theories of user interaction have been framed in relation to symbolic models of planning and problem solving. But…
2. The behavior of modern machine-learning systems is determined by statistical models of the world rather than by explicit symbolic descriptions. Therefore…
3. We must explore the ways in which this new generation of technology raises fresh challenges for the critical evaluation of interactive systems — Humane Interaction
Presentation Contents
1. Background
2. The New Critical Landscape
3. Case Study to Critical Questions
4. Towards Humane Interaction
5. Conclusion
Background
“Good Old-Fashioned AI” and Human Computer Interaction
“GOFAI has long had a problematic relationship with HCI — as a kind of quarrelsome sibling”
• Both fields brought together knowledge from Psychology and Computer Science • In the early days of HCI, it was difficult to distinguish HCI from AI or Cognitive Science
Background
Expert Systems Boom of the 1980s and Critical Reactions
The possibility of Strong AI vs. the critique that symbolic problem-solving algorithms neglect issues central to HCI:
• social context
• physical embodiment
• action in the world
(as argued by Winograd, Flores, Gill, and Suchman)
Situated Cognition: the failure of formal computational models of planning and action to deal with the complexity of the real world
The Critical Landscape
“Good Old-Fashioned AI” vs. Modern Machine Learning
Critiques of GOFAI:
• symbols were not grounded
• cognition was not situated
• no interaction with social context
How ML answers them:
• ML systems operate purely on ‘grounded’ data
• ‘cognition’ is based wholly on information collected from the real world
• ML systems interact with their social context through data — e.g., SNS data
The Critical Landscape
Turing Tests
“What if the human and computer cannot be distinguished because the human has become too much like a computer?”
Background
Breiman and the ‘Two Cultures’ of Statistical Modeling
1. The traditional practice of data modeling, which favors interpretability
2. ML techniques in which the model is inferred directly from data, favoring predictive accuracy over interpretability
Against Occam’s Razor: “The models that best emulate nature in terms of predictive accuracy are also the most complex and inscrutable”
Case Study: Reading the Mind
Reconstructing visual experiences from brain activity — Jack Gallant
https://www.youtube.com/watch?v=nsjDnYxJ0bo
A blurred average of the 100 film-library clips that most closely fit the observed fMRI signal
Critical Questions
Question 1: Authorship
The Behavior of ML systems is derived from data (through a statistical model)
Statistical models as an index of the content — e.g., the Library of Babel: a library that contains every possible book that could be written in an alphabet of 25 characters. This is possible right now.
Who makes the data? Is every digital citizen an ‘author’ of their own identity?
Critical Questions
Question 2: Attribution
Content of the original material captured in an ML model or index should still be traceable to its authors. Digital copyright?
Counter-example: the EDM music industry, where source material is sampled, chopped, and mashed into a new song.
In symbolic systems, the user can apply a semiotic reading in which the user interface acts as the ‘designer’s deputy’. If the system behavior is instead encoded in a statistical model, this humane foundation of the semiotic system is undermined.
Critical Questions
Question 3: Reward
“If you are not paying for it, you’re not the customer; you’re the product being sold”
Ecosystem Players (Apple, Google, Facebook, Microsoft) are attempting to establish their control through a combination of storage, behavior, and authentication services that are starting to rely on indexed models of other people’s data
“The primary mechanism of control over users comes through statistical index models that are not currently inspected or regulated”
Critical Questions
Question 4: Self-Determination
How do ML-based systems affect two aspects of self-determination?
1. Sense of agency (“being in control of one’s own actions”):
• system behavior becomes perversely more difficult for the user to predict
• some classes of users may be excluded from opportunities to control the system (e.g., Kinect)
• the user submits to a comparison against the statistical mean (“regression to the mean”)
2. Construction of identity (“the construction of one’s personal identity”), as in the narratives of digital media / SNS:
• the behavior of these systems becomes a key component of self-determination
• users “curate their lives”, but what about the moments they don’t want included?
Critical Questions
Question 5: Designing for Control
If a machine-learning-based system is wrongly trained, how do we “fix” it? The only remedy is to “re-train” it with more correct inputs.
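The idea of “fixing” a wrongly trained system only by re-training can be made concrete with a minimal sketch. This is not from the paper: the nearest-centroid classifier, the toy points, and the labels are all invented for illustration. The point is that the model itself is never edited; only the inputs are corrected and the model re-inferred.

```python
# Hypothetical sketch: "fixing" a wrongly trained model by re-training it
# on corrected inputs, using a toy nearest-centroid classifier.
def train(points, labels):
    # Compute the mean (centroid) of the points in each class.
    sums, counts = {}, {}
    for (x, y), lab in zip(points, labels):
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(model, point):
    # Choose the class whose centroid is nearest (squared Euclidean distance).
    return min(model,
               key=lambda lab: (model[lab][0] - point[0]) ** 2 +
                               (model[lab][1] - point[1]) ** 2)

points = [(0, 0), (0, 1), (5, 5), (5, 6)]
wrong_labels = ["table", "table", "chair", "chair"]  # labels swapped in error
fixed_labels = ["chair", "chair", "table", "table"]  # corrected inputs

bad = train(points, wrong_labels)
good = train(points, fixed_labels)
print(predict(bad, (0, 0)))   # the wrongly trained model answers "table"
print(predict(good, (0, 0)))  # after re-training it answers "chair"
```

There is no way to reach into `bad` and repair a single decision: the behavior is a statistical summary of the training data, so control is exercised through data, not through the model.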
Towards Humane Interaction
Features
Many very small features are often a reliable basis for inferred classification models. But the result is that it becomes difficult to account for decisions in a manner recognizable to humans, where:
• judgements are made in relation to sets of features, and
• accountability for a judgement is achieved by reference to those features.
“How might a machine vision system recognize a chair?” (How many legs does it have? Do people sit on it? etc.)
The semiotic structure of interaction with inferred worlds can only be well designed if feature encodings are integrated into that structure.
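The human style of accountability described above can be sketched in code. This is an invented toy (the feature names, thresholds, and the two-of-three rule are all assumptions, not anything from the paper): a judgement made from a small set of named features can return, alongside its decision, the features that supported it.

```python
# Hypothetical sketch: accountability via explicit, named features.
# The judgement ("is this a chair?") is made in relation to a set of
# features, and the system can account for its decision by reference
# to those same features.
def classify_chair(obj):
    features = {
        "has_legs": obj.get("legs", 0) >= 3,
        "has_seat": obj.get("seat", False),
        "sittable_height": 0.3 <= obj.get("height_m", 0) <= 0.7,
    }
    is_chair = sum(features.values()) >= 2  # arbitrary two-of-three rule
    # The "account": which features drove the judgement.
    account = [name for name, present in features.items() if present]
    return is_chair, account

stool = {"legs": 3, "seat": True, "height_m": 0.45}
print(classify_chair(stool))
# → (True, ['has_legs', 'has_seat', 'sittable_height'])
```

An inferred model built from thousands of tiny pixel-level features makes the same kind of judgement, but there is no human-recognizable `account` list to hand back: that is the design problem.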
Towards Humane Interaction
Labeling
The inferred model, however complex, is essentially a summary of expert judgements
• ‘ground truth’ implies a degree of objectivity (which may or may not be justified)
• experts may approach labeling differently from ordinary users
• what about Amazon Mechanical Turk? — a risk of cultural imperialism
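The claim that the model is a summary of human judgements is easy to see in how crowd labels are typically aggregated. A minimal sketch, with invented data (the majority-vote scheme is a common practice, not something the paper specifies): the “ground truth” that reaches the model is a collapsed vote, and the disagreement between labelers disappears unless it is deliberately kept.

```python
# Hypothetical sketch: 'ground truth' as a summary of human judgements.
# Crowd labels (e.g., from a platform like Mechanical Turk) are commonly
# collapsed by majority vote, which hides disagreement between labelers.
from collections import Counter

def aggregate(labels):
    (winner, votes), = Counter(labels).most_common(1)
    agreement = votes / len(labels)
    return winner, agreement  # the summary, and how contested it was

# Three labelers disagree about the same image; the model will only ever
# see "chair", unless the 2/3 agreement score is carried along with it.
print(aggregate(["chair", "chair", "stool"]))
```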
Towards Humane Interaction
Confidence and Errors
A 99% likelihood vs. a 5% error rate
Problems:
• many inferred judgements obscure their varying degrees of confidence
• an action based on a 51% likelihood may be more beneficial to the user than one based on a 99% likelihood
Confidence should be offered to the user as a choice. The user’s experience of models should be shaped by the consequences of errors, not by how often they occur.
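The 51%-vs-99% point can be made concrete with a small decision-theoretic sketch. Everything here is an illustrative assumption (the `decide` function, the cost numbers, and both scenarios are invented): instead of hiding confidence behind a hard decision, the system exposes the likelihood and lets the consequences of each kind of error set the threshold.

```python
# Hypothetical sketch: confidence as a choice, weighted by consequences.
# Act when the expected cost of acting (a false alarm with probability
# 1 - likelihood) is lower than the expected cost of not acting
# (a miss with probability equal to the likelihood).
def decide(likelihood, cost_false_alarm, cost_miss):
    return (1 - likelihood) * cost_false_alarm < likelihood * cost_miss

# A 51%-likelihood smoke alarm should still ring: a miss is catastrophic.
print(decide(0.51, cost_false_alarm=1, cost_miss=100))   # True
# A 99%-likelihood "spam" auto-delete might still be withheld: a miss is
# cheap, but a false alarm destroys the user's mail.
print(decide(0.99, cost_false_alarm=500, cost_miss=1))   # False
```

The same likelihood leads to opposite actions depending on the error's consequences, which is exactly why a bare accuracy figure is the wrong thing to surface.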
Towards Humane Interaction
Deep Learning
Challenges:
1. It is difficult for a deep learning algorithm to gain information about the world that is unmediated by features of one kind or another.
2. If the judgements are not made by humans, they must be obtained from another source.
Critical questions:
1. What is the ontological status of the model world in which the deep learning system acquires its competence?
2. What are the technical channels by which data is obtained?
3. In what ways does each of these differ from the social and embodied perceptions of human observers?
Conclusion
1. Classic theories of user interaction have been framed in relation to symbolic models of planning and problem solving. But…
2. The behavior of modern machine-learning systems is determined by statistical models of the world rather than by explicit symbolic descriptions. Therefore…
3. We must explore the ways in which this new generation of technology raises fresh challenges for the critical evaluation of interactive systems — Humane Interaction, by way of:
1. Features 2. Labeling 3. Confidence 4. Errors 5. Deep Learning (machine-based judgement)