Chapter 4
Methods of Inference
知識推論法
4. 知識推論法 S.S. Tseng & G.J. Hwang 2
4.1 Deduction and Induction (演繹與歸納)
• Deduction (演繹): Logical reasoning in which conclusions must follow from their premises.
• Induction (歸納): Inference from the specific case to the general.
• Intuition (直觀): No proven theory.
• Heuristics (啟發): Rules of thumb (觀測法) based upon experience.
• Generate and test: Trial and error.
• Abduction (反推): Reasoning back from a true conclusion to the premises that may have caused the conclusion.
• Autoepistemic (自覺、本能): Self-knowledge.
• Nonmonotonic (應變知識): Previous knowledge may be incorrect when new evidence is obtained.
• Analogy (類推): Based on the similarities to another situation.
Syllogism (三段論)
• A syllogism (三段論) is a simple, well-understood branch of logic that can be completely proven.
– Premise (前提): Anyone who can program is intelligent.
– Premise (前提): John can program.
– Conclusion (結論): Therefore, John is intelligent.
• In general, a syllogism is any valid deductive argument having two premises and a conclusion.
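The deductive step in the John example can be mechanized. A minimal sketch, assuming a made-up encoding of unary predicates as (predicate, subject) pairs:

```python
# Premise (rule): anyone who can program is intelligent.
# Premise (fact): John can program.
# Deduction derives: John is intelligent.  The encoding is illustrative.
facts = {("can_program", "John")}
rules = [("can_program", "intelligent")]  # if x can program, x is intelligent

derived = set(facts)
for condition, conclusion in rules:
    for pred, subject in list(derived):
        if pred == condition:
            derived.add((conclusion, subject))

print(("intelligent", "John") in derived)  # True: the conclusion follows
```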
Categorical Syllogism( 定言三段論 )
形態 (Form) | 概要 (Schema) | 意思 (Meaning)
A | 所有 S 為 P (All S is P) | 完全肯定 (Universal affirmative)
E | 沒有 S 為 P (No S is P) | 完全否定 (Universal negative)
I | 某些 S 為 P (Some S is P) | 部分肯定 (Particular affirmative)
O | 某些 S 不為 P (Some S is not P) | 部分否定 (Particular negative)

定言命題的型態 (The forms of categorical statements)
• Standard form (標準形態) of a syllogism:
– 大前提 (Major premise): All M is P (所有 M 為 P)
– 小前提 (Minor premise): All S is M (所有 S 為 M)
– 結論 (Conclusion): All S is P (所有 S 為 P)
– P is the predicate (謂詞) of the conclusion, also called the major term (大詞).
– S is the subject (主詞) of the conclusion, also called the minor term (小詞).
– The premise containing the major term is the major premise (大前提).
– The premise containing the minor term is the minor premise (小前提).
– M is the middle term (中詞).
Mood (模式)
• Patterns of categorical statements
• The four AAA moods:

 | Figure-1 | Figure-2 | Figure-3 | Figure-4
大前提 (Major premise) | MP | PM | MP | PM
小前提 (Minor premise) | SM | SM | MS | MS
形態 (Form) | AAA-1 | AAA-2 | AAA-3 | AAA-4
• ex: AAA-1
– All M is P (所有 M 為 P)
– All S is M (所有 S 為 M)
– ∴ All S is P (所有 S 為 P)
• We use a decision procedure (決策程序) to prove the validity of a syllogistic argument.
• The decision procedure for syllogisms can be done using Venn diagrams (維思圖).
• ex: Decision procedure for syllogism AEE-1
– All M is P (所有 M 為 P)
– No S is M (沒有 S 為 M)
– ∴ No S is P (沒有 S 為 P)
[Venn diagrams over circles S, P, and M: (a) the initial diagram (維思圖), (b) after the major premise (大前提後), (c) after the minor premise (小前提後).]
General rules under "some" quantifiers
1. If a class is empty, it is shaded.
2. Universal statements, A and E, are always drawn before particular ones.
3. If a class has at least one member, mark it with a *.
4. If a statement does not specify in which of two adjacent classes an object exists, place a * on the line between the classes.
5. If an area has been shaded, no * can be put in it.
ex: Decision procedure for syllogism IAI-1
– Some P is M (某些 P 為 M)
– All M is S (所有 M 為 S)
– ∴ Some S is P (某些 S 為 P)
[Venn diagrams over circles S, P, and M: (a) after "All M is S" (所有 M 為 S), (b) after "Some P is M" (一些 P 為 M), with a * marking the member shared by S and P.]
4.2 State and Problem Spaces (狀態與問題空間)
• Tree (樹狀結構): nodes, edges; directed or undirected
• Digraph (有向圖): a graph with directed edges
• Lattice (晶格): a directed acyclic graph
• A useful method of describing the behavior of an object is to define a graph called the state space [state (狀態) and action (行動)]:
– Operator
– State space
– Path
– Goal test
– Path cost
Finite State Machine (有限狀態機器)
• Determining valid strings WHILE, WRITE, and BEGIN
[State diagram: from the start state (開始), W leads through H-I-L-E or R-I-T-E to success (成功), and B leads through E-G-I-N to success; any unexpected character (not I, not G, not E, not N, …) leads to the error state (錯誤).]
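The diagram's machine can be sketched as a transition table walked character by character; the trie-style encoding below is an assumption, not the slide's exact state numbering:

```python
# Build a finite state machine that accepts exactly the valid keywords.
ACCEPTED = {"WHILE", "WRITE", "BEGIN"}

def build_fsm(words):
    """Build a trie-like transition table; state 0 is the start state."""
    trans, accept, next_state = {}, set(), 1
    for word in words:
        state = 0
        for ch in word:
            if (state, ch) not in trans:
                trans[(state, ch)] = next_state
                next_state += 1
            state = trans[(state, ch)]
        accept.add(state)       # the diagram's "success" state
    return trans, accept

def run(trans, accept, s):
    state = 0
    for ch in s:
        if (state, ch) not in trans:   # the diagram's "error" state
            return False
        state = trans[(state, ch)]
    return state in accept

trans, accept = build_fsm(ACCEPTED)
print(run(trans, accept, "WHILE"))  # True
print(run(trans, accept, "WHALE"))  # False
```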
Finding solution in problem space
• A state space (狀態空間) can be thought of as a problem space (問題空間).
• Finding the solution to a problem in a problem space involves finding a valid path from start to success (answer).
• The state space for the Monkey and Bananas Problem
• Traveling salesman problem (旅行推銷員問題)
• Graph algorithms, AND-OR Trees, etc.
Ex: Monkey and Bananas Problem
• Assumptions (假設):
– A banana hangs in the room
– The room contains only a couch and a ladder
– The monkey cannot reach the banana directly
• Operators (指示):
– Jump off the couch
– Move the ladder
– Move the ladder under the banana
– Climb the ladder
– Grab the banana
• Initial state (初始狀態):
– The monkey is on the couch
[State-space diagram: jumping off the couch takes "monkey on couch" to "monkey on floor"; observing whether the couch and ladder are under the banana and moving them leads to "ladder under banana"; climbing the ladder gives "monkey on ladder", from which grabbing the banana reaches the goal "monkey has the banana".]
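Searching this state space can be sketched with breadth-first search; the state tuple (monkey location, ladder position, has_banana) and operator names are my own simplification of the diagram:

```python
# Breadth-first search over a simplified monkey-and-bananas state space.
from collections import deque

def successors(state):
    monkey, ladder, has_banana = state
    if has_banana:
        return
    if monkey == "couch":
        yield "jump off couch", ("floor", ladder, False)
    if monkey == "floor" and ladder == "away":
        yield "move ladder under banana", ("floor", "under", False)
    if monkey == "floor" and ladder == "under":
        yield "climb ladder", ("ladder", "under", False)
    if monkey == "ladder" and ladder == "under":
        yield "grab banana", ("ladder", "under", True)

def solve(start=("couch", "away", False)):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state[2]:                       # goal test: monkey has the banana
            return path
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))

print(solve())
# ['jump off couch', 'move ladder under banana', 'climb ladder', 'grab banana']
```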
Ex: Traveling Salesman Problem (旅行推銷員問題)
[(a) Problem description: four cities A, B, C, D with routes between them; (b) the search tree of partial tours rooted at A, where the bold path is the solution.]
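The search over four-city tours can be sketched by brute force; the distances below are made up, since the figure does not give them:

```python
# Enumerate every tour starting and ending at A and keep the cheapest.
from itertools import permutations

dist = {frozenset(p): d for p, d in [
    (("A", "B"), 2), (("A", "C"), 5), (("A", "D"), 4),
    (("B", "C"), 3), (("B", "D"), 6), (("C", "D"), 1),
]}

def tour_cost(order):
    route = ("A",) + order + ("A",)   # start and end at city A
    return sum(dist[frozenset((route[i], route[i + 1]))]
               for i in range(len(route) - 1))

best = min(permutations(("B", "C", "D")), key=tour_cost)
print(best, tour_cost(best))  # ('B', 'C', 'D') 10
```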
Ill-structured problem (非結構化問題)
• Ill-structured problems (非結構化問題) have uncertainties associated with them:
– Goal not explicit
– Problem space unbounded
– Problem space not discrete
– Intermediate states difficult to achieve
– State operators unknown
– Time constraint
Ex: Travel agent — characteristic | client's response
Goal not explicit | "I am wondering where to go."
Problem space unbounded | "I am not sure where to go."
Problem states not discrete | "I just want to travel; the destination doesn't matter."
Intermediate states difficult to achieve | "I don't have enough money to go."
Available state operators unknown | "I don't know how to raise the money."
Time constraint | "I must leave as soon as possible."
4.3 Rules of Inference( 規則式推論 )
• Syllogisms (三段論) address only a small portion of the possible logic statements.
• Propositional logic:
p → q
p
∴ q
This inference is called direct reasoning (直接推論), modus ponens (離斷率), law of detachment (分離律), and assuming the antecedent (假設前提).
p | q | p→q | (p→q)∧p | ((p→q)∧p)→q
T | T | T | T | T
T | F | F | F | T
F | T | T | F | T
F | F | T | F | T

Truth table for modus ponens (離斷率)
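The table can be generated mechanically; the check below confirms the final column is true in every row, i.e. modus ponens is a tautology:

```python
# Verify that ((p → q) ∧ p) → q holds for every truth assignment.
from itertools import product

def implies(a, b):
    return (not a) or b

rows = [implies(implies(p, q) and p, q)
        for p, q in product([True, False], repeat=2)]

print(all(rows))  # True: modus ponens is valid in all four rows
```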
Laws of Inference Schemata
1. Law of Detachment: p→q, p ∴ q
2. Law of the Contrapositive: p→q ∴ ~q→~p
3. Law of Modus Tollens: p→q, ~q ∴ ~p
4. Chain Rule (Law of the Syllogism): p→q, q→r ∴ p→r
5. Law of Disjunctive Inference: p∨q, ~q ∴ p and p∨q, ~p ∴ q
6. Law of the Double Negation: ~(~p) ∴ p
7. De Morgan's Law: ~(p∧q) ∴ ~p∨~q and ~(p∨q) ∴ ~p∧~q
8. Law of Simplification: p∧q ∴ p and p∧q ∴ q
9. Law of Conjunction: p, q ∴ p∧q
10. Law of Disjunctive Addition: p ∴ p∨q
11. Law of Conjunctive Argument: ~(p∧q), p ∴ ~q and ~(p∧q), q ∴ ~p

Table 3.8 Some Rules of Inference for Propositional Logic
Resolution in Propositional Logic (命題邏輯分解)
F: rules or facts known to be TRUE
S: a conclusion to be proved
1. Convert all the propositions of F to clause form.
2. Negate S and convert the result to clause form. Add it to the set of clauses obtained in step 1.
3. Repeat until either a contradiction is found or no progress can be made:
(1) Select two clauses. Call these the parent clauses.
(2) Resolve them together. The resulting clause, called the resolvent, will be the disjunction of all of the literals of both of the parent clauses, with the following exception: if there is any pair of literals L and ~L such that one of the parent clauses contains L and the other contains ~L, then eliminate both L and ~L from the resolvent.
(3) If the resolvent is the empty clause, then a contradiction has been found. If it is not, then add it to the set of clauses available to the procedure.
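The procedure can be sketched directly, using the axiom set of the following example (from p, (p ∧ q) → r, (s ∨ t) → q, and t, prove r by adding the negated goal ~r and deriving the empty clause). Clauses are frozensets of literals; "~x" denotes the negation of "x":

```python
# Propositional resolution by refutation.
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Yield every resolvent of two parent clauses."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def refute(clauses):
    """True if the clause set is unsatisfiable, i.e. the goal is proved."""
    clauses = set(clauses)
    while True:
        new = set()
        for a, b in combinations(list(clauses), 2):
            for resolvent in resolve(a, b):
                if not resolvent:          # empty clause: contradiction found
                    return True
                new.add(resolvent)
        if new <= clauses:                 # no progress can be made
            return False
        clauses |= new

kb = [frozenset({"p"}),                               # p
      frozenset({"~p", "~q", "r"}),                   # (p ∧ q) → r
      frozenset({"~s", "q"}), frozenset({"~t", "q"}), # (s ∨ t) → q
      frozenset({"t"}),
      frozenset({"~r"})]                              # negated goal ~r
print(refute(kb))  # True: r follows from the axioms
```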
Given Axioms | Converted to Clause Form
(1) p | p
(2) (p ∧ q) → r | ~p ∨ ~q ∨ r
(3) (s ∨ t) → q | ~s ∨ q
(4) | ~t ∨ q
(5) t | t

where p = it is raining (下雨), q = ride a bike (騎車), s = the route is familiar (路線熟悉), t = the trip is far (路途遠), r = wear a raincoat (穿雨衣).
Resolution in Propositional Logic
[Resolution tree: resolving ~p ∨ ~q ∨ r with the negated goal ~r yields ~p ∨ ~q; resolving with p yields ~q; resolving ~t ∨ q with ~q yields ~t; resolving with t yields the empty clause, a contradiction, so r is proved.]
Resolution with quantifiers
Example ( from Nilsson ):
Whoever can read (R) is literate (L).
Dolphins (D) aren’t literate (~L).
Some dolphins (D) are intelligent (I).
To prove : Some who are intelligent (I) can’t read (~R).
Translating:
To prove: ∃x [ I(x) & ~R(x) ]
(1) ∀x [ R(x) → L(x) ]
(2) ∀x [ D(x) → ~L(x) ]
(3) ∃x [ D(x) & I(x) ]
(1) - (4):
∀x [~R(x) OR L(x)] & ∀y [~D(y) OR ~L(y)] & D(A) & I(A) & ∀z [~I(z) OR R(z)]
(5) - (9):
C1 = ~R(x) OR L(x)
C2 = ~D(y) OR ~L(y)
C3 = D(A)
C4 = I(A)
C5 = ~I(z) OR R(z)
• Second-order logic can have quantifiers that range over function and predicate symbols.
• If P is any predicate of one argument, then x = y ≡ (for every P) [P(x) ↔ P(y)].
4.4 Inference Chain (推斷鏈)
• An inference chain: inference + inference + … + inference
[Diagram: initial facts A1, A2 lead through intermediate conclusions B, C, D1, D2, D3, E to a solution.]
• Forward chaining: infer from initial facts to solutions.
• Backward chaining: assume that some solution is true, and try to prove the assumption by finding the required facts.
• Forward Chaining (前向鏈結):
Rule1: elephant(x) → mammal(x)
Rule2: mammal(x) → animal(x)
Fact: John is an elephant, so elephant(John) is true.
With x = John (unification), Rule1 gives: mammal(John) is true.
With x' = x = John, Rule2 gives: animal(John) is true.
• Unification (變數替代): The process of finding substitutions for variables to make arguments match.
Forward Chaining (前向推論)
Rule1: A1 and B1 → C1
Rule2: A2 and C1 → D2
Rule3: A3 and B2 → D3
Rule4: C1 and D3 → G
Facts: A1, A2, A3, B1, B2 are true
{A1, A2, A3, B1, B2} match {r1, r3}
fire r1 → {A1, A2, A3, B1, B2, C1} match {r1, r2, r3}
fire r2 → {A1, A2, A3, B1, B2, C1, D2} match {r1, r2, r3}
fire r3 → {A1, A2, A3, B1, B2, C1, D2, D3} match {r1, r2, r3, r4}
fire r4 → {A1, A2, A3, B1, B2, C1, D2, D3, G} GOAL
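The trace above can be sketched as a loop that fires any rule whose premises are all known facts until the goal is derived:

```python
# Forward chaining over the four rules, starting from the given facts.
rules = [({"A1", "B1"}, "C1"),   # Rule1: A1 and B1 -> C1
         ({"A2", "C1"}, "D2"),   # Rule2: A2 and C1 -> D2
         ({"A3", "B2"}, "D3"),   # Rule3: A3 and B2 -> D3
         ({"C1", "D3"}, "G")]    # Rule4: C1 and D3 -> G

def forward_chain(initial_facts, rules, goal):
    facts, fired, changed = set(initial_facts), [], True
    while changed and goal not in facts:
        changed = False
        for i, (premises, conclusion) in enumerate(rules, start=1):
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)      # fire the rule
                fired.append(f"r{i}")
                changed = True
    return facts, fired

facts, fired = forward_chain({"A1", "A2", "A3", "B1", "B2"}, rules, "G")
print(fired)          # ['r1', 'r2', 'r3', 'r4']
print("G" in facts)   # True
```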
Backward Chaining (反向推論)
rule1: A1 and B1 → C1
rule2: A2 and C1 → D2
rule3: A3 and B2 → D3
rule4: C1 and D3 → G
rule5: C1 and D4 → G'
facts: A1, A2, B1, B2, A3
1. Assume G' is true (R5): verify C1 and D4.
– C1 (R1): verify A1 and B1 — OK.
– D4 is unknown, ask the user. If D4 is FALSE, give up.
2. Assume G is true (R4): verify C1 and D3.
– C1 (R1): verify A1 and B1 — OK.
– D3 (R3): verify A3 and B2 — OK.
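The same rule base can be queried goal-first. A sketch in which an unknown premise simply fails instead of prompting the user (so D4 makes G' fail):

```python
# Backward chaining: to prove a goal, find a rule concluding it and
# recursively prove its premises.
rules = {"C1": [{"A1", "B1"}],
         "D2": [{"A2", "C1"}],
         "D3": [{"A3", "B2"}],
         "G":  [{"C1", "D3"}],
         "G'": [{"C1", "D4"}]}
facts = {"A1", "A2", "B1", "B2", "A3"}

def prove(goal):
    if goal in facts:
        return True
    for premises in rules.get(goal, []):
        if all(prove(p) for p in premises):
            return True
    return False        # unknown premise (e.g. D4): would ask the user

print(prove("G"))    # True
print(prove("G'"))   # False: D4 is unknown
```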
[Goal tree for the rules above: G (GOAL) ← C1 and D3; C1 ← A1 and B1; D3 ← A3 and B2; D2 ← A2 and C1.]
• Good application of forward chaining (前向鏈結):
[Diagram: a narrow set of facts fans out to goals that are broad and not deep, or to too many possible goals.]
• Good application of backward chaining (後向鏈結):
[Diagram: goals narrow and deep, reasoning back down to the facts.]
• Forward chaining (前向鏈結):
– Planning, monitoring, control
– Data-driven
– Explanation not facilitated
• Backward chaining (後向鏈結):
– Diagnosis
– Goal-driven
– Explanation facilitated
Analogy
• Try to relate old situations as guides to new ones.
• Consider tic-tac-toe with values as a magic square (the 15 game):
6 1 8
7 5 3
2 9 4
• The 18 game from the set {2,3,4,5,6,7,8,9,10}; the 21 game from the set {3,4,5,6,7,8,9,10,11}.
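A quick check of the analogy: every row, column, and diagonal of the square sums to 15, so three-in-a-row in tic-tac-toe corresponds to picking three numbers that sum to 15:

```python
# Verify the magic-square property behind the 15-game analogy.
square = [[6, 1, 8],
          [7, 5, 3],
          [2, 9, 4]]

lines = ([row for row in square] +                               # 3 rows
         [[square[r][c] for r in range(3)] for c in range(3)] +  # 3 columns
         [[square[i][i] for i in range(3)],                      # diagonal
          [square[i][2 - i] for i in range(3)]])                 # anti-diagonal

print(all(sum(line) == 15 for line in lines))  # True
```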
Nonmonotonic reasoning
• In a nonmonotonic system, the theorems do not necessarily increase as the number of axioms increases.
• As a very simple example, suppose there is a fact that asserts the time. As soon as time changes by a second, the old fact is no longer valid.
4.5 Reasoning Under Uncertainty (不確定性推論)
• Uncertainty can be considered as the lack of adequate information to make a decision.
• Approaches include classical probability, Bayesian probability, Dempster-Shafer theory, and Zadeh's fuzzy theory.
• In the MYCIN and PROSPECTOR systems, conclusions are arrived at even when all the evidence needed to absolutely prove the conclusion is not known.
Many different types of error can contribute to uncertainty:

Error | Example | Reason
Ambiguous | Turn the valve off | What valve?
Incomplete | Turn valve-1 | Which way?
Incorrect | Turn valve-1 off | Correct is on
False positive (接受錯誤值) | Valve-1 is stuck | Valve-1 is not stuck
False negative (拒絕正確值) | Valve-1 is not stuck | Valve-1 is stuck
Imprecise | Turn valve-1 to 5 | Correct is 5.4
Inaccurate | Turn valve-1 to 5.4 | Correct is 9.2
Unreliable | Turn valve-1 to 5.4 or 6 or 0 | Equipment error
Random error | Valve-1 setting is 5.4 or 5.5 or 5.1 | Statistical fluctuation (波動)
Systematic error | Valve-1 setting is 7.5 | Mis-calibration (刻度)
Invalid induction | Valve-1 is not stuck because it's never been stuck before | Valve-1 is stuck
Invalid deduction | Output is normal and so valve-1 is in good condition | Valve-1 is stuck in the open position
• A hypothesis is an assumption to be tested.
• A Type 1 error (false positive) means acceptance of a hypothesis when it is not true.
• A Type 2 error (false negative) means rejection of a hypothesis when it is true.
• Errors of measurement:
– Precision: a millimeter (公釐) ruler is more precise than a centimeter ruler.
– Accuracy
Error & Induction
The process of induction is the opposite of deduction.
The fire alarm goes off (響起)
∴ There is a fire.
An even stronger argument is:
The fire alarm goes off & I smell smoke
∴ There is a fire.
Although this is a strong argument, it is not proof that there is a fire; even adding "my clothes are burning" to the premises would make the argument stronger without making it proof.
Deductive errors
p → q
q
∴ p
(an invalid form: affirming the consequent)
If John is a father, then John is a man.
John is a man.
∴ John is a father.
Bayes' Theorem (貝氏定理)
• Conditional probability (條件機率) P(A | B) states the probability of event A given that event B occurred.
• Suppose you have a drive and don't know its brand. What is the probability that if it crashes, it is Brand X? Non-Brand X?
P(C) = P(C ∩ X) + P(C ∩ X') = 0.6 + 0.1 = 0.7
P(X | C) = P(C | X) P(X) / P(C) = (0.75)(0.8) / 0.7 = 6/7
Decision Tree for the Disk Drive Crashes

Act | Prior P(Hi) | Conditional P(E | Hi) | Joint P(E ∩ Hi) = P(E | Hi) P(Hi)
Choose Brand X | P(X) = 0.8 | P(C | X) = 0.75, P(C' | X) = 0.25 | P(C ∩ X) = 0.6, P(C' ∩ X) = 0.2
Don't choose Brand X | P(X') = 0.2 | P(C | X') = 0.5, P(C' | X') = 0.5 | P(C ∩ X') = 0.1, P(C' ∩ X') = 0.1

Posterior P(Hi | E) = P(E ∩ Hi) / Σi P(E ∩ Hi):
P(X | C) = 0.6 / (0.6 + 0.1) = 6/7
P(X | C') = 0.2 / (0.2 + 0.1) = 2/3
P(X' | C) = 0.1 / (0.1 + 0.6) = 1/7
P(X' | C') = 0.1 / (0.1 + 0.2) = 1/3
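The posteriors can be reproduced with exact arithmetic from the priors and conditionals in the tree:

```python
# Bayes' theorem on the disk-drive example, using exact fractions.
from fractions import Fraction as F

p_x  = F(8, 10)                 # prior: choose Brand X
p_x_ = F(2, 10)                 # prior: don't choose Brand X
p_c_given_x  = F(3, 4)          # P(C | X)  = 0.75
p_c_given_x_ = F(1, 2)          # P(C | X') = 0.5

p_c = p_c_given_x * p_x + p_c_given_x_ * p_x_   # total probability of a crash
posterior_x = p_c_given_x * p_x / p_c           # P(X | C)

print(p_c)          # 7/10
print(posterior_x)  # 6/7
```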
P(Hi | E) = P(E ∩ Hi) / Σj P(E ∩ Hj) = P(E | Hi) P(Hi) / Σj P(E | Hj) P(Hj) = P(E | Hi) P(Hi) / P(E)

• Bayes' Theorem (貝氏定理) is commonly used for decision tree analysis in business and the social sciences.
• Used in the PROSPECTOR expert system to decide favorable sites for mineral exploration.
Hypothetical Reasoning and Backward Induction

Prior (subjective opinion of site) P(Hi): Oil P(O) = 0.6; No oil P(O') = 0.4
Conditional (seismic test result) P(E | Hi): P(+ | O) = 0.8, P(- | O) = 0.2, P(+ | O') = 0.1, P(- | O') = 0.9
Joint P(E ∩ H) = P(E | Hi) P(Hi): P(+∩O) = 0.48, P(-∩O) = 0.12, P(+∩O') = 0.04, P(-∩O') = 0.36

P(+) = P(+∩O) + P(+∩O') = 0.48 + 0.04 = 0.52
P(-) = P(-∩O) + P(-∩O') = 0.12 + 0.36 = 0.48
Posterior of site P(Hi | E) = P(E | Hi) P(Hi) / P(E), with unconditional P(+) = 0.52 and P(-) = 0.48:

P(O | +) = (0.8)(0.6) / 0.52 = 12/13
P(O' | +) = (0.1)(0.4) / 0.52 = 1/13
P(O | -) = (0.2)(0.6) / 0.48 = 1/4
P(O' | -) = (0.9)(0.4) / 0.48 = 3/4

Joint P(E ∩ H) = P(Hi | E) P(E).
• Oil lease, if successful: $1,250,000
• Drilling expense: -$200,000
• Seismic survey: -$50,000
• Expected payoff given a + test (drill): 846,153 = 1,000,000 × 12/13 - 1,000,000 × 1/13
• Expected payoff given a - test (drill): -500,000 = 1,000,000 × 1/4 - 1,000,000 × 3/4, so don't drill; only the $50,000 survey is lost
• Expected payoff (total): 416,000 = 846,153 × 0.52 - 50,000 × 0.48
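The payoff arithmetic can be reproduced; the $1,000,000 net gain (the $1,250,000 lease minus drilling and survey costs) and the abandon-on-negative-test reading are my interpretation of the slide's numbers:

```python
# Expected payoffs for the oil-drilling decision.
net_gain = 1_000_000   # assumed: lease minus drilling and survey costs

payoff_plus  = net_gain * 12/13 - net_gain * 1/13   # drill after a + test
payoff_minus = net_gain * 1/4  - net_gain * 3/4     # drilling after a - test
total = 0.52 * payoff_plus - 0.48 * 50_000          # abandon after a - test

print(round(payoff_plus))   # 846154
print(round(payoff_minus))  # -500000
print(round(total))         # 416000
```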
Temporal reasoning and Markov chain
• Temporal reasoning: reasoning about events that depend on time
• Temporal logic
• The system's progression through a sequence of states is called a stochastic process if it is probabilistic.
• Transition matrix:
T = | P11 P12 |
    | P21 P22 |
where Pmn is the probability of a transition from state m to state n.
• State vector: S = [P1, P2, …, Pn] where P1 + P2 + … + Pn = 1
• S2 = S1 T, e.g.:
S2 = [0.8, 0.2] | 0.1 0.9 | = [0.2, 0.8]
                | 0.6 0.4 |
• Assume 10 percent of all people who now use a Brand X drive will buy another Brand X when needed, and 60 percent of people who don't use Brand X will buy Brand X when they need a new drive. Over a period of time, how many people will use Brand X?
S3 = [0.5, 0.5], S4 = [0.35, 0.65], S5 = [0.425, 0.575], S6 = [0.3875, 0.6125], S7 = [0.40625, 0.59375], S8 = [0.396875, 0.603125]
• The sequence converges to the steady-state matrix.
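Iterating Sn+1 = Sn T until the vector stops changing recovers the steady state this sequence is converging to (analytically [0.4, 0.6]):

```python
# Iterate the Markov chain from S1 = [0.8, 0.2] to its steady state.
T = [[0.1, 0.9],
     [0.6, 0.4]]

def step(s):
    return [s[0] * T[0][0] + s[1] * T[1][0],
            s[0] * T[0][1] + s[1] * T[1][1]]

s = [0.8, 0.2]                       # S1
while True:
    nxt = step(s)
    if max(abs(a - b) for a, b in zip(s, nxt)) < 1e-12:
        break                        # converged
    s = nxt

print([round(x, 6) for x in s])  # [0.4, 0.6]
```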
The odds of belief
• Evidence B: "The patient is covered with red spots."
• Proposition A: "The patient has measles."
• P(A | B), the degree of belief that A is true given B, is not necessarily a probability if the events and propositions cannot be repeated or do not have a mathematical basis.
• The odds on A against B given some event C are odds = P(A | C) / P(B | C).
• If B = A': odds = P(A | C) / P(A' | C) = P(A | C) / (1 - P(A | C)).
• A likelihood of P = 0.95 gives odds = 0.95 / (1 - 0.95) = 19 to 1.
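The probability-to-odds conversion can be checked directly:

```python
# Convert a probability into odds, as in the 0.95 -> "19 to 1" example.
def odds(p):
    return p / (1 - p)

print(round(odds(0.95)))  # 19, i.e. odds of 19 to 1
```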
Sufficiency and necessity
Bayes' Theorem is
P(H | E) = P(E | H) P(H) / P(E)
and for the negation,
P(H' | E) = P(E | H') P(H') / P(E)
Dividing the two forms gives
P(H | E) / P(H' | E) = P(E | H) P(H) / (P(E | H') P(H'))
Defining the prior odds on H as
O(H) = P(H) / P(H')
and the posterior odds as
O(H | E) = P(H | E) / P(H' | E)
Likelihood ratio
LS = P(E | H) / P(E | H')
O(H | E) = LS O(H)
This is the odds-likelihood form of Bayes' Theorem. The factor LS is also called the likelihood of sufficiency, because if LS = ∞ then the evidence E is logically sufficient for concluding that H is true.
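The odds-likelihood form can be sanity-checked numerically; the probabilities below are made-up values, not from the slides:

```python
# Check that O(H|E) = LS * O(H) for P(H)=0.3, P(E|H)=0.8, P(E|H')=0.1.
p_h, p_e_h, p_e_hn = 0.3, 0.8, 0.1

p_e = p_e_h * p_h + p_e_hn * (1 - p_h)        # total probability of E
p_h_e = p_e_h * p_h / p_e                      # Bayes: P(H | E)

ls = p_e_h / p_e_hn                            # likelihood ratio LS
o_h = p_h / (1 - p_h)                          # prior odds O(H)
o_h_e = p_h_e / (1 - p_h_e)                    # posterior odds O(H | E)

print(abs(o_h_e - ls * o_h) < 1e-9)  # True: both sides agree
```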
The likelihood of necessity, LN, is defined similarly to LS as
LN = O(H | E') / O(H) = P(E' | H) / P(E' | H') = P(H | E') P(H') / (P(H' | E') P(H))
so that O(H | E') = LN O(H).
If LN = 0, then P(H | E') = 0. This means that H must be false when E' is true. Thus if E is not present then H is false, which means that E is necessary for H.
LS | Effect on Hypothesis
0 | H is false when E is true, or E' is necessary for concluding H
Small (0 < LS << 1) | E is unfavorable for concluding H
1 | E has no effect on belief in H
Large (1 << LS) | E is favorable for concluding H
∞ | E is logically sufficient for H, or observing E means H must be true
LN | Effect on Hypothesis
0 | H is false when E' is true, or E is necessary for H
Small (0 < LN << 1) | Absence of E is unfavorable for concluding H
1 | Absence of E has no effect on H
Large (1 << LN) | Absence of E is favorable for concluding H
∞ | Absence of E is logically sufficient for H
Uncertainty in inference chains
• Uncertainty may be present in rules, evidence used by the rules, or both.
Expert Inconsistency
If LS > 1 then P(E | H') < P(E | H)
so 1 - P(E | H') > 1 - P(E | H)
and therefore
LN = (1 - P(E | H)) / (1 - P(E | H')) < 1
An expert's estimates of LS and LN are thus consistent only in these cases:
Case 1: LS > 1 and LN < 1
Case 2: LS < 1 and LN > 1
Case 3: LS = LN = 1
Exercise
• Consider the following facts and rules, and describe the inference process using forward chaining and backward chaining.
Facts: A1, A2, A3, A4, B1, B2
Rules: R1: A1 and A3 --> C2
R2: A1 and B1 --> C1
R3: A2 and C2 --> D2
R4: A3 and B2 --> D3
R5: C1 and D2 --> G1
R6: B1 and B2 --> D4
R7: A1 and A2 and A3 --> D2
R8: C1 and D3 --> G2
R9: C2 and A4 --> G3
Goals: G1, G2 and G3