Artificial Intelligence 170 (2006) 653–666
On the incompatibility of faithfulness and
monotone DAG faithfulness
David Maxwell Chickering ∗, Christopher Meek
Microsoft Research, One Microsoft Way, Redmond, WA 98052, USA
Received 19 November 2002; received in revised form 21 December 2005; accepted 17 March 2006
Abstract
Cheng, Greiner, Kelly, Bell and Liu [Artificial Intelligence 137 (2002) 43–90] describe an algorithm for learning Bayesian
networks that—in a domain consisting of n variables—identifies the optimal solution using O(n^4) calls to a mutual-information
oracle. This result relies on (1) the standard assumption that the generative distribution is Markov and faithful to some directed
acyclic graph (DAG), and (2) a new assumption about the generative distribution that the authors call monotone DAG faithfulness
(MDF). The MDF assumption rests on an intuitive connection between active paths in a network structure and the
mutual information among variables. The assumption states that the (conditional) mutual information between a pair of variables
is a monotonic function of the set of active paths between those variables: the more active paths between the variables, the higher
the mutual information. In this paper, we demonstrate the unfortunate result that, for any realistic learning scenario, the monotone
DAG faithfulness assumption is incompatible with the faithfulness assumption. Furthermore, for the class of network
structures for which the two assumptions are compatible, we can learn the optimal solution using standard approaches that require
only O(n^2) calls to an independence oracle.
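As an illustration of the quantity such a mutual-information oracle reports, the following sketch (not from the paper; the function name and dict-based distribution encoding are our own) computes the conditional mutual information I(X; Y | Z) for a discrete joint distribution:

```python
# Illustrative sketch: the answer a "mutual-information oracle" would
# return for a query I(X; Y | Z), computed here from an explicit
# discrete joint distribution P(X, Y, Z) given as a dict.
from collections import defaultdict
from math import log2

def conditional_mi(joint):
    """joint: dict mapping (x, y, z) -> probability; returns I(X; Y | Z) in bits."""
    pz = defaultdict(float)
    pxz = defaultdict(float)
    pyz = defaultdict(float)
    for (x, y, z), p in joint.items():
        pz[z] += p
        pxz[(x, z)] += p
        pyz[(y, z)] += p
    mi = 0.0
    for (x, y, z), p in joint.items():
        if p > 0:
            # I(X;Y|Z) = sum p(x,y,z) * log[ p(x,y,z) p(z) / (p(x,z) p(y,z)) ]
            mi += p * log2(p * pz[z] / (pxz[(x, z)] * pyz[(y, z)]))
    return mi

# X and Y are both copies of a fair coin Z: dependent marginally,
# but conditionally independent given Z, so I(X; Y | Z) = 0.
joint = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
print(conditional_mi(joint))  # → 0.0
```

Under the faithfulness assumption, such oracle answers are zero exactly when the corresponding d-separation holds in the generative DAG; the MDF assumption additionally ties the magnitude of the answer to the set of active paths.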
© 2006 Published by Elsevier B.V.
Keywords: Bayesian networks; Graphical models; Learning; Structure search; Complexity
1. Introduction
Learning Bayesian networks from data has traditionally been considered a hard problem by most researchers.
Numerous papers have demonstrated that, under a number of different scenarios, identifying the “best” Bayesian-
network structure