Bayesian network

A Bayesian network (also called a Bayesian belief network or simply a belief network) is a form of probabilistic graphical model. Specifically, a Bayesian network is a directed acyclic graph whose nodes represent variables and whose arcs represent dependence relations among the variables. If there is an arc from node A to another node B, then A is said to be a parent of B. If a node has a known value, it is said to be an evidence node. A node can represent any kind of variable, be it an observed measurement, a parameter, a latent variable, or a hypothesis. Nodes are not restricted to representing random variables; this is what is "Bayesian" about a Bayesian network.
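
As an illustration, the following sketch represents such a directed acyclic graph in Python simply by listing each node's parents. The node names (Rain, Sprinkler, WetGrass) are a hypothetical toy example, not part of any particular application.

    # Hypothetical toy network: each node is listed with its parents,
    # which defines the directed acyclic graph.
    parents = {
        "Rain":      [],                     # root node: no parents
        "Sprinkler": ["Rain"],               # arc Rain -> Sprinkler
        "WetGrass":  ["Rain", "Sprinkler"],  # arcs Rain -> WetGrass and Sprinkler -> WetGrass
    }

    def ancestors(node):
        """All nodes reachable from `node` by following parent arcs backwards."""
        result, stack = set(), list(parents[node])
        while stack:
            p = stack.pop()
            if p not in result:
                result.add(p)
                stack.extend(parents[p])
        return result

    print(ancestors("WetGrass"))  # {'Rain', 'Sprinkler'}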

A Bayesian network is a representation of the joint distribution over all the variables represented by nodes in the graph. Let the variables be X(1), ..., X(n), and let parents(A) denote the set of parents of the node A. Then the joint distribution for X(1) through X(n) is the product of the conditional distributions of the individual nodes: <math>\Pr[X(1),\ldots,X(n)] = \prod_{i=1}^{n}\Pr[X(i)\mid\operatorname{parents}(X(i))]</math>. If X(i) has no parents, its probability distribution is said to be unconditional; otherwise it is conditional.
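
To make the factorization concrete, the sketch below assigns made-up conditional probability tables to the hypothetical Rain/Sprinkler/WetGrass network above and computes the probability of one complete assignment as a product of one factor per node.

    # Made-up conditional probability tables for the toy network above.
    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler_given_rain = {True:  {True: 0.01, False: 0.99},    # outer key: Rain
                              False: {True: 0.40, False: 0.60}}
    P_wet_given_rain_sprinkler = {                                 # outer key: (Rain, Sprinkler)
        (True, True):   {True: 0.99, False: 0.01},
        (True, False):  {True: 0.80, False: 0.20},
        (False, True):  {True: 0.90, False: 0.10},
        (False, False): {True: 0.00, False: 1.00},
    }

    def joint(rain, sprinkler, wet):
        """Pr[Rain, Sprinkler, WetGrass]: a product of one conditional per node."""
        return (P_rain[rain]
                * P_sprinkler_given_rain[rain][sprinkler]
                * P_wet_given_rain_sprinkler[(rain, sprinkler)][wet])

    print(joint(True, False, True))  # 0.2 * 0.99 * 0.80 = 0.1584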

Questions about independence among variables can be answered by studying the graph alone. It can be shown that the graphical notion called d-separation corresponds to the notion of conditional independence: if nodes X and Y are d-separated (given specified evidence nodes), then variables X and Y are independent given the evidence variables. The set of all other nodes on which node X can directly depend is given by X's Markov blanket.
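
The Markov blanket of a node consists of its parents, its children, and its children's other parents. A minimal sketch, reusing the parents dictionary of the toy example above:

    def markov_blanket(node):
        """Parents, children, and children's other parents of `node`."""
        children = [n for n, ps in parents.items() if node in ps]
        blanket = set(parents[node]) | set(children)
        for child in children:
            blanket |= set(parents[child])
        blanket.discard(node)
        return blanket

    print(markov_blanket("Rain"))  # {'Sprinkler', 'WetGrass'}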

In order to fully specify the Bayesian network and carry out numerical calculations, it is necessary to further specify for each node X the probability distribution for X conditional on its parents. The distribution of X given its parents may have any form, but it is common to work with discrete or Gaussian distributions, since these simplify calculations. Sometimes only constraints on a distribution are known; one then commonly uses the principle of maximum entropy to select the distribution with the greatest information entropy given those constraints. In this way the distribution makes the fewest additional assumptions beyond what is known to be true as expressed by the constraints. (Analogously, in the specific context of a dynamic Bayesian network, the conditional distribution for the hidden state's temporal evolution is commonly specified so as to maximize the entropy rate of the implied stochastic process.)

Often these conditional distributions depend on unknown parameters which must be estimated from data, for example by maximum likelihood. Direct maximization of the likelihood (or of the posterior probability) is often complex when there are unobserved variables. A classical approach to this problem is the expectation-maximization algorithm, which alternates between computing expected values of the unobserved variables conditional on the observed data and maximizing the complete likelihood (or posterior) under the assumption that the previously computed expected values are correct. Under mild regularity conditions this process converges to maximum likelihood (or maximum a posteriori) values of the parameters. A fully Bayesian approach to parameters treats them as additional unobserved variables, computes a full posterior distribution over all nodes conditional on the observed data, and then integrates the parameters out. This approach can be expensive and lead to models of large dimension, so in practice classical parameter-setting approaches are more common.
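
As a minimal illustration of parameter estimation when all variables are observed (so the expectation-maximization algorithm is not needed), the maximum-likelihood estimate of a discrete conditional distribution reduces to normalized counts. The samples below are made up for the hypothetical Rain/Sprinkler pair.

    from collections import Counter

    # made-up, fully observed samples of (Rain, Sprinkler)
    samples = [(True, False), (True, False), (False, True),
               (False, False), (False, True)]

    pair_counts = Counter(samples)
    rain_counts = Counter(rain for rain, _ in samples)

    def p_sprinkler_given_rain(sprinkler, rain):
        """Maximum-likelihood estimate: count of the pair / count of the parent value."""
        return pair_counts[(rain, sprinkler)] / rain_counts[rain]

    print(p_sprinkler_given_rain(True, False))  # 2 of the 3 Rain=False samples -> 0.666...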

The goal of inference is typically to find the conditional distribution of a subset of the variables, conditional on known values for some other subset (the evidence), integrating over any remaining variables. This conditional distribution is known as the posterior distribution of the subset of the variables given the evidence. The posterior gives a universal sufficient statistic for detection applications, when one wants to choose values for the variable subset which minimize some expected loss function, for instance the probability of decision error. A Bayesian network can thus be considered a mechanism for automatically constructing extensions of Bayes' theorem to more complex problems.

The most common exact inference methods are variable elimination, which eliminates (by integration or summation) the non-observed, non-query variables one by one by distributing the sum over the product; clique tree propagation, which caches the computation so that many variables can be queried at one time and new evidence can be propagated quickly; and recursive conditioning, which allows for a space-time tradeoff while matching the efficiency of variable elimination when enough space is used. All of these methods have complexity that is exponential in the treewidth of the network. The most common approximate inference algorithms are stochastic MCMC simulation, mini-bucket elimination (which generalizes loopy belief propagation), and variational methods.
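
Continuing the toy example and reusing the joint() function sketched earlier, the simplest exact scheme, inference by enumeration, computes a posterior by summing the joint distribution over the non-query, non-evidence variables and renormalizing; the methods listed above organize this same summation more efficiently.

    def posterior_rain_given_wet_grass():
        """Pr[Rain | WetGrass=True]: sum out Sprinkler, then renormalize."""
        unnormalized = {
            rain: sum(joint(rain, sprinkler, True) for sprinkler in (True, False))
            for rain in (True, False)
        }
        z = sum(unnormalized.values())
        return {rain: p / z for rain, p in unnormalized.items()}

    # With the made-up tables, observing wet grass raises Pr[Rain=True]
    # from its prior of 0.2 to roughly 0.36.
    print(posterior_rain_given_wet_grass())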

Bayesian networks are used for modelling knowledge in gene regulatory networks, medicine, engineering, text analysis, image processing, data fusion, and decision support systems.

Learning the structure of a Bayesian network is a very important part of machine learning. Assuming that the data is generated by a Bayesian network and that all the variables are observed in every iteration, the following methods can be used to learn both the structure of the acyclic graph and the conditional probability tables associated with it. The elements of a structure-finding algorithm are a scoring function and a search strategy. An exhaustive search returning a structure that maximizes the score is one implementation, but its cost is superexponential in the number of variables. A local search algorithm makes incremental changes aimed at improving the score of the structure, while a global search algorithm like Markov chain Monte Carlo can avoid getting trapped in local minima. Friedman et al. discuss using the mutual information between variables and finding a structure that maximizes it; they do this by restricting the parent candidate set to k nodes and exhaustively searching therein.
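
A minimal sketch of score-based local search is given below. It is not the method of Friedman et al.; the variable names, data, and BIC-style score are made-up assumptions, and acyclicity is preserved trivially by allowing arcs only from earlier to later variables in a fixed ordering.

    import math
    from collections import Counter

    variables = ["A", "B", "C"]
    # made-up binary data: B mostly copies A, C looks like independent coin flips
    records = [
        {"A": 1, "B": 1, "C": 0}, {"A": 1, "B": 1, "C": 1},
        {"A": 1, "B": 1, "C": 0}, {"A": 1, "B": 1, "C": 1},
        {"A": 0, "B": 0, "C": 0}, {"A": 0, "B": 0, "C": 1},
        {"A": 0, "B": 0, "C": 0}, {"A": 0, "B": 1, "C": 1},
    ]

    def bic_score(structure, records):
        """Maximum-likelihood log-likelihood of the data minus a BIC-style
        penalty of one free parameter per parent configuration (binary variables)."""
        n, total = len(records), 0.0
        for var, pars in structure.items():
            pair_counts = Counter((tuple(r[p] for p in pars), r[var]) for r in records)
            parent_counts = Counter(tuple(r[p] for p in pars) for r in records)
            for (pv, _), c in pair_counts.items():
                total += c * math.log(c / parent_counts[pv])
            total -= 0.5 * math.log(n) * (2 ** len(pars))
        return total

    def hill_climb(variables, records):
        """Greedy local search: repeatedly add the arc that most improves the score."""
        structure = {v: [] for v in variables}
        while True:
            current = bic_score(structure, records)
            best_gain, best_structure = 0.0, None
            for i, child in enumerate(variables):
                for parent in variables[:i]:   # earlier variables only, so the graph stays acyclic
                    if parent in structure[child]:
                        continue
                    candidate = {v: list(ps) for v, ps in structure.items()}
                    candidate[child].append(parent)
                    gain = bic_score(candidate, records) - current
                    if gain > best_gain:
                        best_gain, best_structure = gain, candidate
            if best_structure is None:
                return structure
            structure = best_structure

    print(hill_climb(variables, records))  # expected: only B acquires A as a parent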

