Bayesian network

A Bayesian network (also known as a Bayes network, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are well suited to taking an event that has occurred and estimating the probability that any one of several known possible causes contributed to it.

Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.

Formally, Bayesian networks are directed acyclic graphs (DAGs) whose nodes represent variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters, or hypotheses. Edges represent conditional dependencies; nodes that are not connected (no path links one node to another) represent variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables and gives, as output, the probability (or probability distribution, if applicable) of the variable represented by the node.
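
The factorization implied by this structure (the joint probability is the product of each variable's probability given its parents) can be made concrete with a small example. The sketch below uses the classic rain/sprinkler/grass-wet network in plain Python; the probabilities and the helper names `network`, `prob`, and `joint` are illustrative choices, not part of any particular library.

```python
# A minimal sketch of the classic rain/sprinkler/grass-wet network.
# All probabilities are illustrative textbook-style numbers.

# Each node stores its parent list and a conditional probability table (CPT)
# keyed by the parents' values; True/False are the variable states.
network = {
    "Rain":      {"parents": [], "cpt": {(): 0.2}},
    "Sprinkler": {"parents": ["Rain"], "cpt": {(True,): 0.01, (False,): 0.4}},
    "GrassWet":  {"parents": ["Sprinkler", "Rain"],
                  "cpt": {(True, True): 0.99, (True, False): 0.9,
                          (False, True): 0.8, (False, False): 0.0}},
}

def prob(node, value, assignment):
    """P(node = value | parents), read from the node's CPT."""
    parents = tuple(assignment[p] for p in network[node]["parents"])
    p_true = network[node]["cpt"][parents]
    return p_true if value else 1.0 - p_true

def joint(assignment):
    """Joint probability as the product of the local conditional probabilities."""
    result = 1.0
    for node in network:
        result *= prob(node, assignment[node], assignment)
    return result

# P(Rain=True, Sprinkler=False, GrassWet=True)
print(joint({"Rain": True, "Sprinkler": False, "GrassWet": True}))
```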

Three primary inference and learning tasks performed with Bayesian networks

1.    Inferring unobserved variables. Because a Bayes network is a complete model of its variables and their relationships, it can be used to answer probabilistic queries about them. For example, when some variables (the evidence variables) are observed, the network can be used to update beliefs about the states of a subset of the remaining variables. This process of computing the posterior distribution of variables given evidence is called probabilistic inference. For detection applications, the posterior provides a universal sufficient statistic when choosing values for a subset of variables so as to minimize some expected loss function, such as the probability of decision error. A Bayesian network can thus be viewed as a mechanism for automatically applying Bayes' theorem to complex problems. A minimal inference sketch appears after this list.

2.    Parameter learning. To fully specify the Bayes network, and thus fully represent the joint probability distribution, the probability distribution for each node X conditional on X's parents must be specified. The distribution of X conditional on its parents may take any form. It is common to work with discrete or Gaussian distributions, since these simplify the calculations. Sometimes only constraints on a distribution are known; the principle of maximum entropy can then be used to select a single distribution, the one with the greatest entropy given the constraints. (Analogously, in the specific context of a dynamic Bayesian network, the conditional distribution for the hidden state's temporal evolution is commonly specified so as to maximize the entropy rate of the implied stochastic process.) A counting-based sketch of this estimation appears after the list.

3.    Structure learning. In the simplest case, a Bayes network is specified by an expert and then used to perform inference. In other applications, the task of defining the network is too complex for humans. In this case the network structure and the parameters of the local distributions must be learned from data. Automatically learning the graph structure of a Bayes network is a challenge pursued within machine learning; a small score-based comparison of candidate structures is also sketched after the list.
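
For the first task, the sketch below continues the rain/sprinkler/grass-wet example and reuses the `network` dictionary and `joint` function defined earlier. It shows inference by enumeration: the posterior of a query variable is obtained by summing the joint distribution over every assignment of the hidden variables and then normalizing. The function name `posterior` and the brute-force enumeration are illustrative; practical systems use algorithms such as variable elimination or belief propagation.

```python
from itertools import product

def posterior(query, evidence):
    """P(query | evidence), by summing the joint over all hidden variables."""
    hidden = [v for v in network if v != query and v not in evidence]
    dist = {}
    for qval in (True, False):
        total = 0.0
        for values in product((True, False), repeat=len(hidden)):
            assignment = {query: qval, **evidence, **dict(zip(hidden, values))}
            total += joint(assignment)
        dist[qval] = total
    norm = sum(dist.values())
    return {value: p / norm for value, p in dist.items()}

# "Given that the grass is wet, how likely is it that it rained?"
print(posterior("Rain", {"GrassWet": True}))   # roughly {True: 0.36, False: 0.64}
```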
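
As a concrete illustration of parameter learning for a discrete node, the sketch below estimates the conditional probability table P(GrassWet | Sprinkler, Rain) by maximum likelihood, i.e. as relative frequencies in a small, made-up data set. In practice one would typically add smoothing (e.g. a Dirichlet prior) to avoid zero probabilities.

```python
from collections import Counter

# Made-up observations of (sprinkler, rain, grass_wet).
data = [
    (True,  False, True), (True,  False, True), (False, True,  True),
    (False, False, False), (True,  True,  True), (False, False, False),
    (False, True,  True), (False, True,  False),
]

counts = Counter()         # (parent values, child value) -> count
parent_counts = Counter()  # parent values -> count
for sprinkler, rain, wet in data:
    counts[((sprinkler, rain), wet)] += 1
    parent_counts[(sprinkler, rain)] += 1

# Relative frequencies give the maximum-likelihood CPT entries.
cpt = {parents: counts[(parents, True)] / total
       for parents, total in parent_counts.items()}
print(cpt)   # e.g. P(GrassWet=True | Sprinkler=False, Rain=True) = 2/3
```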
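
For structure learning, a common score-based approach assigns each candidate DAG a score such as BIC (log-likelihood penalized by the number of free parameters) and searches for the highest-scoring graph. The toy sketch below compares only two candidate structures over two binary variables on a made-up data set; real structure learners search a much larger space of DAGs.

```python
import math
from collections import Counter

# Made-up observations of (rain, grass_wet); the two variables are correlated.
data = [(True, True), (True, True), (True, True), (True, True), (True, False),
        (False, False), (False, False), (False, False), (False, False), (False, True)]
n = len(data)
rain_counts = Counter(r for r, _ in data)

def log_likelihood(with_edge):
    """Log-likelihood of the data under 'Rain -> GrassWet' or the edgeless graph."""
    ll = 0.0
    for r, w in data:
        ll += math.log(rain_counts[r] / n)             # log P(Rain = r)
        if with_edge:                                   # log P(GrassWet = w | Rain = r)
            num = sum(1 for r2, w2 in data if r2 == r and w2 == w)
            ll += math.log(num / rain_counts[r])
        else:                                           # log P(GrassWet = w), no parent
            num = sum(1 for _, w2 in data if w2 == w)
            ll += math.log(num / n)
    return ll

def bic(with_edge):
    k = 1 + (2 if with_edge else 1)   # free parameters: P(Rain) plus the GrassWet CPT
    return log_likelihood(with_edge) - 0.5 * k * math.log(n)

print("Rain -> GrassWet :", bic(True))    # the higher score wins; the edge is favoured here
print("no edge          :", bic(False))
```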

 
