Glossary

Artificial intelligence

Artificial Intelligence (AI) involves machines that can perform tasks that are characteristic of human intelligence, such as programs with the ability to learn and reason like humans. Two types of artificial intelligence are commonly distinguished: Artificial General Intelligence, in which a computer is capable of doing anything a human can, and Artificial Narrow Intelligence, in which a computer does what a human can do, but only within narrow bounds.

Source: Wikipedia

Click here for more information

Artificial neural network

Artificial Neural Networks (ANNs) are algorithms that mimic the biological structure of the brain. A neural network is a parallel-distributed processor made up of simple processing units that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the human brain in two respects:

  • Knowledge is acquired by the network from its environment through a learning process;

  • Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.

There are three types of layers of neurons in a neural network: The Input Layer, the Hidden Layer(s), and the Output Layer.

Source: Wikipedia

Click here for more information

Backpropagation

Backpropagation is a method used in artificial neural networks to calculate the gradient that is needed to determine the weights to be used in the network. In other words, it computes how much each node in each layer contributes to the total error, and how the total error would change if the weights were changed. Given these derivatives, an optimisation method can be used to improve the weights layer by layer.
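
To make this concrete, below is a minimal sketch (in Python) of the chain rule behind backpropagation for a single sigmoid neuron with one weight and a squared-error loss; the input, target and weight values are assumed purely for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 1.5, 0.0   # one input and its desired output (assumed values)
w = 0.8                # current weight (assumed value)

# Forward pass
z = w * x
y = sigmoid(z)
loss = 0.5 * (y - target) ** 2

# Backward pass: chain rule dL/dw = dL/dy * dy/dz * dz/dw
dL_dy = y - target
dy_dz = y * (1.0 - y)       # derivative of the sigmoid
dz_dw = x
grad = dL_dy * dy_dz * dz_dw

# Numerical check of the analytic gradient
eps = 1e-6
loss_eps = 0.5 * (sigmoid((w + eps) * x) - target) ** 2
print(grad, (loss_eps - loss) / eps)  # the two values should be close
```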

Source: Rojas, R. (1996).

Click here for more information

Bayesian network

A Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a Directed Acyclic Graph (DAG). Bayesian networks aim to model conditional dependence, and therefore causation, by representing conditional dependencies as edges in a directed graph. Through these relationships, one can efficiently conduct inference on the random variables in the graph through the use of factors. Bayesian networks are ideal for taking an event that has occurred and predicting the likelihood that any one of several possible known causes was the contributing factor.
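
As a minimal illustration, the sketch below applies Bayes' rule to a two-node network (Cause -> Event) to compute how likely the cause is once the event has been observed; all probabilities are assumed values.

```python
# Two-node Bayesian network: Cause -> Event (probabilities assumed for illustration)
p_cause = 0.01                 # prior P(Cause)
p_event_given_cause = 0.9      # P(Event | Cause)
p_event_given_no_cause = 0.05  # P(Event | not Cause)

# Marginal probability of observing the event
p_event = (p_event_given_cause * p_cause
           + p_event_given_no_cause * (1 - p_cause))

# Posterior: how likely is the cause, given that the event occurred?
p_cause_given_event = p_event_given_cause * p_cause / p_event
print(p_cause_given_event)  # ~0.154
```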


Source: Ben‐Gal, I. (2008).

Click here for more information

Bias node

The bias node in a neural network is a node that is always 'on'. That is, its value is set to 1 without regard for the data in a given pattern. It is analogous to the intercept in a regression model and serves the same function. If a neural network does not have a bias node in a given layer, it will not be able to produce output in the next layer that differs from 0 (on the linear scale, or the value that corresponds to the transformation of 0 when passed through the activation function) when the feature values are 0.
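
A minimal sketch of this point: with every feature value set to 0, a neuron without a bias can only output the activation of 0, whereas the bias node shifts this value. The weights and bias below are assumed for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

inputs = [0.0, 0.0]      # all feature values are 0
weights = [0.4, -0.7]    # assumed weights
bias = 1.3               # weight on the always-on bias node (value 1)

without_bias = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
with_bias = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias * 1.0)

print(without_bias)  # always sigmoid(0) = 0.5, whatever the weights are
print(with_bias)     # can differ from 0.5; here sigmoid(1.3) ≈ 0.79
```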

Click here for more information

Bias variance trade-off

The Bias-Variance trade-off is the property of a set of predictive models whereby models with a lower bias in parameter estimation have a higher variance of the parameter estimates across samples, and vice versa. The Bias-Variance trade-off is the conflict in trying to simultaneously minimize these two sources of error that prevent supervised learning algorithms from generalizing beyond their training set.

 

Source: Domingos, P. (2000).

Click here for more information

Black box

A black box is a device, system or object which can be viewed in terms of its inputs and outputs without any knowledge of its internal workings. (Deep) artificial neural networks are often referred to as ‘black boxes’.


Source: Wikipedia

Click here for more information

Classification

Classification is a technique for determining which class the dependent variable belongs to based on one or more independent variables. Classification is used for predicting discrete responses.

 

Source: Wikipedia

Click here for more information

Clustering

Clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). Cluster analysis itself is not one specific algorithm, but the general task to be solved. It can be achieved by various algorithms that differ significantly in their understanding of what constitutes a cluster and how to efficiently find them.

Source: Wikipedia

Click here for more information

Confusion matrix

A confusion matrix, also known as an error matrix, is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one (in unsupervised learning it is usually called a matching matrix). Each row of the matrix represents the instances in a predicted class, while each column represents the instances in an actual class. The name stems from the fact that it makes it easy to see whether the system is confusing two classes (i.e. commonly mislabelling one as another).
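
A minimal sketch of building a 2×2 confusion matrix for a binary classifier; the predicted and actual labels are assumed for illustration.

```python
from collections import Counter

actual    = [1, 0, 1, 1, 0, 0, 1, 0]   # assumed true labels
predicted = [1, 0, 0, 1, 0, 1, 1, 0]   # assumed classifier output

counts = Counter(zip(predicted, actual))

# Rows: predicted class, columns: actual class (as in the definition above)
print("            actual=1  actual=0")
print(f"predicted=1    {counts[(1, 1)]}         {counts[(1, 0)]}")  # TP, FP
print(f"predicted=0    {counts[(0, 1)]}         {counts[(0, 0)]}")  # FN, TN
```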

 

Source: Wikipedia

Click here for more information

Cost function

A loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a cost function.
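
As a simple illustration, the sketch below computes one common cost function, the mean squared error, for a handful of assumed predictions and targets.

```python
# Mean squared error: values assumed for illustration
predictions = [2.5, 0.0, 2.1, 7.8]
targets     = [3.0, -0.5, 2.0, 7.0]

mse = sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
print(mse)  # 0.2875; optimisation aims to make this number as small as possible
```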


Source: Wikipedia

Click here for more information

Cross-entropy

The cross-entropy between two probability distributions p and q over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set if a coding scheme used for the set is optimized for an estimated probability distribution q, rather than the true distribution p. In classification problems maximizing the likelihood is the same as minimizing the cross-entropy.
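
A minimal sketch of the quantity being described, for two assumed distributions p and q over three events; it also shows that the cross-entropy is bounded below by the entropy of p.

```python
import math

p = [0.7, 0.2, 0.1]   # true distribution (assumed)
q = [0.5, 0.3, 0.2]   # estimated distribution (assumed)

cross_entropy = -sum(pi * math.log2(qi) for pi, qi in zip(p, q))
entropy       = -sum(pi * math.log2(pi) for pi in p)

print(cross_entropy)  # average number of bits when coding with q
print(entropy)        # cross-entropy >= entropy; equal only when q matches p
```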


Source: Wikipedia

Click here for more information

Database normalization

Database normalization is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. Normalization entails organising the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of synthesis (creating a new database design) or decomposition (improving an existing database design).


Source: Wikipedia

Click here for more information

Decision tree learning

Decision tree learning uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). In decision tree learning, a new example is classified by submitting it to a series of tests that determine the class label of the example. These tests are organised in a hierarchical structure called a decision tree. Constructing a decision tree is all about finding the attribute that returns the highest information gain.


Source: Wikipedia

Click here for more information

Deep learning

Deep Learning is a subset of machine learning in which artificial neural networks adapt and learn from vast amounts of data. Deep Learning uses a Neural Network to imitate intelligence. The “Deep” in Deep Learning refers to having more than one hidden layer in the Neural Network.


Source: Wikipedia

Click here for more information

Deep neural network

A deep neural network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers. The DNN finds the correct mathematical manipulation to turn the input into the output. The network moves through the layers calculating the probability of each output. Each mathematical manipulation is considered a layer, and a complex DNN has many layers, hence the name "deep" networks.


Source: Schmidhuber, J. (2015).

Click here for more information

Entropy

Entropy is a measure of the unpredictability of a state or, equivalently, of its average information content.


Source: Wikipedia

Click here for more information

Feature

A feature is an individual measurable property or characteristic of a phenomenon being observed.


Source: Wikipedia

Click here for more information

Feed Forward Multi-Layer Perceptron (MLP)

In a feedforward network, the information moves in only one direction – forward – from the input nodes, through the hidden nodes and to the output nodes. There are no cycles or loops in the network.
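
A minimal sketch of one forward pass through such a network (two inputs, two hidden nodes, one output); all weights, biases and input values are assumed for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x = [0.5, -1.0]                        # input layer values (assumed)
w_hidden = [[0.1, 0.4], [-0.3, 0.8]]   # weights into the two hidden nodes (assumed)
b_hidden = [0.0, 0.1]
w_output = [0.7, -0.2]                 # weights into the single output node (assumed)
b_output = 0.05

# Information flows forward only: input -> hidden -> output, no loops.
hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
          for ws, b in zip(w_hidden, b_hidden)]
output = sigmoid(sum(w * h for w, h in zip(w_output, hidden)) + b_output)
print(output)
```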


Source: Svozil, D., Kvasnicka, V., & Pospichal, J. (1997). 

Click here for more information

Generalisation error

In supervised learning applications generalization error (also known as the out-of-sample performance) is a measure of how accurately an algorithm is able to predict outcome values for previously unseen data. Because learning algorithms are evaluated on finite samples, the evaluation of a learning algorithm may be sensitive to sampling error. As a result, measurements of prediction error on the current data may not provide much information about predictive ability on new data. Generalization error can be minimized by avoiding overfitting in the learning algorithm. The goal is to find an input-output relation that does not overfit, yet is sufficiently complex to capture the particular characteristics of the data.

Source: Wikipedia

Click here for more information

Gradient descent

Gradient descent is an optimization algorithm used to minimize some function by iteratively moving in the direction of steepest descent as defined by the negative of the gradient. In machine learning, gradient descent is typically used to find the optimal model parameters.
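
A minimal sketch of gradient descent minimising the one-dimensional cost f(w) = (w - 3)^2, whose gradient is 2(w - 3); the starting point and learning rate are assumed for illustration.

```python
w = 0.0             # initial parameter value (assumed)
learning_rate = 0.1

for step in range(50):
    gradient = 2 * (w - 3)            # gradient of the cost at the current w
    w = w - learning_rate * gradient  # step in the direction of steepest descent

print(w)  # converges towards the minimiser w = 3
```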


Source: Wikipedia

Click here for more information

Hidden layer

A hidden layer in an artificial neural network is a layer between the input layer and the output layer, where artificial neurons take in a set of weighted inputs and produce an output through an activation function. The hidden layer consists of a finite number of hidden nodes. If there is one hidden layer, we speak of a ‘shallow neural network’. If there are 2 or more hidden layers, we speak of a deep ANN. The artificial neuron in the hidden layer works like a biological neuron in the brain – it takes in its probabilistic input signals, works on them and converts them into an output, corresponding to the biological neuron’s axon.


Source: Wikipedia

Click here for more information

Hyperparameter

A hyperparameter is a parameter whose value is set before the learning process begins. By contrast, the values of other parameters are derived via training.


Source: Wikipedia

Click here for more information

Imbalanced data

In the case of imbalanced data, majority classes dominate over minority classes, causing machine learning classifiers to become biased towards the majority classes. This results in poor classification of the minority classes.


Source: Chawla, N. V. (2005).

Click here for more information

Information gain

The information gain (IG) measures how much “information” a feature gives us about the class. The information gain is based on the decrease in entropy after a dataset is split on an attribute in a decision tree.
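
A minimal sketch of this calculation for a binary class label: the information gain is the entropy of the parent node minus the weighted entropy of the child nodes after an assumed split.

```python
import math

def entropy(labels):
    n = len(labels)
    result = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        result -= p * math.log2(p)
    return result

parent = [1, 1, 1, 0, 0, 0, 1, 0]   # class labels before the split (assumed)
left   = [1, 1, 1, 1]               # labels ending up in one branch
right  = [0, 0, 0, 0]               # labels ending up in the other branch

n = len(parent)
weighted_child_entropy = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
information_gain = entropy(parent) - weighted_child_entropy
print(information_gain)  # 1.0 here: this split separates the classes perfectly
```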


Source: Quinlan, J.R. (2007).

Click here for more information

Input layer

The input layer of a neural network is composed of input neurons and brings the initial data into the network for further processing by the subsequent layers of artificial neurons. The input layer consists of the independent (explanatory) variables and is the very beginning of the workflow of the artificial neural network.


Source: Wikipedia

Click here for more information

Inverse reinforcement learning

In traditional Reinforcement Learning (RL) the goal is to learn a decision process (policy) to produce behaviour that maximizes some predefined reward function. This function – which needs to be predefined by the analyst – drives the behaviour of the agent as it assigns rewards to the actions it takes. However, in many real-life situations defining the reward function is troublesome, or even impossible. Motivated by this fact, Inverse Reinforcement Learning (IRL) has been proposed. Unlike RL, IRL is concerned with recovering the reward function that explains the observed behaviour.

Source: Ng, A. Y. & Russell, S. J. (2000).

Click here for more information

k-fold cross-validation method

Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter called k that refers to the number of groups that a given data sample is to be split into. As such, the procedure is often called k-fold cross-validation. For example, setting k = 2 results in 2-fold cross-validation. In 2-fold cross-validation, we randomly shuffle the dataset into two sets d1 and d2, so that both sets are of equal size (this is usually implemented by shuffling the data array and then splitting it in two). We then train on d1 and validate on d2, followed by training on d2 and validating on d1.
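
A minimal sketch of the k-fold procedure with k = 5 on a toy dataset of 20 examples; train_model and evaluate_model are hypothetical stand-ins for whatever model is being assessed.

```python
import random

data = list(range(20))   # toy dataset (indices stand in for examples)
k = 5

random.shuffle(data)
folds = [data[i::k] for i in range(k)]   # k groups of (roughly) equal size

for i in range(k):
    validation = folds[i]                                     # held-out fold
    training = [x for j in range(k) if j != i for x in folds[j]]
    # model = train_model(training)                           # hypothetical step
    # score = evaluate_model(model, validation)               # hypothetical step
    print(f"fold {i}: train on {len(training)} examples, validate on {len(validation)}")
```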


Source: Christensen, R. (2015).

Click here for more information

Learning rate

The learning rate or step size is a hyperparameter which determines to what extent newly acquired information overrides old information. With a high learning rate we can cover more ground each step, but we risk overshooting the lowest point since the slope of the hill is constantly changing. With a very low learning rate, we can confidently move in the direction of the negative gradient since we are recalculating it so frequently. A low learning rate is more precise, but calculating the gradient is time-consuming, so it will take us a very long time to get to the bottom.


Source: Zulkifli, H. (2018).

Click here for more information

Machine Learning

Machine learning algorithms have the ability to learn without being explicitly programmed where to “look”. Machine learning is a way of “training” an algorithm so that it can learn. “Training” involves feeding it data and allowing the algorithm to adjust itself and improve.


Source: Wikipedia

Click here for more information

Network architecture

Network architecture is the specific arrangement of the layers and nodes in a network; networks come in many different architectures.


Source: Tch, A (2017).

Click here for more information

Node

A node, also called a neuron, is a computational unit that has one or more weighted input connections, a transfer function that combines the inputs in some way, and an output connection. Nodes are organised into layers to comprise a network.


Source: Wikipedia

Click here for more information

Output layer

The output layer in an artificial neural network is the last layer of neurons that produces given outputs for the program. The output layer consists of the dependent variable. Though they are made much like other artificial neurons in the neural network, output layer neurons may be built or observed in a different way, given that they are the last “actor” nodes on the network.


Source: Wikipedia

Click here for more information

Perceptron

The perceptron is an algorithm for supervised learning of binary classifiers. The perceptron consists of an activation function, a summation processor and weights.
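
A minimal sketch of a perceptron's forward computation: a weighted sum of the inputs plus a bias, passed through a step activation that yields a binary class label. The weights, bias and example inputs are assumed for illustration.

```python
def perceptron(inputs, weights, bias):
    summation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if summation > 0 else 0    # step activation -> binary output

weights = [0.6, -0.4]   # assumed weights
bias = -0.1             # assumed bias

print(perceptron([1.0, 0.0], weights, bias))  # 1
print(perceptron([0.0, 1.0], weights, bias))  # 0
```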


Source: Wikipedia

Click here for more information

Regularisation

Regularisation is a technique to avoid overfitting in machine learning methods, typically by adding a penalty for model complexity to the cost function.


Source: Wikipedia

Click here for more information

Reinforcement Learning

Reinforcement Learning is a branch of machine learning in which the goal is to learn a decision process (policy) to produce behaviour that maximizes some predefined reward function.


Source: Wikipedia

Click here for more information

Shallow artificial neural network

A shallow artificial neural network is an artificial neural network with only one or two hidden layers (in contrast to a deep neural network (DNN), which usually works with up to 30 hidden layers).


Source: Wikipedia

Click here for more information

Supervised Learning

Supervised Learning is a type of machine learning in which labelled data are used to train the algorithms. In supervised learning, algorithms are trained using labelled data, where both the input and the output are known.


Source: Wikipedia

Click here for more information

Test data

Test data is the sample of data used to provide an unbiased evaluation of the final model's performance. It is only used once a model is completely trained (using the training and validation sets). A test dataset is independent of the training dataset, but follows the same probability distribution as the training dataset. If a model that fits the training dataset also fits the test dataset well, minimal overfitting has taken place. A model that fits the training dataset better than the test dataset usually indicates overfitting.


Source: Wikipedia

Click here for more information

Training

Training a model involves finding the weights that minimise the cost function.


Source: Wikipedia

Click here for more information

Training data

Training data is the sample of data used to fit the model. The training data often consist of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), which is commonly denoted as the target (or label).

Source: Wikipedia

Click here for more information

Transfer function (a.k.a. activation function)

The transfer function, or activation function, is the function that maps the input of a node, or "neuron", onto its output.
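
A minimal sketch of two commonly used transfer (activation) functions, the sigmoid and the ReLU, applied to an assumed weighted input.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))   # squashes any input into (0, 1)

def relu(z):
    return max(0.0, z)                  # passes positive inputs, zeroes the rest

weighted_input = -0.8                   # assumed summed input to the node
print(sigmoid(weighted_input))          # ≈ 0.31
print(relu(weighted_input))             # 0.0
```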


Source: Wikipedia

Click here for more information

Universal approximation theorem

The universal approximation theorem states that a feed-forward neural network with a single hidden layer containing a finite number of neurons can approximate continuous functions, under mild assumptions on the activation function. The theorem thus states that simple neural networks can represent a wide variety of interesting functions when given appropriate parameters. However, it does not touch upon the algorithmic learnability of those parameters.


Source: Wikipedia

Click here for more information

Unsupervised learning

Unsupervised Learning is a type of machine learning, in which unlabelled data are used. The algorithm makes clusters of data.  It is learning underlying relationships or structure in data where no specific ‘answer’ or output is expected.


Source: Wikipedia

Click here for more information

Validation data

Validation data is the sample of data used to provide an unbiased evaluation of a model fit on the training dataset while tuning model hyperparameters (i.e. the architecture).


Source: Wikipedia

Click here for more information

Weights

Weights are the estimable parameters of a model, for example the connection strengths between the nodes of an artificial neural network.


Source: Wikipedia

Click here for more information