A Practical Approach to Stochastic Gradient Descent (SGD)

Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g., differentiable or subdifferentiable). It is called stochastic because the method evaluates the gradient on randomly selected (or shuffled) samples rather than on the full data set, so SGD can be regarded as a stochastic approximation of gradient descent optimization.

Background

Both statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum over the training samples.
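To make the "stochastic approximation" claim precise, the objective is usually written as an empirical average of per-sample losses (this is the standard formulation from the SGD literature, stated here for context rather than quoted from the post):

$$Q(w) = \frac{1}{n} \sum_{i=1}^{n} Q_i(w),$$

where $w$ is the parameter vector to be estimated and $Q_i$ is the loss on the $i$-th training example. Full-batch gradient descent updates $w \leftarrow w - \eta\, \nabla Q(w)$ with learning rate $\eta$; SGD instead uses the gradient of a single randomly chosen summand,

$$w \leftarrow w - \eta\, \nabla Q_i(w),$$

which is an unbiased estimate of the full gradient and is far cheaper to compute per step.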
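The update rule translates directly into a short loop. Below is a minimal sketch in Python/NumPy, assuming a least-squares objective on synthetic data; the names (X, y, grad_i, lr) and the hyperparameter values are illustrative choices, not taken from the original post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression data: y = X @ w_true + noise.
X = rng.normal(size=(200, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)

def grad_i(w, i):
    """Gradient of the squared error on the i-th example only."""
    return 2.0 * (X[i] @ w - y[i]) * X[i]

w = np.zeros(3)   # initial parameter estimate
lr = 0.01         # learning rate (eta)
for epoch in range(20):
    for i in rng.permutation(len(X)):   # shuffle, then one update per sample
        w -= lr * grad_i(w, i)          # stochastic step: single-sample gradient

print(w)  # should be close to w_true
```

Shuffling the indices each epoch matches the "randomly selected (or shuffled) samples" description above; replacing the single index i with a small batch of indices in the inner loop gives mini-batch SGD.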