  1. Backpropagation - Wikipedia

    Backpropagation refers only to the method for computing the gradient, while other algorithms, such as stochastic gradient descent, are used to perform learning using this gradient.

  2. 反向传播(Backpropagation ) - 知乎

    Backpropagation (the BP algorithm) is mainly used in neural networks (deep learning). In most cases, computing the derivatives of the loss function with respect to intermediate-layer parameters of a neural network is very difficult, but the BP algorithm solves this problem well. The BP algorithm …

  3. 一文彻底搞懂深度学习 - 反向传播(Back Propagation)-CSDN博客

    The Back Propagation (BP) algorithm is one of the most central and widely used optimization algorithms in deep learning, applied extensively in training neural networks. It updates the parameters by computing the gradient of the loss function with respect to the network parameters, thereby minimizing the loss …

  4. 人工智能之深度学习基础——反向传播(Backpropagation

    Nov 22, 2024 · Backpropagation is one of the core algorithms of neural networks, used to adjust the network parameters by propagating errors backward, thereby minimizing the loss function. It is an efficient gradient-computation method based on the chain rule, and it is the … of training neural …

  5. Backpropagation in Neural Network - GeeksforGeeks

    Oct 6, 2025 · Backpropagation, short for Backward Propagation of Errors, is a key algorithm used to train neural networks by minimizing the difference between predicted and actual outputs.

  6. 14 Backpropagation – Foundations of Computer Vision

    This is the whole trick of backpropagation: rather than computing each layer’s gradients independently, observe that they share many of the same terms, so we might as well calculate each shared term …

  7. Backpropagation Step by Step - datamapu.com

    Mar 31, 2024 · In this post, we discuss how backpropagation works, and explain it in detail for three simple examples. The first two examples will contain all the calculations, for the last one we will only …

  8. 反向传播算法 - 维基百科,自由的百科全书 (Backpropagation algorithm - Wikipedia, the free encyclopedia)

    A Gentle Introduction to Backpropagation - An intuitive tutorial by Shashi Sathyanarayana The article contains pseudocode ("Training Wheels for Training Neural Networks") for implementing the algorithm.

  9. What is backpropagation? - IBM

    Backpropagation is a machine learning technique essential to the optimization of artificial neural networks. It facilitates the use of gradient descent algorithms to update network weights, which is …

  10. Backpropagation | Brilliant Math & Science Wiki

    Backpropagation, short for "backward propagation of errors," is an algorithm for supervised learning of artificial neural networks using gradient descent. Given an artificial neural network and an error …
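Several of the results above describe the same mechanism: compute the gradient of the loss with respect to each weight via the chain rule, reusing terms shared between layers, then update the weights by gradient descent. As a minimal illustrative sketch (not taken from any of the listed pages; the toy network, names, and values are all assumptions), here is backpropagation on a 1-input, 1-hidden-unit, 1-output network with sigmoid activations and squared-error loss:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(x, y, w1=0.5, w2=0.5, lr=0.5, steps=2000):
    """Fit a tiny 2-weight network to a single (x, y) pair by backpropagation."""
    for _ in range(steps):
        # Forward pass: cache intermediate activations for reuse in the backward pass.
        h = sigmoid(w1 * x)        # hidden activation
        out = sigmoid(w2 * h)      # network output
        # Backward pass: chain rule, sharing the common term d_out between both gradients.
        err = out - y                        # dL/d(out) for L = 0.5 * (out - y)^2
        d_out = err * out * (1.0 - out)      # dL/d(output pre-activation)
        grad_w2 = d_out * h                  # dL/d(w2)
        grad_w1 = d_out * w2 * h * (1.0 - h) * x  # dL/d(w1), extending the chain one layer back
        # Gradient-descent update.
        w1 -= lr * grad_w1
        w2 -= lr * grad_w2
    return w1, w2, sigmoid(w2 * sigmoid(w1 * x))

w1, w2, pred = train(x=1.0, y=0.25)
```

The point the snippets make about "shared terms" shows up here as `d_out`: both weight gradients reuse it, and in a deeper network each layer's gradient reuses the accumulated product from the layers above it.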