Data Mining - Perceptron

  1. Perceptron - Gradient Descent

[Tsinghua University - Data Mining - 2.6 Perceptron](https://www.bilibili.com/video/BV1ZJ411b7Te?p=26); Tsinghua University - Data Mining - 2.7 Multilayer Perceptron

Perceptron - Gradient Descent

==Batch Learning==

The training error is defined over the whole training set $D$:
\[ E(\vec{w}) \equiv \frac{1}{2} \sum_{d \in D}\left(t_{d}-o_{d}\right)^{2} \]
For example, in the linear case the output is $o(\vec{x})=\vec{w} \cdot \vec{x}$.

Here $\eta$ (eta) is the learning rate; the negative sign makes each step move $w_i$ in the direction that decreases the error $E$:
\[ w_{i} \leftarrow w_{i}+\Delta w_{i} \quad \text{where} \quad \Delta w_{i}=-\eta \frac{\partial E}{\partial w_{i}} \]
\[ \nabla E(\vec{w}) \equiv\left[\frac{\partial E}{\partial w_{0}}, \frac{\partial E}{\partial w_{1}}, \ldots, \frac{\partial E}{\partial w_{n}}\right] \]

==Gradient== (Gradient) \[ \begin{aligned} \Delta w_{i}=-\eta \frac{\partial E}{\partial w_{i}} &=-\eta \frac{\partial}{\partial w_{i}} \frac{1}{2} \sum_{d \in D}\left(t_{d}-o_{d}\right)^{2} \\ &=-\eta \sum_{d \in D}\left(t_{d}-o_{d}\right) \frac{\partial}{\partial w_{i}}\left(t_{d}-\vec{w} \cdot \vec{x}_{d}\right) \\ &=-\eta \sum_{d \in D}\left(t_{d}-o_{d}\right)\left(-x_{id}\right) \\ &=\eta \sum_{d \in D}\left(t_{d}-o_{d}\right) x_{id} \end{aligned} \]
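The final line of the derivation, $\Delta w_{i}=\eta \sum_{d \in D}(t_{d}-o_{d})\,x_{id}$, can be sketched directly in NumPy. This is a minimal illustration of batch gradient descent for a linear unit; the toy data, learning rate, and epoch count are illustrative assumptions, not values from the lecture.

```python
import numpy as np

def batch_gd(X, t, eta=0.05, epochs=500):
    """Batch gradient descent for a linear unit o(x) = w·x.

    Each epoch accumulates the gradient over the whole set D
    before updating: Δw = η Σ_d (t_d − o_d) x_d.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        o = X @ w                  # outputs for every training example
        w += eta * (t - o) @ X     # Σ_d (t_d − o_d) x_d, scaled by η
    return w

# Toy example (assumed data): targets generated by the true weights [2, -1]
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
t = X @ np.array([2.0, -1.0])
w = batch_gd(X, t)
```

With these settings the weights converge to the true `[2, -1]`, since the targets are exactly linear in the inputs.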

==Stochastic Gradient Descent== (Stochastic Learning) \[ w_{i} \leftarrow w_{i}+\Delta w_{i} \quad \text { where } \quad \Delta w_{i}=\eta(t-o) x_{i} \]

==Multilayer Perceptron== (Multilayer Perceptron)

==Sigmoid Activation Function== (Sigmoid Threshold Unit)
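The sigmoid unit replaces the perceptron's hard threshold with the smooth function $\sigma(y)=\dfrac{1}{1+e^{-y}}$, whose derivative has the convenient form $\sigma'(y)=\sigma(y)\,(1-\sigma(y))$ used in backpropagation. A small sketch:

```python
import numpy as np

def sigmoid(y):
    """σ(y) = 1 / (1 + e^(−y)): smooth, differentiable threshold."""
    return 1.0 / (1.0 + np.exp(-y))

def sigmoid_grad(y):
    """σ'(y) = σ(y)(1 − σ(y)); maximal (0.25) at y = 0."""
    s = sigmoid(y)
    return s * (1.0 - s)
```

Unlike the hard step, $\sigma$ is differentiable everywhere, which is what makes the gradient-based training above applicable to multilayer networks.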


Please credit the source when reposting: https://tianweiye.github.io