- fluid.regularizer
- L1Decay
- L1DecayRegularizer
- L2Decay
- L2DecayRegularizer
fluid.regularizer
L1Decay
paddle.fluid.regularizer.L1Decay is an alias of L1DecayRegularizer.
L1DecayRegularizer
- class paddle.fluid.regularizer.L1DecayRegularizer(regularization_coeff=0.0)

Implements L1 weight decay regularization. L1 regularization encourages sparsity in the weight matrix.
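As a minimal sketch, assuming the conventional formulation of L1 weight decay (with \(\lambda\) standing for regularization_coeff), the regularizer adds an absolute-value penalty to the loss, whose gradient is the sign of each weight:

$$\mathrm{loss}_{\text{reg}} = \mathrm{loss} + \lambda \sum_i |w_i|, \qquad \frac{\partial}{\partial w_i}\Big(\lambda \sum_j |w_j|\Big) = \lambda\,\mathrm{sign}(w_i)$$

Because the gradient pushes every weight toward zero by a constant \(\lambda\) regardless of its magnitude, small weights are driven exactly to zero, which is why L1 sparsifies the weight matrix.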
- Parameters:
  - regularization_coeff (float) – regularization coefficient

Code example:
```python
import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    data = fluid.layers.data(name='image', shape=[3, 28, 28], dtype='float32')
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')
    hidden = fluid.layers.fc(input=data, size=128, act='relu')
    prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
    loss = fluid.layers.cross_entropy(input=prediction, label=label)
    avg_loss = fluid.layers.mean(loss)

    optimizer = fluid.optimizer.Adagrad(
        learning_rate=1e-4,
        regularization=fluid.regularizer.L1DecayRegularizer(
            regularization_coeff=0.1))
    optimizer.minimize(avg_loss)
```
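The example above applies the regularizer globally through the optimizer. Fluid also accepts a regularizer on an individual parameter via fluid.ParamAttr; the sketch below assumes the commonly documented behavior that a regularizer set in ParamAttr takes precedence over the optimizer-level one for that parameter (verify against your Fluid version).

```python
import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    data = fluid.layers.data(name='image', shape=[3, 28, 28], dtype='float32')
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')
    # Attach an L1 regularizer to this layer's weight only, via ParamAttr.
    hidden = fluid.layers.fc(
        input=data,
        size=128,
        act='relu',
        param_attr=fluid.ParamAttr(
            regularizer=fluid.regularizer.L1DecayRegularizer(
                regularization_coeff=0.1)))
    prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
    loss = fluid.layers.cross_entropy(input=prediction, label=label)
    avg_loss = fluid.layers.mean(loss)
    # No optimizer-level regularization here: only the first fc weight is
    # regularized. If both are set, the ParamAttr regularizer is expected
    # to win for that parameter.
    optimizer = fluid.optimizer.SGD(learning_rate=1e-4)
    optimizer.minimize(avg_loss)
```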
L2Decay
paddle.fluid.regularizer.L2Decay is an alias of L2DecayRegularizer.
L2DecayRegularizer
- class paddle.fluid.regularizer.L2DecayRegularizer(regularization_coeff=0.0)

Implements L2 weight decay regularization. Small amounts of L2 regularization help prevent overfitting the training data.
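As a sketch, assuming the conventional formulation of L2 weight decay (the 1/2 factor is a common convention and may differ from the exact implementation; \(\lambda\) stands for regularization_coeff):

$$\mathrm{loss}_{\text{reg}} = \mathrm{loss} + \frac{\lambda}{2} \sum_i w_i^2, \qquad \frac{\partial}{\partial w_i}\Big(\frac{\lambda}{2} \sum_j w_j^2\Big) = \lambda\, w_i$$

Each weight's gradient thus gains a term proportional to the weight itself, shrinking large weights and discouraging overfitting.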
- Parameters:
  - regularization_coeff (float) – regularization coefficient

Code example:
```python
import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    data = fluid.layers.data(name='image', shape=[3, 28, 28], dtype='float32')
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')
    hidden = fluid.layers.fc(input=data, size=128, act='relu')
    prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
    loss = fluid.layers.cross_entropy(input=prediction, label=label)
    avg_loss = fluid.layers.mean(loss)

    optimizer = fluid.optimizer.Adagrad(
        learning_rate=1e-4,
        regularization=fluid.regularizer.L2DecayRegularizer(
            regularization_coeff=0.1))
    optimizer.minimize(avg_loss)
```