A PyTorch example of dynamic networks and weight sharing


PyTorch dynamic networks + weight sharing

PyTorch is known for its dynamic computation graphs. The example below uses that feature to build a dynamic network with shared weights:

# -*- coding: utf-8 -*-
import random
import torch


class DynamicNet(torch.nn.Module):
  def __init__(self, D_in, H, D_out):
    """
    In the constructor we instantiate the three Linear modules that will be
    used in the forward pass.
    """
    super(DynamicNet, self).__init__()
    self.input_linear = torch.nn.Linear(D_in, H)
    self.middle_linear = torch.nn.Linear(H, H)
    self.output_linear = torch.nn.Linear(H, D_out)

  def forward(self, x):
    """
    For the forward pass of the model, we randomly choose either 0, 1, 2, or 3
    and reuse the middle_linear Module that many times to compute hidden layer
    representations.

    Since each forward pass builds a dynamic computation graph, we can use normal
    Python control-flow operators like loops or conditional statements when
    defining the forward pass of the model.

    Here we also see that it is perfectly safe to reuse the same Module many
    times when defining a computational graph. This is a big improvement from Lua
    Torch, where each Module could be used only once.
    In other words, on each forward pass the middle layer is applied a random
    0-3 times, and every application reuses the same Linear module, so the
    repeated layers all share the same weights.
    """
    h_relu = self.input_linear(x).clamp(min=0)
    for _ in range(random.randint(0, 3)):
      h_relu = self.middle_linear(h_relu).clamp(min=0)
    y_pred = self.output_linear(h_relu)
    return y_pred


# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Construct our model by instantiating the class defined above
model = DynamicNet(D_in, H, D_out)

# Construct our loss function and an Optimizer. Training this strange model with
# vanilla stochastic gradient descent is tough, so we use momentum
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
for t in range(500):
  # Forward pass: Compute predicted y by passing x to the model
  y_pred = model(x)

  # Compute and print loss
  loss = criterion(y_pred, y)
  print(t, loss.item())

  # Zero gradients, perform a backward pass, and update the weights.
  optimizer.zero_grad()
  loss.backward()
  optimizer.step()

This program is effectively an RNN-like structure: the hidden layer is applied a variable number of times with shared weights, and the computation graph is built dynamically on each forward pass.
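
To make the weight sharing concrete, here is a minimal sketch (assuming the DynamicNet class and imports from the script above have already been run; the dimensions are the same illustrative values used there). It shows that the model only ever owns one set of middle-layer weights, no matter how many times middle_linear is applied inside forward():

# Quick check of weight sharing: DynamicNet registers exactly three Linear
# modules, so the parameter count is fixed even though forward() may stack
# the middle layer 0-3 times per call.
model = DynamicNet(1000, 100, 10)
num_params = sum(p.numel() for p in model.parameters())
print(num_params)                        # 111210 = (1000*100+100) + (100*100+100) + (100*10+10)
print(model.middle_linear.weight.shape)  # torch.Size([100, 100]) -- the single shared weight

# Each forward pass may build a graph of a different depth; when the middle
# layer is applied more than once, the gradients from every application
# accumulate into the same middle_linear.weight.grad (it stays None if the
# random depth happened to be 0 for this pass).
x = torch.randn(8, 1000)
model(x).sum().backward()
print(model.middle_linear.weight.grad is not None)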

Reference: the PyTorch documentation.

That concludes this example of dynamic networks and weight sharing in PyTorch; I hope it serves as a useful reference.
