A PyTorch example of dynamic networks and weight sharing


PyTorch dynamic networks + weight sharing

PyTorch is known for its dynamic computation graphs. The example below demonstrates a dynamic network together with weight sharing:

# -*- coding: utf-8 -*-
import random
import torch


class DynamicNet(torch.nn.Module):
  def __init__(self, D_in, H, D_out):
    """
    这里构造了几个向前传播过程中用到的线性函数
    """
    super(DynamicNet, self).__init__()
    self.input_linear = torch.nn.Linear(D_in, H)
    self.middle_linear = torch.nn.Linear(H, H)
    self.output_linear = torch.nn.Linear(H, D_out)

  def forward(self, x):
    """
    For the forward pass of the model, we randomly choose either 0, 1, 2, or 3
    and reuse the middle_linear Module that many times to compute hidden layer
    representations.

    Since each forward pass builds a dynamic computation graph, we can use normal
    Python control-flow operators like loops or conditional statements when
    defining the forward pass of the model.

    Here we also see that it is perfectly safe to reuse the same Module many
    times when defining a computational graph. This is a big improvement from Lua
    Torch, where each Module could be used only once.
    On each forward pass, between 0 and 3 hidden layers are applied, and every one of
    them is the same middle_linear module, so all of them share a single set of weights.
    """
    h_relu = self.input_linear(x).clamp(min=0)
    for _ in range(random.randint(0, 3)):
      h_relu = self.middle_linear(h_relu).clamp(min=0)
    y_pred = self.output_linear(h_relu)
    return y_pred


# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Construct our model by instantiating the class defined above
model = DynamicNet(D_in, H, D_out)

# Construct our loss function and an Optimizer. Training this strange model with
# vanilla stochastic gradient descent is tough, so we use momentum
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
for t in range(500):
  # Forward pass: Compute predicted y by passing x to the model
  y_pred = model(x)

  # Compute and print loss
  loss = criterion(y_pred, y)
  print(t, loss.item())

  # Zero gradients, perform a backward pass, and update the weights.
  optimizer.zero_grad()
  loss.backward()
  optimizer.step()

This program is effectively a kind of RNN: the same layer is reapplied a variable number of times, and the computation graph is built dynamically as the code executes.
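A quick way to confirm the weight sharing is a minimal sketch like the one below. It assumes the DynamicNet class and the N, D_in, H, D_out values defined above are still in scope; the printed values in the comments are the expected outcomes, not guaranteed output.

# Minimal sketch: the parameter count is fixed at three Linear layers,
# no matter how many times middle_linear is reused in a forward pass.
model = DynamicNet(D_in, H, D_out)
print(sum(p.numel() for p in model.parameters()))
# -> 111210, i.e. (1000*100+100) + (100*100+100) + (100*10+10)

# Every application of middle_linear is the same Module object, so the
# gradients from all of its uses accumulate into the same weight tensor.
out = model(torch.randn(N, D_in))
out.pow(2).sum().backward()
print(model.middle_linear.weight.grad is not None)
# -> True, unless this particular pass happened to draw 0 middle layers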

References: PyTorch documentation.

That is everything in this PyTorch dynamic network and weight sharing example. I hope it gives you a useful reference, and I hope you will continue to support 听图阁-专注于Python设计.
