weixin_47872887 2022-09-27 17:53 · Acceptance rate: 52.5%
264 views
Question closed

I defined a simple network with Paddle. How do I use paddle.summary to view the network structure?

import paddle

class LSTM(paddle.nn.Layer):
    def __init__(self, input_size=1, hidden_size=16):
        super().__init__()
        # 3-layer LSTM followed by a linear head applied to the last layer's hidden state
        self.rnn = paddle.nn.LSTM(input_size=input_size, hidden_size=hidden_size, num_layers=3)
        self.linear = paddle.nn.Linear(hidden_size, 1)

    def forward(self, inputs):
        # inputs: [batch_size, time_steps, input_size]
        y, (hidden, cell) = self.rnn(inputs)
        # hidden[-1] is the final hidden state of the last LSTM layer
        output = self.linear(hidden[-1])
        return output


3 answers

  • 脚踏南山 2022-09-30 09:12
    Received a ¥3.45 bounty for this answer
    import paddle
    
    class LSTM(paddle.nn.Layer):
        def __init__(self, input_size=1, hidden_size=16):
            super().__init__()
            self.rnn = paddle.nn.LSTM(input_size=input_size, hidden_size=hidden_size, num_layers=3)
            self.linear = paddle.nn.Linear(hidden_size, 1)
    
        def forward(self, inputs):
            y, (hidden, cell) = self.rnn(inputs)
            output = self.linear(hidden[-1])
            return output
    
    input_size = 12  # adjust as needed
    time_steps = 2   # adjust as needed
    model = LSTM(input_size)
    summary = paddle.summary(model, (None, time_steps, input_size))
    print(summary)
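
    As a side note, besides a shape tuple with a None batch dimension, paddle.summary can also be given a concrete example tensor via its input argument (assuming a Paddle 2.x release that provides this keyword); the sketch below reuses the model defined above, with illustrative shapes.

    # Sketch, assuming paddle.summary accepts the `input` keyword in this Paddle version:
    # pass a real example tensor instead of a shape tuple.
    example = paddle.randn([1, time_steps, input_size])  # [batch, time_steps, input_size]
    print(paddle.summary(model, input=example))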
    


  • Elwin Wong 2022-09-27 19:18
    Received a ¥1.05 bounty for this answer

    The first argument to summary is the network model; the second is the model's input shape, given as a tuple that includes the batch_size (using 1 is fine).

    import paddle
    import paddle.nn as nn
    
    class LeNet(nn.Layer):
        def __init__(self, num_classes=10):
            super(LeNet, self).__init__()
            self.num_classes = num_classes
            self.features = nn.Sequential(
                nn.Conv2D(1, 6, 3, stride=1, padding=1),
                nn.ReLU(),
                nn.MaxPool2D(2, 2),
                nn.Conv2D(6, 16, 5, stride=1, padding=0),
                nn.ReLU(),
                nn.MaxPool2D(2, 2))
    
            if num_classes > 0:
                self.fc = nn.Sequential(
                    nn.Linear(400, 120),
                    nn.Linear(120, 84),
                    nn.Linear(84, 10))
    
        def forward(self, inputs):
            x = self.features(inputs)
    
            if self.num_classes > 0:
                x = paddle.flatten(x, 1)
                x = self.fc(x)
            return x
    
    lenet = LeNet()
    
    params_info = paddle.summary(lenet, (1, 1, 28, 28))
    print(params_info)
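
    If a network takes more than one input, input_size can also be given as a list of shape tuples, one per forward argument. The TwoInput layer below is a hypothetical example added here only to illustrate that form, assuming the installed Paddle version supports multi-input summaries.

    import paddle
    import paddle.nn as nn

    # Hypothetical two-input layer, used only to show the list-of-tuples form of input_size.
    class TwoInput(nn.Layer):
        def __init__(self):
            super().__init__()
            self.fc_a = nn.Linear(8, 4)
            self.fc_b = nn.Linear(6, 4)

        def forward(self, a, b):
            # how the two branches are combined does not matter for the summary
            return self.fc_a(a) + self.fc_b(b)

    # one shape tuple (including batch_size) per input
    print(paddle.summary(TwoInput(), [(1, 8), (1, 6)]))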
    
  • 万里鹏程转瞬至 2022-09-28 16:23
    Received a ¥2.70 bounty for this answer

    For reference, the example from the official Paddle documentation:

    import paddle
    import paddle.nn as nn
    
    class LeNet(nn.Layer):
        def __init__(self, num_classes=10):
            super(LeNet, self).__init__()
            self.num_classes = num_classes
            self.features = nn.Sequential(
                nn.Conv2D(1, 6, 3, stride=1, padding=1),
                nn.ReLU(),
                nn.MaxPool2D(2, 2),
                nn.Conv2D(6, 16, 5, stride=1, padding=0),
                nn.ReLU(),
                nn.MaxPool2D(2, 2))
    
            if num_classes > 0:
                self.fc = nn.Sequential(
                    nn.Linear(400, 120),
                    nn.Linear(120, 84),
                    nn.Linear(84, 10))
    
        def forward(self, inputs):
            x = self.features(inputs)
    
            if self.num_classes > 0:
                x = paddle.flatten(x, 1)
                x = self.fc(x)
            return x
    
    lenet = LeNet()
    
    params_info = paddle.summary(lenet, (1, 1, 28, 28))
    print(params_info)
    # ---------------------------------------------------------------------------
    # Layer (type)       Input Shape          Output Shape         Param #
    # ===========================================================================
    #  Conv2D-11      [[1, 1, 28, 28]]      [1, 6, 28, 28]           60
    #   ReLU-11       [[1, 6, 28, 28]]      [1, 6, 28, 28]            0
    # MaxPool2D-11    [[1, 6, 28, 28]]      [1, 6, 14, 14]            0
    #  Conv2D-12      [[1, 6, 14, 14]]     [1, 16, 10, 10]         2,416
    #   ReLU-12      [[1, 16, 10, 10]]     [1, 16, 10, 10]            0
    # MaxPool2D-12   [[1, 16, 10, 10]]      [1, 16, 5, 5]             0
    #  Linear-16         [[1, 400]]            [1, 120]           48,120
    #  Linear-17         [[1, 120]]            [1, 84]            10,164
    #  Linear-18         [[1, 84]]             [1, 10]               850
    # ===========================================================================
    # Total params: 61,610
    # Trainable params: 61,610
    # Non-trainable params: 0
    # ---------------------------------------------------------------------------
    # Input size (MB): 0.00
    # Forward/backward pass size (MB): 0.11
    # Params size (MB): 0.24
    # Estimated Total Size (MB): 0.35
    # ---------------------------------------------------------------------------
    # {'total_params': 61610, 'trainable_params': 61610}
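
    As the last line of the report shows, paddle.summary also returns a dict with the parameter counts, so they can be reused programmatically; for example:

    # the returned dict can be used in code; values match the printed report above
    print(params_info['total_params'])      # 61610
    print(params_info['trainable_params'])  # 61610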
    
    


Question events

  • Question closed by the system on October 5
  • Question created on September 27
