Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share

python - Pytorch ValueError: optimizer got an empty parameter list

When trying to create a neural network and optimize it using Pytorch, I am getting

ValueError: optimizer got an empty parameter list

Here is the code.

import torch.nn as nn
import torch.nn.functional as F
from os.path import dirname
from os import getcwd
from os.path import realpath
from sys import argv

class NetActor(nn.Module):

    def __init__(self, args, state_vector_size, action_vector_size, hidden_layer_size_list):
        super(NetActor, self).__init__()
        self.args = args

        self.state_vector_size = state_vector_size
        self.action_vector_size = action_vector_size
        self.layer_sizes = hidden_layer_size_list
        self.layer_sizes.append(action_vector_size)

        self.nn_layers = []
        self._create_net()

    def _create_net(self):
        prev_layer_size = self.state_vector_size
        for next_layer_size in self.layer_sizes:
            next_layer = nn.Linear(prev_layer_size, next_layer_size)
            prev_layer_size = next_layer_size
            self.nn_layers.append(next_layer)

    def forward(self, torch_state):
        activations = torch_state
        for i,layer in enumerate(self.nn_layers):
            if i != len(self.nn_layers)-1:
                activations = F.relu(layer(activations))
            else:
                activations = layer(activations)

        probs = F.softmax(activations, dim=-1)
        return probs

and then the call

        self.actor_nn = NetActor(self.args, 4, 2, [128])
        self.actor_optimizer = optim.Adam(self.actor_nn.parameters(), lr=args.learning_rate)

gives the very informative error

ValueError: optimizer got an empty parameter list

I find it hard to understand what exactly in the network's definition makes the network have parameters.

I am following and expanding the example I found in Pytorch's tutorial code.

I can't really tell the difference between my code and theirs that makes mine think it has no parameters to optimize.

How to make my network have parameters like the linked example?

1 Answer

Your NetActor does not directly store any nn.Parameter. Moreover, all the layers it eventually uses in forward are stored in a plain Python list, self.nn_layers. Modules kept in a plain list are not registered as submodules, so self.actor_nn.parameters() never sees their weights.
If you want self.actor_nn.parameters() to discover the trainable parameters of the layers stored in self.nn_layers, you should use a container.
Specifically, making self.nn_layers an nn.ModuleList instead of a plain list should solve your problem:

self.nn_layers = nn.ModuleList()
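Here is a minimal sketch of the fixed class (the args parameter is dropped for brevity, and the sizes mirror the question's NetActor(self.args, 4, 2, [128]) call). Because nn.ModuleList registers each nn.Linear as a submodule, parameters() now returns a non-empty list and the optimizer can be constructed:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetActor(nn.Module):
    def __init__(self, state_vector_size, action_vector_size, hidden_layer_size_list):
        super().__init__()
        layer_sizes = list(hidden_layer_size_list) + [action_vector_size]
        # nn.ModuleList (unlike a plain Python list) registers each layer
        # as a submodule, so its weights and biases appear in parameters().
        self.nn_layers = nn.ModuleList()
        prev_size = state_vector_size
        for next_size in layer_sizes:
            self.nn_layers.append(nn.Linear(prev_size, next_size))
            prev_size = next_size

    def forward(self, torch_state):
        activations = torch_state
        for layer in self.nn_layers[:-1]:
            activations = F.relu(layer(activations))
        activations = self.nn_layers[-1](activations)
        return F.softmax(activations, dim=-1)

net = NetActor(4, 2, [128])
params = list(net.parameters())
print(len(params))  # 4: a weight and a bias for each of the two Linear layers

# The optimizer now receives a non-empty parameter list and no longer raises.
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
```

If the layer stack is purely sequential, nn.Sequential would also work and lets you drop the manual loop in forward entirely; nn.ModuleList is the smaller change to the original code.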
