Hi!
How can I correctly support multiple output neurons in your models (for example, for a multiclass classification task)?
1. armnet and armnet_1h support multiple output neurons via the noutput parameter, so there is no problem there.
2. For some models, I added support for multiple output neurons either by explicitly setting the noutput parameter in the last MLP layer, or by extending the Linear layer as follows:
class Linear(nn.Module):
    def __init__(self, nfeat, noutput=1):
        super().__init__()
        self.noutput = noutput
        # one weight per (feature id, output neuron) pair
        self.weight = nn.Embedding(nfeat, noutput)
        # single bias shared by all output neurons
        self.bias = nn.Parameter(torch.zeros((1,)))

    def forward(self, x):
        """
        :param x: {'id': LongTensor B*F, 'value': FloatTensor B*F}
        :return: linear transform of x, FloatTensor B*noutput
        """
        weights = self.weight(x['id'])              # B*F*noutput
        linear = []
        for i in range(self.noutput):
            a_i = weights[:, :, i]                  # B*F
            a_i_mul_b_i = torch.mul(a_i, x['value'])
            linear.append(a_i_mul_b_i)
        linear = torch.stack(linear, dim=2)         # B*F*noutput
        val = torch.sum(linear, dim=1) + self.bias  # B*noutput
        return val
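As a side note, the per-output loop above can be collapsed into a single broadcasted multiplication that computes the same result; a minimal self-contained sketch (the class name and the toy nfeat/noutput values are just for illustration):

```python
import torch
import torch.nn as nn

class LinearVectorized(nn.Module):
    """Same computation as the looped Linear above, via broadcasting."""
    def __init__(self, nfeat, noutput=1):
        super().__init__()
        self.weight = nn.Embedding(nfeat, noutput)
        self.bias = nn.Parameter(torch.zeros((1,)))

    def forward(self, x):
        weights = self.weight(x['id'])               # B*F*noutput
        # broadcast the feature values over the output dimension
        linear = weights * x['value'].unsqueeze(-1)  # B*F*noutput
        return torch.sum(linear, dim=1) + self.bias  # B*noutput

# toy shape check
layer = LinearVectorized(nfeat=10, noutput=3)
x = {'id': torch.tensor([[0, 4, 7]]),
     'value': torch.tensor([[1.0, 0.5, 2.0]])}
out = layer(x)
print(out.shape)  # torch.Size([1, 3])
```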
The following models can be adjusted in this way: lr, dnn, afn, gc_arm, dcn, cin, nfm, xdfm, ipnn, kpnn, wd, gat, gcn, dcn+, sa_glu.
3. It is not quite clear how to add support for multiple output neurons for the following models: dfm, fm, hofm, afm, because it is unclear how to modify FactorizationMachine for multiple outputs.
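One possible direction, sketched below under the assumption that FactorizationMachine can return the per-dimension interaction vector before the final sum (as in common FM implementations with a reduce_sum flag): keep that E-dimensional vector and project it to noutput logits with a small linear head. The names MultiOutputFM and the exact signatures here are illustrative, not the repo's API:

```python
import torch
import torch.nn as nn

class FactorizationMachine(nn.Module):
    """Pairwise interactions: 0.5 * ((sum_i v_i x_i)^2 - sum_i (v_i x_i)^2)."""
    def __init__(self, reduce_sum=True):
        super().__init__()
        self.reduce_sum = reduce_sum

    def forward(self, x_emb):
        # x_emb: B*F*E embeddings already scaled by feature values
        square_of_sum = torch.sum(x_emb, dim=1) ** 2    # B*E
        sum_of_square = torch.sum(x_emb ** 2, dim=1)    # B*E
        ix = 0.5 * (square_of_sum - sum_of_square)      # B*E
        if self.reduce_sum:
            ix = torch.sum(ix, dim=1, keepdim=True)     # B*1
        return ix

class MultiOutputFM(nn.Module):
    """Keep the E-dim interaction vector and project it to noutput logits."""
    def __init__(self, nemb, noutput):
        super().__init__()
        self.fm = FactorizationMachine(reduce_sum=False)
        self.proj = nn.Linear(nemb, noutput)

    def forward(self, x_emb):
        return self.proj(self.fm(x_emb))                # B*noutput

emb = torch.randn(4, 5, 8)  # B=4 samples, F=5 fields, E=8 embedding dims
out = MultiOutputFM(nemb=8, noutput=3)(emb)
print(out.shape)  # torch.Size([4, 3])
```

The projection preserves the FM's O(F*E) interaction cost and only adds an E-by-noutput head; whether this matches the intent of the original single-output FM score is exactly the kind of thing I'd like your comment on.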
Could you please comment on the correctness of the Linear change from (2), and on how to add support for multiple output neurons for the models from (3)?