Comments (9)

hbusul commented on July 30, 2024

Hi @wangyexiang, we are developing a solution in this area, and I must say I'm really happy to hear there is demand in this direction. If that's OK, could you give some more information? I guess your net is a feed-forward network with a scalar output, and you want to include that scalar output in the objective function. If that's the case, with the new features being developed it should be pretty straightforward to do.

wangyexiang commented on July 30, 2024

Dear sir, thank you for your reply. Yes, the net is a feed-forward network with a scalar output and it describes a scalar regression problem.

hbusul commented on July 30, 2024

Hi, the solution still needs proper documentation, testing, etc. The timeline looks like the end of May, although that is not certain. Since I really think this would be a good learning experience for us as well, I can offer you some help if you like. Here is what we can do: I once wrote a small proof-of-concept example, totally not production-ready code, but I can provide you with that and give you some guidelines. For that, I would need the architecture of the network; you do not need to share the weights if they are confidential. Or, if you are OK with waiting a bit, you can try the new features then. Totally up to you.

wangyexiang commented on July 30, 2024

Dear sir, here is my code:

import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np

torch.manual_seed(42)


class NNModel(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(NNModel, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out


input_size = 2
hidden_size = 64
output_size = 1

model = NNModel(input_size, hidden_size, output_size)


def generate_data(n_samples):
    x1 = np.random.uniform(-10, 10, size=(n_samples, 1)).astype(np.float32)
    x2 = np.random.uniform(-10, 10, size=(n_samples, 1)).astype(np.float32)
    x = np.concatenate((x1, x2), axis=1)
    y = x[:, 0] ** 2 + x[:, 1] ** 2
    return x, y


learning_rate = 0.001
n_epochs = 10000
n_samples = 1000

x_train, y_train = generate_data(n_samples)
x_train = torch.from_numpy(x_train)
y_train = torch.from_numpy(y_train).view(-1, 1)


criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)


for epoch in range(n_epochs):
    outputs = model(x_train)
    loss = criterion(outputs, y_train)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if (epoch + 1) % 100 == 0:
        print("Epoch [{}/{}], Loss: {:.4f}".format(epoch + 1, n_epochs, loss.item()))

torch.save(model.state_dict(), "model.pth")

As you can see, this neural network model fits a simple bivariate quadratic function; theoretically, its minimum value is f(0, 0) = 0.
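
As a quick sanity check, one can evaluate the trained net at the analytic minimizer; a well-fitted surrogate should print a value close to 0:

with torch.no_grad():
    print(model(torch.tensor([[0.0, 0.0]])))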

hbusul commented on July 30, 2024

Like I said before, I'm providing sample code only because I really want people to do what you are doing. It was written as a proof of concept, so it is not in the best shape, but I guess we can keep the issue open, and when the merge request is done I can write another sample for this that looks much better:

Since we will not have the matrix facilities until the MR is finished, let's hack something together for the moment. matrix.py will contain:

from typing import Tuple

import gamspy as gp
import numpy as np


class ParameterMatrix:
    """Wrap a numpy array as a gp.Parameter, creating (or reusing)
    one index set per dimension size (md2, md64, ...)."""

    def __init__(
        self,
        model: gp.Container,
        name: str,
        data: np.ndarray,
        generate: bool = True,
        **kwargs
    ):
        self.model = model
        self.name = name
        self.shape = data.shape
        self.kwargs = kwargs
        self._domain = None
        self._symbol = None

        if generate:
            self._domain = []
            # One set per dimension size; reuse it if it already exists
            for size in self.shape:
                domain_name = f"md{size}"
                domain_symbol = None
                if domain_name not in self.model:
                    domain_symbol = gp.Set(model, name=domain_name, records=range(size))
                else:
                    domain_symbol = self.model[domain_name]

                self._domain.append(domain_symbol)

            self._symbol = gp.Parameter(model, name=name, domain=self._domain, records=data, **kwargs)

    def __matmul__(self, other):
        # Matrix product as a GAMS sum over the shared inner index:
        # result[l, r] = Sum(m, self[l, m] * other[m, r])
        assert self.shape[1] == other.shape[0], "Matrix dims must match"
        ldomain = self._domain[0]
        mdomain = self._domain[1]
        rdomain = other._domain[1]
        return gp.Sum(
            mdomain,
            self._symbol[ldomain, mdomain] * other._symbol[mdomain, rdomain]
        )


class VariableMatrix:
    """Like ParameterMatrix, but for a gp.Variable of a given shape;
    with generate=False the caller attaches an existing symbol/domain."""

    def __init__(
        self,
        model: gp.Container,
        name: str,
        shape: Tuple[int, int],
        generate: bool = True,
        **kwargs
    ):
        self.model = model
        self.name = name
        self.shape = shape
        self.kwargs = kwargs
        self._domain = None
        self._symbol = None

        if generate:
            self._domain = []
            for size in self.shape:
                domain_name = f"md{size}"
                domain_symbol = None
                if domain_name not in self.model:
                    domain_symbol = gp.Set(model, name=domain_name, records=range(size))
                else:
                    domain_symbol = self.model[domain_name]

                self._domain.append(domain_symbol)

            self._symbol = gp.Variable(model, name=name, domain=self._domain, **kwargs)

    def __matmul__(self, other):
        # Same expansion as ParameterMatrix.__matmul__
        assert self.shape[1] == other.shape[0], "Matrix dims must match"
        ldomain = self._domain[0]
        mdomain = self._domain[1]
        rdomain = other._domain[1]
        return gp.Sum(
            mdomain,
            self._symbol[ldomain, mdomain] * other._symbol[mdomain, rdomain]
        )
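
To see how these wrappers compose: __matmul__ does not create a new GAMS symbol, it just returns a gp.Sum expression over the shared inner index, which you then pin to a variable through an equation. A minimal sketch (hypothetical names, same pattern as the script below):

import gamspy as gp
import numpy as np

from matrix import ParameterMatrix, VariableMatrix

m = gp.Container()
A = ParameterMatrix(m, "A", np.ones((3, 2)))  # 3x2 parameter
x = VariableMatrix(m, "x", shape=(2, 1))      # 2x1 variable column
y = VariableMatrix(m, "y", shape=(3, 1))      # 3x1 result holder

# A @ x is only an expression; an equation ties it to y
eq = gp.Equation(m, name="calc_y", domain=y._domain)
eq[...] = y._symbol[...] == A @ x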

I'm also assuming your code ran and created a file called model.pth for the network.

So, we can integrate this simple NN using the temporary workaround:

import torch
import torch.nn as nn

import uuid
import gamspy as gp
import gamspy.math
import numpy as np

from matrix import ParameterMatrix, VariableMatrix

def get_auto_name(prefix):
    name = str(uuid.uuid4()).split("-")[0]
    return f"{prefix}_{name}"


def ReLU(m: gp.Container, expr, expr_domain, expr_shape):
    # Model var = max(0, expr) with a binary indicator:
    # var is positive and var >= expr; the product constraint
    # var <= expr * b then forces var = expr when b = 1 (expr >= 0)
    # and var = 0 when b = 0 (expr <= 0). The variable-times-binary
    # product is what makes the overall model an MINLP.
    name = get_auto_name("ReLU")
    var = gp.Variable(m, name=name, domain=expr_domain, type="positive")

    eq1_name = get_auto_name("calc_ReLU_1")
    eq1 = gp.Equation(m, name=eq1_name, domain=expr_domain)
    eq1[...] = var[...] >= expr[...]

    binary_var_name = get_auto_name("b_ReLU")
    binary_var = gp.Variable(m, name=binary_var_name, domain=expr_domain, type="binary")

    eq2_name = get_auto_name("calc_ReLU_2")
    eq2 = gp.Equation(m, name=eq2_name, domain=expr_domain)
    eq2[...] = var[...] <= expr[...] * binary_var[...]

    # Wrap the result so it composes with the matrix helpers
    var_matrix = VariableMatrix(m, name=name, shape=expr_shape, generate=False)
    var_matrix._symbol = var
    var_matrix._domain = var.domain

    return var_matrix


class NNModel(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(NNModel, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

input_size = 2
hidden_size = 64
output_size = 1

model = NNModel(input_size, hidden_size, output_size)
model.load_state_dict(torch.load("model.pth"))

m = gp.Container()

x1 = VariableMatrix(m, "x_input", shape=(input_size, 1))

# fc1 weight: PyTorch stores nn.Linear weights as (out_features, in_features),
# so for a column-vector input the layer is exactly W @ x + b (no transpose needed)
w1 = ParameterMatrix(m, "w1t", model.fc1.weight.detach().numpy())

# fc1 bias (1-D, indexed over the hidden dimension)
b1 = ParameterMatrix(m, "b1", model.fc1.bias.detach().numpy())

# fc2 weight, same layout as above
w2 = ParameterMatrix(m, "w2t", model.fc2.weight.detach().numpy())

# fc2 bias
b2 = ParameterMatrix(m, "b2", model.fc2.bias.detach().numpy())

out1 = VariableMatrix(m, "out1", shape=(hidden_size, 1)) # after fc1
out3 = VariableMatrix(m, "out3", shape=(output_size, 1)) # after fc2

eq_calc_out_1 = gp.Equation(m, name="calc_out_1", domain=out1._domain)
eq_calc_out_1[...] = out1._symbol[...] == w1 @ x1 + b1._symbol[...]


out2 = ReLU(m, out1._symbol, out1._domain, (hidden_size, 1))

eq_calc_out_3 = gp.Equation(m, name="calc_out_3", domain=out3._domain)
eq_calc_out_3[...] = out3._symbol[...] == w2 @ out2 + b2._symbol[...]


# Let's set x1 to something to see if it works:
x1._symbol.fx[...] = 2

z = gp.Variable(m, name="fake_obj")

eq_obj = gp.Equation(m, name="calc_obj")
eq_obj[...] = z == out3._symbol["0", "0"]


model_fit = gp.Model(
    m,
    name="fit",
    equations=m.getEquations(),
    problem="MINLP",
    sense="min",
    objective=z,
)
model_fit.solve()


val = torch.tensor([2.0, 2.0])
print(model(val))
print(z.records)

Here I fix the input to [2, 2] and compare what GAMSPy outputs with what PyTorch outputs:

tensor([7.5770], grad_fn=<AddBackward0>)
      level  marginal  lower  upper  scale
0  7.576993       0.0   -inf    inf    1.0
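
From here, getting back to the original goal of minimizing the network output is mostly a matter of freeing the input: instead of the .fx assignment, bound x so the solver searches over the input space. A rough sketch along the same lines (reusing the symbols above):

# Free the input again (overrides the earlier .fx) and let the solver
# search for the minimizer of the surrogate within the training range
x1._symbol.lo[...] = -10
x1._symbol.up[...] = 10

model_min = gp.Model(
    m,
    name="minimize_nn",
    equations=m.getEquations(),
    problem="MINLP",
    sense="min",
    objective=z,
)
model_min.solve()

print(x1._symbol.records)  # should land near (0, 0) if the fit is good
print(z.records)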

hbusul commented on July 30, 2024

@wangyexiang if some clarification is required, I'm more than happy to help.

wangyexiang commented on July 30, 2024

Dear Sir, thank you for your response. Can I understand it this way: we reconstruct the neural network from its trained weights and biases to form the objective function, instead of directly using the PyTorch-trained network as the objective function? @hbusul

hbusul commented on July 30, 2024

@wangyexiang yes, since all the information of this network is embedded in its weights and biases, you can reconstruct it from those. If you use a smooth activation function you might get an NLP model instead of an MINLP, but I guess ReLU would require a binary variable.
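
For instance, a smooth stand-in for the ReLU helper above could look like the sketch below (assuming gamspy.math.exp and gamspy.math.log, and reusing the get_auto_name helper; note the network would also have to be trained with the same smooth activation for the reconstruction to match):

def Softplus(m: gp.Container, expr, expr_domain):
    # Smooth ReLU surrogate: softplus(x) = log(1 + exp(x)).
    # No binary indicator is needed, so the model can stay NLP.
    var = gp.Variable(m, name=get_auto_name("softplus"), domain=expr_domain)
    eq = gp.Equation(m, name=get_auto_name("calc_softplus"), domain=expr_domain)
    eq[...] = var[...] == gamspy.math.log(1 + gamspy.math.exp(expr[...]))
    return var

With that substitution, problem="NLP" can be passed to gp.Model instead of "MINLP".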

wangyexiang commented on July 30, 2024

I see, sir, thanks for the answer.
