
pyqubo's People

Contributors

29rou, 84monta, j-i-k-o, kainung, kotarot, kotarotanahashi, mashiyatz, tail-island, tnksh


pyqubo's Issues

Upgrade the robin_hood library

Thank you for maintaining pyqubo; we use it regularly.

Describe the bug
The other day, pyqubo crashed while we were using pyqubo 1.2.0.
I do not know the exact cause, but the core dump shows the crash happening inside the robin-hood-hashing library due to an overflow.

To Reproduce
It occurs only with one specific constraint expression (which I unfortunately cannot share).

Expected behavior
https://github.com/martinus/robin-hood-hashing/releases/tag/3.11.3
appears to include the relevant bugfix.
Upgrading the robin-hood-hashing library resolved the issue for us.
Do you plan to move to the latest robin-hood-hashing?

About the TSP sample in the Jupyter notebook

Thank you for developing such a convenient tool.

Regarding the last line of the fifth cell of TSP.ipynb (both the English and Japanese versions), I believe the subscript is wrong.

for j in range(n_city):
    city_const += Constraint((Sum(0, n_city, lambda i: x[i, j]) - 1)**2, label="city{}".format(i))

The final i is a mistake; the correct code is

for j in range(n_city):
    city_const += Constraint((Sum(0, n_city, lambda i: x[i, j]) - 1)**2, label="city{}".format(j))

Indeed, with the original code, solutions that pass through a city twice sometimes come back without being flagged as constraint violations.
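Independently of pyqubo's Constraint labels, a decoded tour can be validated directly by counting visits per city. A plain-Python sketch (the helper name is mine), assuming x[i][j] = 1 means city j is visited at time step i:

```python
# Check the one-hot city constraints directly on a decoded solution:
# each city (column j) must be visited at exactly one time step i.
def broken_city_constraints(x):
    n = len(x)
    return [j for j in range(n) if sum(x[i][j] for i in range(n)) != 1]

tour = [[1, 0, 0],
        [0, 0, 1],
        [0, 1, 0]]          # valid permutation matrix
twice = [[1, 0, 0],
         [1, 0, 0],
         [0, 0, 1]]         # city 0 visited twice, city 1 never
print(broken_city_constraints(tour))   # []
print(broken_city_constraints(twice))  # [0, 1]
```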

Also, if possible, I would love to see an example of submitting the problem to a machine such as D-Wave, not just SA (e.g. integration with D-Wave's Ocean library).

Got "RuntimeError: QUBO was not created correctly. ..." error

Describe the bug
I get this error when calling decode_sampleset on a model that combines Constraint with a product of three or more variables.
Without the Constraint, or with a product of only two variables, the error does not occur.

Example Code

from dwave.system import EmbeddingComposite,DWaveSampler
from pyqubo import Array,Constraint

x = Array.create('x',shape=(3),vartype="BINARY")
H = Constraint(x[0]*x[1]*x[2],label="Constraint") #RuntimeError
#H = x[0]*x[1]*x[2] #No RuntimeError

model = H.compile()
Q,Offset = model.to_qubo()

sampler = EmbeddingComposite(DWaveSampler())
responses = sampler.sample_qubo(Q,num_reads=10)

solutions = model.decode_sampleset(responses)

for sol in solutions:
    print(sol.sample)

Error Message

Traceback (most recent call last):
  File "XXXXXX PYTHONCODENAME", line 14, in <module>
    solutions = model.decode_sampleset(responses)
RuntimeError: QUBO was not created correctly. Please report the bug to the developer.

Sparse matrices

Is your feature request related to a problem? Please describe.
I would like to describe a graph using a sparse adjacency matrix. The multiplication between Spin/Binary arrays and a numpy matrix is very simple with the @ operator, but does not seem to work with sparse scipy matrices.

Describe the solution you'd like
Support for e.g. scipy csr matrices

Describe alternatives you've considered
Writing the matrix multiplication myself. I'm sure other users would benefit from this enhancement.

Additional context
I'm working on graph partitioning in k > 2 classes.
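Until sparse input is supported, the quadratic form can be expanded manually from the nonzero entries. A plain-Python sketch (with pyqubo, x would hold Binary variables; scipy's COO format exposes the same (row, col, value) triples, so `A.tocoo()` would supply the loop directly):

```python
# Expand the quadratic form x^T A x term by term from a sparse
# adjacency matrix given as {(i, j): weight} (COO-style nonzeros).
# Diagonal entries contribute linear terms, since x_i**2 == x_i
# for binary variables.
def quadratic_form(nonzeros, x):
    """Sum of A[i, j] * x[i] * x[j] over the stored entries only."""
    return sum(w * x[i] * x[j] for (i, j), w in nonzeros.items())

adjacency = {(0, 1): 1.0, (1, 2): 1.0}   # sparse graph: edges 0-1, 1-2
assignment = [1, 1, 0]
energy = quadratic_form(adjacency, assignment)
print(energy)  # 1.0 : only edge (0, 1) has both endpoints set
```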

Support dimod~=0.11.0

We recently released dimod 0.11.0, it would be good to have a version of pyqubo that supports it. Thanks!

decode_solution() error

Describe the bug
If an expression contains a binary product of three or more variables, decode_solution() raises a KeyError.

To Reproduce

>>> from pyqubo import Sum, Matrix, Vector, Constraint, Placeholder, utils
>>> import functools
>>> import operator
>>> n = 10
>>> x = Matrix('x', n_row=1, n_col=n)
>>> prod = functools.partial(functools.reduce, operator.mul)
>>> exp = prod((1.0 - x[0,j]) for j in range(0, n))
>>> model = exp.compile()
>>> q, offset = model.to_qubo()
>>> s = utils.solve_qubo(q)
>>> decoded_solution, broken, energy = model.decode_solution(s, vartype='BINARY')
KeyError: 'x[0][8]*x[0][3]'

Expected behavior
Key should be generated as 'x[0][3]*x[0][8]'.
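The fix the reporter describes amounts to canonicalizing product-term keys so they do not depend on multiplication order. A plain-Python sketch of that canonicalization (the helper name is mine, not a pyqubo API):

```python
# A product term's key should not depend on the order in which the
# factors were multiplied. Sorting the variable names before joining
# yields the canonical 'x[0][3]*x[0][8]' form the decoder expects.
def product_key(*names):
    return '*'.join(sorted(names))

print(product_key('x[0][8]', 'x[0][3]'))  # x[0][3]*x[0][8]
```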

PyQUBO generates wrong QUBO model?

I used PyQUBO with the D-Wave SDK, and I got a weird result.

The source code is here.

from pprint import pprint as prt
from pyqubo import Binary, Constraint
import dimod

x = Binary("x")
y = Binary("y")
z = Binary("z")

sub = (x * y) + (y * z)
c = Constraint((sub - 1) ** 2, label="c")

H = -(x + y + z) + c * 100
model = H.compile()

# solve
bqm = model.to_bqm()
sampler = dimod.ExactSolver()
sampleset = sampler.sample(bqm)

# result
dec = model.decode_sample(sampleset.first.sample, vartype="BINARY")
prt(dec.sample)
prt(dec.constraints())
prt(dec.energy)

The result is here.

{'0*1': 0, 'x': 1, 'y': 1, 'z': 1}
{'c': (False, 1.0)}
-98.0

The output says that the lowest energy is -98 with x = y = z = 1. But with x = y = z = 1, "c" evaluates to 1, so "H" should be 97. Am I misunderstanding something?
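The hand arithmetic can be checked in plain Python with no pyqubo involved. It confirms H = 97 at that sample; note also that the decoded sample shows the ancilla '0*1' set to 0 even though both of the factors it presumably encodes are 1, which points at the quadratization step rather than the arithmetic:

```python
# Evaluate H = -(x + y + z) + 100 * ((x*y + y*z) - 1)**2 by hand
# at the sample x = y = z = 1 reported by the solver.
x = y = z = 1
sub = x * y + y * z        # 2
c = (sub - 1) ** 2         # 1, so the constraint is violated
H = -(x + y + z) + 100 * c
print(H)  # 97
```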

My environment is below.

$ pip3 freeze
Deprecated==1.2.13
dimod==0.10.7
dwave-neal==0.5.8
dwave-preprocessing==0.3.1.post0
numpy==1.19.5
pyparsing==2.4.7
pyqubo==1.0.13
six==1.16.0
wrapt==1.13.2

How is OR supposed to work?

How is logic.OR supposed to be used? The energy of the expression Or(x, y), where x and y are binary, is minimized iff both x = 0 and y = 0. Assuming 0 represents False, this is exactly the opposite of what I would expect, namely that Or(x,y) is minimized by any one of

  • x=0, y=1
  • x=1, y=0
  • x=1, y=1.

Steps to reproduce

import neal
from pyqubo.logic import Or
from pyqubo import Binary

from dimod import SampleSet, ExactSolver


def build_model():
    x, y = Binary("x"), Binary("y")
    model = Or(x, y)
    return model.compile()


def solve(model):
    sa = neal.SimulatedAnnealingSampler()
    return sa.sample(model.to_bqm(), num_reads=1)


def solve_exact(model):
    solver = ExactSolver()
    return solver.sample(model.to_bqm())


def interpret(sampleset, model):
    return model.decode_sampleset(sampleset)


if __name__ == "__main__":
    model = build_model()
    samples = solve_exact(model)
    decoded_samples = interpret(samples, model)

    print("Exact Samples")
    for decoded_sample in decoded_samples:
        print(f"Energy: {decoded_sample.energy}")
        print(f"sample: {decoded_sample.sample}")

    print("")
    print("Simulated Annealing Samples")
    samples = solve(model)
    decoded_samples = interpret(samples, model)
    for decoded_sample in decoded_samples:
        print(f"Energy: {decoded_sample.energy}")
        print(f"sample: {decoded_sample.sample}")
    print("--------------------------------")

Output:

Exact Samples
Energy: 0.0
sample: {'y': 0, 'x': 0}
Energy: 1.0
sample: {'y': 1, 'x': 0}
Energy: 1.0
sample: {'y': 0, 'x': 1}
Energy: 1.0
sample: {'y': 1, 'x': 1}

Simulated Annealing Samples
Energy: 0.0
sample: {'y': 0, 'x': 0}
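The exact energies above match the polynomial x + y - x*y, which suggests Or(x, y) encodes the *value* of the OR; minimizing it therefore prefers OR = 0. To reward OR = True one would instead minimize 1 - Or(x, y). A plain-Python check of both tables:

```python
# Energies above match the value of OR: x + y - x*y.
def or_value(x, y):
    return x + y - x * y

table = {(x, y): or_value(x, y) for x in (0, 1) for y in (0, 1)}
print(table)    # 0 only at (0, 0), matching the ExactSolver output

# Penalizing 1 - OR instead is zero exactly when OR is satisfied.
penalty = {(x, y): 1 - v for (x, y), v in table.items()}
print(penalty)  # 1 only at (0, 0)
```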

Incorrect QUBOs from large combinations of And, Or, and Not

Describe the bug

I'm trying to represent a truth table as a QUBO by penalizing all combinations of variables that do not correspond to a truth-table row. For example, I represent the row

1 1 0

as

And(x0, And(x1, Not(x2)))

and the row

0 1 0

as

And(Not(x0), And(x1, Not(x2)))

I then penalize all rows that are Not the first row Or the second row:

Not(Or(And(x0, And(x1, Not(x2))),
       And(Not(x0), And(x1, Not(x2)))))

When I pass PyQUBO-compiled versions of such expressions to dimod.ExactSolver I expect to see the original truth table in the lowest-valued energy states. I usually do, especially for smaller truth tables, but I don't always.
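Before involving compiled QUBOs, the intended penalty can be brute-forced in plain Python by evaluating the gates on 0/1 values (function names here are mine). This gives the reference against which the ExactSolver output should be compared:

```python
from itertools import product

def row_match(bits, row):
    """1 iff `bits` equals `row` (an AND of literals)."""
    return int(all(b == r for b, r in zip(bits, row)))

def penalty(bits, rows):
    """Not(Or(...)): 0 on truth-table rows, 1 everywhere else."""
    return 1 - max(row_match(bits, row) for row in rows)

rows = [(1, 1, 0), (0, 1, 0)]
ok = [bits for bits in product((0, 1), repeat=3) if penalty(bits, rows) == 0]
print(ok)  # [(0, 1, 0), (1, 1, 0)]
```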

To Reproduce

The following code—let's call it bad-pyqubo.py—reproduces the problem:

#! /usr/bin/env python

import pyqubo
import dimod

# This example works.
good = [[0, 0, 0, 0, 0],
        [0, 0, 1, 0, 1],
        [0, 1, 0, 1, 0],
        [0, 1, 1, 1, 1],
        [1, 0, 0, 0, 0],
        [1, 0, 1, 1, 0],
        [1, 1, 0, 0, 1],
        [1, 1, 1, 1, 1]]

# This example doesn't work.
bad = [[0, 0, 0, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0],
       [0, 0, 1, 0, 0, 1],
       [0, 0, 1, 0, 1, 0],
       [0, 0, 1, 1, 0, 0],
       [0, 1, 0, 0, 0, 1],
       [0, 1, 0, 0, 1, 0],
       [0, 1, 0, 1, 0, 0],
       [0, 1, 1, 0, 0, 0],
       [1, 0, 0, 0, 0, 1],
       [1, 0, 0, 0, 1, 0],
       [1, 0, 0, 1, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 0, 0, 0],
       [1, 1, 1, 1, 1, 1]]

# Construct a PyQUBO expression.
pattern = bad   # Edit this line to contrast "good" and "bad".
nc = len(pattern[0])
vars = [pyqubo.Binary('x%d' % c) for c in range(nc)]
rows = []
for row in pattern:
    cexps = []
    for c, bit in enumerate(row):
        if bit == 1:
            cexps.append(vars[c])
        else:
            cexps.append(pyqubo.Not(vars[c]))
    rows.append(pyqubo.reduce(pyqubo.And, cexps[1:], cexps[0]))
expr = pyqubo.Not(pyqubo.reduce(pyqubo.Or, rows[1:], rows[0]))

# Minimize the QUBO and output the result.
model = expr.compile()
qubo = model.to_qubo()[0]
sampleset = dimod.ExactSolver().sample_qubo(qubo)
for d in sampleset.data():
    bits = [str(d.sample[v]) for v in model.variables]
    print('%s  %s  -->  %.1f' % \
          (' '.join(bits[:nc]),
           ' '.join(bits[nc:]),
           d.energy))

First, change pattern = bad to pattern = good in line 35 and run bad-pyqubo.py | head -16. You should see the rows of the good truth table, possibly permuted, in the lowest eight energy states:

$ ./bad-pyqubo.py | head -16
0 0 0 0 0  0 0 0 0 0  -->  0.0
0 1 0 1 0  0 0 0 0 0  -->  0.0
1 1 0 0 1  1 0 1 1 1  -->  0.0
1 1 1 1 1  1 1 1 1 1  -->  0.0
0 1 1 1 1  0 1 0 1 0  -->  0.0
0 0 1 0 1  0 0 0 0 0  -->  0.0
1 0 1 1 0  0 1 0 0 0  -->  0.0
1 0 0 0 0  0 0 0 0 0  -->  0.0
1 0 1 0 0  0 0 0 0 0  -->  1.0
0 0 1 0 0  0 0 0 0 0  -->  1.0
0 1 0 0 0  0 0 0 0 0  -->  1.0
0 0 1 1 0  0 1 0 0 0  -->  1.0
1 1 1 0 1  1 0 1 1 1  -->  1.0
0 1 1 0 0  0 0 0 0 0  -->  1.0
1 0 1 1 1  0 1 1 0 0  -->  1.0
0 1 0 0 1  0 0 0 1 0  -->  1.0

(The first set of columns represents the truth table. The second set of columns shows ancillary product terms introduced by PyQUBO.)

This is correct output. But now revert pattern = good to pattern = bad and run the script again:

$ ./bad-pyqubo.py | head -16
1 1 1 1 0 0  1 1 1 1 1 0 1 1 0 0 0 1  -->  -18.0
0 1 1 1 0 0  0 1 1 1 1 0 1 1 0 0 0 1  -->  -14.0
1 1 1 0 0 0  1 0 1 1 1 0 1 1 0 0 0 1  -->  -14.0
1 0 1 1 0 0  0 1 1 1 1 0 1 1 0 0 0 1  -->  -14.0
1 1 0 1 0 0  1 0 1 1 1 0 1 1 0 0 0 1  -->  -14.0
1 0 1 1 0 0  0 1 1 1 1 0 1 0 0 0 0 1  -->  -12.0
1 0 1 1 0 0  0 1 1 1 1 0 0 1 0 0 0 1  -->  -12.0
0 1 1 1 0 0  0 1 1 0 1 0 1 1 0 0 0 1  -->  -12.0
1 1 0 1 0 0  1 0 1 1 1 0 0 1 0 0 0 1  -->  -12.0
1 1 0 0 0 0  1 0 1 1 1 0 1 1 0 0 0 1  -->  -12.0
0 1 1 1 0 0  0 1 1 1 0 0 1 1 0 0 0 1  -->  -12.0
1 1 0 1 0 0  1 0 1 0 1 0 1 1 0 0 0 1  -->  -12.0
0 0 1 1 0 0  0 1 1 1 1 0 1 1 0 0 0 1  -->  -12.0
1 1 1 0 0 0  1 0 1 1 0 0 1 1 0 0 0 1  -->  -12.0
1 1 1 0 0 0  1 0 1 1 1 0 1 0 0 0 0 1  -->  -12.0
1 0 1 1 0 0  1 1 1 1 1 0 1 1 0 0 0 1  -->  -11.0

Only one row lies at minimum energy, and it is not even a row that appears in the bad truth table. In fact, the output from this run of bad-pyqubo.py bears little resemblance at all to the truth table.

Expected behavior

I would expect bad-pyqubo.py to reproduce any truth table it is given, not just smaller/simpler truth tables.

I suspect that the problem is related to #113, but I don't know for sure.

Cannot use Parameter in Constraint

Describe the bug
When I include a Param in a Constraint expression, an error occurs while decoding the solution.

To Reproduce

>>> a = Qbit("a")
>>> exp = Constraint(2*Param("c")+a, "const1")
>>> m = exp.compile()
>>> m.decode_solution({"a": 1}, var_type="binary", params={"c":1})
TypeError: '>' not supported between instances of 'AddList' and 'float'

Expected behavior
It should decode solution without error.

Method to_bqm does not maintain variables order

When converting to a bqm with .compile().to_bqm(), the original order of the variables is not maintained.
To reproduce:

from pyqubo import Array, Binary

n=3
m=12

X = Array.create('x', shape=(n,m), vartype='BINARY')

for i in range(n):
    expr=0
    for j in range(m):
        expr += 2.0 * X[i,j]

    misfit = expr - 1.5

    model = misfit.compile()

    print("BEFORE: ", model.variables)

    print("AFTER: ", model.to_bqm().variables)

The first two printed lines are:

BEFORE:  ['x[0][0]', 'x[0][1]', 'x[0][2]', 'x[0][3]', 'x[0][4]', 'x[0][5]', 'x[0][6]', 'x[0][7]', 'x[0][8]', 'x[0][9]', 'x[0][10]', 'x[0][11]']
AFTER:  Variables(['x[0][6]', 'x[0][11]', 'x[0][10]', 'x[0][7]', 'x[0][4]', 'x[0][3]', 'x[0][2]', 'x[0][5]', 'x[0][8]', 'x[0][9]', 'x[0][1]', 'x[0][0]'])

What am I missing here?
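The BQM does not appear to preserve the model's ordering. If downstream code needs a deterministic order, one workaround (a sketch, not a pyqubo or dimod API) is to re-sort the labels by the numeric indices embedded in their names:

```python
import re

def index_key(name):
    """Sort 'x[0][10]'-style labels by their numeric indices,
    so 'x[0][2]' comes before 'x[0][11]' (unlike a string sort)."""
    return [int(i) for i in re.findall(r'\d+', name)]

shuffled = ['x[0][6]', 'x[0][11]', 'x[0][2]', 'x[0][0]']
print(sorted(shuffled, key=index_key))
# ['x[0][0]', 'x[0][2]', 'x[0][6]', 'x[0][11]']
```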

Automatically handle strength for dimension reduction

Is your feature request related to a problem? Please describe.
When compiling a higher-order polynomial expression into a model, the strength for the dimension reduction has to be specified beforehand. Not only is this annoying, because estimating the right strength can be non-trivial, but it sometimes leads to unnecessarily large constraints.

For example, to reduce 100*a*b*c + x*y*z to quadratic, the strength has to be at least 100, which is unnecessarily large for the constraint on x*y*z.
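The bound the reporter mentions can be checked by brute force in plain Python: with the standard AND gadget, the reduction of 100*a*b*c is exact only once the strength dominates the cubic coefficient (the helper names are mine, and this is one common gadget, not necessarily pyqubo's exact internal form):

```python
from itertools import product

# Standard AND gadget: zero iff w == a*b, positive otherwise.
def and_penalty(a, b, w):
    return 3*w + a*b - 2*a*w - 2*b*w

# Quadratized form of 100*a*b*c: substitute w for a*b and add the
# gadget scaled by the strength M; the model minimizes over w.
def min_over_ancilla(a, b, c, M):
    return min(100*w*c + M * and_penalty(a, b, w) for w in (0, 1))

# The reduction is exact only when M is large enough to dominate
# the 100 coefficient of the cubic term.
results = {M: all(min_over_ancilla(a, b, c, M) == 100*a*b*c
                  for a, b, c in product((0, 1), repeat=3))
           for M in (1, 101)}
print(results)  # {1: False, 101: True}
```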

Describe the solution you'd like

  1. The compile() function could determine the strength automatically. It already reads through every term of the polynomial anyway, so it has all the information it needs.
  2. The strengths for each constraint should be determined separately so as to minimize unnecessary energy barriers in the model.

Describe alternatives you've considered
I have actually put together a fix that does what I suggest. The fix is based on pyqubo == 0.4.0, however, because I am not fluent enough in C++ to contribute to the current version.

All the modified files are in my repo for your reference if needed.

Saving models with pickle

We use pyqubo; thank you for it.
This is not quite an improvement request, but let me check something.

As of pyqubo 0.4.0 a model could be saved with pickle, but with
pyqubo 1.0.13 (since pyqubo 1.0.0?) the following error occurs and models can no longer be saved.

>>> with open('model', 'wb') as f:
...     pickle.dump(model, f)
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
TypeError: cannot pickle 'cpp_pyqubo.Model' object

Are there plans to make models picklable again in the future?

What is the status of Issues #86 and #87?

I have submitted pull requests that fix Issues #86 and #87, but they have not been addressed yet. My guess is that the C++ part of the current version is too complex to assess the impact of the fixes (#87 is an obvious mistake, so pull request or not, it should be fixed). So I rewrote the C++ part in a simpler form while preserving the performance of the current version.

https://github.com/tail-island/pyquboc/

# Building requires boost. On Linux, install it with your package manager; on Windows, download and build it and set the BOOST_ROOT environment variable.

The part that converts Python expressions into polynomials follows the PyQUBO 0.4 code, and the part that converts to quadratic form follows the PyQUBO 1.0 code. The 0.4 code in particular is beautiful, easy to understand, and was a great reference. The trick of treating addition as an operation on a list rather than as a binary operation, in anticipation of huge polynomials, was a real help.

The C++ part of the clone is 1,241 lines against 3,304 lines for PyQUBO 1.0, less than half the code. It also uses libraries for the data structures and adopts functional programming built on immutable variables, so I believe the code is easy to understand.

The clone mostly passes the Python unit tests. I say "mostly" for the following reasons:

  • QUBO creation with index_label=True is not supported, so the index_label=True tests were commented out and excluded.
  • The ordering etc. of the generated QUBO differs, so in test_integer.py and similar files a test passes if the problem is solved, and the QUBO content checks were commented out and excluded.
  • QUBO creation uses cimod 1.2.3, and cimod appears to omit terms with a linear coefficient of 0 from the QUBO's linear output, so the test code was adjusted.

Code that reproduces Issue #87 is included as test/x.py and test/y.py. With PyQUBO 1.0, a coding mistake in the quadratization step produced incorrect results in my environment (the cause is a variable whose value is left undefined, so some environments may still produce correct results).

The benchmark results are below. In my environment (Arch Linux, Python 3.9.5), for 100 cities (10 ** 4 variables; around the 10 ** 1.3 mark on the README graph), the clone is roughly 1.3x faster and uses less than half the memory.

Benchmark results for the clone:

INFO:benchmark_tsp:Memory usage is 37.1875 MB for n_city=5
INFO:benchmark_tsp:Elapsed time is 0.0027251243591308594 sec (expression: 0.0018165111541748047 sec, compile: 0.0009086132049560547 sec), for n_city=5
INFO:benchmark_tsp:Memory usage is 39.14453125 MB for n_city=10
INFO:benchmark_tsp:Elapsed time is 0.011744499206542969 sec (expression: 0.006608009338378906 sec, compile: 0.0051364898681640625 sec), for n_city=10
INFO:benchmark_tsp:Memory usage is 42.81640625 MB for n_city=15
INFO:benchmark_tsp:Elapsed time is 0.034250736236572266 sec (expression: 0.01752781867980957 sec, compile: 0.016722917556762695 sec), for n_city=15
INFO:benchmark_tsp:Memory usage is 48.48046875 MB for n_city=20
INFO:benchmark_tsp:Elapsed time is 0.07781314849853516 sec (expression: 0.03810858726501465 sec, compile: 0.03970456123352051 sec), for n_city=20
INFO:benchmark_tsp:Memory usage is 60.51953125 MB for n_city=25
INFO:benchmark_tsp:Elapsed time is 0.15884137153625488 sec (expression: 0.07862281799316406 sec, compile: 0.08021855354309082 sec), for n_city=25
INFO:benchmark_tsp:Memory usage is 59.0859375 MB for n_city=30
INFO:benchmark_tsp:Elapsed time is 0.28504467010498047 sec (expression: 0.13607454299926758 sec, compile: 0.1489701271057129 sec), for n_city=30
INFO:benchmark_tsp:Memory usage is 105.78515625 MB for n_city=35
INFO:benchmark_tsp:Elapsed time is 0.44951605796813965 sec (expression: 0.20323562622070312 sec, compile: 0.24628043174743652 sec), for n_city=35
INFO:benchmark_tsp:Memory usage is 110.33984375 MB for n_city=40
INFO:benchmark_tsp:Elapsed time is 0.7357730865478516 sec (expression: 0.30616259574890137 sec, compile: 0.4296104907989502 sec), for n_city=40
INFO:benchmark_tsp:Memory usage is 196.71875 MB for n_city=45
INFO:benchmark_tsp:Elapsed time is 1.0909836292266846 sec (expression: 0.4164590835571289 sec, compile: 0.6745245456695557 sec), for n_city=45
INFO:benchmark_tsp:Memory usage is 259.8203125 MB for n_city=50
INFO:benchmark_tsp:Elapsed time is 1.5151138305664062 sec (expression: 0.5525531768798828 sec, compile: 0.9625606536865234 sec), for n_city=50
INFO:benchmark_tsp:Memory usage is 341.19140625 MB for n_city=55
INFO:benchmark_tsp:Elapsed time is 2.056863784790039 sec (expression: 0.761713981628418 sec, compile: 1.295149803161621 sec), for n_city=55
INFO:benchmark_tsp:Memory usage is 453.13671875 MB for n_city=60
INFO:benchmark_tsp:Elapsed time is 2.86260724067688 sec (expression: 0.9715003967285156 sec, compile: 1.8911068439483643 sec), for n_city=60
INFO:benchmark_tsp:Memory usage is 594.203125 MB for n_city=65
INFO:benchmark_tsp:Elapsed time is 3.698843240737915 sec (expression: 1.2595908641815186 sec, compile: 2.4392523765563965 sec), for n_city=65
INFO:benchmark_tsp:Memory usage is 745.08984375 MB for n_city=70
INFO:benchmark_tsp:Elapsed time is 4.576478004455566 sec (expression: 1.5679798126220703 sec, compile: 3.008498191833496 sec), for n_city=70
INFO:benchmark_tsp:Memory usage is 939.10546875 MB for n_city=75
INFO:benchmark_tsp:Elapsed time is 5.750443458557129 sec (expression: 1.9115562438964844 sec, compile: 3.8388872146606445 sec), for n_city=75
INFO:benchmark_tsp:Memory usage is 1164.68359375 MB for n_city=80
INFO:benchmark_tsp:Elapsed time is 7.244943857192993 sec (expression: 2.3309357166290283 sec, compile: 4.914008140563965 sec), for n_city=80
INFO:benchmark_tsp:Memory usage is 1434.3984375 MB for n_city=85
INFO:benchmark_tsp:Elapsed time is 8.57158875465393 sec (expression: 2.748812198638916 sec, compile: 5.822776556015015 sec), for n_city=85
INFO:benchmark_tsp:Memory usage is 1796.1015625 MB for n_city=90
INFO:benchmark_tsp:Elapsed time is 10.486908197402954 sec (expression: 3.2518081665039062 sec, compile: 7.235100030899048 sec), for n_city=90
INFO:benchmark_tsp:Memory usage is 2163.94140625 MB for n_city=95
INFO:benchmark_tsp:Elapsed time is 12.858793497085571 sec (expression: 3.8454368114471436 sec, compile: 9.013356685638428 sec), for n_city=95
INFO:benchmark_tsp:Memory usage is 2565.2890625 MB for n_city=100
INFO:benchmark_tsp:Elapsed time is 14.818677186965942 sec (expression: 4.462315320968628 sec, compile: 10.356361865997314 sec), for n_city=100

Benchmark results for pyqubo 1.0:

INFO:benchmark_tsp:Memory usage is 38.24609375 MB for n_city=5
INFO:benchmark_tsp:Elapsed time is 0.003578662872314453 sec (expression: 0.0026633739471435547 sec, compile: 0.0009152889251708984 sec), for n_city=5
INFO:benchmark_tsp:Memory usage is 41.69140625 MB for n_city=10
INFO:benchmark_tsp:Elapsed time is 0.012277841567993164 sec (expression: 0.007538318634033203 sec, compile: 0.004739522933959961 sec), for n_city=10
INFO:benchmark_tsp:Memory usage is 49.56640625 MB for n_city=15
INFO:benchmark_tsp:Elapsed time is 0.039492130279541016 sec (expression: 0.02266407012939453 sec, compile: 0.016828060150146484 sec), for n_city=15
INFO:benchmark_tsp:Memory usage is 66.90234375 MB for n_city=20
INFO:benchmark_tsp:Elapsed time is 0.09637022018432617 sec (expression: 0.05414533615112305 sec, compile: 0.042224884033203125 sec), for n_city=20
INFO:benchmark_tsp:Memory usage is 99.85546875 MB for n_city=25
INFO:benchmark_tsp:Elapsed time is 0.186492919921875 sec (expression: 0.09900045394897461 sec, compile: 0.08749246597290039 sec), for n_city=25
INFO:benchmark_tsp:Memory usage is 128.66015625 MB for n_city=30
INFO:benchmark_tsp:Elapsed time is 0.3243436813354492 sec (expression: 0.16326189041137695 sec, compile: 0.16108179092407227 sec), for n_city=30
INFO:benchmark_tsp:Memory usage is 175.51171875 MB for n_city=35
INFO:benchmark_tsp:Elapsed time is 0.5409214496612549 sec (expression: 0.2537546157836914 sec, compile: 0.2871668338775635 sec), for n_city=35
INFO:benchmark_tsp:Memory usage is 241.28515625 MB for n_city=40
INFO:benchmark_tsp:Elapsed time is 0.8504424095153809 sec (expression: 0.3677685260772705 sec, compile: 0.48267388343811035 sec), for n_city=40
INFO:benchmark_tsp:Memory usage is 339.57421875 MB for n_city=45
INFO:benchmark_tsp:Elapsed time is 1.2869536876678467 sec (expression: 0.5237963199615479 sec, compile: 0.7631573677062988 sec), for n_city=45
INFO:benchmark_tsp:Memory usage is 459.09375 MB for n_city=50
INFO:benchmark_tsp:Elapsed time is 1.8548743724822998 sec (expression: 0.7235279083251953 sec, compile: 1.1313464641571045 sec), for n_city=50
INFO:benchmark_tsp:Memory usage is 624.85546875 MB for n_city=55
INFO:benchmark_tsp:Elapsed time is 2.5812981128692627 sec (expression: 0.9505772590637207 sec, compile: 1.630720853805542 sec), for n_city=55
INFO:benchmark_tsp:Memory usage is 875.890625 MB for n_city=60
INFO:benchmark_tsp:Elapsed time is 3.453432559967041 sec (expression: 1.2393732070922852 sec, compile: 2.214059352874756 sec), for n_city=60
INFO:benchmark_tsp:Memory usage is 1167.56640625 MB for n_city=65
INFO:benchmark_tsp:Elapsed time is 4.660252809524536 sec (expression: 1.582444429397583 sec, compile: 3.077808380126953 sec), for n_city=65
INFO:benchmark_tsp:Memory usage is 1482.5625 MB for n_city=70
INFO:benchmark_tsp:Elapsed time is 5.894147157669067 sec (expression: 1.9754986763000488 sec, compile: 3.9186484813690186 sec), for n_city=70
INFO:benchmark_tsp:Memory usage is 1959.578125 MB for n_city=75
INFO:benchmark_tsp:Elapsed time is 7.560613632202148 sec (expression: 2.407599449157715 sec, compile: 5.153014183044434 sec), for n_city=75
INFO:benchmark_tsp:Memory usage is 2408.09765625 MB for n_city=80
INFO:benchmark_tsp:Elapsed time is 9.15263557434082 sec (expression: 2.974260091781616 sec, compile: 6.178375482559204 sec), for n_city=80
INFO:benchmark_tsp:Memory usage is 3005.5 MB for n_city=85
INFO:benchmark_tsp:Elapsed time is 11.70089340209961 sec (expression: 3.63127064704895 sec, compile: 8.06962275505066 sec), for n_city=85
INFO:benchmark_tsp:Memory usage is 3766.4609375 MB for n_city=90
INFO:benchmark_tsp:Elapsed time is 13.94095516204834 sec (expression: 4.256178617477417 sec, compile: 9.684776544570923 sec), for n_city=90
INFO:benchmark_tsp:Memory usage is 4474.5703125 MB for n_city=95
INFO:benchmark_tsp:Elapsed time is 16.598959922790527 sec (expression: 5.068280220031738 sec, compile: 11.530679702758789 sec), for n_city=95
INFO:benchmark_tsp:Memory usage is 5399.73046875 MB for n_city=100
INFO:benchmark_tsp:Elapsed time is 19.497759103775024 sec (expression: 5.812123775482178 sec, compile: 13.685635328292847 sec), for n_city=100

Windows gave similar results. Sorry, I have not yet been able to try macOS.

The Visitor pattern used to convert to polynomials in the PyQUBO 0.4 style could be replaced with methods on the syntax-tree elements, as in PyQUBO 1.0, which might be a bit faster. But out of respect for 0.4, I would rather not change it.

segmentation fault

When I was trying to create a large binary matrix (~3000 x 4), it threw a segmentation fault after equation.compile().


The original source is https://github.com/iQuHACK/2021_Surfers/blob/main/kmeans.py

error: zsh: segmentation fault python3 main.py

I did a lot of searching and raised the recursion limit with sys.setrecursionlimit, but it did not help.

Thank you in advance.

Create n-dim array class instead of `Vector` and `Matrix`

Is your feature request related to a problem? Please describe.
pyqubo.Vector and pyqubo.Matrix are redundant.

Describe the solution you'd like
Both classes can be replaced with a general n-dim array class.

It could be more useful if the n-dim array could have features like...

  • selection by slice as in numpy.ndarray (e.g. array[:, 0] = first column vector of the array)
  • arithmetic operations such as addition, subtraction, multiplication, and division
  • operations with numpy.ndarray
  • dot product
  • matrix multiplication

Error in parsing specific type of Hamiltonian expression

PyQUBO gives an error for the following Hamiltonian:

n=3
from pyqubo import solve_qubo,Array,Placeholder
x = Array.create('x', shape=(5), vartype='BINARY')
dic={"0":12,"1":6,"2":21,"3":32,"4":7}

H_1 = (sum(x)-n)**2
H_2 = sum(x[i]*dic["{}".format(i)] for i in range(5))
H = sum(dic.values())*3/5*H_1 + H_2

Error Reported:

TypeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_12292/2264980511.py in
1 H_1 = (sum(x)-n)**2
2 H_2 = sum(x[i]*dic["{}".format(i)] for i in range(5))
----> 3 H = sum(dic.values())*3/5*H_1 + H_2

TypeError: radd(): incompatible function arguments. The following argument types are supported:
1. (self: cpp_pyqubo.Add, arg0: float) -> cpp_pyqubo.Add

However, the error can be worked around by rewriting the expression as follows:

H_1 = (sum(x)-n)**2
H_2 = sum(x[i]*dic["{}".format(i)] for i in range(5))
H = sum(dic.values())*3/5*H_1 - (- H_2) # Negative sign is a work-around
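The failure sits in Python's reflected-operator dispatch: `lhs + rhs` first tries type(lhs).__add__, and if that returns NotImplemented, Python calls type(rhs).__radd__(lhs). The traceback shows an __radd__ overload that only accepts floats, which is why reordering the terms avoids it. A plain-Python sketch of the mechanism (the Expr class is a stand-in, not pyqubo's):

```python
# Minimal model of a class whose __radd__ only accepts plain numbers,
# mimicking the narrow pybind11 overload in the traceback above.
class Expr:
    def __init__(self, label):
        self.label = label
    def __add__(self, other):
        other = other.label if isinstance(other, Expr) else other
        return Expr(f"({self.label} + {other})")
    def __radd__(self, other):
        if not isinstance(other, (int, float)):  # the narrow overload
            return NotImplemented
        return Expr(f"({other} + {self.label})")

print((2 + Expr('H')).label)       # (2 + H): int falls back to __radd__
caught = False
try:
    'oops' + Expr('H')             # unsupported lhs type: dispatch fails
except TypeError:
    caught = True
print(caught)                      # True
```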

Using PyQUBO with Amazon Braket

I want to build a QUBO with PyQUBO and run annealing on Amazon Braket.

When I create an array with the Array.create method and try to anneal on Braket, I get an error.

It seems the keys must not have the following structure
{('[30][21]', '[30][21]'): -5.0,
('[81][15]', '[81][15]'): -5.0,
('[9][26]', '[9][26]'): -5.0,

but instead must be tuples, like this:
{((30, 21), (30, 21)): -5.0,
((81,15), (81, 15)): -5.0,
((9,26), (9,26)): -5.0,

I understand you are busy, but support for this would be greatly appreciated.

PyPi version behind?

The version of PyQubo that installs from PyPI appears to be one minor version behind the master branch of the repo. I believe this because the version installed with pip install pyqubo does not include the small change made to decode_dimod_response, which means users requiring this change need to install the package directly from the master branch on git.
Thanks!

Potential problem when using higher order polynomials?

Dear Sir,

I ran into a problem when using higher-order polynomials in PyQUBO. I am not sure whether the problem comes from PyQUBO's compilation or from the dimod module. Here are sample code and the result. Would you be able to check whether you also see a similar problem on your side?

from pyqubo import Binary, solve_qubo
x0, x1, x2, x3, x4 = Binary("x0"), Binary("x1"), Binary("x2"), Binary("x3"), Binary("x4")
H = x0 + x1 + x2 + x3 + x4 + 4*((1-x0)*(1-x1)*(1-x4) + (1-x2)*(1-x4) + 2*(1-x2)*(1-x3) + (1-x2)*(1-x3)*(1-x4))
model = H.compile()
qubo, offset = model.to_qubo()
solution = solve_qubo(qubo)
decoded_solution, broken, energy = model.decode_solution(solution, vartype='BINARY')
print("solution {}, broken {}, energy {}".format(decoded_solution, broken, energy))

Sometimes (not always) it produces a negative energy, as shown below:
solution {'x0': 0, 'x1': 1, 'x2': 1, 'x3': 1, 'x4': 0}, broken {'AND(x2,x3)=x2x3': {'result': {'x2': 1, 'x3': 1, 'x2x3': 0}, 'penalty': 5.0}}, energy -4.0

The possible outcomes of H for binary inputs should all be zero or positive. Do you know why I get a negative energy value?

Thanks much for your help.

Recursion error

RecursionError: maximum recursion depth exceeded during compilation

It seems that there is a recursion error when I tried to compile a huge Hamiltonian with about 9000 terms to generate a 140*140 QUBO matrix. I'm wondering if there is any solution to this.

Thanks in advance

Aborted on nested "Or".

Describe the bug
Compilation aborts on the nested "Or" below, with this message.

malloc(): invalid size (unsorted)
Aborted (core dumped)

Also "And" and "Xor", maybe.

To Reproduce
Please execute this program.

from pyqubo import Array, Or

xs = Array.create('x', shape=(2, 4), vartype='BINARY')

# NG.
h = Or(Or(Or(Or(xs[0][0], xs[0][1]), xs[0][2]), xs[0][3]),
       Or(Or(Or(xs[1][0], xs[1][1]), xs[1][2]), xs[1][3]))

# Good.
# h = Or(Or(Or(xs[0][0], xs[0][1]), xs[0][2]),
#        Or(Or(xs[1][0], xs[1][1]), xs[1][2]))

model = h.compile()

Environment
OS: Arch Linux
PyQUBO version: 1.0.10

HOBO feature

Would it be possible to generate a higher-order binary optimization (HOBO) model, i.e. the original model before quadratization?

Basically, it could just output a dictionary at the moment.
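As a sketch of what such a dictionary output could look like (this format and helper are hypothetical, not a pyqubo API): a higher-order model maps each set of variables to a coefficient, and multiplying polynomials just unions the variable sets, since binary variables are idempotent:

```python
from collections import defaultdict

# A HOBO as {frozenset(vars): coeff}. Multiplying two polynomials
# unions the variable sets, because x*x == x for binary variables.
def poly_mul(p, q):
    out = defaultdict(float)
    for vars_p, cp in p.items():
        for vars_q, cq in q.items():
            out[vars_p | vars_q] += cp * cq
    return dict(out)

a = {frozenset({'x'}): 1.0, frozenset(): -1.0}   # x - 1
b = {frozenset({'y', 'z'}): 2.0}                 # 2*y*z
result = poly_mul(a, b)                          # 2*x*y*z - 2*y*z
print(result)
```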

Memory leak when repeatedly creating QUBOs with version 1.3.0 or higher

Description
When QUBOs are created repeatedly, memory usage increases linearly and is not released until the process terminates.
This behavior occurs in versions 1.3 and 1.4, but not in 1.2.

To Reproduce
Run below script with version 1.3.0 or higher

import pyqubo

def create_qubo():
    x = pyqubo.Array.create("x", 100, 'BINARY')
    constraint = (sum(x) - 1) ** 2
    model = constraint.compile()
    qubo, offset = model.to_qubo()
    return qubo, offset

if __name__ == '__main__':
    n_iter = 1000
    for i in range(n_iter):
        qubo, offset = create_qubo()

Incorrect compilation of expression into qubo

Describe the bug
Compiling certain Hamiltonians results in incorrect QUBOs.

To Reproduce
Run the code below:

from pyqubo import Binary

N = 7 * 11
p = 1 + 2*Binary('p0') + 4*Binary('p1') + 8
q = 1 + 2*Binary('q0') + 4
H = (N - p*q)**2

model = H.compile()
qubo, offset = model.to_qubo()
print(qubo)

Get:

{('q0', 'q0'): -828.0, ('p0', 'p0'): 0.0, ('p1', 'p1'): -880.0, ('0*1', '0*1'): 415.0, ('p0', '0*1'): -10.0, ('p1', '0*1'): -10.0, ('q0', '0*1'): 384.0, ('p1', 'p0'): 5.0, ('p1', 'q0'): 880.0}

But this qubo is wrong. The ('p0', 'p0') coefficient is incorrectly calculated as 0, while the ('p0', 'q0') term is omitted completely.

Expected behavior
Should get:

{('q0', 'q0'): -828.0, ('p0', 'p0'): -540.0, ('p1', 'p1'): -880.0, ('0*1', '0*1'): 415.0, ('p0', '0*1'): -10.0, ('p1', '0*1'): -10.0, ('q0', '0*1'): 384.0, ('p1', 'p0'): 5.0, ('p1', 'q0'): 880.0, ('p0', 'q0'): 344.0}
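The Hamiltonian is small enough to brute-force in plain Python, which gives a reference point any correctly compiled QUBO (plus its offset) must reproduce:

```python
from itertools import product

# Brute-force H = (N - p*q)**2 over all binary assignments.
N = 7 * 11
energies = {}
for p0, p1, q0 in product((0, 1), repeat=3):
    p = 1 + 2*p0 + 4*p1 + 8
    q = 1 + 2*q0 + 4
    energies[(p0, p1, q0)] = (N - p*q)**2

ground = min(energies, key=energies.get)
print(ground, energies[ground])  # (1, 0, 1) 0, i.e. p = 11, q = 7
```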

cannot execute graph_coloring.ipynb

Describe the bug
In version 1.3.1, operations on terms involving a Placeholder fail depending on the operand order.
Because of this error, graph_coloring.ipynb in this repository can no longer be executed either.

To Reproduce
It can be confirmed with the following simple code.

>>> from pyqubo import Binary, Placeholder
>>> H1 = (Binary('x1') + Binary('x2')) + Placeholder('b') * Binary('x0')  # OK
>>> H2 = Placeholder('a') * Binary('x0') + (Binary('x1') + Binary('x2'))  # NG
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __radd__(): incompatible function arguments. The following argument types are supported:
    1. (self: cpp_pyqubo.Add, arg0: float) -> cpp_pyqubo.Add

Invoked with: (Binary('x1') + Binary('x2')), (Placeholder('a') * Binary('x0'))

Expected behavior
Please fix this so that the sample programs in this repository can be executed.

Release binary wheels for py39

We added support for python 3.9 in all Ocean packages (that pyqubo depends on), but will have to temporarily exclude pyqubo from the SDK requirements until it can be installed from wheels.

Support dimod 0.10 for next `dwave-ocean-sdk` release

We are hoping to release a new version of dwave-ocean-sdk that includes dimod 0.10.

For this release to occur, we need to make sure that all the sdk packages support the new dimod, including pyqubo. The current upper bound set as dimod<=0.10.0 would need to be raised to <0.11.

Thanks!

to_qubo with zero expression causes core dumped

Describe the bug

to_qubo with zero expression causes core dumped

To Reproduce

>>> import pyqubo as pyq
>>> d = pyq.Placeholder("d")
>>> term = d * 0
>>> print(term)
(Placeholder('d') * 0.000000)
>>> term.compile().to_qubo()
terminate called after throwing an instance of 'std::out_of_range'
  what():  _Map_base::at
Aborted (core dumped)

Expected behavior
to_qubo return empty dictionary {}.

これまでのバージョンでは空の辞書が返ってきていました.

Support for symbolized division

Is your feature request related to a problem? Please describe.
I'm always frustrated when I want to get a number divided a an expression

Describe the solution you'd like
I want pyqubo to find way to transfer from number/expression to valid expression. Theoretically, it could work

e.g. 1 / (Binary(a)*Num(0.2) + Binary(b)*Num(0.8)) = 1/0.2 * Binary(a) + 1/0.8 * Binary(b) + 0.3/0.8 * Binary(a)*Binary(b)

Describe alternatives you've considered
N/A

Additional context

Screen Shot 2022-09-09 at 15 44 16

Add logging functionality to track progress during pyqubo.compile()

First of all, I would like to express my gratitude for the excellent work on the pyqubo library. It has been very useful and provides superior functionality compared to other tools I have used in the past. However, I have a feature request that would further enhance the user experience and improve the library's usability.

I am requesting the addition of logging functionality within the pyqubo library, specifically during the pyqubo.compile() process. This would greatly help users to track the progress of lengthy compilations and provide better visibility into the process.

In my specific use case, I am working with a large Boolean circuit consisting of over 100,000 gates (AND, OR, XOR) to see spec requirements and compiling the corresponding PyQUBO expressions using pyqubo.compile(). The compilation process takes a significant amount of time, and it would be extremely beneficial to monitor the progress and estimate the remaining time.

Adding logging functionality during the pyqubo.compile() process would not only help in my use case, but it could also be beneficial to other users working with complex optimization problems and large circuits.

Please consider adding this feature to the library, as it would greatly enhance user experience and provide valuable insights into the compilation process.

Thank you for your consideration and for the outstanding work you have done on pyqubo.

Different QUBO is generated between 0.4.0 and 1.0.10

Describe the bug
Different QUBO is generated between 0.4.0 and 1.0.10.

To Reproduce
Execute below program with 0.4.0 and 1.0.10.

from pprint import pprint
from pyqubo import Array

xs = Array.create('xs', shape=(2, 2), vartype='BINARY')

h = 0

for j in range(1):
    h += (sum(xs[i][j] for i in range(2)) - 2) ** 2

for i in range(2):
    for j in range(2):
        for i_ in range(2):
            for j_ in range(2):
                h += xs[i][j] * xs[i][j_] * xs[i_][j] * xs[i_][j_]

model = h.compile()
pprint(model.to_qubo())

0.4.0 generates 16 entries, but 1.0.10 generates only 14 entries.

0.4.0 generates below.

01: ({('xs[0][0]', 'xs[0][0]'): -2.0,
02:   ('xs[0][0]', 'xs[0][1]'): 2.0,
03:   ('xs[0][0]', 'xs[0][1]*xs[1][0]*xs[1][1]'): 4.0,
04:   ('xs[0][0]', 'xs[1][0]'): 4.0,
05:   ('xs[0][1]', 'xs[0][1]'): 1.0,
06:   ('xs[0][1]', 'xs[0][1]*xs[1][1]'): -10.0,
07:   ('xs[0][1]', 'xs[1][1]'): 5.0,
08:   ('xs[0][1]*xs[1][0]*xs[1][1]', 'xs[0][1]*xs[1][0]*xs[1][1]'): 15.0,
09:   ('xs[0][1]*xs[1][0]*xs[1][1]', 'xs[0][1]*xs[1][1]'): -10.0,
10:   ('xs[0][1]*xs[1][0]*xs[1][1]', 'xs[1][0]'): -10.0,
11:   ('xs[0][1]*xs[1][1]', 'xs[0][1]*xs[1][1]'): 17.0,
12:   ('xs[0][1]*xs[1][1]', 'xs[1][0]'): 5.0,
13:   ('xs[0][1]*xs[1][1]', 'xs[1][1]'): -10.0,
14:   ('xs[1][0]', 'xs[1][0]'): -2.0,
15:   ('xs[1][0]', 'xs[1][1]'): 2.0,
16:   ('xs[1][1]', 'xs[1][1]'): 1.0},
17:  4.0)

1.0.10 generates below.

01: ({('0*1', '0*1'): 13.0,
02:   ('0*1', '2*3'): 4.0,
03:   ('2*3', '2*3'): 13.0,
04:   ('xs[0][0]', '2*3'): -10.0,
05:   ('xs[0][0]', 'xs[0][0]'): 0.0,
06:   ('xs[0][1]', '2*3'): -10.0,
07:   ('xs[0][1]', 'xs[0][0]'): 5.0,
08:   ('xs[0][1]', 'xs[0][1]'): 1.0,
09:   ('xs[1][0]', '0*1'): -10.0,
10:   ('xs[1][0]', 'xs[1][0]'): 0.0,
11:   ('xs[1][1]', '0*1'): -10.0,
12:   ('xs[1][1]', 'xs[0][1]'): 2.0,
13:   ('xs[1][1]', 'xs[1][0]'): 5.0,
14:   ('xs[1][1]', 'xs[1][1]'): 1.0},
15:  4.0)

('xs[0][0]', 'xs[1][0]') and one more entry is not present here.

Expected behavior
Sorry, I don't know this difference is correct or not. But my program outputs correct answers with 0.4.0 and dwave-neal outputs incorrect answers.

Environment
OS: Arch Linux
Python: 3.9.1

HUBO bug

Describe the bug
Constructing a HUBO works well, but then when I run to_qubo(), the resulting QUBO does not preserve all ground states. I suspect that the reason for this is a default call to dimod.make_quadratic. This default call to dimod does not preserve ground states; the penalty term needs to be maximum of the absolute values of the HUBO terms.

To Reproduce
Create some HUBOs, brute force solve them, compare to the qubo generated from to_qubo(); the ground states do not match.

Expected behavior
A preservation of all ground state solutions.

The wrong for auxiliary variables

I am turning the quartic term to quadratic term, my objective function is like f= q3p6p1+ .....+q4q3p2......+....+q4q3p6p1, it will produce the auxiliary variables p6p1, q4q3 and q4q3p6. But actually q4q3*p6 don't need

Support for advanced matrix operation, pseudo-inverse

Is your feature request related to a problem? Please describe.
I'm always frustrated when I want to transfer an original algorithm to QUBO when the original one has np.linalg.pinv

Describe the solution you'd like
I want to pyqubo to support similar operation

Describe alternatives you've considered
N/A

Additional context
I am doing research with this. We could work together to find solutions for this and implement it in pyqubo

Memory error in class model on big sizes QUBO

Describe the bug
On Hameltanian with about 2^18 binarys, command "model = H.compile()" causes error "Process finished with exit code -1073740791 (0xC0000409)". But "print(sys.getsizeof(H.compile()))" is 56 with Hameltanian of any size, so, as I understand, 56 bytes is not enough for my Hameltanian, but I do not know how to correct it. If you have any ideas, please tell me.

Support to the number of spins

Describe the bug
I try to formulate a problem as the attached figure, but when I convert the model to the Ising model, "model.to_ising()", the python kernel crashed. Similar conditions occurred when the number of spins are over 200. How many spins do Pyqubo support?
bug

crash

Support latest numpy version >= 1.20

Is your feature request related to a problem? Please describe.
pyqubo requires numpy < 1.20 (https://github.com/recruit-communications/pyqubo/blob/master/setup.py#L104) however the latest numpy is 1.20.2 which is required by other quantum packages. This prevents pyqubo from installing in the same environment.

Describe the solution you'd like
Relax the numpy version requirements to include >= 1.20.

Describe alternatives you've considered
N/A

Additional context
N/A

Support Python 3.11

We would like to support Python 3.11 with dwave-ocean-sdk and are releasing wheels for all of our packages. Can you release one for pyqubo as well?

Matrix multiplication with Placeholder

Is your feature request related to a problem? Please describe.
No, it's not related to a problem.

Describe the solution you'd like
I'd like to create an array of placeholders so that I can compile a hamiltonian that involves a matrix multiplication without having to re-compile at each iteration. For example:

>>> import numpy as np
>>> from pyqubo import Array, Placeholder
>>> W = np.random.uniform(0,1, (2,3))
>>> W
array([[0.87430507, 0.97073971, 0.68686897],
       [0.26467267, 0.91617608, 0.75222813]])
>>> Q = Array.create("Q", shape=()
>>> Q
Array([[Binary(Q[0][0])],
       [Binary(Q[1][0])],
       [Binary(Q[2][0])]])
>>> q = np.ones((3,1))
>>> q
array([[1.],
       [1.],
       [1.]])
>>> W @ q
array([[2.53191375],
       [1.93307688]])
>>> W @ Q
array([[((Binary(Q[0][0])*Num(0.8743050686234631))+(Binary(Q[1][0])*Num(0.9707397100912994))+(Binary(Q[2][0])*Num(0.6868689741296812)))],
       [((Binary(Q[0][0])*Num(0.2646726704158193))+(Binary(Q[1][0])*Num(0.9161760768438008))+(Binary(Q[2][0])*Num(0.7522281288676448)))]],
      dtype=object)

So far so good. But how do I do it with Placeholders?

Describe alternatives you've considered
Adding a function to create an array of placeholders with given shape?

Additional context
I'm implementing a non-negative matrix factorization X = W*Q. In the main loop I have to solve a QUBO equation multiple times for different values of the weights (W) matrix to find Q.

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.