
brightway2-calc's People

Contributors

bsteubing, cmutel, haasad, jan-eat, m-rossi, michaelweinold, mixib, pascallesage, renovate[bot], tngtudor


brightway2-calc's Issues

Monte Carlo Bug

Original report by Caroline Nadel (Bitbucket: cazcazn, ).


I am following the tutorials. Every time I try to do a Monte Carlo as documented, an exception ValueError: left > mode is thrown. This happens with both tutorials 2 and 3 so far.

Stack trace:

ValueError Traceback (most recent call last)
in ()
2 mc.load_data()
3 for x in range(10):
----> 4 print mc.next()

.../env/lib/python2.7/site-packages/bw2calc/monte_carlo.pyc in next(self)
64 if not hasattr(self, "tech_rng"):
65 raise NameError("Must run load_data before making calculations")
---> 66 self.rebuild_technosphere_matrix(self.tech_rng.next())
67 self.rebuild_biosphere_matrix(self.bio_rng.next())
68 if self.lcia:

.../env/lib/python2.7/site-packages/stats_arrays/random.pyc in next(self)
181 1,
182 self.random,
--> 183 self.maximum_iterations
184 )
185 if len(random_data.shape) == 2:

.../env/lib/python2.7/site-packages/stats_arrays/distributions/base.pyc in bounded_random_variables(cls, params, size, seeded_random, maximum_iterations)
314 maximum_iterations=None):
315 """No bounds checking because the bounds do not exclude any of the distribution."""
--> 316 return cls.random_variables(params, size, seeded_random)
317
318 @classmethod

.../env/lib/python2.7/site-packages/stats_arrays/distributions/geometric.pyc in random_variables(cls, params, size, seeded_random)
68 params['loc'], # Mode
69 params['maximum'], # Right
---> 70 size=(size, params.shape[0])).T
71
72 @classmethod

mtrand.pyx in mtrand.RandomState.triangular (numpy/random/mtrand/mtrand.c:19950)()

ValueError: left > mode
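The exception comes from NumPy's triangular sampler, which rejects any parameter set where the left bound exceeds the mode. A minimal reproduction independent of Brightway, sketching only the underlying NumPy check (using the modern Generator API rather than the legacy mtrand shown in the trace):

```python
import numpy as np

# NumPy refuses triangular distributions whose left bound exceeds the
# mode; an uncertain exchange whose minimum is larger than its loc value
# in the stats_arrays parameters triggers exactly this check.
rng = np.random.default_rng(42)

try:
    rng.triangular(left=2.0, mode=1.0, right=3.0, size=10)
    raised = False
    message = ""
except ValueError as err:
    raised = True
    message = str(err)
```

So the fix is usually in the uncertainty data itself (ensure minimum <= loc <= maximum for triangular distributions), not in bw2calc.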

Rework logging

Current logging is rarely useful and doesn't meet modern needs. Please vote on the core features that a reworked logging system should include.


Pardiso error when using MultiLCA

When I try to use MultiLCA, I get the error: PyPardisoError: The Pardiso solver failed with error code -3. See Pardiso documentation for details.

Full error

File ~\AppData\Local\Continuum\anaconda3\envs\bright_env\lib\site-packages\bw2calc\multi_lca.py:42, in MultiLCA.__init__(self, cs_name, log_config)
36 self.lca = LCA(demand=self.all, method=self.methods[0], log_config=log_config)
37 self.lca.logger.info({
38 'message': 'Started MultiLCA calculation',
39 'methods': list(self.methods),
40 'functional units': [wrap_functional_unit(o) for o in self.func_units]
41 })
---> 42 self.lca.lci(factorize=True)
43 self.method_matrices = []
44 self.results = np.zeros((len(self.func_units), len(self.methods)))

File ~\AppData\Local\Continuum\anaconda3\envs\bright_env\lib\site-packages\bw2calc\lca.py:342, in LCA.lci(self, factorize, builder)
340 if factorize:
341 self.decompose_technosphere()
--> 342 self.lci_calculation()

File ~\AppData\Local\Continuum\anaconda3\envs\bright_env\lib\site-packages\bw2calc\lca.py:350, in LCA.lci_calculation(self)
344 def lci_calculation(self):
345 """The actual LCI calculation.
346
347 Separated from lci to be reusable in cases where the matrices are already built, e.g. redo_lci and Monte Carlo classes.
348
349 """
--> 350 self.supply_array = self.solve_linear_system()
351 # Turn 1-d array into diagonal matrix
352 count = len(self.activity_dict)

File ~\AppData\Local\Continuum\anaconda3\envs\bright_env\lib\site-packages\bw2calc\lca.py:314, in LCA.solve_linear_system(self)
301 """
302 Master solution function for linear system :math:Ax=B.
303
(...)
311
312 """
313 if hasattr(self, "solver"):
--> 314 return self.solver(self.demand_array)
315 else:
316 return spsolve(
317 self.technosphere_matrix,
318 self.demand_array)

File ~\AppData\Local\Continuum\anaconda3\envs\bright_env\lib\site-packages\pypardiso\scipy_aliases.py:46, in spsolve(A, b, factorize, squeeze, solver, *args, **kwargs)
44 solver._check_A(A)
45 if factorize and not solver._is_already_factorized(A):
---> 46 solver.factorize(A)
48 x = solver.solve(A, b)
50 if squeeze:

File ~\AppData\Local\Continuum\anaconda3\envs\bright_env\lib\site-packages\pypardiso\pardiso_wrapper.py:150, in PyPardisoSolver.factorize(self, A)
148 self.set_phase(12)
149 b = np.zeros((A.shape[0], 1))
--> 150 self._call_pardiso(A, b)

File ~\AppData\Local\Continuum\anaconda3\envs\bright_env\lib\site-packages\pypardiso\pardiso_wrapper.py:281, in PyPardisoSolver._call_pardiso(self, A, b)
263 self._mkl_pardiso(self.pt.ctypes.data_as(ctypes.POINTER(self._pt_type[0])), # pt
264 ctypes.byref(ctypes.c_int32(1)), # maxfct
265 ctypes.byref(ctypes.c_int32(1)), # mnum
(...)
277 x.ctypes.data_as(c_float64_p), # x -> output
278 ctypes.byref(pardiso_error)) # pardiso error
280 if pardiso_error.value != 0:
--> 281 raise PyPardisoError(pardiso_error.value)
282 else:
283 return np.ascontiguousarray(x)

PyPardisoError: The Pardiso solver failed with error code -3. See Pardiso documentation for details.

Conda list

appdirs 1.4.4 pyh9f0ad1d_0 conda-forge
argon2-cffi 21.3.0 pyhd8ed1ab_0 conda-forge
argon2-cffi-bindings 21.2.0 py310h2bbff1b_0
asteval 0.9.23 pyhd8ed1ab_0 conda-forge
asttokens 2.0.5 pyhd8ed1ab_0 conda-forge
astunparse 1.6.3 pyhd8ed1ab_0 conda-forge
attrs 21.4.0 pyhd8ed1ab_0 conda-forge
backcall 0.2.0 pyh9f0ad1d_0 conda-forge
backports 1.0 py_2 conda-forge
backports.functools_lru_cache 1.6.4 pyhd8ed1ab_0 conda-forge
beautifulsoup4 4.10.0 pyha770c72_0 conda-forge
blas 1.0 mkl
bleach 4.1.0 pyhd8ed1ab_0 conda-forge
brightway2 2.4.1 py_1 cmutel
brotli 1.0.9 h8ffe710_6 conda-forge
brotli-bin 1.0.9 h8ffe710_6 conda-forge
brotlipy 0.7.0 py310h2bbff1b_1002
bw-recipe-2016 0.3 pypi_0 pypi
bw2analyzer 0.10 py_1 cmutel
bw2calc 1.8.1 py_2 cmutel
bw2data 3.6.4 py_0 cmutel
bw2io 0.8.6 py_1 cmutel
bw2parameters 0.7 py_0 cmutel
bw_migrations 0.1 py_0 cmutel
bw_processing 0.7.1 py_0 cmutel
bzip2 1.0.8 he774522_0
ca-certificates 2022.3.18 haa95532_0
certifi 2020.6.20 pyhd3eb1b0_3
cffi 1.15.0 py310h2bbff1b_1
charset-normalizer 2.0.12 pyhd8ed1ab_0 conda-forge
colorama 0.4.4 pyh9f0ad1d_0 conda-forge
cryptography 36.0.0 py310h21b164f_0
cycler 0.11.0 pyhd8ed1ab_0 conda-forge
debugpy 1.5.1 py310hd77b12b_0
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge
docopt 0.6.2 py_1 conda-forge
eight 1.0.0 py_0 conda-forge
entrypoints 0.4 pyhd8ed1ab_0 conda-forge
et_xmlfile 1.0.1 py_1001 conda-forge
executing 0.8.3 pyhd8ed1ab_0 conda-forge
fasteners 0.17.3 pyhd8ed1ab_0 conda-forge
flit-core 3.7.1 pyhd8ed1ab_0 conda-forge
freetype 2.10.4 h546665d_1 conda-forge
fs 2.4.15 pyhd8ed1ab_0 conda-forge
future 0.18.2 py310haa95532_1
icu 69.1 h0e60522_0 conda-forge
idna 3.3 pyhd8ed1ab_0 conda-forge
importlib-metadata 4.8.2 py310haa95532_0
importlib_resources 5.6.0 pyhd8ed1ab_0 conda-forge
intel-openmp 2022.0.0 h57928b3_3663 conda-forge
ipykernel 6.9.1 py310haa95532_0
ipython 8.1.1 py310haa95532_0
ipython_genutils 0.2.0 py_1 conda-forge
ipywidgets 7.7.0 pyhd8ed1ab_0 conda-forge
jbig 2.1 h8d14728_2003 conda-forge
jedi 0.18.1 py310haa95532_1
jinja2 3.1.1 pyhd8ed1ab_0 conda-forge
jpeg 9e h8ffe710_0 conda-forge
jsonschema 4.4.0 pyhd8ed1ab_0 conda-forge
jupyter 1.0.0 py310haa95532_7
jupyter_client 7.0.6 pyhd3eb1b0_0
jupyter_console 6.4.3 pyhd8ed1ab_0 conda-forge
jupyter_core 4.9.2 py310haa95532_0
jupyterlab_pygments 0.1.2 pyh9f0ad1d_0 conda-forge
jupyterlab_widgets 1.1.0 pyhd8ed1ab_0 conda-forge
kiwisolver 1.3.1 py310hd77b12b_0
lcms2 2.12 h2a16943_0 conda-forge
lerc 3.0 h0e60522_0 conda-forge
libblas 3.9.0 12_win64_mkl conda-forge
libbrotlicommon 1.0.9 h8ffe710_6 conda-forge
libbrotlidec 1.0.9 h8ffe710_6 conda-forge
libbrotlienc 1.0.9 h8ffe710_6 conda-forge
libcblas 3.9.0 12_win64_mkl conda-forge
libclang 13.0.1 default_h81446c8_0 conda-forge
libdeflate 1.10 h8ffe710_0 conda-forge
libffi 3.4.2 h604cdb4_1
libiconv 1.16 he774522_0 conda-forge
liblapack 3.9.0 12_win64_mkl conda-forge
libpng 1.6.37 h1d00b33_2 conda-forge
libsodium 1.0.18 h8d14728_1 conda-forge
libtiff 4.3.0 hc4061b1_3 conda-forge
libwebp 1.2.2 h57928b3_0 conda-forge
libwebp-base 1.2.2 h8ffe710_1 conda-forge
libxcb 1.13 hcd874cb_1004 conda-forge
libxml2 2.9.12 hf5bbc77_1 conda-forge
libxslt 1.1.33 h65864e5_3 conda-forge
libzlib 1.2.11 h8ffe710_1014 conda-forge
lxml 4.8.0 py310he2412df_1 conda-forge
lz4-c 1.9.3 h8ffe710_1 conda-forge
m2w64-gcc-libgfortran 5.3.0 6 conda-forge
m2w64-gcc-libs 5.3.0 7 conda-forge
m2w64-gcc-libs-core 5.3.0 7 conda-forge
m2w64-gmp 6.1.0 2 conda-forge
m2w64-libwinpthread-git 5.0.0.4634.697f757 2 conda-forge
markupsafe 2.0.1 py310h2bbff1b_0
matplotlib-base 3.4.3 py310h79a7439_2 conda-forge
matplotlib-inline 0.1.3 pyhd8ed1ab_0 conda-forge
matrix_utils 0.2.3 py_0 cmutel
mistune 0.8.4 py310h2bbff1b_1000
mkl 2021.4.0 h0e2418a_729 conda-forge
mrio_common_metadata 0.1.1 py_0 cmutel
msys2-conda-epoch 20160418 1 conda-forge
munkres 1.1.4 pyh9f0ad1d_0 conda-forge
nbclient 0.5.13 pyhd8ed1ab_0 conda-forge
nbconvert 6.4.5 pyhd8ed1ab_1 conda-forge
nbconvert-core 6.4.5 pyhd8ed1ab_1 conda-forge
nbconvert-pandoc 6.4.5 pyhd8ed1ab_1 conda-forge
nbformat 5.2.0 pyhd8ed1ab_0 conda-forge
nest-asyncio 1.5.4 pyhd8ed1ab_0 conda-forge
notebook 6.4.10 pyha770c72_0 conda-forge
numpy 1.22.3 py310hcae7c84_0 conda-forge
openjpeg 2.4.0 hb211442_1 conda-forge
openpyxl 3.0.9 pyhd3eb1b0_0
openssl 1.1.1n h2bbff1b_0
packaging 21.3 pyhd8ed1ab_0 conda-forge
pandas 1.4.1 py310hf5e1058_0 conda-forge
pandoc 2.17.1.1 h57928b3_0 conda-forge
pandocfilters 1.5.0 pyhd8ed1ab_0 conda-forge
parso 0.8.3 pyhd8ed1ab_0 conda-forge
peewee 3.14.10 py310ha5cd066_0 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 9.0.1 py310hdc2b20a_0
pip 22.0.4 pyhd8ed1ab_0 conda-forge
plotly 5.6.0 pyhd3eb1b0_0
prometheus_client 0.13.1 pyhd8ed1ab_0 conda-forge
prompt-toolkit 3.0.27 pyha770c72_0 conda-forge
prompt_toolkit 3.0.27 hd8ed1ab_0 conda-forge
psutil 5.8.0 py310h2bbff1b_1
pthread-stubs 0.4 hcd874cb_1001 conda-forge
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pygments 2.11.2 pyhd8ed1ab_0 conda-forge
pyopenssl 22.0.0 pyhd8ed1ab_0 conda-forge
pypardiso 0.4.0 pyhd8ed1ab_0 conda-forge
pyparsing 3.0.7 pyhd8ed1ab_0 conda-forge
pyprind 2.11.2 py310h5588dad_1002 conda-forge
pyqt 5.12.3 py310h5588dad_8 conda-forge
pyqt-impl 5.12.3 py310h8a704f9_8 conda-forge
pyqt5-sip 4.19.18 py310h8a704f9_8 conda-forge
pyqtchart 5.12 py310h8a704f9_8 conda-forge
pyqtwebengine 5.12.1 py310h8a704f9_8 conda-forge
pyrsistent 0.18.0 py310h2bbff1b_0
pysocks 1.7.1 py310haa95532_0
python 3.10.4 h9a09f29_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python_abi 3.10 2_cp310 conda-forge
pytz 2022.1 pyhd8ed1ab_0 conda-forge
pywin32 302 py310h827c3e9_1
pywinpty 2.0.2 py310h5da7b33_0
pyzmq 22.3.0 py310hd77b12b_2
qt 5.12.9 h556501e_6 conda-forge
qtconsole 5.3.0 pyhd8ed1ab_0 conda-forge
qtconsole-base 5.3.0 pyhd8ed1ab_0 conda-forge
qtpy 2.0.1 pyhd8ed1ab_0 conda-forge
requests 2.27.1 pyhd8ed1ab_0 conda-forge
scipy 1.8.0 py310h33db832_1 conda-forge
send2trash 1.8.0 pyhd8ed1ab_0 conda-forge
setuptools 58.0.4 py310haa95532_0
six 1.16.0 pyh6c4a22f_0 conda-forge
soupsieve 2.3.1 pyhd8ed1ab_0 conda-forge
sqlite 3.37.1 h8ffe710_0 conda-forge
stack_data 0.2.0 pyhd8ed1ab_0 conda-forge
stats_arrays 0.6.5 py_2 cmutel
tabulate 0.8.9 pyhd8ed1ab_0 conda-forge
tbb 2021.5.0 h2d74725_0 conda-forge
tenacity 8.0.1 pyhd8ed1ab_0 conda-forge
terminado 0.13.1 py310haa95532_0
testpath 0.6.0 pyhd8ed1ab_0 conda-forge
tk 8.6.12 h8ffe710_0 conda-forge
tornado 6.1 py310h2bbff1b_0
traitlets 5.1.1 pyhd8ed1ab_0 conda-forge
tzdata 2021e hda174b7_0
ucrt 10.0.20348.0 h57928b3_0 conda-forge
unicodecsv 0.14.1 py_1 conda-forge
unidecode 1.3.4 pyhd8ed1ab_0 conda-forge
urllib3 1.26.9 pyhd8ed1ab_0 conda-forge
vc 14.2 hb210afc_6 conda-forge
voluptuous 0.12.2 pyhd8ed1ab_1 conda-forge
vs2015_runtime 14.29.30037 h902a5da_6 conda-forge
wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge
webencodings 0.5.1 py_1 conda-forge
wheel 0.37.1 pyhd8ed1ab_0 conda-forge
whoosh 2.7.4 pyhd3eb1b0_1
widgetsnbextension 3.6.0 py310h5588dad_0 conda-forge
win_inet_pton 1.1.0 py310haa95532_0
wincertstore 0.2 py310haa95532_2
winpty 0.4.3 4 conda-forge
wrapt 1.13.3 py310h2bbff1b_2
xlrd 1.2.0 pyh9f0ad1d_1 conda-forge
xlsxwriter 3.0.3 pyhd8ed1ab_0 conda-forge
xorg-libxau 1.0.9 hcd874cb_0 conda-forge
xorg-libxdmcp 1.1.3 hcd874cb_0 conda-forge
xz 5.2.5 h62dcd97_1 conda-forge
zeromq 4.3.4 h0e60522_1 conda-forge
zipp 3.7.0 pyhd8ed1ab_1 conda-forge
zlib 1.2.11 h8ffe710_1014 conda-forge
zstd 1.5.2 h6255e5f_0 conda-forge

I tried to update brightway2 and ran into the issue described here (AttributeError: module 'bw2calc' has no attribute 'ComparativeMonteCarlo'). I tried to follow the answer from tngTUDIR, updating Python to version 3.10 in the process. Now the calculations that worked before the brightway2 update lead to the error described above.

Interface design for multiple LCIA methods

It would be sensible to calculate multiple LCIA methods at once; however, it is unclear exactly what this would look like. What does it look like to the end user? For example, is LCA.method now LCA.methods? Is LCA.characterization_matrix a list? What about LCA.score(s)?

One option would be for all method attributes to become lists, but behave like a single element if no explicit index is provided. However, this might be a bit too much magic. On the other hand, a switch from LCA.method to LCA.methods (which would always be a list) will break all existing Brightway code. So maybe the LCA class should always only support one impact category, and a new class should be added for multiple methods and functional units?

I don't know. Please provide your opinions, and, more helpfully, some actual interface ideas!
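To make the discussion concrete, here is one possible sketch of the separate-class option. All names (MultiMethodLCA, scores) are hypothetical, and plain NumPy vectors stand in for the real matrix machinery:

```python
import numpy as np

class MultiMethodLCA:
    """Hypothetical interface sketch: one inventory, many methods.

    ``methods`` maps a method identifier to a characterization vector;
    ``lcia`` fills ``scores``, a dict keyed the same way, so the
    single-method ``LCA.score`` attribute never becomes ambiguous.
    """

    def __init__(self, inventory, methods):
        self.inventory = np.asarray(inventory, dtype=float)
        self.methods = {
            name: np.asarray(cf, dtype=float) for name, cf in methods.items()
        }

    def lcia(self):
        # One dot product per method against the shared inventory
        self.scores = {
            name: float(cf @ self.inventory)
            for name, cf in self.methods.items()
        }
        return self.scores

# Toy data: a three-flow inventory scored against two methods at once
lca = MultiMethodLCA(
    inventory=[1.0, 2.0, 0.5],
    methods={"GWP": [1.0, 25.0, 298.0], "AP": [0.0, 0.7, 0.0]},
)
scores = lca.lcia()  # {'GWP': 200.0, 'AP': 1.4}
```

A dict of scores keyed by method sidesteps the "list vs. single element" magic entirely, at the cost of a new class name.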

Possibility to obtain and work with the inverted technology matrix (A⁻¹)

Currently, bw2calc only calculates the solution to the inventory problem, i.e. s in As = f. This is sufficient to get LCA results and do contribution analysis, but it is not sufficient to generate Sankey diagrams that show how impacts build up across the life cycle. The workaround used so far, e.g. in the Activity Browser, is the Brightway graph traversal. This works, but it is not ideal for generating Sankey diagrams, as one graph traversal has to be done for each reference flow and impact category. Having the matrix inverse (A⁻¹) would cost some up-front calculation time, but would then make Sankey diagram calculations orders of magnitude quicker (e.g. as in SimaPro). To my knowledge, one of the reasons calculations in OpenLCA and SimaPro take longer is that the matrix inverses are calculated. OpenLCA has found an elegant solution that could be copied in Brightway: you can choose how to calculate the LCA results, with a "quick" mode (no matrix inverse, no Sankeys) and an "analysis" mode (calculating the inverse and providing Sankey results).

So what I am practically suggesting is an option (e.g. bw.LCA(..., calculate_matrix_inverse=False)) that triggers a matrix inversion. The default should be False to keep LCA results quick, but True should compute (or at least provide) the inverted A matrix.

By the way, this can have other benefits as well:

  • Sankey diagrams
  • once you have the matrix inverse, you can calculate LCA results for all processes in your database much more quickly

Sorry, I had limited time to write this up, but it would be great if Brightway could also return the inverse technology matrix (and possibly do things with it).
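As a sketch of the cost involved: the inverse can be obtained column by column from a single factorization rather than by dense inversion. A minimal SciPy version (illustrative only, not the proposed bw2calc API or the suggested calculate_matrix_inverse flag itself):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Toy 3x3 technosphere matrix A
A = csc_matrix(np.array([
    [ 1.0,  0.0,  0.0],
    [-0.5,  1.0,  0.0],
    [-0.2, -0.1,  1.0],
]))

# Factorize once, then solve against the identity: each column of the
# result is one column of A^-1, so the up-front cost is n solves.
lu = splu(A)
A_inv = lu.solve(np.eye(A.shape[0]))

# Any demand vector f now yields a supply vector with a cheap mat-vec,
# which is what makes repeated Sankey calculations fast.
f = np.array([1.0, 0.0, 0.0])
s = A_inv @ f

assert np.allclose(A @ s, f)
```

For real technosphere matrices the inverse is dense, so memory (n² entries) is the main price of the "analysis" mode.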

Log and make calculations reproducible

Original report by Chris Mutel (Bitbucket: cmutel, GitHub: cmutel).


Logging framework gist here.

Need new input argument for all LCA classes: logpath. This can be either a filepath or a directory (in which case, a log filename is generated). We then need to log the following:

  • Filepaths of all processed arrays for each matrix
  • Hash of each processed array. This is fast, ~one millionth of a second per array.
  • Functional unit
  • Seed value for RNGs if Monte Carlo

Brightway2-data will be updated to include a new datastore for these logfiles, and utility functions to read the logs and construct the necessary arguments to reproduce the calculations.
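The hashing step above is indeed cheap. A sketch of what hashing a processed array could look like (hypothetical helper, not the actual bw2data implementation):

```python
import hashlib
import numpy as np

def array_hash(arr: np.ndarray) -> str:
    """Hex digest over an array's raw bytes plus its dtype and shape,
    so arrays with identical bytes but different layouts hash differently."""
    h = hashlib.md5()
    h.update(str(arr.dtype).encode())
    h.update(str(arr.shape).encode())
    h.update(np.ascontiguousarray(arr).tobytes())
    return h.hexdigest()

# A typical processed-array-sized input hashes in well under a millisecond
params = np.arange(1000, dtype=np.float64)
digest = array_hash(params)
assert digest == array_hash(params.copy())   # deterministic
assert digest != array_hash(params + 1)      # content-sensitive
```

Logging such a digest per matrix, plus the functional unit and RNG seed, is enough to detect whether a later rerun used the same inputs.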

build_demand_array raises AttributeError: 'LCA' object has no attribute 'product_dict'

Original report by Pascal Lesage (Bitbucket: MPa, ).


build_demand_array fails:

#!python

db = bw.Database('ecoinvent 3_2 CutOff')
act = db.random()
myMethod = bw.methods.random()
myLCA = bw.LCA({act:1}, myMethod)
demand_array = myLCA.build_demand_array()

AttributeError: 'LCA' object has no attribute 'product_dict'

Specifying the product does not help:

#!python

demand_array = myLCA.build_demand_array(demand = {act.get('reference product')})

System: Windows 10. BW updated 2016/04/15

identical results with MultiMonteCarlo

Original report by Miguel F.Ast (Bitbucket: Miguel_fast, ).


For some reason, MultiMonteCarlo appears to return identical results for all iterations on the same activity, e.g.:

import brightway2 as bw

act_dict_list = [
    {bw.Database('ecoinvent34_consequential').random(): 1},
    {bw.Database('ecoinvent34_consequential').random(): 1},
    {bw.Database('ecoinvent34_consequential').random(): 1},
]

ipcc2013 = [
    m for m in bw.methods
    if 'IPCC' in m[0]
    and '2013' in str(m)
    and 'GWP 100' in str(m)
    and 'no LT' not in str(m)
][0]

mmc = bw.MultiMonteCarlo(act_dict_list, method=ipcc2013, iterations=10)
r = mmc.calculate()

AttributeError: module 'bw2calc' has no attribute 'ComparativeMonteCarlo'

Good morning,

I have been trying to install brightway2 in the framework of my Phd thesis but I have a problem when importing it.

I attach all the steps I followed for the installation.

=> I installed Miniconda3

=> I created a new environment with Python 3.7

=> Information from the newly created environment (screenshot not preserved)

=> conda install -y -q -c conda-forge -c cmutel -c haasad brightway2 jupyter

=> conda install -y -q pywin32

=> conda clean -tipsy

Here I get some warnings; I do not know whether they are important.

=> conda update -c cmutel -c conda-forge bw2data bw2calc bw2io

=> pip install numpy
=> conda install pandas

I listed all the packages in my environment, thinking that perhaps some were out of date. I dug into that but could not find anything.

In my opinion, the creation of the environment and the installation of all the packages went fine. However, when I try to import brightway2 in a Jupyter notebook, I get an error (screenshot not preserved).

I'm sorry to bother you with this issue, but I am a beginner with Brightway and I do not have any clue how to solve the problem. I tried many times to create new environments, update packages, etc., but it still doesn't work.

Do you have any clue about it?

Thanks a lot in advance for your time

Module 'bw2calc' has no attribute 'ComparativeMonteCarlo'

When installing the latest version of the brightway2 metapackage (2.4.1) or also just the latest stable release of bw2calc (1.8.0), I get the following error: AttributeError: module 'bw2calc' has no attribute 'ComparativeMonteCarlo'

To reproduce:

  • fresh install
conda create -n ab705_4 -c conda-forge -c cmutel python=3.9 brightway2
conda activate ab705_4
  • only importing bw2calc works fine:
python -c "import bw2calc; print(bw2calc.__version__)" 
(1, 8, 0)
  • importing brightway2 fails:
python -c "import brightway2 as bw"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/adrian/miniconda3/envs/ab705_4/lib/python3.9/site-packages/brightway2/__init__.py", line 3, in <module>
    from bw2calc import *
AttributeError: module 'bw2calc' has no attribute 'ComparativeMonteCarlo'
  • also:
python -c "from bw2calc import *"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AttributeError: module 'bw2calc' has no attribute 'ComparativeMonteCarlo'

Unhelpful error when specifying incorrect functional unit

Original report by Chris Mutel (Bitbucket: cmutel, GitHub: cmutel).


If you have a database 'foo', and accidentally set up an LCA with LCA({Database("bar").random(): 1}), you get the following error:

#!python

>>> mc = MonteCarloLCA({Database("ecoinvent 3.2 cutoff").random(): 1}, methods.random())
Warning: This database is empty
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/jupyter/libs/python-3.5.1/lib/python3.5/site-packages/bw2calc/monte_carlo.py", line 21, in __init__
    **kwargs)
  File "/home/jupyter/libs/python-3.5.1/lib/python3.5/site-packages/bw2calc/lca.py", line 66, in __init__
    self.get_array_filepaths()
  File "/home/jupyter/libs/python-3.5.1/lib/python3.5/site-packages/bw2calc/lca.py", line 71, in get_array_filepaths
    get_database_filepaths(self.demand, self._databases),
  File "/home/jupyter/libs/python-3.5.1/lib/python3.5/site-packages/bw2calc/utils.py", line 64, in get_database_filepaths
    dbs = set.union(*[Database(key[0]).find_graph_dependents() for key in functional_unit])
  File "/home/jupyter/libs/python-3.5.1/lib/python3.5/site-packages/bw2calc/utils.py", line 64, in <listcomp>
    dbs = set.union(*[Database(key[0]).find_graph_dependents() for key in functional_unit])
TypeError: 'NoneType' object is not subscriptable

This does nothing to help you understand what the problem actually was.

ValueError: negative row index found during lcia calculation

I cannot calculate any LCIA results with the latest version of Brightway on my 64-bit Windows machine. I receive an error claiming that the row indices passed during matrix building contain negative values. Stepping through the code, I found that there are not actually any negative numbers. The problem is caused by a faulty integer interpretation: the array of row indices contains the number 4294967295, which is interpreted as -1 when cast to int32 (which scipy.sparse.coo_matrix seems to do). The number is introduced by the function index_with_arrays in bw2calc/indexing.py, which contains the following lines:

index_array = np.zeros(keys.max() + 1) - 1
index_array[keys] = values

mask = array_from <= keys.max()
array_to[:] = -1 # array_to contains only 4294967295's
array_to[mask] = index_array[array_from[mask]]
array_to[array_to == -1] = MAX_SIGNED_32BIT_INT # replacement fails

Because the last line, the replacement step, fails, the returned array still contains many 4294967295's. For some reason, 4294967295 is not interpreted as -1 during the replacement, only later during matrix building; hence the error.
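The reinterpretation is ordinary integer wraparound: 4294967295 (2³² − 1) is the two's-complement bit pattern of -1, which can be demonstrated directly:

```python
import numpy as np

# As an unsigned 32-bit value, 4294967295 is UINT32_MAX, but the same
# bit pattern read as signed int32 is -1.
sentinel = np.array([4294967295], dtype=np.uint32)
assert sentinel.astype(np.int32)[0] == -1

# This is why the comparison `array_to == -1` misses the value when
# array_to holds 64-bit (or unsigned) integers: there 4294967295 != -1,
# yet scipy's int32 cast during matrix building later sees -1.
array_to = np.array([4294967295, 7], dtype=np.int64)
assert not (array_to == -1).any()
assert array_to.astype(np.int32)[0] == -1
```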

Possible solution (no error on my machine, but not yet thoroughly tested):

index_array = np.zeros(keys.max() + 1) - 1
index_array[keys] = values
index_array[index_array == -1] = MAX_SIGNED_32BIT_INT

mask = array_from <= keys.max()
array_to[:] = MAX_SIGNED_32BIT_INT 
array_to[mask] = index_array[array_from[mask]]

Projects creation not possible

I installed Brightway 2 on a new computer.

I tried to create a new project:

projects.set_current("new")

However, I cannot create it. Please look at the message below.

Thank you in advance.

---------------------------------------------------------------------------
IntegrityError Traceback (most recent call last)
~\Miniconda3\envs\BW2\lib\site-packages\peewee.py in execute_sql(self, sql, params, commit)
3159 try:
-> 3160 cursor.execute(sql, params or ())
3161 except Exception:

IntegrityError: NOT NULL constraint failed: projectdataset.full_hash

During handling of the above exception, another exception occurred:

IntegrityError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_16580/2692245280.py in
----> 1 projects.set_current("new")

~\Miniconda3\envs\BW2\lib\site-packages\bw2data\project.py in set_current(self, name, writable, update)
141 # for new metadata stores
142 self.read_only = False
--> 143 self.create_project(name)
144 self._reset_meta()
145 self._reset_sqlite3_databases()

~\Miniconda3\envs\BW2\lib\site-packages\bw2data\project.py in create_project(self, name, **kwargs)
210 if not ProjectDataset.select().where(
211 ProjectDataset.name == name).count():
--> 212 ProjectDataset.create(
213 data=kwargs,
214 name=name

~\Miniconda3\envs\BW2\lib\site-packages\peewee.py in create(cls, **query)
6391 def create(cls, **query):
6392 inst = cls(**query)
-> 6393 inst.save(force_insert=True)
6394 return inst
6395

~\Miniconda3\envs\BW2\lib\site-packages\peewee.py in save(self, force_insert, only)
6601 rows = self.update(**field_dict).where(self._pk_expr()).execute()
6602 elif pk_field is not None:
-> 6603 pk = self.insert(**field_dict).execute()
6604 if pk is not None and (self._meta.auto_increment or
6605 pk_value is None):

~\Miniconda3\envs\BW2\lib\site-packages\peewee.py in inner(self, database, *args, **kwargs)
1909 raise InterfaceError('Query must be bound to a database in order '
1910 'to call "%s".' % method.__name__)
-> 1911 return method(self, database, *args, **kwargs)
1912 return inner
1913

~\Miniconda3\envs\BW2\lib\site-packages\peewee.py in execute(self, database)
1980 @database_required
1981 def execute(self, database):
-> 1982 return self._execute(database)
1983
1984 def _execute(self, database):

~\Miniconda3\envs\BW2\lib\site-packages\peewee.py in _execute(self, database)
2759 self._returning = (self.table._primary_key,)
2760 try:
-> 2761 return super(Insert, self)._execute(database)
2762 except self.DefaultValuesException:
2763 pass

~\Miniconda3\envs\BW2\lib\site-packages\peewee.py in _execute(self, database)
2477 cursor = self.execute_returning(database)
2478 else:
-> 2479 cursor = database.execute(self)
2480 return self.handle_result(database, cursor)
2481

~\Miniconda3\envs\BW2\lib\site-packages\peewee.py in execute(self, query, commit, **context_options)
3171 ctx = self.get_sql_context(**context_options)
3172 sql, params = ctx.sql(query).query()
-> 3173 return self.execute_sql(sql, params, commit=commit)
3174
3175 def get_context_options(self):

~\Miniconda3\envs\BW2\lib\site-packages\peewee.py in execute_sql(self, sql, params, commit)
3165 else:
3166 if commit and not self.in_transaction():
-> 3167 self.commit()
3168 return cursor
3169

~\Miniconda3\envs\BW2\lib\site-packages\peewee.py in __exit__(self, exc_type, exc_value, traceback)
2931 new_type = self.exceptions[exc_type.__name__]
2932 exc_args = exc_value.args
-> 2933 reraise(new_type, new_type(exc_value, *exc_args), traceback)
2934
2935

~\Miniconda3\envs\BW2\lib\site-packages\peewee.py in reraise(tp, value, tb)
189 def reraise(tp, value, tb=None):
190 if value.__traceback__ is not tb:
--> 191 raise value.with_traceback(tb)
192 raise value
193

~\Miniconda3\envs\BW2\lib\site-packages\peewee.py in execute_sql(self, sql, params, commit)
3158 cursor = self.cursor(commit)
3159 try:
-> 3160 cursor.execute(sql, params or ())
3161 except Exception:
3162 if self.autorollback and not self.in_transaction():

IntegrityError: NOT NULL constraint failed: projectdataset.full_hash

Test in vector monte carlo assumes self.sample exists

Original report by Chris Mutel (Bitbucket: cmutel, GitHub: cmutel).


If a sample is never defined, but only pre-computed vectors are passed in, then comparing to self.sample will fail.

So, have to choose between:

  1. Drawing a sample when self.rng is constructed on line 46, or
  2. Comparing to length of array used to populate RNG, or
  3. Doing a completely different comparison, i.e. checking that the input is 1-dimensional and fits into self.positions.

Not sure which approach is best.

Unhelpful error for LCIA methods which don't provide global indices

~/Code/bw2/calc/bw2calc/utils.py in consistent_global_index(packages, matrix)
     27             f"Multiple global index values found: {global_list}. If multiple LCIA datapackages are present, they must use the same value for ``GLO``, the global location, in order for filtering for site-generic LCIA to work correctly."
     28         )
---> 29     return global_list[0]
     30
     31

IndexError: list index out of range

redo_lci and redo_lcia fail if not supplied a demand argument

Original report by Pascal Lesage (Bitbucket: MPa, ).


Despite apparently accepting demand=None, LCA.redo_lci() and LCA.redo_lcia() will fail if no demand argument is provided.

>>> import brightway2 as bw
>>> lca = bw.LCA({('some', 'act'):1})
>>> lca.lci()
>>> lca.redo_lci()

Traceback (most recent call last):
  File "C:\mypy\anaconda\envs\bw2020\lib\site-packages\IPython\core\interactiveshell.py", line 3331, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-25-82c6d21a2d04>", line 1, in <module>
    lca.redo_lci()
  File "C:\mypy\anaconda\envs\bw2020\lib\site-packages\bw2calc\lca.py", line 516, in redo_lci
    self.logger.info("Redoing LCI", extra={'demand': wrap_functional_unit(demand or self.demand)})
  File "C:\mypy\anaconda\envs\bw2020\lib\site-packages\bw2calc\utils.py", line 219, in wrap_functional_unit
    for key, amount in dct.items():
AttributeError: 'NoneType' object has no attribute 'items'

The problem is here and here. The line self.demand = demand should be inside the if block.

I will provide a PR if there is no objection.
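The proposed fix can be illustrated on a toy class (names mirror the report; this is not the actual bw2calc code):

```python
class ToyLCA:
    """Minimal reproduction of the reported pattern: ``demand`` must only
    be overwritten when a new one is actually supplied."""

    def __init__(self, demand):
        self.demand = demand

    def redo_lci(self, demand=None):
        if demand is not None:
            self.demand = demand  # the fix: assignment inside the if block
        # logging (and anything else) then safely falls back to the
        # stored demand instead of receiving None
        return dict(demand or self.demand)

lca = ToyLCA({("some", "act"): 1})
assert lca.redo_lci() == {("some", "act"): 1}          # no argument: reuse stored
assert lca.redo_lci({("other", "act"): 2}) == {("other", "act"): 2}
```

With the assignment outside the if, calling redo_lci() with no argument sets self.demand to None, and the next dict access raises exactly the AttributeError shown above.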

Function 'load_arrays' in utils.py - Unsorted file paths in 1.7.5

Original report by Pedro Anchieta (Bitbucket: phanchieta, GitHub: phanchieta).


Version 1.7.4:

def load_arrays(paths):
    """Load the numpy arrays in list of filepaths ``paths``."""
    assert all(os.path.isfile(fp) for fp in paths)
    return np.hstack([np.load(path) for path in sorted(paths)])

was changed (in version 1.7.5) to

...
for obj in objs:
    if isinstance(obj, np.ndarray):
        # we're done here as the object is already a numpy array
        arrays.append(obj.copy())
    else:
        # treat object as loadable by numpy and try to load it from disk
        arrays.append(np.load(obj))
return np.hstack(arrays)

The "arrays" list is therefore built from unsorted objects, which causes errors when file paths are used.
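A possible fix, keeping 1.7.5's support for in-memory arrays while restoring 1.7.4's deterministic path ordering, might look like this (a sketch, not the shipped code; note that it groups in-memory arrays before file-based ones, which is one possible convention):

```python
import os
import tempfile
import numpy as np

def load_arrays(objs):
    """Stack in-memory arrays and on-disk arrays; filepaths are sorted so
    the result no longer depends on how the paths were collected."""
    arrays = [obj.copy() for obj in objs if isinstance(obj, np.ndarray)]
    paths = sorted(
        (obj for obj in objs if not isinstance(obj, np.ndarray)), key=str
    )
    arrays.extend(np.load(path) for path in paths)
    return np.hstack(arrays)

# Paths are loaded in sorted order regardless of input order
with tempfile.TemporaryDirectory() as tmp:
    a = os.path.join(tmp, "a.npy")
    b = os.path.join(tmp, "b.npy")
    np.save(a, np.array([1, 2]))
    np.save(b, np.array([3, 4]))
    stacked = load_arrays([b, a])

assert stacked.tolist() == [1, 2, 3, 4]
```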

montecarlo with 1 activity database

I have a problem when I try to run Monte Carlo for an activity in a database that contains only a single activity: it throws ValueError: A and x have incompatible dimensions after the first iteration. Here is the code to reproduce it. The problem does not happen if the database has another activity.

pm25_key=('biosphere3', '051aaf7a-6c1a-4e86-999f-85d5f0830df6')

act1_key=('test_1_act','activity_1')

biosphere_exchange_1={'amount':1,
                    'input':pm25_key,
                    'output':act1_key,
                    'type':'biosphere',
                    'uncertainty type': 0}

production_exchange_1={'amount':1,
                     'input':act1_key,
                     'output':act1_key,
                     'type':'production',
                     'uncertainty type':0}

act_1_dict={'name':'activity_1',
             'unit':'megajoule',
             'exchanges':[production_exchange_1,biosphere_exchange_1]}

database_dict={act1_key:act_1_dict}

db=bw.Database('test_1_act')

db.write(database_dict)

a1=bw.get_activity(act1_key)

# montecarlo, problem after first iteration
mc1a=bw.MonteCarloLCA({a1:1},bw.methods.random())

next(mc1a)

next(mc1a)

LCA seeds should not over-ride presamples seeds

Original report by Pascal Lesage (Bitbucket: MPa, ).


When instantiating an LCA, the LCA-level seed is passed to the matrix_presamples:

#!python

self.presamples = [MatrixPresamples(presamples, self.seed)]    

This overwrites presamples-level seeds.
It would be more useful for presamples seeds to be independent of LCA seeds.

`seed` should be passed during presamples creation

Original report by Pascal Lesage (Bitbucket: MPa, ).


Right now, seeds are passed when matrix_presamples are loaded. They should instead be passed when the presamples are created: that is the correct time to distinguish between randomly sampled and sequentially sampled presamples, and it also improves reproducibility.

LCIA result as a function

Help - I don't know where to ask this question.

I think BW2 could benefit from being able to calculate results outputting a function instead of a number.

e.g. GWP(x) = a*x + b

In my mind this should be doable, and I would like to contribute to BW2 by developing this feature. However, I don't really know where to start; any pointers would be appreciated.

From what I can see, Activity Browser does not allow parameters to have unknown values
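As a minimal sketch of the idea (a plain Python closure standing in for whatever symbolic machinery BW2 would actually use), a score can be returned as a callable in an unknown parameter x; the coefficients here are illustrative:

```python
def gwp_as_function(a, b):
    """Return GWP as a callable GWP(x) = a * x + b, instead of a number."""
    def gwp(x):
        return a * x + b
    return gwp

# Build the functional result once; evaluate it later, when x is known
gwp = gwp_as_function(a=2.5, b=10.0)
score = gwp(4.0)  # -> 20.0
```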

fail to import brightway

Hello, I have recently re-installed Brightway and I get an error when trying to import bw2calc (v1.8.0) on a Windows machine. It seems to be due to a library used by the Pardiso solver (pypardiso v0.2.2 installed). I am not sure how to address this.

 FileNotFoundError                         Traceback (most recent call last)
<ipython-input-1-a3ddbb8a34e8> in <module>
----> 1 import brightway2 as bw

~\Anaconda3\envs\bw\lib\site-packages\brightway2\__init__.py in <module>
      1 # -*- coding: utf-8 -*
      2 from bw2data import *
----> 3 from bw2calc import *
      4 from bw2io import *
      5

~\Anaconda3\envs\bw\lib\site-packages\bw2calc\__init__.py in <module>
     24 __version__ = (1, 8, 0)
     25
---> 26 from .lca import LCA
     27 from .dense_lca import DenseLCA
     28 from .independent_lca import IndependentLCAMixin

~\Anaconda3\envs\bw\lib\site-packages\bw2calc\lca.py in <module>
     27
     28 try:
---> 29     from pypardiso import factorized, spsolve
     30 except ImportError:
     31     from scipy.sparse.linalg import factorized, spsolve

~\Anaconda3\envs\bw\lib\importlib\_bootstrap.py in _find_and_load(name, import_)

~\Anaconda3\envs\bw\lib\importlib\_bootstrap.py in _find_and_load_unlocked(name, import_)

~\Anaconda3\envs\bw\lib\importlib\_bootstrap.py in _load_unlocked(spec)

~\Anaconda3\envs\bw\lib\importlib\_bootstrap.py in _load_backward_compatible(spec)

<frozen zipimport> in load_module(self, fullname)

~\Anaconda3\envs\bw\lib\site-packages\pypardiso-0.2.2-py3.6.egg\pypardiso\__init__.py in <module>
      2
      3 from .pardiso_wrapper import PyPardisoSolver
----> 4 from .scipy_aliases import spsolve, factorized
      5 from .scipy_aliases import pypardiso_solver as ps
      6

~\Anaconda3\envs\bw\lib\importlib\_bootstrap.py in _find_and_load(name, import_)

~\Anaconda3\envs\bw\lib\importlib\_bootstrap.py in _find_and_load_unlocked(name, import_)

~\Anaconda3\envs\bw\lib\importlib\_bootstrap.py in _load_unlocked(spec)

~\Anaconda3\envs\bw\lib\importlib\_bootstrap.py in _load_backward_compatible(spec)

<frozen zipimport> in load_module(self, fullname)

~\Anaconda3\envs\bw\lib\site-packages\pypardiso-0.2.2-py3.6.egg\pypardiso\scipy_aliases.py in <module>
      7 # pypardsio_solver is used for the 'spsolve' and 'factorized' functions. Python crashes on windows if multiple
      8 # instances of PyPardisoSolver make calls to the Pardiso library
----> 9 pypardiso_solver = PyPardisoSolver()
     10
     11

~\Anaconda3\envs\bw\lib\site-packages\pypardiso-0.2.2-py3.6.egg\pypardiso\pardiso_wrapper.py in __init__(self, mtype, phase, size_limit_storage)
     64             self.libmkl = ctypes.CDLL('libmkl_rt.dylib')
     65         elif sys.platform == 'win32':
---> 66             self.libmkl = ctypes.CDLL('mkl_rt.dll')
     67         else:
     68             self.libmkl = ctypes.CDLL('libmkl_rt.so')

~\Anaconda3\envs\bw\lib\ctypes\__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error, winmode)
    379
    380         if handle is None:
--> 381             self._handle = _dlopen(self._name, mode)
    382         else:
    383             self._handle = handle

FileNotFoundError: Could not find module 'mkl_rt.dll' (or one of its dependencies). Try using the full path with constructor syntax.
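For context, the traceback suggests why bw2calc's own fallback never fires: the missing `mkl_rt.dll` surfaces as a `FileNotFoundError` (a subclass of `OSError`), while `bw2calc/lca.py` only catches `ImportError`. A sketch of a more tolerant loader follows; the loader functions are stand-ins, not the real APIs:

```python
def load_solver(load_pypardiso, load_scipy):
    """Try the MKL-backed solver first; fall back on any load failure.

    Catching OSError as well as ImportError covers the case where
    pypardiso imports but its MKL shared library cannot be found.
    """
    try:
        return load_pypardiso()
    except (ImportError, OSError):
        return load_scipy()

def broken_pypardiso():
    # Mimics the failure mode reported above
    raise FileNotFoundError("Could not find module 'mkl_rt.dll'")

solver = load_solver(broken_pypardiso, lambda: "scipy.sparse.linalg.spsolve")
# FileNotFoundError is an OSError, so the SciPy fallback is reached
```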

to_dataframe sorting

Hi

I was having some trouble with `to_dataframe`, and I see now that it is commented out. In case this method is used at some point: I think `stacked.sort()` should be replaced by something like `stacked = stacked[:, stacked[0, :].argsort()]` (or anything more elegant); otherwise the ranking is not sorted correctly.
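The difference matters because `ndarray.sort()` sorts each row independently and breaks the column pairing, while `argsort` on the first row reorders whole columns together. A small demonstration with made-up score/value pairs:

```python
import numpy as np

# Each column is a (score, value) pair: 3.0<->10.0, 1.0<->30.0, 2.0<->20.0
stacked = np.array([[3.0, 1.0, 2.0],
                    [10.0, 30.0, 20.0]])

# argsort on the first row reorders whole columns, keeping each pair intact
by_score = stacked[:, stacked[0, :].argsort()]
# by_score == [[1., 2., 3.], [30., 20., 10.]]; 3.0 stays paired with 10.0

# In-place sort() sorts every row independently and destroys the pairing
broken = stacked.copy()
broken.sort()
# broken == [[1., 2., 3.], [10., 20., 30.]]; 3.0 is now paired with 30.0
```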

import brightway database error

Hello everyone,
Recently I am struggling with a problem of brightway import. I am performing an LCA project for my master thesis and it is stuck. Since yesterday, python/jupyternotebook is giving the following problem. I need to find a solution asap. Is there anyone who can help me? Any advice or answer would be awesome helpful.

Thank u!

### Input
import brightway2 as bw
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt

### Output

ImportError Traceback (most recent call last)
File ~\anaconda3\envs\MG\lib\site-packages\bw2data\serialization.py:242, in PickledDict.deserialize(self)
241 try:
--> 242 return self.unpack(pickle.load(open(self.filepath, "rb")))
243 except ImportError:

File ~\anaconda3\envs\MG\lib\site-packages\bw2data\backends\__init__.py:5, in <module>
3 from eight import *
----> 5 from .base import LCIBackend
6 from .peewee import SQLiteBackend

File ~\anaconda3\envs\MG\lib\site-packages\bw2data\backends\base.py:5, in <module>
3 from eight import *
----> 5 from .. import (
6 config,
7 databases,
8 geomapping,
9 mapping,
10 projects,
11 )
12 from ..data_store import ProcessedDataStore

ImportError: cannot import name 'databases' from partially initialized module 'bw2data' (most likely due to a circular import) (C:\Users\cansu\anaconda3\envs\MG\lib\site-packages\bw2data\__init__.py)

During handling of the above exception, another exception occurred:

PickleError Traceback (most recent call last)
Input In [1], in <cell line: 1>()
----> 1 import brightway2 as bw
2 import os # to use "operating system dependent functionality"
3 import numpy as np # "the fundamental package for scientific computing with Python"

File ~\anaconda3\envs\MG\lib\site-packages\brightway2\__init__.py:2, in <module>
1 # -*- coding: utf-8 -*-
----> 2 from bw2data import *
3 from bw2calc import *
4 from bw2io import *

File ~\anaconda3\envs\MG\lib\site-packages\bw2data\__init__.py:35, in <module>
33 from .project import projects
34 from .utils import set_data_dir
---> 35 from .meta import (
36 dynamic_calculation_setups,
37 calculation_setups,
38 databases,
39 geomapping,
40 mapping,
41 methods,
42 normalizations,
43 preferences,
44 weightings,
45 )
47 # Add metadata class instances to global list of serialized metadata
48 config.metadata.extend([
49 dynamic_calculation_setups,
50 calculation_setups,
(...)
57 weightings,
58 ])

File ~\anaconda3\envs\MG\lib\site-packages\bw2data\meta.py:192, in <module>
190 preferences = Preferences()
191 weightings = WeightingMeta()
--> 192 calculation_setups = CalculationSetups()
193 dynamic_calculation_setups = DynamicCalculationSetups()

File ~\anaconda3\envs\MG\lib\site-packages\bw2data\serialization.py:123, in SerializedDict.__init__(self, dirpath)
118 raise NotImplemented("SerializedDict must be subclassed, and the filename must be set.")
119 self.filepath = os.path.join(
120 dirpath or projects.dir,
121 self.filename
122 )
--> 123 self.load()

File ~\anaconda3\envs\MG\lib\site-packages\bw2data\serialization.py:128, in SerializedDict.load(self)
126 """Load the serialized data. Creates the file if not yet present."""
127 try:
--> 128 self.data = self.deserialize()
129 except IOError:
130 # Create if not present
131 self.data = {}

File ~\anaconda3\envs\MG\lib\site-packages\bw2data\serialization.py:245, in PickledDict.deserialize(self)
243 except ImportError:
244 TEXT = "Pickle deserialization error in file '%s'" % self.filepath
--> 245 raise PickleError(TEXT)

PickleError: Pickle deserialization error in file 'C:\Users\cansu\AppData\Local\pylca\Brightway3\default.c21f969b5f03d33d43e04f8f136e7682\setups.pickle'

Create bw2remote and package functionality for offline calculations

Original report by Chris Mutel (Bitbucket: cmutel, GitHub: cmutel).


This will be quite some work:

  1. We need to be able to package up the processed files for a given functional unit and method (and weighting, normalization). This should be a single archive file.

  2. Translate functional unit, etc. to the correct filenames and indices.

  3. Write bw2remote, which is a flask application that can accept the files and a JSON payload.

  4. bw2remote should then stream Monte Carlo results back to a notebook widget that will produce a dynamic D3 histogram. See: https://github.com/mbostock/d3/wiki/Histogram-Layout, http://stackoverflow.com/questions/22052694/how-to-update-d3-js-bar-chart-with-new-data, http://blog.thedataincubator.com/2015/08/embedding-d3-in-an-ipython-notebook/

bw2remote should use rq and redis (https://redis-py.readthedocs.org/en/latest/, http://python-rq.org/docs/), and a Queue during multiprocessing.
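A rough stdlib-only sketch of the streaming idea in point 4, assuming a worker process pushes Monte Carlo scores onto a queue that the front end drains as they arrive (the Gaussian draw is a stand-in for a real MC iteration, and all names are hypothetical):

```python
import multiprocessing as mp
import random

def monte_carlo_worker(queue, iterations, seed):
    """Hypothetical worker: push one mock LCA score per iteration."""
    rng = random.Random(seed)
    for _ in range(iterations):
        queue.put(rng.gauss(1.0, 0.1))  # stand-in for a real MC score
    queue.put(None)  # sentinel: the stream is finished

def stream_scores(iterations, seed=42):
    """Yield scores as they arrive, e.g. to feed a live D3 histogram."""
    queue = mp.Queue()
    worker = mp.Process(target=monte_carlo_worker,
                        args=(queue, iterations, seed))
    worker.start()
    while True:
        score = queue.get()
        if score is None:
            break
        yield score
    worker.join()

if __name__ == "__main__":
    scores = list(stream_scores(10))
```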

lca.switch_method is broken

It is currently not possible to switch methods using `lca.switch_method`. The reason: `switch_method` loads packages and calls `lca.load_lcia_data(packages)` successfully. However, when `lca.lcia()` is called afterwards, `lca.load_lcia_data()` runs again, this time without arguments, so `self.packages` is loaded instead; and `self.packages` still holds the old method data, because it was never overwritten during `switch_method`.

Current implementation:

```python
    def switch_method(self, method):
        """Switch to LCIA method `method`"""
        try:
            _, data_objs, _ = prepare_lca_inputs(method=method)
            packages = [load_package(obj) for obj in data_objs]
        except AssertionError:
            packages = method
        self.method = method
        self.load_lcia_data(packages)

    def lcia(self):
        assert hasattr(self, "inventory"), "Must do lci first"
        # assert self.method, "Must specify a method to perform LCIA"
        if not self.dicts.biosphere:
            raise EmptyBiosphere

        self.load_lcia_data()
        self.lcia_calculation()
```

Proposed fix:

```python

    def switch_method(self, method):
        """Switch to LCIA method `method`"""
        try:
            _, data_objs, _ = prepare_lca_inputs(method=method)
            packages = [load_package(obj) for obj in data_objs]
        except AssertionError:
            packages = method
        self.method = method
        self.packages = packages
        self.load_lcia_data(packages)


    def lcia(self):
        assert hasattr(self, "inventory"), "Must do lci first"
        # assert self.method, "Must specify a method to perform LCIA"
        if not self.dicts.biosphere:
            raise EmptyBiosphere

        self.load_lcia_data()
        self.lcia_calculation()
```

Additionally, one could introduce a flag to prevent `load_lcia_data` from being called twice.

ParameterVectorLCA should make it easy to include parameter values

Original report by Chris Mutel (Bitbucket: cmutel, GitHub: cmutel).


Currently this is impossible, because only matrix presamples are included when loading presamples. The most reasonable approach seems to be to override `__init__` and append the necessary space onto `self.sample`. Note that the current implementation doesn't preserve `self.sample`; a wrapper class might be easier in this case.
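A plain-NumPy sketch of the layout this implies (the names are illustrative; `sample` here is just a vector standing in for `self.sample`): matrix presamples occupy the head of the sample vector, and parameter values are appended at a known offset.

```python
import numpy as np

# Stand-ins for one Monte Carlo draw of matrix data and parameter values
matrix_sample = np.array([0.5, 1.2, 3.4])
parameter_sample = np.array([42.0, 0.07])

# Append the parameter values onto the (preserved) sample vector
sample = np.concatenate([matrix_sample, parameter_sample])
param_offset = matrix_sample.size

# The tail of `sample` recovers the parameter values exactly
tail = sample[param_offset:]
```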

Package scikit-umfpack for Windows (Py3.5 only)

Original report by Chris Mutel (Bitbucket: cmutel, GitHub: cmutel).


UMFPACK is significantly faster than the default solver, but is a huge pain to package on Windows.

Packaging for 2.7 requires an ancient Microsoft compiler, and is out of scope for this ticket.

Current progress: using Suitesparse-METIS for Windows, I can compile SuiteSparse, but I don't understand how to "install" it so that the paths are correct for scikit-umfpack/numpy to find it.

Also still not sure if the eventual Windows wheel will include the SuiteSparse DLLs...

Current failure on Appveyor.

Branch of scikit-umfpack on Github.

See also (in no particular order):

Could also think about building wheels for Linux and OS X automatically, but one thing at a time. See travis config, just need to build wheels.
