
gptchem


Use GPT-3 to solve chemistry problems. Most of the repo is currently not intended for use as a library but as documentation of our experiments. We will factor out the experiments (which come with tricky dependencies) into their own repository over time.

💪 Getting Started

from gptchem.gpt_classifier import GPTClassifier
from gptchem.tuner import Tuner

classifier = GPTClassifier(
    property_name="transition wavelength",  # property name used in the prompt template
    tuner=Tuner(n_epochs=8, learning_rate_multiplier=0.02, wandb_sync=False),
)

classifier.fit(["CC", "CDDFSS"], [0, 1])  # fine-tune on (SMILES, label) pairs
predictions = classifier.predict(["CCCC", "CCCCCCCC"])

The time these calls take can vary because the methods call the OpenAI API under the hood. In situations of high load, we have experienced waiting times of several hours in the queue.
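Because calls can sit in the OpenAI queue or fail transiently under load, it can help to wrap `fit`/`predict` in a simple retry loop. Below is a minimal sketch of such a wrapper; the retry policy and the `call_with_retry` helper are our own convenience for illustration, not part of gptchem:

```python
import time

def call_with_retry(fn, *args, max_attempts=3, wait_s=60, **kwargs):
    """Call `fn` (e.g. classifier.fit or classifier.predict), retrying on
    transient API errors with a linearly growing wait between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn(*args, **kwargs)
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            time.sleep(wait_s * attempt)  # wait longer after each failure

# usage (with the classifier from the snippet above):
# predictions = call_with_retry(classifier.predict, ["CCCC", "CCCCCCCC"])
```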

🚀 Installation

The most recent code and data can be installed directly from GitHub with:

$ pip install git+https://github.com/kjappelbaum/gptchem.git

The installation should only take a few seconds to minutes. You can install additional dependencies using the extras `experiments` and `eval`.
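Assuming the extras are named as above, they can be requested in the same install command (the direct-URL-with-extras syntax below follows PEP 508 and is illustrative):

```shell
$ pip install "gptchem[experiments,eval] @ git+https://github.com/kjappelbaum/gptchem.git"
```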

👐 Contributing

Contributions, whether filing an issue, making a pull request, or forking, are appreciated. See CONTRIBUTING.md for more information on getting involved.

👋 Attribution

⚖️ License

The code in this package is licensed under the MIT License.

📖 Citation

If you found this package useful, please cite our preprint:

@inproceedings{Jablonka_2023,
	doi = {10.26434/chemrxiv-2023-fw8n4},
	url = {https://doi.org/10.26434%2Fchemrxiv-2023-fw8n4},
	year = 2023,
	month = {feb},
	booktitle = {ChemRxiv},
	author = {Kevin Maik Jablonka and Philippe Schwaller and Andres Ortega-Guerrero and Berend Smit},
	title = {Is {GPT} all you need for low-data discovery in chemistry?}
}

🛠️ For Developers

See developer instructions

The final section of the README is for those who want to get involved by making a code contribution.

Development Installation

To install in development mode, use the following:

$ git clone https://github.com/kjappelbaum/gptchem.git
$ cd gptchem
$ pip install -e .

🥼 Testing

After cloning the repository and installing tox with pip install tox, the unit tests in the tests/ folder can be run reproducibly with:

$ tox

Additionally, these tests are automatically re-run with each commit in a GitHub Action.

📖 Building the Documentation

The documentation can be built locally using the following:

$ git clone https://github.com/kjappelbaum/gptchem.git
$ cd gptchem
$ tox -e docs
$ open docs/build/html/index.html

The documentation automatically installs the package as well as the `docs` extra specified in `setup.cfg`. Sphinx plugins like `texext` can be added there. Additionally, they need to be added to the `extensions` list in `docs/source/conf.py`.
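An `extensions` list in `docs/source/conf.py` with such a plugin added might look like the fragment below; the surrounding entries are illustrative, and the plugin must also be listed in the `docs` extra in `setup.cfg`:

```python
# docs/source/conf.py (illustrative fragment)
extensions = [
    "sphinx.ext.autodoc",   # pull API documentation from docstrings
    "sphinx.ext.viewcode",  # link documentation pages to highlighted source
    "texext",               # extra sphinx plugin added per the note above
]
```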

📦 Making a Release

After installing the package in development mode and installing tox with pip install tox, the commands for making a new release are contained within the finish environment in tox.ini. Run the following from the shell:

$ tox -e finish

This script does the following:

  1. Uses Bump2Version to switch the version number in setup.cfg, src/gptchem/version.py, and docs/source/conf.py to remove the -dev suffix
  2. Packages the code in both a tar archive and a wheel using build
  3. Uploads to PyPI using twine. Be sure to have a .pypirc file configured to avoid the need for manual input at this step
  4. Pushes to GitHub. You'll need to make a release associated with the commit where the version was bumped.
  5. Bumps the version to the next patch. If you made big changes and want to bump the minor version instead, you can run tox -e bumpversion minor afterwards.

๐Ÿช Cookiecutter

This package was created with @audreyfeldroy's cookiecutter package using @cthoyt's cookiecutter-snekpack template.


gptchem's Issues

empty prediction

@kjappelbaum In various runs of the example you provided, I get a prediction of either None or, as in this case, a silent return. Any tips on debugging? My openai_api_key does not seem to be the issue. TIA, Venu
(screenshot attached)

The costs of fine-tuning

Your idea is really interesting, but I'm worried about the fine-tuning runs costing too much money.

  1. Can you share the approximate cost of your previous work?
  2. GPT-4 has been released and shows very strong reasoning ability; perhaps it could further improve the prediction performance of the model. Do you have plans to use GPT-4?

Inaccessible examples

Hi,

I wanted to run some of the examples, but it seems like the Dropbox links inside

gptchem/data.py

only lead to a Dropbox page saying "That didn't work for some reason".

Trying any get_xxx_data() function leads to something like this:

---------------------------------------------------------------------------
ParserError                               Traceback (most recent call last)
Cell In[4], line 1
----> 1 data = get_lipophilicity_data()

File ~/.conda/envs/zstruct-llm/lib/python3.11/site-packages/gptchem/data.py:225, in get_lipophilicity_data()
    221 def get_lipophilicity_data() -> pd.DataFrame:
    222     """Return the Lipophilicity data parsed from ChEMBL [chembl]_"""
    223     return (
    224         pystow.module("gptchem")
--> 225         .ensure_csv(
    226             "lipophilicity",
    227             url="https://www.dropbox.com/s/secesuqvqrdexz4/lipophilicity.csv?dl=1",
    228             read_csv_kwargs=dict(sep=","),
    229         )
    230         .reset_index(drop=True)
    231     )

File ~/.conda/envs/zstruct-llm/lib/python3.11/site-packages/pystow/impl.py:632, in Module.ensure_csv(self, url, name, force, download_kwargs, read_csv_kwargs, *subkeys)
    627 import pandas as pd
    629 path = self.ensure(
    630     *subkeys, url=url, name=name, force=force, download_kwargs=download_kwargs
    631 )
--> 632 return pd.read_csv(path, **_clean_csv_kwargs(read_csv_kwargs))

File ~/.conda/envs/zstruct-llm/lib/python3.11/site-packages/pandas/io/parsers/readers.py:1026, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, date_format, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options, dtype_backend)
   1013 kwds_defaults = _refine_defaults_read(
   1014     dialect,
   1015     delimiter,
   (...)
   1022     dtype_backend=dtype_backend,
   1023 )
   1024 kwds.update(kwds_defaults)
-> 1026 return _read(filepath_or_buffer, kwds)

File ~/.conda/envs/zstruct-llm/lib/python3.11/site-packages/pandas/io/parsers/readers.py:626, in _read(filepath_or_buffer, kwds)
    623     return parser
    625 with parser:
--> 626     return parser.read(nrows)

File ~/.conda/envs/zstruct-llm/lib/python3.11/site-packages/pandas/io/parsers/readers.py:1923, in TextFileReader.read(self, nrows)
   1916 nrows = validate_integer("nrows", nrows)
   1917 try:
   1918     # error: "ParserBase" has no attribute "read"
   1919     (
   1920         index,
   1921         columns,
   1922         col_dict,
-> 1923     ) = self._engine.read(  # type: ignore[attr-defined]
   1924         nrows
   1925     )
   1926 except Exception:
   1927     self.close()

File ~/.conda/envs/zstruct-llm/lib/python3.11/site-packages/pandas/io/parsers/c_parser_wrapper.py:234, in CParserWrapper.read(self, nrows)
    232 try:
    233     if self.low_memory:
--> 234         chunks = self._reader.read_low_memory(nrows)
    235         # destructive to chunks
    236         data = _concatenate_chunks(chunks)

File parsers.pyx:838, in pandas._libs.parsers.TextReader.read_low_memory()

File parsers.pyx:905, in pandas._libs.parsers.TextReader._read_rows()

File parsers.pyx:874, in pandas._libs.parsers.TextReader._tokenize_rows()

File parsers.pyx:891, in pandas._libs.parsers.TextReader._check_tokenize_status()

File parsers.pyx:2061, in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 1 fields in line 4, saw 3

InvalidRequestError

Hello,

I keep getting the "InvalidRequestError: Unknown request URL: POST /v1/fine-tunes" error when I try to run fine-tuning / few-shot inference from the examples you provided.
The issue persists on both Windows and Linux systems, and I was not able to find a way to solve it. Do you have any ideas on what I could be doing wrong?

Thanks!

Limit on number of training points?

Hello,

I recently downloaded your code after reading your paper. I was testing the model for melting point prediction and observed that it would fail to make predictions when I went above 1000 training points (for example, 2000 or 3000). Is there a fundamental limit on how much data can be used for fine-tuning?

Doesn't work with the latest openai package or API URL

It seems that `from openai.cli import FineTune as FineTuneCli` was only compatible with versions of the openai package up to and including 0.28.1.
But if you install version 0.28.1, the fine-tune API URL is wrong.

Has anyone tested the "getting started" code lately?
Do you have the same issue?

switch to new fine tuning API

gpt-3.5-turbo can only be fine-tuned on the new fine-tuning API (`/fine_tuning/jobs`). This API (`/fine-tunes`) is being deprecated. Please refer to our documentation for more information: https://platform.openai.com/docs/api-reference/fine-tuning
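A migration to the new endpoint could look roughly like the sketch below, using the openai>=1.0 client. The `start_finetune_job` helper is hypothetical (not gptchem code); `client` is an `openai.OpenAI()` instance, and `training_file_id` comes from a prior `client.files.create(...)` upload:

```python
def start_finetune_job(client, training_file_id, model="gpt-3.5-turbo"):
    """Create a job on the new /fine_tuning/jobs endpoint instead of the
    deprecated /fine-tunes one; returns the job id to poll for status."""
    job = client.fine_tuning.jobs.create(
        training_file=training_file_id,
        model=model,
    )
    return job.id

# usage sketch:
# from openai import OpenAI
# job_id = start_finetune_job(OpenAI(), "file-abc123")
```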

Installer doesn't work (anymore)?

Is it only me who has trouble with the yml file? It seems to be a machine-specific list of all dependencies with pinned version numbers, which doesn't work elsewhere.
While this has nothing to do with the gptchem package per se, considering the number of interesting experiments included, which require things like jupyter, xgboost, and other packages, it would be easier to just list the required packages rather than every dependency that gets installed alongside them. This could be done via:

conda env export --from-history >> environment.yml

If the author wants to keep his own env.yml, may I recommend renaming it appropriately or using a different repository?

If there are any forks or Docker images available that anyone is willing to share, that would be appreciated.
