
crossrefapi's Introduction

Crossref API Client

Library with functions to iterate through the Crossref API.

https://travis-ci.org/fabiobatalha/crossrefapi.svg?branch=master

How to Install

pip install crossrefapi

How to Use

Works

Agency

In [1]: from crossref.restful import Works

In [2]: works = Works()

In [3]: works.agency('10.1590/0102-311x00133115')
Out[3]:
{'DOI': '10.1590/0102-311x00133115',
 'agency': {'id': 'crossref', 'label': 'CrossRef'}}

Sample

In [1]: from crossref.restful import Works

In [2]: works = Works()

In [3]: for item in works.sample(2):
   ...:     print(item['title'])
   ...:
['On the Origin of the Color-Magnitude Relation in the Virgo Cluster']
['Biopsychosocial Wellbeing among Women with Gynaecological Cancer']

Query

See valid parameters in Works.FIELDS_QUERY

In [1]: from crossref.restful import Works

In [2]: works = Works()

In [3]: w1 = works.query(bibliographic='zika', author='johannes', publisher_name='Wiley-Blackwell')

In [4]: for item in w1:
   ...:     print(item['title'])
   ...:
   ...:
['Inactivation and removal of Zika virus during manufacture of plasma-derived medicinal products']
['Harmonization of nucleic acid testing for Zika virus: development of the 1st\n World Health Organization International Standard']

Doi

In [1]: from crossref.restful import Works

In [2]: works = Works()

In [3]: works.doi('10.1590/0102-311x00133115')
Out[3]:
{'DOI': '10.1590/0102-311x00133115',
 'ISSN': ['0102-311X'],
 'URL': 'http://dx.doi.org/10.1590/0102-311x00133115',
 'alternative-id': ['S0102-311X2016001107002'],
 'author': [{'affiliation': [{'name': 'Surin Rajabhat University,  Thailand'}],
   'family': 'Wiwanitki',
   'given': 'Viroj'}],
 'container-title': ['Cadernos de Saúde Pública'],
 'content-domain': {'crossmark-restriction': False, 'domain': []},
 'created': {'date-parts': [[2016, 12, 7]],
  'date-time': '2016-12-07T21:52:08Z',
  'timestamp': 1481147528000},
 'deposited': {'date-parts': [[2017, 5, 24]],
  'date-time': '2017-05-24T01:57:26Z',
  'timestamp': 1495591046000},
 'indexed': {'date-parts': [[2017, 5, 24]],
  'date-time': '2017-05-24T22:39:11Z',
  'timestamp': 1495665551858},
 'is-referenced-by-count': 0,
 'issn-type': [{'type': 'electronic', 'value': '0102-311X'}],
 'issue': '11',
 'issued': {'date-parts': [[2016, 11]]},
 'member': '530',
 'original-title': [],
 'prefix': '10.1590',
 'published-print': {'date-parts': [[2016, 11]]},
 'publisher': 'FapUNIFESP (SciELO)',
 'reference-count': 3,
 'references-count': 3,
 'relation': {},
 'score': 1.0,
 'short-container-title': ['Cad. Saúde Pública'],
 'short-title': [],
 'source': 'Crossref',
 'subject': ['Medicine(all)'],
 'subtitle': [],
 'title': ['Congenital Zika virus syndrome'],
 'type': 'journal-article',
 'volume': '32'}

Filter

See valid parameters in Works.FILTER_VALIDATOR. Replace . with __ and - with _ when using parameters.

In [1]: from crossref.restful import Works

In [2]: works = Works()

In [3]: for i in works.filter(license__url='https://creativecommons.org/licenses/by', from_pub_date='2016').sample(5).select('title'):
   ...:     print(i)
   ...:
{'title': ['Vers une économie circulaire... de proximité ? Une spatialité à géométrie variable']}
{'title': ['The stakeholders of the Olympic System']}
{'title': ["Un cas de compensation écologique dans le secteur minier : la réserve forestière Dékpa (Côte d'Ivoire) au secours des forêts et des populations locales"]}
{'title': ['A simple extension of FFT-based methods to strain gradient loadings - Application to the homogenization of beams and plates with linear and non-linear behaviors']}
{'title': ['Gestion des déchets ménagers dans la ville de Kinshasa : Enquête sur la perception des habitants et propositions']}
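The parameter-name translation described above (replace `.` with `__` and `-` with `_`) can be sketched as a small helper; `to_library_param` is a hypothetical name used here for illustration, not part of the library:

```python
def to_library_param(api_filter_name: str) -> str:
    """Translate a Crossref API filter name (e.g. 'license.url') into
    the keyword-argument form this library expects (e.g. 'license__url'):
    '.' becomes '__' and '-' becomes '_'."""
    return api_filter_name.replace(".", "__").replace("-", "_")

print(to_library_param("license.url"))    # license__url
print(to_library_param("from-pub-date"))  # from_pub_date
```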

Select

See valid parameters in Works.FIELDS_SELECT

In [1]: from crossref.restful import Works

In [2]: works = Works()

In [3]: for i in works.filter(has_funder='true', has_license='true').sample(5).select('DOI, prefix'):
   ...:     print(i)
   ...:
{'DOI': '10.1111/str.12144', 'member': 'http://id.crossref.org/member/311', 'prefix': '10.1111'}
{'DOI': '10.1002/admi.201400154', 'member': 'http://id.crossref.org/member/311', 'prefix': '10.1002'}
{'DOI': '10.1016/j.surfcoat.2010.10.057', 'member': 'http://id.crossref.org/member/78', 'prefix': '10.1016'}
{'DOI': '10.1007/s10528-015-9707-8', 'member': 'http://id.crossref.org/member/297', 'prefix': '10.1007'}
{'DOI': '10.1016/j.powtec.2016.04.009', 'member': 'http://id.crossref.org/member/78', 'prefix': '10.1016'}

In [4]: for i in works.filter(has_funder='true', has_license='true').sample(5).select(['DOI', 'prefix']):
   ...:     print(i)
   ...:
{'DOI': '10.1002/jgrd.50059', 'member': 'http://id.crossref.org/member/311', 'prefix': '10.1002'}
{'DOI': '10.1111/ajt.13880', 'member': 'http://id.crossref.org/member/311', 'prefix': '10.1111'}
{'DOI': '10.1016/j.apgeochem.2015.05.006', 'member': 'http://id.crossref.org/member/78', 'prefix': '10.1016'}
{'DOI': '10.1016/j.triboint.2015.01.023', 'member': 'http://id.crossref.org/member/78', 'prefix': '10.1016'}
{'DOI': '10.1007/s10854-016-4649-4', 'member': 'http://id.crossref.org/member/297', 'prefix': '10.1007'}

In [5]: for i in works.filter(has_funder='true', has_license='true').sample(5).select('DOI').select('prefix'):
   ...:     print(i)
   ...:
{'DOI': '10.1002/mrm.25790', 'member': 'http://id.crossref.org/member/311', 'prefix': '10.1002'}
{'DOI': '10.1016/j.istruc.2016.11.001', 'member': 'http://id.crossref.org/member/78', 'prefix': '10.1016'}
{'DOI': '10.1002/anie.201505015', 'member': 'http://id.crossref.org/member/311', 'prefix': '10.1002'}
{'DOI': '10.1016/j.archoralbio.2010.11.011', 'member': 'http://id.crossref.org/member/78', 'prefix': '10.1016'}
{'DOI': '10.1145/3035918.3064012', 'member': 'http://id.crossref.org/member/320', 'prefix': '10.1145'}

In [6]: for i in works.filter(has_funder='true', has_license='true').sample(5).select('DOI', 'prefix'):
   ...:     print(i)
   ...:
{'DOI': '10.1016/j.cplett.2015.11.062', 'member': 'http://id.crossref.org/member/78', 'prefix': '10.1016'}
{'DOI': '10.1016/j.bjp.2015.06.001', 'member': 'http://id.crossref.org/member/78', 'prefix': '10.1016'}
{'DOI': '10.1111/php.12613', 'member': 'http://id.crossref.org/member/311', 'prefix': '10.1111'}
{'DOI': '10.1002/cfg.144', 'member': 'http://id.crossref.org/member/98', 'prefix': '10.1155'}
{'DOI': '10.1002/alr.21987', 'member': 'http://id.crossref.org/member/311', 'prefix': '10.1002'}

Facet

In [1]: from crossref.restful import Works, Prefixes

In [2]: works = Works()

In [3]: works.facet('issn', 10)
Out[3]:
{'issn': {'value-count': 10,
  'values': {'http://id.crossref.org/issn/0009-2975': 306546,
   'http://id.crossref.org/issn/0028-0836': 395353,
   'http://id.crossref.org/issn/0140-6736': 458909,
   'http://id.crossref.org/issn/0302-9743': 369955,
   'http://id.crossref.org/issn/0931-7597': 487523,
   'http://id.crossref.org/issn/0959-8138': 392754,
   'http://id.crossref.org/issn/1095-9203': 253978,
   'http://id.crossref.org/issn/1468-5833': 388355,
   'http://id.crossref.org/issn/1556-5068': 273653,
   'http://id.crossref.org/issn/1611-3349': 329573}}}

In [4]: prefixes = Prefixes()

In [5]: prefixes.works('10.1590').facet('issn', 10)
Out[5]:
{'issn': {'value-count': 10,
  'values': {'http://id.crossref.org/issn/0004-282X': 7712,
   'http://id.crossref.org/issn/0034-8910': 4752,
   'http://id.crossref.org/issn/0037-8682': 4179,
   'http://id.crossref.org/issn/0074-0276': 7941,
   'http://id.crossref.org/issn/0100-204X': 3946,
   'http://id.crossref.org/issn/0100-4042': 4198,
   'http://id.crossref.org/issn/0102-311X': 6548,
   'http://id.crossref.org/issn/0103-8478': 6607,
   'http://id.crossref.org/issn/1413-8123': 4658,
   'http://id.crossref.org/issn/1516-3598': 4678}}}

In [6]: prefixes.works('10.1590').query('zika').facet('issn', 10)
Out[6]:
{'issn': {'value-count': 10,
  'values': {'http://id.crossref.org/issn/0004-282X': 4,
   'http://id.crossref.org/issn/0036-4665': 4,
   'http://id.crossref.org/issn/0037-8682': 7,
   'http://id.crossref.org/issn/0074-0276': 7,
   'http://id.crossref.org/issn/0102-311X': 12,
   'http://id.crossref.org/issn/0103-7331': 2,
   'http://id.crossref.org/issn/0104-4230': 3,
   'http://id.crossref.org/issn/1519-3829': 7,
   'http://id.crossref.org/issn/1679-4508': 2,
   'http://id.crossref.org/issn/1806-8324': 2}}}

Journals

An example of using the library to retrieve data from the journals endpoint.

In [1]: from crossref.restful import Journals

In [2]: journals = Journals()

In [3]: journals.journal('0102-311X')
Out[3]:
{'ISSN': ['0102-311X', '0102-311X'],
 'breakdowns': {'dois-by-issued-year': [[2013, 462],
   [2007, 433],
   [2008, 416],
   [2009, 347],
   [2006, 344],
   [2014, 292],
   [2004, 275],
   [2012, 273],
   [2011, 270],
   [2010, 270],
   [2005, 264],
   [2003, 257],
   [2001, 220],
   [2002, 219],
   [1998, 187],
   [2000, 169],
   [1997, 142],
   [1999, 136],
   [1994, 110],
   [1995, 104],
   [1996, 103],
   [1993, 99],
   [2015, 93],
   [1992, 65],
   [1986, 63],
   [1985, 53],
   [1990, 49],
   [1988, 49],
   [1991, 48],
   [1987, 46],
   [1989, 45]]},
 'counts': {'backfile-dois': 5565, 'current-dois': 335, 'total-dois': 5900},
 'coverage': {'award-numbers-backfile': 0.0,
  'award-numbers-current': 0.0,
  'funders-backfile': 0.0,
  'funders-current': 0.0,
  'licenses-backfile': 0.0,
  'licenses-current': 0.0,
  'orcids-backfile': 0.0,
  'orcids-current': 0.0,
  'references-backfile': 0.0,
  'references-current': 0.0,
  'resource-links-backfile': 0.0,
  'resource-links-current': 0.0,
  'update-policies-backfile': 0.0,
  'update-policies-current': 0.0},
 'flags': {'deposits': True,
  'deposits-articles': True,
  'deposits-award-numbers-backfile': False,
  'deposits-award-numbers-current': False,
  'deposits-funders-backfile': False,
  'deposits-funders-current': False,
  'deposits-licenses-backfile': False,
  'deposits-licenses-current': False,
  'deposits-orcids-backfile': False,
  'deposits-orcids-current': False,
  'deposits-references-backfile': False,
  'deposits-references-current': False,
  'deposits-resource-links-backfile': False,
  'deposits-resource-links-current': False,
  'deposits-update-policies-backfile': False,
  'deposits-update-policies-current': False},
 'last-status-check-time': 1459491023622,
 'publisher': 'SciELO',
 'title': 'Cadernos de Saúde Pública'}

In [4]: journals.journal_exists('0102-311X')
Out[4]: True

In [5]: journals.query('Cadernos').url
Out[5]: 'https://api.crossref.org/journals?query=Cadernos'

In [6]: journals.query('Cadernos').count()
Out[6]: 60

In [7]: journals.works('0102-311X').query('zika').url
Out[7]: 'https://api.crossref.org/journals/0102-311X/works?query=zika'

In [8]: journals.works('0102-311X').query('zika').count()
Out[8]: 12

In [9]: journals.works('0102-311X').query('zika').query(author='Diniz').url
Out[9]: 'https://api.crossref.org/journals/0102-311X/works?query.author=Diniz&query=zika'

In [10]: journals.works('0102-311X').query('zika').query(author='Diniz').count()
Out[10]: 1

Base Methods

The base methods can be used together with the query, filter, sort, order, and facet methods.

Version

This method returns the Crossref API version.

In [1]: from crossref.restful import Journals

In [2]: journals = Journals()

In [3]: journals.version
Out[3]: '1.0.0'

Count

This method returns the total number of items a query would retrieve. It does not iterate through or download the documents themselves: it fetches zero documents and reads the value of the total-results attribute.

In [1]: from crossref.restful import Works

In [2]: works = Works()

In [3]: works.query('zika').count()
Out[3]: 3597

In [4]: works.query('zika').filter(from_online_pub_date='2017').count()
Out[4]: 444

Url

This method returns the URL that will be used to query the Crossref API.

In [1]: from crossref.restful import Works

In [2]: works = Works()

In [3]: works.query('zika').url
Out[3]: 'https://api.crossref.org/works?query=zika'

In [4]: works.query('zika').filter(from_online_pub_date='2017').url
Out[4]: 'https://api.crossref.org/works?query=zika&filter=from-online-pub-date%3A2017'

In [5]: works.query('zika').filter(from_online_pub_date='2017').query(author='Mari').url
Out[5]: 'https://api.crossref.org/works?query.author=Mari&filter=from-online-pub-date%3A2017&query=zika'

In [6]: works.query('zika').filter(from_online_pub_date='2017').query(author='Mari').sort('published').url
Out[6]: 'https://api.crossref.org/works?query.author=Mari&query=zika&filter=from-online-pub-date%3A2017&sort=published'

In [7]: works.query('zika').filter(from_online_pub_date='2017').query(author='Mari').sort('published').order('asc').url
Out[7]: 'https://api.crossref.org/works?filter=from-online-pub-date%3A2017&query.author=Mari&order=asc&query=zika&sort=published'

In [8]: from crossref.restful import Prefixes

In [9]: prefixes = Prefixes()

In [10]: prefixes.works('10.1590').query('zike').url
Out[10]: 'https://api.crossref.org/prefixes/10.1590/works?query=zike'

In [11]: from crossref.restful import Journals

In [12]: journals = Journals()

In [13]: journals.url
Out[13]: 'https://api.crossref.org/journals'

In [14]: journals.works('0102-311X').url
Out[14]: 'https://api.crossref.org/journals/0102-311X/works'

In [15]: journals.works('0102-311X').query('zika').url
Out[15]: 'https://api.crossref.org/journals/0102-311X/works?query=zika'

In [16]: journals.works('0102-311X').query('zika').count()
Out[16]: 12

All

This method returns all items of an endpoint. It uses limit/offset parameters to iterate through the journals, types, members, and prefixes endpoints.

For the works endpoint, the library uses cursor-based deep paging to iterate through the API until all records are consumed.

In [1]: from crossref.restful import Journals

In [2]: journals = Journals()

In [3]: for item in journals.all():
   ...:     print(item['title'])
   ...:
JNSM
New Comprehensive Biochemistry
New Frontiers in Ophthalmology
Oral Health Case Reports
Orbit A Journal of American Literature
ORDO
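The cursor-based deep paging used for the works endpoint can be illustrated with a minimal, self-contained sketch; `fetch_page` below is a stand-in for the real HTTP call, and the cursor values are made up:

```python
def fetch_page(cursor):
    """Stand-in for a request like /works?cursor=<cursor>&rows=...
    Returns (items, next_cursor); an empty page signals exhaustion."""
    pages = {
        "*": (["item-1", "item-2"], "cursor-a"),
        "cursor-a": (["item-3"], "cursor-b"),
        "cursor-b": ([], None),  # empty page: iteration stops
    }
    return pages[cursor]

def iterate_all():
    cursor = "*"  # the Crossref API uses '*' as the initial deep-paging cursor
    while cursor is not None:
        items, cursor = fetch_page(cursor)
        if not items:
            break
        yield from items

print(list(iterate_all()))  # ['item-1', 'item-2', 'item-3']
```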

Support for Polite Requests (Etiquette)

To respect the Crossref API policies for polite requests, this library allows users to set up an Etiquette object to be used in the HTTP requests.

In [1]: from crossref.restful import Works, Etiquette

In [2]: my_etiquette = Etiquette('My Project Name', 'My Project version', 'My Project URL', 'My contact email')

In [3]: str(my_etiquette)
Out[3]: 'My Project Name/My Project version (My Project URL; mailto:My contact email) BasedOn: CrossrefAPI/1.1.0'

In [4]: my_etiquette = Etiquette('My Project Name', '0.2alpha', 'https://myalphaproject.com', '[email protected]')

In [5]: str(my_etiquette)
Out[5]: 'My Project Name/0.2alpha (https://myalphaproject.com; mailto:[email protected]) BasedOn: CrossrefAPI/1.1.0'

In [6]: works = Works(etiquette=my_etiquette)

In [7]: for i in works.sample(5).select('DOI'):
   ...:     print(i)
   ...:

{'DOI': '10.1016/j.ceramint.2014.10.086'}
{'DOI': '10.1016/j.biomaterials.2012.02.034'}
{'DOI': '10.1001/jamaoto.2013.6450'}
{'DOI': '10.1016/s0021-9290(17)30138-0'}
{'DOI': '10.1109/ancs.2011.11'}

Voilà! The requests to the Crossref API are now made with the user-agent set to: 'My Project Name/0.2alpha (https://myalphaproject.com; mailto:[email protected]) BasedOn: CrossrefAPI/1.1.0'
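The user-agent string shown in the examples above can be reproduced with plain string formatting. This is a sketch assuming the format printed by `str(my_etiquette)`; the library's own `Etiquette.__str__` may differ in detail:

```python
def polite_user_agent(app_name, app_version, app_url, contact_email,
                      client="CrossrefAPI/1.1.0"):
    """Build a Crossref-style 'polite' user-agent string."""
    return "{}/{} ({}; mailto:{}) BasedOn: {}".format(
        app_name, app_version, app_url, contact_email, client)

ua = polite_user_agent("My Project Name", "0.2alpha",
                       "https://myalphaproject.com", "[email protected]")
print(ua)
```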

Depositing Metadata to Crossref

This library implements the deposit operation "doMDUpload", which means you can submit digital object metadata to Crossref. See more at: https://support.crossref.org/hc/en-us/articles/214960123

To do that, you must have an active publisher account with crossref.org.

First of all, you need a valid XML file that follows the Crossref DTD.

<?xml version='1.0' encoding='utf-8'?>
<doi_batch xmlns:jats="http://www.ncbi.nlm.nih.gov/JATS1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.crossref.org/schema/4.4.0" version="4.4.0" xsi:schemaLocation="http://www.crossref.org/schema/4.4.0 http://www.crossref.org/schemas/crossref4.4.0.xsd">
  <head>
    <doi_batch_id>c5473e12dc8e4f36a40f76f8eae15280</doi_batch_id>
    <timestamp>20171009132847</timestamp>
    <depositor>
      <depositor_name>SciELO</depositor_name>
      <email_address>[email protected]</email_address>
    </depositor>
    <registrant>SciELO</registrant>
  </head>
  <body>
    <journal>
      <journal_metadata>
        <full_title>Revista Brasileira de Ciência Avícola</full_title>
        <abbrev_title>Rev. Bras. Cienc. Avic.</abbrev_title>
        <issn media_type="electronic">1516-635X</issn>
      </journal_metadata>
      <journal_issue>
        <publication_date media_type="print">
          <month>09</month>
          <year>2017</year>
        </publication_date>
        <journal_volume>
          <volume>19</volume>
        </journal_volume>
        <issue>3</issue>
      </journal_issue>
      <journal_article publication_type="full_text" reference_distribution_opts="any">
        <titles>
          <title>Climatic Variation: Effects on Stress Levels, Feed Intake, and Bodyweight of Broilers</title>
        </titles>
        <contributors>
          <person_name contributor_role="author" sequence="first">
            <given_name>R</given_name>
            <surname>Osti</surname>
            <affiliation>Huazhong Agricultural University,  China</affiliation>
          </person_name>
          <person_name contributor_role="author" sequence="additional">
            <given_name>D</given_name>
            <surname>Bhattarai</surname>
            <affiliation>Huazhong Agricultural University,  China</affiliation>
          </person_name>
          <person_name contributor_role="author" sequence="additional">
            <given_name>D</given_name>
            <surname>Zhou</surname>
            <affiliation>Huazhong Agricultural University,  China</affiliation>
          </person_name>
        </contributors>
        <publication_date media_type="print">
          <month>09</month>
          <year>2017</year>
        </publication_date>
        <pages>
          <first_page>489</first_page>
          <last_page>496</last_page>
        </pages>
        <publisher_item>
          <identifier id_type="pii">S1516-635X2017000300489</identifier>
        </publisher_item>
        <doi_data>
          <doi>10.1590/1806-9061-2017-0494</doi>
          <resource>http://www.scielo.br/scielo.php?script=sci_arttext&amp;pid=S1516-635X2017000300489&amp;lng=en&amp;tlng=en</resource>
        </doi_data>
        <citation_list>
          <citation key="ref1">
            <journal_title>Journal of Agriculture Science</journal_title>
            <author>Alade O</author>
            <volume>5</volume>
            <first_page>176</first_page>
            <cYear>2013</cYear>
            <article_title>Perceived effect of climate variation on poultry production in Oke Ogun area of Oyo State</article_title>
          </citation>

          ...

          <citation key="ref40">
            <journal_title>Poultry Science</journal_title>
            <author>Zulkifli I</author>
            <volume>88</volume>
            <first_page>471</first_page>
            <cYear>2009</cYear>
            <article_title>Crating and heat stress influence blood parameters and heat shock protein 70 expression in broiler chickens showing short or long tonic immobility reactions</article_title>
          </citation>
        </citation_list>
      </journal_article>
    </journal>
  </body>
</doi_batch>

Second, use the library:

In [1]: from crossref.restful import Depositor

In [2]: request_xml = open('tests/fixtures/deposit_xml_sample.xml', 'r').read()

In [3]: depositor = Depositor('your prefix', 'your crossref user', 'your crossref password')

In [4]: response = depositor.register_doi('testing_20171011', request_xml)

In [5]: response.status_code
Out[5]: 200

In [6]: response.text
Out[6]: '\n\n\n\n<html>\n<head><title>SUCCESS</title>\n</head>\n<body>\n<h2>SUCCESS</h2>\n<p>Your batch submission was successfully received.</p>\n</body>\n</html>\n'

In [7]: response = depositor.request_doi_status_by_filename('testing_20171011.xml')

In [8]: response.text
Out[8]: '<?xml version="1.0" encoding="UTF-8"?>\n<doi_batch_diagnostic status="queued">\r\n  <submission_id>1415653976</submission_id>\r\n  <batch_id />\r\n</doi_batch_diagnostic>'

In [9]: response = depositor.request_doi_status_by_filename('testing_20171011.xml')

In [10]: response.text
Out[10]: '<?xml version="1.0" encoding="UTF-8"?>\n<doi_batch_diagnostic status="queued">\r\n  <submission_id>1415653976</submission_id>\r\n  <batch_id />\r\n</doi_batch_diagnostic>'

In [11]: response = depositor.request_doi_status_by_filename('testing_20171011.xml', data_type='result')

In [12]: response.text
Out[12]: '<?xml version="1.0" encoding="UTF-8"?>\n<doi_batch_diagnostic status="queued">\r\n  <submission_id>1415653976</submission_id>\r\n  <batch_id />\r\n</doi_batch_diagnostic>'

In [13]: response = depositor.request_doi_status_by_filename('testing_20171011.xml', data_type='contents')

In [14]: response.text
Out[14]: '<?xml version=\'1.0\' encoding=\'utf-8\'?>\n<doi_batch xmlns:jats="http://www.ncbi.nlm.nih.gov/JATS1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.crossref.org/schema/4.4.0" version="4.4.0" xsi:schemaLocation="http://www.crossref.org/schema/4.4.0 http://www.crossref.org/schemas/crossref4.4.0.xsd">\n  <head>\n    <doi_batch_id>c5473e12dc8e4f36a40f76f8eae15280</doi_batch_id>\n    <timestamp>20171009132847</timestamp>\n    <depositor>\n      <depositor_name>SciELO</depositor_name>\n      <email_address>[email protected]</email_address>\n    </depositor>\n    <registrant>SciELO</registrant>\n  </head>\n  <body>\n    <journal>\n      <journal_metadata>\n        <full_title>Revista Brasileira de Ciência Avícola</full_title>\n        <abbrev_title>Rev. Bras. Cienc. Avic.</abbrev_title>\n        <issn media_type="electronic">1516-635X</issn>\n      </journal_metadata>\n      <journal_issue>\n        <publication_date media_type="print">\n          <month>09</month>\n          <year>2017</year>\n        </publication_date>\n        <journal_volume>\n          <volume>19</volume>\n        </journal_volume>\n        <issue>3</issue>\n      </journal_issue>\n      <journal_article publication_type="full_text" reference_distribution_opts="any">\n        <titles>\n          <title>Climatic Variation: Effects on Stress Levels, Feed Intake, and Bodyweight of Broilers</title>\n        </titles>\n        <contributors>\n          <person_name contributor_role="author" sequence="first">\n            <given_name>R</given_name>\n            <surname>Osti</surname>\n            <affiliation>Huazhong Agricultural University,  China</affiliation>\n          </person_name>\n          <person_name contributor_role="author" sequence="additional">\n            <given_name>D</given_name>\n            <surname>Bhattarai</surname>\n            <affiliation>Huazhong Agricultural University,  China</affiliation>\n          </person_name>\n          <person_name 
contributor_role="author" sequence="additional">\n            <given_name>D</given_name>\n            <surname>Zhou</surname>\n            <affiliation>Huazhong Agricultural University,  China</affiliation>\n          </person_name>\n        </contributors>\n        <publication_date media_type="print">\n          <month>09</month>\n          <year>2017</year>\n        </publication_date>\n        <pages>\n          <first_page>489</first_page>\n          <last_page>496</last_page>\n        </pages>\n        <publisher_item>\n          <identifier id_type="pii">S1516-635X2017000300489</identifier>\n        </publisher_item>\n</doi_batch>'

In [15]: response = depositor.request_doi_status_by_filename('testing_20171011.xml', data_type='result')

In [16]: response.text
Out[16]:
  <doi_batch_diagnostic status="completed" sp="ds4.crossref.org">
     <submission_id>1415649102</submission_id>
     <batch_id>9112073c7f474394adc01b82e27ea2a8</batch_id>
     <record_diagnostic status="Success">
        <doi>10.1590/0037-8682-0216-2016</doi>
        <msg>Successfully updated</msg>
        <citations_diagnostic>
           <citation key="ref1" status="resolved_reference">10.1590/0037-8682-0284-2014</citation>
           <citation key="ref2" status="resolved_reference">10.1371/journal.pone.0090237</citation>
           <citation key="ref3" status="resolved_reference">10.1093/infdis/172.6.1561</citation>
           <citation key="ref4" status="resolved_reference">10.1016/j.ijpara.2011.01.005</citation>
           <citation key="ref5" status="resolved_reference">10.1016/j.rvsc.2013.01.006</citation>
           <citation key="ref6" status="resolved_reference">10.1093/trstmh/tru113</citation>
           <citation key="ref7" status="resolved_reference">10.1590/0074-02760150459</citation>
        </citations_diagnostic>
     </record_diagnostic>
     <batch_data>
        <record_count>1</record_count>
        <success_count>1</success_count>
        <warning_count>0</warning_count>
        <failure_count>0</failure_count>
     </batch_data>
  </doi_batch_diagnostic>

Explaining the code

Line 1: Importing the Depositor Class

Line 2: Loading a valid XML for deposit

Line 3: Creating an instance of Depositor. You should use your Crossref credentials at this point. If you want to be polite, you should also pass an Etiquette object at this moment.

.. code-block:: python

  etiquette = Etiquette('My Project Name', 'My Project version', 'My Project URL', 'My contact email')
  Depositor('your prefix', 'your crossref user', 'your crossref password', etiquette)

Line 4: Requesting the DOI registration (this does not mean your DOI was registered; it is just a registration request).

Line 5: Checking the DOI request response.

Line 6: Printing the DOI request response body.

Line 7: Requesting the DOI registering status.

Line 8: Checking the DOI registration status by reading the body of the response. You should parse this XML to get the current status of the DOI registration request, repeating the check until a success or error status is retrieved.

Lines 9-12: Rechecking the request status; it is still queued. You can also set the response type to 'result' or 'contents': 'result' retrieves the status of the DOI registration process, and 'contents' retrieves the XML content submitted with the request.

Lines 13-14: Checking the content submitted by passing data_type='contents'.

Lines 15-16: After a while, the success status is received.
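The polling loop described above boils down to parsing the doi_batch_diagnostic XML and repeating until its status attribute is no longer "queued". A minimal sketch using only the standard library, with a sample response body:

```python
import xml.etree.ElementTree as ET

def batch_status(response_text):
    """Return the overall batch status ('queued', 'completed', ...) and
    the status of each record_diagnostic element, if any are present."""
    root = ET.fromstring(response_text)
    records = [rec.get("status") for rec in root.iter("record_diagnostic")]
    return root.get("status"), records

queued = ('<?xml version="1.0" encoding="UTF-8"?>'
          '<doi_batch_diagnostic status="queued">'
          '<submission_id>1415653976</submission_id>'
          '</doi_batch_diagnostic>')
print(batch_status(queued))  # ('queued', [])
```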

crossrefapi's People

Contributors

1kastner, air-kyi, ankush-chander, benselme, bluetyson, daguiam, danidelvalle, fabiobatalha, geritwagner, markpbaggett, richardscottoz


crossrefapi's Issues

works.filter() and more than 1000 results

I'm using the filter (`for i in works.filter()`), selecting a journal (by ISSN) and a one-month date period (using from-pub-date and until-pub-date) to gather articles from the same issue/volume of a journal. This seems to work, but when there are more than 1000 records I receive the error "Expecting value: line 1 column 1 (char 0)" from `json.dumps(i)`, and it kills my code.

I can't figure out why this is happening. Any ideas?

Unexpected query output

When trying to retrieve information via simple queries, I consistently got outputs that I did not expect. Specifically, the publications which are referred to by the keywords are not returned in the result of the query. I do however get a return with the right publication data via a manual HTTP GET request.

Example code:

from crossref.restful import Works 

keyword = 'Albert Einstein Elektrodynamik bewegter Körper'

works = Works()
result = works.query(keyword)
for entry in result:
    print(entry)
    break
>> {'indexed': {'date-parts': [[2019, 11, 19]], 'date-time': '2019-11-19T19:11:52Z', 'timestamp': 1574190712445}, 'reference-count': 0, 'publisher': 'Maney Publishing', 'issue': '1', 'content-domain': {'domain': [], 'crossmark-restriction': False}, 'short-container-title': ['Journal of the American Institute for Conservation'], 'published-print': {'date-parts': [[1980]]}, 'DOI': '10.2307/3179679', 'type': 'journal-article', 'created': {'date-parts': [[2006, 4, 18]], 'date-time': '2006-04-18T05:15:34Z', 'timestamp': 1145337334000}, 'page': '21', 'source': 'Crossref', 'is-referenced-by-count': 0, 'title': ['A Semi-Rigid Transparent Support for Paintings Which Have Both Inscriptions on Their Fabric Reverse and Acute Planar Distortions'], 'prefix': '10.1179', 'volume': '20', 'author': [{'given': 'Albert', 'family': 'Albano', 'sequence': 'first', 'affiliation': []}], 'member': '138', 'container-title': ['Journal of the American Institute for Conservation'], 'deposited': {'date-parts': [[2015, 6, 26]], 'date-time': '2015-06-26T01:05:23Z', 'timestamp': 1435280723000}, 'score': 4.5581737, 'issued': {'date-parts': [[1980]]}, 'references-count': 0, 'journal-issue': {'published-print': {'date-parts': [[1980]]}, 'issue': '1'}, 'URL': 'http://dx.doi.org/10.2307/3179679', 'ISSN': ['0197-1360'], 'issn-type': [{'value': '0197-1360', 'type': 'print'}]}

I get this kind of output which has nothing to do with my input keyword with different keywords, too. I have tried modifying the order of the result [result.order('desc')] but that does not seem to change anything.

When I then do the same request via HTTP GET and the normal API URL, I get the expected output as the first result:

import requests

keyword = 'Albert Einstein Elektrodynamik bewegter Körper'

keyword = '+'.join(keyword.split())
url = 'https://api.crossref.org/works?query=' + keyword
result = requests.get(url = url)
# Take first result
result = result.json()['message']['items'][0]
print(result)

>> {'indexed': {'date-parts': [[2020, 5, 25]], 'date-time': '2020-05-25T14:23:45Z', 'timestamp': 1590416625775}, 'publisher-location': 'Wiesbaden', 'reference-count': 0, 'publisher': 'Vieweg+Teubner Verlag', 'isbn-type': [{'value': '9783663193722', 'type': 'print'}, {'value': '9783663195108', 'type': 'electronic'}], 'content-domain': {'domain': [], 'crossmark-restriction': False}, 'published-print': {'date-parts': [[1923]]}, 'DOI': '10.1007/978-3-663-19510-8_3', 'type': 'book-chapter', 'created': {'date-parts': [[2013, 12, 6]], 'date-time': '2013-12-06T02:08:43Z', 'timestamp': 1386295723000}, 'page': '26-50', 'source': 'Crossref', 'is-referenced-by-count': 5, 'title': ['Zur Elektrodynamik bewegter Körper'], 'prefix': '10.1007', 'author': [{'given': 'A.', 'family': 'Einstein', 'sequence': 'first', 'affiliation': []}], 'member': '297', 'container-title': ['Das Relativitätsprinzip'], 'link': [{'URL': 'http://link.springer.com/content/pdf/10.1007/978-3-663-19510-8_3', 'content-type': 'unspecified', 'content-version': 'vor', 'intended-application': 'similarity-checking'}], 'deposited': {'date-parts': [[2013, 12, 6]], 'date-time': '2013-12-06T02:08:45Z', 'timestamp': 1386295725000}, 'score': 53.638336, 'issued': {'date-parts': [[1923]]}, 'ISBN': ['9783663193722', '9783663195108'], 'references-count': 0, 'URL': 'http://dx.doi.org/10.1007/978-3-663-19510-8_3'}

The output retrieved with the tool in this repository has nothing to do with my query keyword. Do you have an idea of how I can fix this? I would be very grateful for any kind of help.
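For comparison, here is a sketch of capping and ranking results against the plain REST endpoint (the same one the requests example below uses); `build_query_url` and `top_n` are hypothetical helper names, not part of this library:

```python
# Sketch: fetch only the top-ranked matches for a free-text query against
# the plain REST API, capping the response size with the `rows` parameter.
from itertools import islice
from urllib.parse import urlencode

API_BASE = "https://api.crossref.org/works"

def build_query_url(keyword, rows=5):
    """Build a works query URL; `rows` caps how many results come back."""
    return API_BASE + "?" + urlencode({"query": keyword, "rows": rows})

def top_n(items, n):
    """Take only the first n items from any iterable of results."""
    return list(islice(items, n))

if __name__ == "__main__":
    import requests  # only needed for the live call
    url = build_query_url("Albert Einstein Elektrodynamik bewegter Körper")
    items = requests.get(url).json()["message"]["items"]
    for item in top_n(items, 3):
        print(item.get("score"), item.get("title"))
```

The server already returns query results in relevance order, so taking the first few items of the response is usually what a "best match" lookup wants.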

Value Error due to headers format change

Hi @fabiobatalha,

It seems that the Crossref API has made some modifications to its header format.

{'date': 'Mon, 26 Jul 2021 11:41:59 GMT', 'content-type': 'application/json', 'transfer-encoding': 'chunked', 'access-control-allow-origin': '*', 'access-control-allow-headers': 'X-Requested-With', 'vary': 'Accept-Encoding', 'content-encoding': 'gzip', 'server': 'Jetty(9.4.40.v20210413)', 'x-ratelimit-limit': '50', 'x-ratelimit-interval': '1s', 'x-rate-limit-limit': '50, 50', 'x-rate-limit-interval': '1s, 1s', 'permissions-policy': 'interest-cohort=()', 'connection': 'close'}

Note the duplicated values: 'x-rate-limit-limit': '50, 50', 'x-rate-limit-interval': '1s, 1s'

Running the code below

from crossref.restful import Works
works = Works()
w1 = works.query('zika').sample(20)
for item in w1:
    print(item["title"])

gives the following error:

Traceback (most recent call last):
  File "/home/ankush/.config/JetBrains/PyCharm2021.1/scratches/crossref_scratch.py", line 6, in <module>
    for item in w1:
  File "/media/ankush/ContinentalGroun/workplace/open_source/crossrefapi/crossref/restful.py", line 264, in __iter__
    result = self.do_http_request(
  File "/media/ankush/ContinentalGroun/workplace/open_source/crossrefapi/crossref/restful.py", line 80, in do_http_request
    self._update_rate_limits(result.headers)
  File "/media/ankush/ContinentalGroun/workplace/open_source/crossrefapi/crossref/restful.py", line 43, in _update_rate_limits
    self.rate_limits['X-Rate-Limit-Limit'] = int(headers.get('X-Rate-Limit-Limit', 50))
ValueError: invalid literal for int() with base 10: '50, 50'
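A defensive parse that tolerates both the old single-value format and the new comma-separated duplicates might look like this (a sketch only; `parse_rate_limit` is a hypothetical helper, not the library's current code):

```python
# Sketch: parse a rate-limit header that may now contain comma-separated
# duplicates such as '50, 50' (as shown in the response headers above).
def parse_rate_limit(headers, name='X-Rate-Limit-Limit', default=50):
    """Return the first integer value of a possibly duplicated header."""
    raw = headers.get(name, str(default))
    try:
        return int(raw.split(',')[0].strip())
    except ValueError:
        return default

print(parse_rate_limit({'X-Rate-Limit-Limit': '50, 50'}))  # 50
```

Taking the first comma-separated value keeps backward compatibility, and falling back to the default avoids crashing on any future format change.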

Add Missing Documentation for Classes and Methods

Description:

The code currently lacks proper documentation, making it difficult for users to understand the classes and methods and their intended usage. In order to improve the code's usability and maintainability, we should add comprehensive documentation.

Documentation Status:

  • Classes have no documentation
  • Some methods have documentation while others don't.
  • Lack of docstrings throughout the code.

Action Required:

  • Add docstrings to classes, methods, and functions where missing.
  • Improve or complete existing docstrings.
  • Ensure consistent style and formatting of the documentation.

Expected Documentation Style:
We can use PEP 257 style docstrings for documenting classes, methods, and functions. Refer to the PEP 257 documentation for guidelines.

Specific Examples:

  • The Endpoint and Works classes have no documentation.
  • The do_http_request method in the HTTPRequest class has no documentation.

Question: Matching unstructured citations with DOIs

I realise this is more of a general question, but I hope I can still get some help.

I would like to get the DOIs of a list of unstructured citations (somehow, similar to this issue).

However, if I run:

unstructured_citation = "Jan Hansen, Jochen Hung, Jaroslav Ira, Judit " \
                        "Klement, Sylvain Lesage, Juan Luis Simal and " \
                        "Andrew Tompkins (eds), The European Experience: " \
                        "A Multi-Perspective History of Modern Europe. " \
                        "Cambridge, UK: Open Book Publishers, 2023."
works = Works()
work = works.query(bibliographic=unstructured_citation).sort("relevance")

I get a huge number of results in the variable work (some of which are not even related).

What am I missing? Is the bibliographic argument meant to be used for work titles only? Should I try to extract the work titles from the raw citations and then use those as the query?
Thank you!

Please add Depositor example to Readme.md

I'm interested in using this project to do deposits into crossref. Could you add an example of how to use the Depositor class?

Also, does the Depositor support resource-only deposits?

Thanks!

support for select parameter

We have added a select parameter that allows one finer control of response sizes. The following, for example, will only return the DOI and title for each matching record.

http://api.crossref.org/works?sample=10&select=DOI,title
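In URL terms, select is just another query parameter, so a client can compose the request by hand while library support lands; `sample_url` below is a hypothetical helper, not part of this library's API:

```python
# Sketch: compose a sample request that returns only the selected fields.
from urllib.parse import urlencode

API_BASE = "https://api.crossref.org/works"

def sample_url(n, fields):
    """Request n random records, restricted to the comma-joined field list."""
    return API_BASE + "?" + urlencode({"sample": n, "select": ",".join(fields)})

print(sample_url(10, ["DOI", "title"]))
# https://api.crossref.org/works?sample=10&select=DOI%2Ctitle
```

Note that urlencode percent-encodes the comma; the API accepts either form.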

result rank

Are results ranked in a consistent order? If I search the same keywords with a different number in sample(), how can I get different results the second time?

For example, works.query(bibliographic=key_words).sample(10) returns 10 results.
Then with works.query(bibliographic=key_words).sample(20), how can I get 20 new results instead of the 10 old ones plus 10 new ones?

Thanks!

Question: matching titles to doi

Apologies if this isn't the right channel to ask.

I'm trying to match titles to their DOI with a simple loop

for article in articles:
    work = works.query(bibliographic=article.title)
    for w in work:
        # w is a dict, so check for the key rather than using hasattr
        if hasattr(article, 'title') and 'title' in w and w['title'][0] == article.title:
            article.doi = w['DOI']
            print(article.doi)
            article.save()
        else:
            print('not found', article.title)

But since work contains over 80k results, the method is too slow to be valuable. I have also tried with .sample(20) hoping it would narrow the search, but it didn't match any titles. Is it because the sample is random?

Is there any way I can just fetch the first items from the work class? It seems they always contain the match I need.

Result inconsistency

If I run the code below, it gives me 0 results, which is expected:

journals.works('1946-3944').filter(type='journal-article').filter(from_created_date='2021-11-05').filter(until_created_date='2021-11-05').count()

But when I run the same query with all() and iterate over the results, it gives me seemingly random records:

journals.works('1946-3944').filter(type='journal-article').filter(from_created_date='2021-11-05').filter(until_created_date='2021-11-05').all()

This query should return an empty result set.

support for "polite" usage of the API

We have added some new header and parameter support for providing contact information. This is designed to help us troubleshoot problems with the API. See the section on etiquette at api.crossref.org. Would love to see support for this.
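For the plain HTTP route, politeness amounts to a descriptive User-Agent with a mailto address (or a mailto query parameter); the helper names and contact details below are placeholders, sketched from the etiquette section at api.crossref.org:

```python
# Sketch: build the "polite pool" contact information for plain HTTP calls.
from urllib.parse import urlencode

def polite_headers(app, version, url, email):
    """Compose a descriptive User-Agent that includes a mailto contact."""
    ua = "{}/{} ({}; mailto:{})".format(app, version, url, email)
    return {"User-Agent": ua}

def polite_url(base, params, email):
    """Alternatively, pass the contact address as the mailto query parameter."""
    params = dict(params, mailto=email)
    return base + "?" + urlencode(params)

print(polite_headers("MyApp", "1.0", "https://example.org", "me@example.org"))
```

Either mechanism moves requests into the polite pool, which gets more predictable service and makes problems easier for Crossref to trace.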

Wiki example: download PDF from DOI

Hi there, great work!!!

I am a first-time user, trying to wrap my head around downloading a PDF based on its DOI, something like
works.doi.download('10.1590/0102-311x00133115', '~/Downloads/')
that would result in the PDF landing in my Downloads folder.

I guess this is something very simple, however I could not find any example. Would you please provide one, perhaps even as a Wiki entry?

Many thanks & Merry Christmas,
Stav

BUG: timeout when looking for works

I sometimes get timeout errors when searching for DOIs:

Traceback (most recent call last):
  File "C:\Users\delap\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\requests\models.py", line 910, in json
    return complexjson.loads(self.text, **kwargs)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Projets\h-transport-materials-dashboard\test.py", line 6, in <module>
    works.doi("10.1103/PhysRevB.4.330")
  File "C:\Users\delap\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\crossref\restful.py", line 957, in doi
    result = result.json()
  File "C:\Users\delap\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\requests\models.py", line 917, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: [Errno Expecting value] <html>
<head><title>504 Gateway Time-out</title></head>
<body>
<center><h1>504 Gateway Time-out</h1></center>
</body>
</html>
: 0

This is rather new; I hadn't experienced it before.

Here's the code to reproduce:

from crossref.restful import Works

works = Works()
works.doi("10.1103/PhysRevB.4.330")
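A common workaround for transient 504s, sketched below, is to retry the call a few times with a growing pause; `with_retries` is a hypothetical helper, not part of this library, and the function being retried is injected so the pattern works for works.doi(...) or any other request:

```python
# Sketch: retry a flaky call with exponential backoff instead of failing
# on the first gateway timeout.
import time

def with_retries(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(); on any exception, wait and retry up to `attempts` times."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as error:
            last_error = error
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise last_error
```

Usage would look like `with_retries(lambda: works.doi("10.1103/PhysRevB.4.330"))`, assuming a Works instance named works.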

get metadata of a certain paper

I have known the exact title of a paper, how can I print metadata, like author name?
(My plan is getting information automatically of more than 80 papers which I only know titles)
Anyone have some ideas?

ImportError: No module named restful

When I import crossref.restful, I get an error:

from crossref.restful import Works
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "crossref.py", line 1, in <module>
    from crossref.restful import Works
ImportError: No module named restful

Multiple words in query

How do I query with multiple words? I use "+", but the results I get from the Crossref website are different from those via crossrefapi.

It seems that a lot of the literature has no abstracts in Crossref

I have retrieved 10,000+ Wiley DOIs via the Crossref API (works.query() without "has_abstract=true", because it returns 0 results with "has_abstract=true") and tried two different ways to fetch the abstracts, but they don't work.
If I try to fetch abstracts via the Wiley API, it downloads the full texts (PDF), which wastes a lot of time in parsing.
So how can I get abstracts via Crossref, or does Crossref not have abstracts for these DOIs?
Thanks for your help!
These are my approaches to fetching abstracts:
1. from crossref.restful import Works API:
[screenshot]
2. "requests" tool against the Crossref URL:
[screenshot]

Depositor missing 'timeout' attribute used with self.do_http_request

Hello,
I've found your library useful, but encountered a bug recently. Several methods of crossref.restful.Depositor pass the attribute self.timeout as a parameter to self.do_http_request, which throws "AttributeError: 'Depositor' object has no attribute 'timeout'" because self.timeout is never initialized.

I did a quick patch by adding timeout=100 to Depositor.__init__()'s parameter list and adding self.timeout = timeout in the body of the init, which fixed the issue, at least in my very cursory testing.

If the above is an acceptable solution, I'm happy to test it out more thoroughly and submit a pull request. If you'd prefer another approach, I'd be happy to help with that as well.

Thanks for creating and maintaining this library!

Support for rows missing

Cool library. However, I find it a pity that only sample(n) is supported for limiting results.
It would be great if you could also support the rows query parameter to control the number of results returned by a query. This would also help limit the load for common use cases like "find the best (or best n) matches based on title and author".

New Crossref REST API coming soon

Hi,

Thank you for maintaining one of the documented libraries for using the Crossref REST API.

We’ve been working on a new version of the REST API, replacing the Solr backend with Elasticsearch and moving from our own hardware in a datacenter to a cloud platform.

We plan to cutover to the new version shortly (expect an official announcement on our blog in the next few days with more details), and wanted to invite you to test it out before the official cutover.

Please check it out at https://api.production.crossref.org/

During the cutover phase (expected to last a few weeks), traffic will be redirected to the above domain on a pool by pool basis. Once all traffic is using the new service, we will continue to use the api.crossref.org domain, so please do not update anything to use the temporary domain.

Let me know if you have any questions. Issues can be filed into our GitLab issue repository, or I’ll keep an eye on this thread.

Thanks again,
Patrick

Question regarding paging

I want to read 200 results in chunks of 50. The reading is not done concurrently, but sometimes other requests happen in between. How do I tell the API to give me the next 50 results (51 to 100)?
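Against the raw API this is the rows and offset pair; the sketch below composes page URLs by hand (`page_url` is a hypothetical helper, not part of this library):

```python
# Sketch: build the URL for one 50-row page of query results.
from urllib.parse import urlencode

API_BASE = "https://api.crossref.org/works"

def page_url(query, page, rows=50):
    """URL for result page `page` (0-based): rows 1-50, 51-100, ..."""
    return API_BASE + "?" + urlencode(
        {"query": query, "rows": rows, "offset": page * rows})
```

For deep result sets Crossref recommends cursor-based paging instead (send cursor=* on the first request, then the next-cursor value from each response), since large offsets are expensive server-side.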

order of sample and filter produces different API calls

It does not seem to like filter before sample, but it does not complain either:

>>>works.filter(type='book').sample(10).url
https://api.crossref.org/works?sample=10

Sample before filter works as expected:

>>>works.sample(10).filter(type='book').url
https://api.crossref.org/works?sample=10&filter=type%3Abook

'crossref' is not a package error

I installed crossrefapi using pip, it seemed to install fine. When I attempted to use it, got the following error: ModuleNotFoundError: No module named 'crossref.restful'; 'crossref' is not a package

Here was my code

from crossref.restful import Journals

journals = Journals()

print(journals.journal('1759-3441'))

supporting proxies

Great package !
It would be nice to add support for proxies

the requests.get method accepts proxies (in the form of a dictionary):

dict_proxies = {
    'https': 'https://username:password@HOST:PORT',
    'http': 'http://username:password@HOST:PORT',
}
requests.get(url, proxies=dict_proxies)

from-accepted-date filter returning UrlSyntaxError

I was trying to get my own works using the from_accepted_date filter and it returned this error.

from crossref.restful import Works
works = Works()

pub_date = '2001'
author='aguiam'
pub = works.query(author=author).filter(from_accepted_date=pub_date).sort('published')

UrlSyntaxError: Filter from-accepted-date specified but there is no such filter for this route. Valid filters for this route are: from-event-start-date, has-update, has-abstract, article_number, until-update-date, from-posted-date, license.delay, has-update-policy, prefix, has-content-domain, has-authenticated-orcid, type, relation.type, from-event-end-date, has-orcid, archive, full-text.version, until-event-end-date, from-pub-date, until-index-date, has-full-text, has-assertion, until-posted-date, until-print-pub-date, has-affiliation, funder-doi-asserted-by, license.version, assertion, has-funder, member, from-created-date, has-domain-restriction, from-index-date, full-text.application, has-event, until-pub-date, until-event-start-date, from-deposit-date, relation.object-type, has-award, clinical-trial-number, assertion-group, until-deposit-date, award.funder, until-accepted-date, from-online-pub-date, until-online-pub-date, has-archive, license.url, orcid, type-name, isbn, full-text.type, has-relation, from-print-pub-date, until-created-date, from-update-date, has-clinical-trial-number, has-references, content-domain, doi, award.number, until-issued-date, has-license, issn, alternative_id, group-title, relation.object, is-update, container-title, directory, category-name, funder, from-accepted_date, has-funder-doi, update-type, updates, from-issued-date

Configuration for different API endpoint (test crossref site)

It would be useful if you could set the API url for crossref API requests.

For example, as we are testing, it would be good to make requests to test.crossref.org instead of api.crossref.org so that we are not testing using the production site.

Thanks!

Support for rate limiting

  1. Does this library automatically apply throttling to comply with Crossref rate limits? I.e., as an extreme example, if a non-Plus caller invokes doi() 100 times a second, would crossrefapi throttle outgoing requests and make the caller wait so as not to exceed the Crossref API limits?

  2. If not, is there a way for the caller to programmatically find out the rate limit currently in effect and throttle its doi() invocations accordingly?

See also: https://api.crossref.org/swagger-ui/index.html. It seems that Crossref signals current rate limits using HTTP headers.

Presumably, complying with rate limits is preferable and guarantees not running into any further limiting.
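As a sketch of the second option: the per-request delay can be derived from the rate-limit headers the server returns. The header names follow the example shown earlier in this thread and may change, and `delay_from_headers` is a hypothetical helper, not part of this library:

```python
# Sketch: compute the pause between requests that stays under the
# advertised limit, e.g. '50' requests per '1s' -> 0.02 s per request.
def delay_from_headers(headers, default_limit=50, default_interval=1.0):
    """Seconds to wait between requests per the rate-limit headers."""
    limit_raw = headers.get('x-rate-limit-limit', str(default_limit))
    interval_raw = headers.get('x-rate-limit-interval', '1s')
    try:
        limit = int(limit_raw.split(',')[0].strip())
        interval = float(interval_raw.split(',')[0].strip().rstrip('s'))
    except ValueError:
        return default_interval / default_limit
    return interval / max(limit, 1)
```

A caller would read the headers from each response and time.sleep() for the computed delay before the next request.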

Affiliation missing

When I search for a DOI, the affiliations for the authors are missing. Example

from crossref.restful import Works
import json

doi = "10.1016/j.jbusvent.2019.105970"
works = Works()
res = works.doi(doi)

with open("doi.json","w", encoding="utf8") as fileh:
    json.dump(res, fileh, ensure_ascii=False, indent=4, sort_keys=True)

The relevant rows in the output file doi.json are:

"author": [
        {
            "affiliation": [],
            "family": "Douglas",
            "given": "Evan J.",
            "sequence": "first"
        },
        {
            "affiliation": [],
            "family": "Shepherd",
            "given": "Dean A.",
            "sequence": "additional"
        },
        {
            "affiliation": [],
            "family": "Prentice",
            "given": "Catherine",
            "sequence": "additional"
        }
    ],

with the affiliation of the authors missing

query error

I am a new user. When I use the syntax

w1 = works.query(title='zika')

it returns

TypeError: list indices must be integers or slices, not str

But querying by author and other fields works fine. Any ideas?

Too Many Requests Error

Hi @fabiobatalha
I am using this library and got a Too Many Requests error because I issued multiple requests at a time.
How can I handle this error?
Please let me know.
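One common mitigation, sketched below, is to serialise requests and back off when a 429 comes back; `fetch_with_backoff` is a hypothetical helper (not part of this library), and the `fetch` callable is a stand-in for whatever request you issue:

```python
# Sketch: retry while the server answers 429 Too Many Requests, doubling
# the pause each time.
import time

def fetch_with_backoff(fetch, max_attempts=5, sleep=time.sleep):
    """Retry `fetch` (returning (status, body)) while it answers 429."""
    delay = 1.0
    for _ in range(max_attempts):
        status, body = fetch()
        if status != 429:  # anything other than Too Many Requests
            return status, body
        sleep(delay)
        delay *= 2  # exponential backoff: 1s, 2s, 4s, ...
    return status, body
```

Serialising requests (one at a time) plus this backoff is usually enough to stay within the limits advertised in the x-rate-limit headers.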

sample doesn't work with filters

Great library. But just discovered that sample doesn't work when combined with a filter.

w = Works().filter(type='journal-article').sample(5).url
w

returns

'https://api.crossref.org/works?sample=5'

Would expect something like this:

http://api.crossref.org/works?filter=type:journal-article&sample=5
