rxivist's People

Contributors

dependabot[bot], rabdill


rxivist's Issues

Papers get incomplete stats for month they were indexed

When a paper's traffic is recorded, the spider grabs stats for every month, including the current (unfinished) one. When we go back later to add new months to that list, there's a section in spider.py that makes sure we don't re-record months we already have:

    # make a list that excludes the records we already know about
    to_record = []
    for record in stats:
        print(record)
        month, year = record[0], record[1]
        if year in done and month in done[year]:
            print("Found, not recording")
        else:
            to_record.append(record)

HOWEVER, this loop should also throw out the stats for the most recent month in the most recent year, because that number was captured while the month was still in progress, and now that we're revisiting the paper we can record a more up-to-date figure.
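The fix described above could be sketched roughly as follows. The function name, the `(month, year, count)` tuple shape, and the `{year: set(months)}` layout of `done` are assumptions for illustration, not the repo's actual interfaces:

```python
# Hypothetical sketch: skip months we've already recorded, but keep the most
# recent recorded (month, year) so its partial count gets refreshed.
def months_to_record(stats, done):
    """stats: list of (month, year, count); done: {year: set(months)} already stored."""
    if not done:
        return list(stats)
    latest_year = max(done)
    latest_month = max(done[latest_year])
    to_record = []
    for month, year, count in stats:
        already = year in done and month in done[year]
        is_latest = (year, month) == (latest_year, latest_month)
        if not already or is_latest:  # re-record the unfinished month
            to_record.append((month, year, count))
    return to_record

done = {2018: {1, 2, 3}}
stats = [(1, 2018, 100), (2, 2018, 90), (3, 2018, 40), (4, 2018, 10)]
print(months_to_record(stats, done))  # [(3, 2018, 40), (4, 2018, 10)]
```

March survives the filter even though it was already recorded, so its partial figure gets overwritten with the complete one.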

Calculate most popular papers

We need a way to rank all papers by traffic, and it would be chaos to run that calculation every time someone asks.
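The precomputation could be as simple as sorting once per crawl and caching the result; this sketch (names assumed, not from the repo) shows the ranking step itself:

```python
# Sketch: compute all-time download ranks once per crawl, rather than
# sorting the whole corpus on every request.
def rank_by_downloads(totals):
    """totals: {article_id: total_downloads} -> {article_id: rank (1 = most)}."""
    ordered = sorted(totals, key=totals.get, reverse=True)
    return {article_id: i + 1 for i, article_id in enumerate(ordered)}

ranks = rank_by_downloads({"a": 40, "b": 120, "c": 75})
print(ranks)  # {'b': 1, 'c': 2, 'a': 3}
```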

Combine traffic stats from all revisions of paper

If a new version of a paper is released, we should pull in the download stats from all the old versions too.

NOTE: This may already happen automatically on bioRxiv; not sure.

OTHER NOTE: If bioRxiv doesn't combine traffic numbers between versions, should we keep crawling the old versions to get their updated traffic numbers?
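If we do end up combining the numbers ourselves, the aggregation is straightforward; here's a minimal sketch, assuming versions of a paper share a DOI (the tuple shape is an assumption for illustration):

```python
# Hypothetical: sum download stats across every revision of a paper,
# keyed by the DOI shared by all of its versions.
from collections import defaultdict

def combine_revisions(version_stats):
    """version_stats: list of (doi, version, downloads) -> {doi: total}."""
    totals = defaultdict(int)
    for doi, _version, downloads in version_stats:
        totals[doi] += downloads
    return dict(totals)

print(combine_revisions([("10.1101/000001", 1, 50),
                         ("10.1101/000001", 2, 20)]))
# {'10.1101/000001': 70}
```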

Evaluate "shape" of traffic for each paper

When does a paper get most of its traffic? Is it right away, and then it tapers off? How quickly? (Could that be used for some kind of "enduring popularity" metric?) Do some papers start slower and pick up steam over time?
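One possible shape metric (an assumption, not something from the repo) is the share of a paper's lifetime downloads that arrived in its first few months, which separates front-loaded papers from slow burners:

```python
# Hypothetical "front-loading" metric: fraction of total downloads that
# arrived in the paper's first `window` months.
def front_loading(monthly_downloads, window=3):
    total = sum(monthly_downloads)
    if total == 0:
        return 0.0
    return sum(monthly_downloads[:window]) / total

print(front_loading([100, 50, 10, 5, 5]))  # ~0.94: tapers off quickly
print(front_loading([5, 10, 20, 40, 80]))  # ~0.23: picking up steam
```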

Rank papers by bounce rate

Which ones have an "abstract viewed" to "PDF downloaded" ratio closest to 1:1? If we set a minimum number of views, we can build a "Most appealing abstracts" list.
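A rough sketch of that ranking, with the view floor applied first (the function name, data shape, and 50-view threshold are illustrative assumptions):

```python
# Sketch: rank by PDF downloads per abstract view, after dropping papers
# under a minimum view count so tiny samples don't dominate the list.
def abstract_appeal(papers, min_views=50):
    """papers: {article_id: (abstract_views, pdf_downloads)} -> ids, best first."""
    eligible = {a: pdf / views for a, (views, pdf) in papers.items()
                if views >= min_views}
    return sorted(eligible, key=eligible.get, reverse=True)

papers = {"a": (200, 150), "b": (1000, 300), "c": (10, 9)}
print(abstract_appeal(papers))  # ['a', 'b']: 'c' is under the view floor
```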

Record clicks from Rxivist over to bioRxiv

We may end up with a weird problem once this thing is released: if Rxivist is popular enough, the patterns we're observing will be influenced by our own site. My guess is that popular papers will get more popular, but the impact will be hard to measure.

If we can at least track how many visitors click on the "view paper" button for a particular article, we can try to account for our impact.
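The tracking itself could be as simple as a per-article counter incremented by whatever handles the outbound link (this is a hypothetical sketch, not the site's actual click-handling code):

```python
# Hypothetical: count outbound "view paper" clicks per article, so Rxivist's
# own contribution to bioRxiv traffic can later be estimated and subtracted.
from collections import Counter

outbound_clicks = Counter()

def record_click(article_id):
    outbound_clicks[article_id] += 1

for a in ["a1", "a1", "b2"]:
    record_click(a)
print(outbound_clicks["a1"])  # 2
```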

Combine all sorting metrics into one big table

We don't need separate tables for each metric we rank on. Instead, we could have one table where the primary key is the article ID and each field is a metric we sort on. That way, queries can be more precise (for example, only return the "bounce rate leaders" for papers with more than 50 downloads or something).
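A minimal sketch of the single-table idea, here using sqlite3 for portability (the real project's database and column names may differ):

```python
# Assumed schema: one row per article, one column per ranking metric.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE article_ranks (
                article_id INTEGER PRIMARY KEY,
                downloads INTEGER,
                bounce_rate REAL)""")
db.executemany("INSERT INTO article_ranks VALUES (?, ?, ?)",
               [(1, 120, 0.30), (2, 40, 0.75), (3, 800, 0.10)])
# Precise queries become one-liners, e.g. bounce-rate leaders with >50 downloads:
rows = db.execute("""SELECT article_id FROM article_ranks
                     WHERE downloads > 50 ORDER BY bounce_rate DESC""").fetchall()
print(rows)  # [(1,), (3,)]
```

Article 2 has the best bounce rate but falls under the download floor, which is exactly the kind of filter the separate-table layout makes awkward.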

Incorporate bioRxiv categories into results

From about page:

Articles in bioRxiv are categorized as New Results, Confirmatory Results, or Contradictory Results. New Results describe an advance in a field. Confirmatory Results largely replicate and confirm previously published work, whereas Contradictory Results largely replicate experimental approaches used in previously published work but the results contradict and/or do not support it.

It would be cool to have separate charts available for these: "Most popular contradictory results" sounds fun.

Stream redirection isn't working right in API supervisor script

Some of the stdout output that SHOULD be going to /var/log/rxivist.log is being written to the console instead. Hiding things in /var/log/messages is silly; I should figure out how to get this to behave correctly.

To replicate, go to the server:

    cd /etc
    ./rc.local

then refresh the web page.

Rank trending papers by recent traffic

Some papers at the top of the "all-time" list may have thousands of downloads, all of them years ago. If we assign a decreasing weight to traffic as it gets farther away from the present, we could have a list that incorporates papers with a lot of downloads but favors papers with RECENT downloads.
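One possible decay scheme (exponential, with the half-life in months as a free parameter; none of this is from the repo) would look like:

```python
# Hypothetical trending score: each month's downloads are discounted by how
# long ago they happened, so recent traffic dominates the ranking.
def trending_score(monthly, months_ago, half_life=6):
    """monthly[i] downloads occurred months_ago[i] months before now."""
    return sum(d * 0.5 ** (age / half_life)
               for d, age in zip(monthly, months_ago))

old_hit = trending_score([5000], [36])   # big total, but stale
new_riser = trending_score([800], [1])   # smaller, but fresh
print(old_hit < new_riser)  # True
```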

Add pauses to crawling

Add a flag that turns on a more polite mode of crawling, pausing between major operations to reduce load on bioRxiv.
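The flag could gate a helper like this (function name and default delay are assumptions, not the project's actual interface):

```python
# Sketch: sleep between major crawl operations when polite mode is enabled.
import time

def polite_pause(enabled, delay=2.0):
    """No-op unless polite mode is on; otherwise pause `delay` seconds."""
    if enabled:
        time.sleep(delay)

# During the crawl, something like:
# for page in listing_pages:
#     fetch(page)
#     polite_pause(polite_flag)
```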

When re-crawling papers, make sure there isn't a more recent version available

If the link we have for a paper isn't the right one (i.e. there's a newer one available), figure out the new one and update the old entry.

NOTE: Be careful with this one: if we record a new paper in this step, the spider will STOP when it runs into that paper in the "new" results, because it thinks we've already seen everything after it. That won't be the case.

We may be able to prevent this entire problem from happening if we only do the "re-crawl old papers" step AFTER we've pulled the latest papers, so all the URLs will already be updated.
