mntnr / name-your-contributors

Name your GitHub contributors; get commits, issues, and comments

License: MIT License
It would be good to support a config file that lists several repos and timeframes, so that everything gets generated in one sweep.
Requested in #16.
Hi @RichardLitt! I'm looking forward to using this module more; there are just a couple of things I would love to have in order to do so. I would like to know whether they sound like they fit in the scope of this module, or if I should just go ahead and create one that fits my needs. :) The features I'm looking for are:
- a -o / --output-path option, to be able to specify where the data gets written
- a flat output format, one record per line, for example:

"{ timeframe: <2017-10-12--2017-12-20>, project: <org/repo>, user: <username>, issuesCreated: <>, commits: <> ... plus other data } \n"

instead of nesting, as in this example:

{
  "2017-10-12--2017-12-20": {
    "org/repo": {
      user1: {
        // all the interesting stats
      },
      user2: {
        // ...
      }
    }
  }
}
Overall, I also just need it to work at all, which I'm not managing at the moment.
Specifically, we need an example that runs on a repo and ends with a list of users that can be used in a CONTRIBUTORS file. This would probably involve also mentioning whodunnit, which is OK.
If the commit flag is on, the CSV format doesn't show the commits. This is due to these lines: https://github.com/mntnr/name-your-contributors/blob/master/src/index.js#L43-L50. Commits need to be added in there.
It looks more and more like this is a must-have. It means another query to get the user object.
Necessary for #46.
Cf. https://github.com/semantic-release/semantic-release
I like this a lot more than manually pushing stuff. The more I use it, the more I like it. Let's add a note to the README and look into a way of checking PRs for standardization.
Some queries can't be fulfilled at present because they require more than an hour's quota of info.
One strategy to deal with this would be to run name-your-contributors as a daemon that you can send queries to; it would grab them over time and store the results in a file. The user should be able to query the status of their running queries, to know when the files are safe to copy.
I'm basically thinking of transmission-remote as the interface we want.
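A minimal sketch of the daemon's bookkeeping, with the caveat that every name below is hypothetical — none of this exists in the codebase yet:

```js
// Hypothetical sketch of the daemon's job queue; no name here exists
// in name-your-contributors yet.
const crypto = require('crypto')

const jobs = new Map() // id -> { query, status, resultFile }

function submit (query) {
  // Key jobs by a hash of the query so resubmitting is idempotent.
  const id = crypto.createHash('sha1')
    .update(JSON.stringify(query))
    .digest('hex')
    .slice(0, 8)
  if (!jobs.has(id)) {
    jobs.set(id, { query, status: 'pending', resultFile: null })
  }
  return id // the user polls this id, transmission-remote style
}

function status (id) {
  const job = jobs.get(id)
  return job ? job.status : 'unknown' // 'pending' | 'running' | 'done'
}
```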
We already query exactly who does what and where: which PR of which repo did so-and-so comment on?
The current aggregation logic throws that away and just returns bucketed counts.
We need the option to keep the entire query response tree (cleaned up a bit, but with no info thrown away).
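As a sketch of what "cleaned up a bit" might mean, assuming the response is a plain nested object: strip empty leaves, but keep every branch of the tree.

```js
// Sketch: remove nulls and empty arrays without aggregating anything,
// so the full response tree survives.
function clean (node) {
  if (Array.isArray(node)) {
    return node.map(clean).filter(x => x != null)
  }
  if (node && typeof node === 'object') {
    const out = {}
    for (const [k, v] of Object.entries(node)) {
      const c = clean(v)
      if (c != null && !(Array.isArray(c) && c.length === 0)) out[k] = c
    }
    return out
  }
  return node
}
```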
When I ran
$ name-your-contributors hoodiehq --since=2016-02-01T00:20:000Z
I got:
Error: no gitconfig to be found at /private/tmp
at /Users/gregor/.nvm/versions/node/v4.2.2/lib/node_modules/name-your-contributors/node_modules/gitconfiglocal/index.js:12:27
at /Users/gregor/.nvm/versions/node/v4.2.2/lib/node_modules/name-your-contributors/node_modules/gitconfiglocal/index.js:45:48
at FSReqWrap.cb [as oncomplete] (fs.js:212:19)
Any idea?
$ node -v && npm -v
v4.2.2
3.7.3
Currently working on branch feature/add-prs. I found an issue with depaginate, which now needs to be updated in each dep. Fun times.
This entire module would be much, much simpler if it didn't make thousands of API hits. Let's reduce that to a few. This will also make bug fixing much easier.
Is there a way I can send the request authenticated as my GitHub user?
$ name-your-contributors hoodiehq --since=2016-02-01T00:20:000Z
err { [Error: {"message":"API rate limit exceeded for 38.104.218.154. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)","documentation_url":"https://developer.github.com/v3/#rate-limiting"}]
status: 403,
json:
{ message: 'API rate limit exceeded for 38.104.218.154. (But here\'s the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)',
documentation_url: 'https://developer.github.com/v3/#rate-limiting' } }
Unable to get unique users { [Error: {"message":"API rate limit exceeded for 38.104.218.154. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)","documentation_url":"https://developer.github.com/v3/#rate-limiting"}]
status: 403,
json:
{ message: 'API rate limit exceeded for 38.104.218.154. (But here\'s the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)',
documentation_url: 'https://developer.github.com/v3/#rate-limiting' } }
err { [Error: {"message":"API rate limit exceeded for 38.104.218.154. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)","documentation_url":"https://developer.github.com/v3/#rate-limiting"}]
status: 403,
json:
{ message: 'API rate limit exceeded for 38.104.218.154. (But here\'s the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)',
documentation_url: 'https://developer.github.com/v3/#rate-limiting' } }
Would be awesome to support time ranges:
nameYourContributors('ipfs', {
  since: '2016-01-15T00:20:24Z',
  until: '2016-10-15T00:20:24Z' // or "to:"
})
If someone commented on a thread three months ago, and someone else commented a day ago, both will be pulled when I search within the past week. This is a serious bug that devalues the stats about the size of a community. Each object will need to be checked against the date range, not just the top-level issues and PRs.
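A sketch of the deep filter this implies, assuming each comment object carries a createdAt timestamp:

```js
// Sketch: filter every comment against the window, not just the
// top-level issue or PR that contains it.
const inWindow = (item, since, until) => {
  const t = new Date(item.createdAt)
  return t >= since && t <= until
}

function filterThread (issue, since, until) {
  return Object.assign({}, issue, {
    comments: (issue.comments || []).filter(c => inWindow(c, since, until))
  })
}
```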
Right now, this exports only to JSON. CSV exporting would be a cool feature.
Requested in #16.
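A minimal sketch of such an exporter, assuming one row per user with a fixed set of per-user counts (the column names here are illustrative):

```js
// Sketch: turn { login: { commits, issues, comments } } into CSV.
function toCSV (users) {
  const header = 'login,commits,issues,comments'
  const rows = Object.entries(users).map(([login, u]) =>
    [login, u.commits || 0, u.issues || 0, u.comments || 0].join(','))
  return [header].concat(rows).join('\n')
}
```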
Verbose and debug logging, a dry-run mode, etc.
🐕 node cli.js ipfs --since=2016-10-15T00:20:24Z
Got response
wrote issue_creators
wrote issue_commenters
/Users/richard/src/name-your-contributors/node_modules/octokat/dist/node/plugins/cache-handler.js:49
throw new Error('ERROR: Bug in Octokat cacheHandler. It had an eTag but not the cached response');
In addition to logins and names, we should grab the GitHub URLs of contributors.
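This should be cheap: the profile URL is derivable from the login, and the GraphQL User object also exposes a url field we could request directly.

```js
// Sketch: derive the profile URL from the login.
const profileUrl = login => `https://github.com/${login}`
```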
Expose a --flat option, where the output is one record per line, for example:

"{ timeframe: <2017-10-12--2017-12-20>, project: <org/repo>, user: <username>, issuesCreated: <>, commits: <> ... plus other data } \n"

instead of nesting, as in this example:

{
  "2017-10-12--2017-12-20": {
    "org/repo": {
      user1: {
        // all the interesting stats
      },
      user2: {
        // ...
      }
    }
  }
}
Asked for in #16.
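A sketch of the flattening, going from the nested shape above to one JSON record per (timeframe, project, user) line:

```js
// Sketch: flatten { timeframe: { project: { user: stats } } }
// into newline-delimited records.
function flatten (nested) {
  const lines = []
  for (const [timeframe, projects] of Object.entries(nested)) {
    for (const [project, users] of Object.entries(projects)) {
      for (const [user, stats] of Object.entries(users)) {
        lines.push(JSON.stringify(
          Object.assign({ timeframe, project, user }, stats)))
      }
    }
  }
  return lines.join('\n')
}
```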
All Contributors is a similar tool. What do they do differently? How can we help them? How can we work together to make contributions easier?
To see what happens to your code in Node.js 10, Greenkeeper has created a branch with the following changes:

- .travis.yml

If you're interested in upgrading this repo to Node.js 10, you can open a PR with these changes. Please note that this issue is just intended as a friendly reminder and the PR as a possible starting point for getting your code running on Node.js 10.

Greenkeeper has checked the engines key in any package.json file, the .nvmrc file, and the .travis.yml file, if present.

- engines was only updated if it defined a single version, not a range.
- .nvmrc was updated to Node.js 10.
- .travis.yml was only changed if there was a root-level node_js that didn't already include Node.js 10, such as node or lts/*. In this case, the new version was appended to the list. We didn't touch job or matrix configurations because these tend to be quite specific and complex, and it's difficult to infer what the intentions were.

For many simpler .travis.yml configurations, this PR should suffice as-is, but depending on what you're doing it may require additional work or may not be applicable at all. We're also aware that you may have good reasons to not update to Node.js 10, which is why this was sent as an issue and not a pull request. Feel free to delete it without comment, I'm a humble robot and won't feel rejected 🤖
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
In the resulting array of contributors, sort by the most active first.
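A sketch, assuming per-user counts like the ones we already aggregate (the key names here are illustrative):

```js
// Sketch: sort contributors by total activity, most active first.
const byActivity = users =>
  Object.entries(users)
    .map(([login, u]) => ({
      login,
      total: (u.commits || 0) + (u.comments || 0) + (u.issues || 0)
    }))
    .sort((a, b) => b.total - a.total)
```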
I think the design is not as good as it could be. Look at git-standup -- tons of stars and committers, and not that different a product.
Take some time to refactor this, Richard.
Using the --org flag causes a GraphQL permissions error.
🐕 node src/cli.js -o orbitdb -r example-orbitdb-todomvc
Error: Graphql error: {
"data": null,
"errors": [
{
"message": "Your token has not been granted the required scopes to execute this query. The 'name' field requires one of the following scopes: ['read:org'], but your token has only been granted the: ['repo', 'user'] scopes. Please modify your token's scopes at: https://github.com/settings/tokens.",
"locations": [
{
"line": 122,
"column": 1
}
]
}
]
}
at parseResponse (/Users/richard/src/name-your-contributors/src/graphql.js:414:11)
at <anonymous>
This works if you use -u instead.
This will help adoption.
I want to be able to automatically add all contributors to the package.json contributors field. Can we get the email from authors, too, to match this spec?
This should be possible, with the name of the user and a link. Perhaps a shim tool on top of this would be best?
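npm's spec accepts people fields either as objects or as "Name <email> (url)" strings, so the shim could be tiny. A sketch, with the input shape assumed:

```js
// Sketch: write contributors into package.json in npm's people format,
// assuming we have { name, email, login } for each person.
const fs = require('fs')

function addContributors (pkgPath, people) {
  const pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf8'))
  pkg.contributors = people.map(p =>
    `${p.name} <${p.email}> (https://github.com/${p.login})`)
  fs.writeFileSync(pkgPath, JSON.stringify(pkg, null, 2) + '\n')
}
```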
Unfortunately I cannot get it to work with any token, even if I enable all scopes. I always get this error back:
{
"data": null,
"errors": [
{
"message": "Your token has not been granted the required scopes to execute this query. The 'email' field requires one of the following scopes: ['user:email', 'read:user'], but your token has only been granted the: ['repo'] scopes. Please modify your token's scopes at: https://github.com/settings/tokens.",
"locations": [
{
"line": 9,
"column": 1
}
]
}
]
}
I tried to debug it but couldn't figure it out... Any idea what it could be?
The full command I run is
GITHUB_TOKEN=... node src/cli.js -u octokit -r "rest.js" -a 2018-01-17
Make the old dependencies obsolete.
You shouldn't need to specify hours for the date options of after or before.
Requested in #16.
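A sketch of the normalization, padding a bare date out to the full ISO timestamp the API expects:

```js
// Sketch: accept '2018-01-17' and expand it to a full ISO timestamp.
function normalizeDate (s) {
  return /^\d{4}-\d{2}-\d{2}$/.test(s) ? `${s}T00:00:00Z` : s
}
```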
We would like commit authors, and people who react to anything in a repo, to be included in the definition.
To that end, we should add two more keys to the returned JSON, commitAuthors and reactors, and extend the logic to populate them.
Edit: PullRequestReviewComment is a distinct entity from PullRequestComment. We should be grabbing those too.
Is there any way to get this module working without a token, on the client side?
Hi, really interested in using the lib, thanks! Got this error, though, after hanging for a few minutes:
(trusty)warren@localhost:~/sites/plots2$ name-your-contributors ipfs --since=2016-10-15T00:20:24Z
Got response
wrote issue_creators
wrote issue_commenters
<--- Last few GCs --->
403290 ms: Mark-sweep 701.8 (738.5) -> 699.6 (738.5) MB, 8779.9 / 0 ms [allocation failure] [GC in old space requested].
413027 ms: Mark-sweep 699.6 (738.5) -> 699.6 (738.5) MB, 9736.6 / 0 ms [allocation failure] [GC in old space requested].
429222 ms: Mark-sweep 699.6 (738.5) -> 698.8 (738.5) MB, 16195.4 / 13 ms [last resort gc].
441528 ms: Mark-sweep 698.8 (738.5) -> 697.9 (738.5) MB, 12302.8 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x29d65341 <JS Object>
1: /* anonymous */(aka /* anonymous */) [/home/warren/.nvm/versions/node/v4.4.0/lib/node_modules/name-your-contributors/node_modules/octokat/dist/node/verb-methods.js:79] [pc=0x25676570] (this=0x29d080dd <undefined>,verbFunc=0x47f8f695 <JS Function module.exports.SimpleVerbs.verbs.create (SharedFunctionInfo 0x3a6b53e5)>,verbName=0x29d25431 <String[6]: create>)
2: injectVerbMethods [/home/wa...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Aborted (core dumped)
Grabbing everything like we currently do can get insanely expensive. Ways to get around this:

- Query tuning: look more carefully at the branching factors in the queries and reduce the cost of the average query.
- Make reactions optional (and opt in). Reactions, as leaves in the tree, are the most expensive thing to query and not always of interest to end users, so this is a no-brainer.
- Don't get commit authors from the API by default. This isn't an expensive query, but we have to throttle the pagination, since GitHub returns these requests fast enough for us to appear spammy when we don't wait between them. There are two ways to go here: 1) allow the user to specify the path to a local clone of the repo, which we can query directly (see the sketch below), and 2) have a flag to get them from the API. To reiterate, grabbing from the API isn't untenable: plotly.js has ~14,000 commits, so it takes 140 requests and 140 quota to get them all. That's not necessarily too much quota, but at a reasonable rate limit the user could be waiting a while. So: opt in. This relates to #47.
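For option 1, a sketch of pulling authors straight from a local clone, assuming git is on the PATH:

```js
// Sketch: get commit authors from a local clone instead of the API.
const { execFileSync } = require('child_process')

function localCommitAuthors (repoPath) {
  // `git shortlog -sne HEAD` prints "<count>\t<name> <email>" per author.
  const out = execFileSync(
    'git', ['-C', repoPath, 'shortlog', '-sne', 'HEAD'],
    { encoding: 'utf8' })
  return out.trim().split('\n').map(line => {
    const m = line.trim().match(/^(\d+)\s+(.*?)\s+<(.*)>$/)
    return m && { commits: Number(m[1]), name: m[2], email: m[3] }
  }).filter(Boolean)
}
```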
I got the following error. It really doesn't help me; somewhere, something failed to depaginate. We should specify in which dependency, where, and how many hits we have made to the API; maybe that is why we get the Server Error.
$ name-your-contributors ipfs --since=2016-02-29T12:00:01Z
Failed to depaginate
{ [Error: Error: connect ENETUNREACH 192.30.252.127:443
at Object.exports._errnoException (util.js:860:11)
at exports._exceptionWithHostPort (util.js:883:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1063:14)] status: 0 }
Failed to depaginate
{ [Error: Error: connect ENETUNREACH 192.30.252.127:443
at Object.exports._errnoException (util.js:860:11)
at exports._exceptionWithHostPort (util.js:883:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1063:14)] status: 0 }
Failed to depaginate
{ [Error: {"message":"Server Error","documentation_url":"https://developer.github.com/v3"}]
status: 500,
json:
{ message: 'Server Error',
documentation_url: 'https://developer.github.com/v3' } }
Failed to depaginate
{ [Error: {"message":"Server Error","documentation_url":"https://developer.github.com/v3"}]
status: 500,
json:
{ message: 'Server Error',
documentation_url: 'https://developer.github.com/v3' } }
Failed to depaginate
{ [Error: Error: socket hang up
at createHangUpError (_http_client.js:209:15)
at TLSSocket.socketOnEnd (_http_client.js:294:23)
at emitNone (events.js:72:20)
at TLSSocket.emit (events.js:166:7)
at endReadableNT (_stream_readable.js:905:12)
at doNTCallback2 (node.js:450:9)
at process._tickCallback (node.js:364:17)] status: 0 }
Failed to depaginate
{ [Error: Error: socket hang up
at createHangUpError (_http_client.js:209:15)
at TLSSocket.socketOnEnd (_http_client.js:294:23)
at emitNone (events.js:72:20)
at TLSSocket.emit (events.js:166:7)
at endReadableNT (_stream_readable.js:905:12)
at doNTCallback2 (node.js:450:9)
at process._tickCallback (node.js:364:17)] status: 0 }
Failed to depaginate
{ [Error: Error: socket hang up
at createHangUpError (_http_client.js:209:15)
at TLSSocket.socketOnEnd (_http_client.js:294:23)
at emitNone (events.js:72:20)
at TLSSocket.emit (events.js:166:7)
at endReadableNT (_stream_readable.js:905:12)
at doNTCallback2 (node.js:450:9)
at process._tickCallback (node.js:364:17)] status: 0 }
Failed to depaginate
{ [Error: Error: socket hang up
at createHangUpError (_http_client.js:209:15)
at TLSSocket.socketOnEnd (_http_client.js:294:23)
at emitNone (events.js:72:20)
at TLSSocket.emit (events.js:166:7)
at endReadableNT (_stream_readable.js:905:12)
at doNTCallback2 (node.js:450:9)
at process._tickCallback (node.js:364:17)] status: 0 }
Failed to depaginate
{ [Error: Error: socket hang up
at createHangUpError (_http_client.js:209:15)
at TLSSocket.socketOnEnd (_http_client.js:294:23)
at emitNone (events.js:72:20)
at TLSSocket.emit (events.js:166:7)
at endReadableNT (_stream_readable.js:905:12)
at doNTCallback2 (node.js:450:9)
at process._tickCallback (node.js:364:17)] status: 0 }
Failed to depaginate
{ [Error: Error: socket hang up
at createHangUpError (_http_client.js:209:15)
at TLSSocket.socketOnEnd (_http_client.js:294:23)
at emitNone (events.js:72:20)
at TLSSocket.emit (events.js:166:7)
at endReadableNT (_stream_readable.js:905:12)
at doNTCallback2 (node.js:450:9)
at process._tickCallback (node.js:364:17)] status: 0 }
Failed to depaginate
{ [Error: Error: socket hang up
at createHangUpError (_http_client.js:209:15)
at TLSSocket.socketOnEnd (_http_client.js:294:23)
at emitNone (events.js:72:20)
at TLSSocket.emit (events.js:166:7)
at endReadableNT (_stream_readable.js:905:12)
at doNTCallback2 (node.js:450:9)
at process._tickCallback (node.js:364:17)] status: 0 }
Semantic release has broken. :( I don't know why, and it's annoying because the current version doesn't work.
@tgetgood I need a tool that shows me the types of interactions, and the amount of times a user does each, for a repo.
Basically, I need: RichardLitt committed 4 times, reacted 17 times, commented 5 times, etc...
Can we do this easily?
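A sketch of that aggregation, assuming we can produce a flat list of events that each carry a login and an interaction type:

```js
// Sketch: tally interaction types per user from a flat event list.
function tally (events) {
  const counts = {}
  for (const { login, type } of events) {
    counts[login] = counts[login] || {}
    counts[login][type] = (counts[login][type] || 0) + 1
  }
  return counts
}

// tally([{ login: 'RichardLitt', type: 'commit' }, ...]) would give
// { RichardLitt: { commit: 4, reaction: 17, comment: 5 } }
```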
🚨 The automated release from the master branch failed. 🚨 I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I'm sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find an explanation and guidance to help you resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here are some links that can help you:
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
semantic-release cannot push the version tag to the master branch on the remote Git repository.
Please refer to the authentication configuration documentation to configure the Git credentials on your CI environment.
Good luck with your project ✨
Your semantic-release bot 📦🚀
Because it needs some for each method.
To daemonise effectively (regarding #48) we need a local cache on disk. Nothing fancy: just hashes of the queries plus timestamps, with a global TTL.
This will also benefit development, and in normal use it means that if a query fails because it's too expensive (when not in daemon mode), the data already grabbed doesn't get thrown away when you try again later.
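A sketch of such a cache, assuming one JSON file per query hash and a single global TTL:

```js
// Sketch: on-disk query cache -- hash of query -> { timestamp, data }.
const fs = require('fs')
const path = require('path')
const crypto = require('crypto')

const TTL = 24 * 60 * 60 * 1000 // global ttl: one day

const keyFor = query =>
  crypto.createHash('sha1').update(JSON.stringify(query)).digest('hex')

function cached (dir, query) {
  const file = path.join(dir, keyFor(query) + '.json')
  if (!fs.existsSync(file)) return null
  const { timestamp, data } = JSON.parse(fs.readFileSync(file, 'utf8'))
  return Date.now() - timestamp < TTL ? data : null
}

function store (dir, query, data) {
  const file = path.join(dir, keyFor(query) + '.json')
  fs.writeFileSync(file, JSON.stringify({ timestamp: Date.now(), data }))
}
```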
It's possible a lot of things could be gotten from git shortlog -s -n.
If a user requests info on a very active repo, the current strategy is to spam the server until we have all the data. This has resulted in us getting banned (trying to pull everything from plotly.js).
We should throttle requests to make sure we're not issuing more than X per minute. I'm not sure what X should be, but that can be tweaked.
In addition, we should alert the user when a given request triggers throttling, so that they know it's going to take a long time, and give them an option to abort. When applicable, we should also give them advice for making a better (narrower) query.
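A sketch of the throttle, with X per minute as the tunable, warning the user whenever a request has to wait:

```js
// Sketch: cap outgoing requests at X per minute and warn when throttled.
const X = 30 // requests per minute; tune as needed
let last = 0

async function throttled (makeRequest) {
  const wait = last + 60000 / X - Date.now()
  if (wait > 0) {
    console.error(`Throttling: waiting ${wait}ms (max ${X} requests/min)`)
    await new Promise(resolve => setTimeout(resolve, wait))
  }
  last = Date.now()
  return makeRequest()
}
```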
When I run:
~
🍔 $ npm install --global name-your-contributors
~
🍔 $ name-your-contributors formly-js
Unable to get unique users { [Error: {"message":"API rate limit exceeded for 24.11.5.123. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)","documentation_url":"https://developer.github.com/v3/#rate-limiting"}]
status: 403,
json:
{ message: 'API rate limit exceeded for 24.11.5.123. (But here\'s the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)',
documentation_url: 'https://developer.github.com/v3/#rate-limiting' } }
Failed to depaginate
Failed to depaginate
Failed to depaginate
Failed to depaginate
Failed to depaginate
Failed to depaginate
{ [Error: {"message":"API rate limit exceeded for 24.11.5.123. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)","documentation_url":"https://developer.github.com/v3/#rate-limiting"}]
status: 403,
json:
{ message: 'API rate limit exceeded for 24.11.5.123. (But here\'s the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)',
documentation_url: 'https://developer.github.com/v3/#rate-limiting' } }
Failed to depaginate
{ [Error: {"message":"API rate limit exceeded for 24.11.5.123. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)","documentation_url":"https://developer.github.com/v3/#rate-limiting"}]
status: 403,
json:
... continues
Any ideas?
I tried it with my own username as well. Same error message.
Currently we have to specify either the user and repository name, or an org name. It should additionally be possible to specify:
The devDependency travis-deploy-once was updated from 5.0.8 to 5.0.9.
This version is covered by your current version range, and after updating it in your project the build failed.
travis-deploy-once is a devDependency of this project. It might not break your production code or affect downstream projects, but it probably breaks your build or test tools, which may prevent deploying or publishing.
Release notes: fix: use require.resolve to load babel preset (16292d3).
The new version differs by 2 commits.
- 16292d3 fix: use require.resolve to load babel preset
- 858d475 test: add babel-register.js to test coverage
See the full diff
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴