architecting_attribution's People

Contributors

cgcook, eichmann, jmcmurry, kglibrarian, kristiholmes, lisaokeefe1, mellybelly, nicolevasilevsky

architecting_attribution's Issues

Scope demonstrator application project

Develop a wizard to generate enhanced attribution content. Input could be a persistent identifier (or a set of PIDs); output would be a properly formatted citation with enhanced content, or a block of text such as the following (a rough sketch of the interface appears after the list):

  • a narrative for the contributions-to-science section of a biosketch
  • annotations for individual citations (e.g., in a critical references form or on a CV)
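
For discussion, here is a minimal sketch of what the wizard's core step could look like, assuming the input PID is a DOI resolved through the public Crossref REST API and the role annotations come from a hypothetical contributor_roles mapping; the data source, role vocabulary, and output format are all still open questions:

```python
# Illustrative sketch only: resolve a DOI via the public Crossref REST API and
# append contributor-role annotations. The role data here is a hypothetical
# input; the real wizard would draw it from the attribution knowledge base.
import requests

CROSSREF_WORKS = "https://api.crossref.org/works/"

def enhanced_citation(doi: str, contributor_roles: dict[str, list[str]]) -> str:
    """Return a plain-text citation for `doi`, annotated with contributor roles.

    `contributor_roles` maps an author's family name to a list of role labels
    (e.g., CRO/CRediT terms). Both the lookup source and the output format are
    assumptions for discussion, not a committed design.
    """
    record = requests.get(CROSSREF_WORKS + doi, timeout=30).json()["message"]
    authors = []
    for author in record.get("author", []):
        name = f"{author.get('given', '')} {author.get('family', '')}".strip()
        roles = contributor_roles.get(author.get("family", ""), [])
        authors.append(f"{name} [{', '.join(roles)}]" if roles else name)
    title = record.get("title", [""])[0]
    year = record.get("issued", {}).get("date-parts", [[None]])[0][0]
    journal = (record.get("container-title") or [""])[0]
    return f"{'; '.join(authors)} ({year}). {title}. {journal}. https://doi.org/{doi}"

# Example call (hypothetical DOI and roles):
# print(enhanced_citation("10.1000/xyz123",
#                         {"Smith": ["data curation", "software"]}))
```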

To determine: deliverables, staffing, schedule, stakeholders, CTSA context, communication plan

Documentation of diverse use cases

We need to illustrate contributions to different types of research products and how they would be represented in the attribution file. The use cases could also include what viewing that information might look like (for example, in a software repo there could be a file analogous to the LICENSE.md file).
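
To make the software-repo example concrete, here is one hypothetical shape such a file could take, written from Python as JSON; the file name (ATTRIBUTION.json), field names, and role labels are placeholders for discussion, not an agreed format:

```python
# Hypothetical example of an attribution file analogous to LICENSE.md; the
# file name (ATTRIBUTION.json) and all field names are placeholders, not a spec.
import json

attribution = {
    "research_object": {
        "type": "software",  # could also be dataset, protocol, etc.
        "identifier": "https://doi.org/10.xxxx/example",  # placeholder PID
    },
    "contributors": [
        {
            "name": "Example Contributor",
            "orcid": "0000-0000-0000-0000",  # placeholder ORCID iD
            "roles": ["software development", "documentation"],  # e.g., CRO/CRediT terms
        }
    ],
}

with open("ATTRIBUTION.json", "w") as handle:
    json.dump(attribution, handle, indent=2)
```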

Investigate NISO Scholarly Outputs table (from NISO RP-25-2016)

from the report:

The NISO Scholarly Outputs table (see Google document at https://sites.google.com/a/niso.org/scholarlyoutputs/) is a first attempt at a comprehensive list of research outputs, including the traditional academic publication and extending to more alternative outputs. These outputs may fall within the scope of assessment when developing metrics to evaluate the impact of scholarly activity, with the acknowledgement that meaningful impact can go far beyond conventional publishing workflows and often involves the rich array of scholarly products that are created during the research process. These output types are grouped by class and alphabetized with a brief description and documentation of known current efforts (and by whom they are being undertaken). Relevant links are listed where available, and most entries have been assigned a focus area to group them by similar contextual uses. The focus areas are: Basic Sciences; Capacity; Code and Software; Communications; Data; Education and Training Materials; Events; Grey Literature; Images, Diagrams, and Video; Industry; Instruments, Devices, and Inventions; Methodologies; Publications; Regulatory, Compliance, and Legislation; Standards; and Other.

This work is not complete, as the very nature of scholarly activity continually evolves. However, the rich array of outputs represented in this table helps to better establish the breadth and depth of scholarly work that may be produced by an investigator or by a research team. Through this effort, the working group hopes to generate discussion about how we may begin to leverage integrated data, persistent identifiers, and automated workflows to better capture and track the full complement of research activity, as is possible for publication data.

Stating intent

Write a quick sentence explaining what we'd like to do with Wikidata (WD) and COAR, and why it's worthwhile.

decide if we want to add additional classes to CRO from the Force2015 workshop

this spreadsheet contains all of the terms that were brainstormed at the Force2015 workshop:
https://docs.google.com/spreadsheets/d/1BudaIWSIgo7RVmzIpWO_hkE5mlgeii16xBTY37d0Zek/edit#gid=0

At the F2F, we decided that we didn't want to add all of these terms to the CRO at this time. I think a lot of the terms overlap, though. Is it worth comparing this file to the terms that are currently in the CRO, to see if there are new terms we may want to add?
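
If we do, a rough first pass could be automated along these lines, assuming the spreadsheet is exported to CSV and the current CRO labels are dumped to a text file; the file names and the "term" column are assumptions about the exports, and exact-string matching would only surface candidates for manual review:

```python
# Rough sketch: find Force2015 workshop terms not already labelled in the CRO.
# force2015_terms.csv and cro_labels.txt are hypothetical exports, and the
# "term" column name is an assumption about the spreadsheet layout.
import csv

def load_labels(path: str) -> set[str]:
    with open(path, encoding="utf-8") as handle:
        return {line.strip().lower() for line in handle if line.strip()}

with open("force2015_terms.csv", newline="", encoding="utf-8") as handle:
    workshop_terms = {row["term"].strip().lower() for row in csv.DictReader(handle)}

cro_labels = load_labels("cro_labels.txt")

# Candidate additions: workshop terms with no exact-match label in the CRO.
# (Exact string matching only; synonyms would still need manual review.)
for term in sorted(workshop_terms - cro_labels):
    print(term)
```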

suggestions for the project description

A few comments to potentially make this project repo description more useful to naive readers and to NCATS:

Under the deliverables, I suggested descriptors for some items, but for others I didn't really know what they meant:

  • Better understanding of how to address research output types and versioning of objects in the context of unique identifiers
    => is this the data model and use of Wikidata and other sources as types? perhaps this can be better described if so?
  • A large knowledge base of contribution / attribution data available for use by CTSA hubs
    => where will this come from? is it feasible in the 6 month timeframe? where will it live?

Minor: I find the long list of potentially relevant repositories distracting from the actual project. One way to address this would be to create requirements/documentation in the GitHub Wiki and organize the information there. Similarly, it would be good to start developing some of the implementation requirements and best practices there.

Plan for objects/artifacts (DRAFT)

Use research objects from Wikidata for the annotation files
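
As a starting point, here is a hedged sketch of pulling candidate types from Wikidata over its public SPARQL endpoint. The root class used here (Q13442814, "scholarly article") is only an illustration; deciding which Wikidata classes actually count as research objects for the annotation files is part of this plan.

```python
# Exploratory sketch: list direct subclasses of a candidate root class on
# Wikidata via its public SPARQL endpoint. ROOT_CLASS is a placeholder; which
# QID(s) to treat as "research object" types is exactly what we need to decide.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
ROOT_CLASS = "Q13442814"  # "scholarly article" -- illustrative starting point only

query = f"""
SELECT ?item ?itemLabel WHERE {{
  ?item wdt:P279 wd:{ROOT_CLASS} .
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
LIMIT 50
"""

response = requests.get(
    ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "attribution-exploration/0.1"},
    timeout=60,
)
for row in response.json()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
```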

Work on a plan of sorts for the RAO

Work on a plan of sorts for the RAO: enumerate use cases, carefully document all the sources for terms we've considered, and evaluate ontologies for reuse. We'll then break this out into user stories.

Feedback and brainstorming

What else can we be doing with this project? Where would you like to see us focus? What do you want to help create? Please share here!
