
Ovide [alpha]

an experimental writing and publishing application for materials-based composition

License: AGPL v3.0

Ovide screenshot

Ovide is an editor built with the Peritext ecosystem.

It is provided both as a web and desktop application.

Installation for use

You can test Ovide online or go to the releases page to download the Ovide desktop version for your operating system.

Installation for development

As prerequisites you will need git and Node.js installed.

Then:

  1. Get a Google API key in the Google developer console (https://console.developers.google.com) and enable the YouTube API (used for automatic metadata retrieval for video resources)
  2. Get a MapTiler key (used for the glossary geolocation server): https://cloud.maptiler.com/geocoding/
  3. Open a terminal and type the following lines:
git clone https://github.com/peritext/ovide
cd ovide
npm install
cp app/config/sample.json app/config/default.json
  4. Fill app/config/default.json with your credentials.

Main dev scripts

# run in electron/dev mode with hot reloading
npm run dev:electron

# run in web/dev mode with hot reloading
npm run dev:web

# pack electron application for all platforms
npm run pack

# build web version for production
npm run build:web

# diagnose and fix js code style and inconsistencies
npm run lint:fix

[advanced users] Modifying Ovide's configuration

Ovide is designed to allow easy forking and customization, in order to change the types of resources, contextualizers, and edition templates available in the app.

Everything about this happens in app/src/peritextConfig.render.js: just modify the JS object exposed by this module to require the specific schemas or components that you want to use instead of the default config.

Note: more documentation may be written about Ovide customization in the future if the tool proves useful to a larger community.

Contributing to ovide

The source code of Ovide is published under the free AGPL-3.0 license.

This software is currently in alpha stage (which means: lots of bugs and performance issues), and contributions/PR to improve it are more than welcome.

Besides, please do not hesitate to submit new issues to the project's repository to report bugs or request missing features.

Acknowledgements

Ovide is a technology that was developed as a generic spinoff of several projects, notably funded by MESR/Université Rennes 2 and médialab Sciences Po.

See the peritext project website for more information about Ovide history.

Besides, it relies on numerous npm packages and libraries.

Ovide's People

Contributors: dependabot[bot], robindemourat, sylr


ovide's Issues

Scrollytelling layout

This would require adding a specific clearContextualization block to prevent block display, and possibly a main/aside position marker for block contextualizations.

[major][breaking] flattening the model

A future version of peritext/ovide should dismiss the differentiation between sections and resources.

Changes in the schema

  • all sections become resources of type "section", with their contents, notes, and notesOrder stored as data props
  • all resources can have their own contents, notes, and notesOrder in data
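
Under this flattened model, a section could be represented as in the sketch below; the field names follow the issue text, but the metadata/data split and the contents shape are assumptions for illustration:

```javascript
// Sketch of the flattened model: a section is just a resource of type
// "section" whose editor contents live under data. The metadata/data
// split and the draft-js-style contents shape are assumptions.
const sectionResource = {
  id: 'abc123',
  metadata: { type: 'section', title: 'Introduction' },
  data: {
    contents: { blocks: [], entityMap: {} }, // raw editor contents
    notes: {},                               // noteId -> note contents
    notesOrder: []                           // display order of the notes
  }
};
```

Any other resource type could then carry the same three fields in its data, which is what makes arbitrary resources usable as standalone sections.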

Changes in the templates

  • Possibility to add any resource to the summary as a standalone section -> corresponding templates
  • Refactor all the linked-object listings; they should be simplified

Changes in the editor

  • In the resource form, add an "edit content" button
  • Add a resource content edition view at /resource/:id
  • Various changes throughout the codebase to cope with this

FR: Ecriture inclusive

Personally, I am not a big fan of this writing style, so I wanted to know whether this is negotiable?

Instead of this style, could we not simply cite the masculine and feminine forms? i.e.: auteur.e -> auteur et/ou autrice

Regards.

Improve export modal

When clicking on the "export" button, a modal is displayed that allows exporting to standard data formats.

  • the icon is a cursor while it should be a pointer
  • the export message and explanation are not clear and could be improved

Glossary reverse tagging

Right now the workflow for constituting a glossary with Ovide is cumbersome: people have to go through the sections and manually contextualize each glossary entry in every related piece of content.

It would be very useful to add a feature that allows users to quickly tag multiple parts of the contents with a given glossary resource.

In detail, this would mean allowing users to:

  • visualize all the current contextualizations of the glossary resource in the context of their paragraph, and "untag" them - in the related excerpts, the tagged strings should be visually differentiated from other glossary contextualizations (e.g. a specific background color for the target text)
  • have an input component attached to a list showing all exact matches of the searched string (excluding existing glossary contextualizations)
  • "tag one match" or "tag all matches" in that list

Location of the feature: in a modal called from the glossary resource edition form, or directly in that form.
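
The matching step could be sketched as follows. This operates on plain strings purely for illustration; Ovide's actual editor state is richer, and the function and field names here are assumptions:

```javascript
// Illustrative helper for the "tag all matches" idea: find every exact
// occurrence of a glossary term in a list of paragraphs, skipping spans
// that are already contextualized. A sketch only; the real editor state
// is not a plain string array.
function findMatches(paragraphs, term, taggedRanges = []) {
  const matches = [];
  paragraphs.forEach((text, paragraphIndex) => {
    let from = 0;
    let at;
    while ((at = text.indexOf(term, from)) !== -1) {
      const alreadyTagged = taggedRanges.some(r =>
        r.paragraphIndex === paragraphIndex && at >= r.start && at < r.end
      );
      if (!alreadyTagged) {
        matches.push({ paragraphIndex, start: at, end: at + term.length });
      }
      from = at + term.length;
    }
  });
  return matches;
}
```

"Tag all matches" would then just iterate over this list and create one contextualization per entry, while "tag one match" would take a single selected entry.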

Generators events

Generators should be able to emit events in order to inform the user about what is going on when generating large outputs.

[performance] precompute citations for exports

  • add a peritext util to compute citations on the fly
  • in the build process, compute citations and store them as data
  • in templates, accept both no citation data (compute it on the fly) and precomputed citation data to display right away

Bonus :

  • process them in a web worker for the glossary view and the editor view, to lighten the interface
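
The template-side fallback described above could be sketched as follows; computeCitations is a hypothetical stand-in name for the proposed util, not an existing peritext function:

```javascript
// Sketch of the fallback: templates accept precomputed citation data when
// the build step has stored it, and otherwise compute citations on the go.
// computeCitations is a hypothetical stand-in for the proposed peritext util.
function computeCitations(production) {
  // expensive on-the-fly computation (trivial stand-in here)
  return { citationItems: {}, citationData: [], from: 'runtime' };
}

function getCitations(production, precomputedCitations) {
  if (precomputedCitations) {
    return precomputedCitations; // stored as data at build time
  }
  return computeCitations(production); // computed on the go
}
```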

Toward tool-specific and source-oriented resources and contextualizations

Hyphe resource/contextualization

Inline: point to the whole corpus, or to a specific web entity (WE) in the corpus.

Block: parameterizable visualization.

Gazouilloire resource/contextualization

Inline: point to a specific tweet, thread, or hashtag/query (to be represented specifically in the editions? -> https://github.com/medialab/rebus)

Block: tweet embed, custom visualization?

Dicto resource/contextualization

Each contextualization should make it possible to specify triggers for choosing what to show: a montage, a tag, or a specific tag category.

Inline: display selected excerpts as sound only.

Block: entire Dicto composition.

Twitter resource/contextualization

Displays, dynamically or statically, the result of a Twitter query, aligning the items.
