Comments (2)
This issue is paraphrased from a discussion on IIIF Slack; capturing it as a meta-user-story for posterity. It's a really important one.
I've left off "@" mentions because not everyone has the same name on Slack as on GitHub.
@mialondon @jronallo @nicolasfranck @edsilv @LlGC-szw @atarix83 - I think that's everyone mentioned below.
jronallo [2017-10-15 11:39 PM]
Are there any plans to have UV display annotations? I have a need for a transcript to show up next to a page image. This is a case where transcription is done at the page level and not at the line or word level. So mousing over a line to see the transcript isn't possible. Instead the best that could be done is to show the page image on one side and the transcript of the whole page on the other side. Anyone do this with UV or on a site that embeds UV?
mia [2 months ago]
I'm interested in this use case too
njfranck [7:42 AM]
I think the way in which the annotations should be displayed depends on the "motivation". Something like "comment" can always be shown next to the page (or be turned off explicitly), while transcriptions should be "hover only" because there are just too many of them (probably one per line). So jronallo I think that should be taken into account first, instead of deciding up front to show all annotations next to the page. What do you think?
tomcrane [9:30 AM]
jronallo - have a look at this PR - UniversalViewer/universalviewer#424
Demo - https://dspace-glam.4science.it/explore?bitstream_id=1877&handle=1234/11&provider=iiif-image#?c=0&m=0&s=0&cv=0&xywh=-6744%2C-390%2C18586%2C7798
saraweale [9:31 AM]
We are using a tab outside of the viewer to display OCR text https://journals.library.wales/view/2919943/2989093/0#?xywh=-937%2C-197%2C4052%2C3931
It's not the prettiest and doesn't allow you to link the text with the image in a meaningful way; that's something we're going to be looking at during the next year.
edsilv [9:33 AM]
We talked a little bit about this a few weeks ago on the community call. Perhaps atarix83 could make this PR into a separate component, a bit like https://github.com/viewdir/iiif-av-component
Also, the new "together" viewing hint was mentioned as a way to do multi-up UVs alongside each other using the new API
tomcrane [9:35 AM]
Another example of external page annos, like saraweale's example - https://wellcomelibrary.org/moh/report/b1988414x/8 - the UV's events are connecting viewer navigation with the external page
But yeah, having transcription annos (which are "painting" and therefore part of the representation of the object) rendered by the UV is desirable, as long as the UX works. The above example shows one approach.
Related - IIIF/api#1258 (comment)
mia [1:00 PM]
As our handwritten text recognition tech improves, we should have more mss with transcriptions. In those cases, I'd imagine many users would read the transcript rather than the handwritten version, so side-by-side would be better than underneath.
saraweale [1:39 PM]
I also like the ability to highlight the transcript and see the corresponding area on the image. I think this would really help with manuscripts for us (and mostly everything else really!).
mia [5:33 PM]
We're going to have a bunch of annotations from our playbills project. As a first pass, they could be used in full(ish) text search, but has anyone found a nifty way of displaying more structured annotations on the page?
Sample data is at https://www.libcrowds.com/collection/playbills/data/
tomcrane [6:32 PM]
mia do you mean in the UV, or generally?
(nice to see some W3C annos there by the way)
mia [6:49 PM]
tomcrane in the UV, as we want as much of the data created by volunteers as possible to end up in strategic / discovery systems. We're also looking at adding to MARC records for the catalogue layer
tomcrane [9:55 AM]
mia this matches what saraweale and others have said. The accumulated efforts of crowdsourcing and other content creation outside of the catalogue need to be as accessible as possible through general discovery, otherwise what's the point of all that effort? The UV is definitely not the place to capture that content, but what of that content should it present? This includes, but is not limited to, transcription, tagging, identifying (and the rest of the W3C annotation motivations) and extends to include structured models captured through special projects (identifying things in photographs with multi-field descriptions, extracting row records from tabular data, etc.). But the number of possible user experiences that could arise in a discovery environment from that content starts to spin out of control, and if not handled well would destroy the UV's simplicity of presentation (which is what makes the UV the appropriate tool in a general catalogue case, but not in others).
Nevertheless there is a subset of all possible annotation that the UV should always attempt to present: any content with a "painting" motivation, because that is part of the object and the publisher of the manifest has thought it appropriate to publish it. This means text transcription, choice of images, segments of images, etc., but does not include tagging, commentary or more complex structured data linked through other non-painting motivations.
I don't have a simple answer. What's in scope for the UV to present, and what does it do with the rest? Does it expose the rest to the containing page to make use of, if the page understands it? Does it have a kind of double plugin system, where the base component takes care of canvas rendering for the "painting" motivation annotations, then passes the rest of the annotated content through a pipeline of additional plugins that might take on the job of rendering them? For example, NLW capture details of naval personnel in a book of remembrance; this is a very specific model, but if they supply a plugin to the chain that can recognise this model and step in and render it to the canvas, then they could make those volunteer efforts show up in the general catalogue view. Or there might be a simpler, fallback pattern that alerts the user who views the work in the catalogue that there is a more detailed representation elsewhere.
It would be good to collect more stories/reqts on this topic.
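The "pipeline of plugins" idea above can be sketched roughly as a chain-of-responsibility: the base component renders the painting annotations itself, then offers everything else to a chain of renderer plugins, each of which may claim annotations whose model it recognises. This is an illustrative sketch only; the interfaces and names here are hypothetical, not part of the UV's actual API.

```typescript
// Minimal shape of an annotation for this sketch (not a full W3C/IIIF model).
interface Annotation {
  id: string;
  motivation: string; // e.g. "painting", "transcribing", "tagging"
  body?: unknown;
}

// A plugin in the chain: returns true if it recognised and rendered the annotation.
interface RendererPlugin {
  tryRender(anno: Annotation): boolean;
}

class AnnotationPipeline {
  constructor(private plugins: RendererPlugin[]) {}

  // Offer each annotation to the chain in order; return the ones no plugin
  // claimed, e.g. so a fallback can tell the user a richer representation
  // exists elsewhere.
  dispatch(annos: Annotation[]): Annotation[] {
    return annos.filter((a) => !this.plugins.some((p) => p.tryRender(a)));
  }
}
```

A publisher-specific plugin (like the hypothetical book-of-remembrance example) would implement `tryRender` to recognise its own model and render it to the canvas, returning `false` for everything else so other plugins get a chance.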
from user-stories.
Since the above discussion, we've been moving towards agreement that "transcribing" should be a formal IIIF annotation motivation distinct from "painting". Both are used for content of the object, but "transcribing" leaves room for more flexibility in presentation. This makes the UV's job (as a IIIF client) easier.
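The motivation-based behaviour discussed throughout this thread (always render painting annotations, treat transcriptions differently, leave the rest to other UI) amounts to partitioning annotations by motivation before deciding how to present each group. A minimal sketch, with hypothetical names not drawn from the UV codebase:

```typescript
// Minimal shape of an annotation for this sketch.
interface Annotation {
  id: string;
  motivation: string; // e.g. "painting", "transcribing", "commenting"
  body?: unknown;
}

interface PartitionedAnnotations {
  painting: Annotation[]; // part of the object itself: always render on the canvas
  transcribing: Annotation[]; // content of the object, but flexible presentation
  other: Annotation[]; // tagging, commentary, structured models: optional UI
}

// Split a canvas's annotations into the three groups a client would
// treat differently.
function partitionByMotivation(annos: Annotation[]): PartitionedAnnotations {
  const result: PartitionedAnnotations = { painting: [], transcribing: [], other: [] };
  for (const a of annos) {
    if (a.motivation === "painting") result.painting.push(a);
    else if (a.motivation === "transcribing") result.transcribing.push(a);
    else result.other.push(a);
  }
  return result;
}
```

The point of the split is that each group gets its own presentation policy: `painting` goes straight to the canvas renderer, `transcribing` might go to a side-by-side text panel or hover layer, and `other` can be exposed to the containing page or to plugins.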
Related Issues (20)
- View search results of generated text
- View highlighted search results
- View a modern interpretation of the generated text
- Accessible display of generated text
- Copy generated text
- Download generated text
- View options for text and/or image display
- Clickable links within the generated text
- View information from an external source
- View citation source
- Cite text
- Adjust text size
- Swap text colour and background
- View copyright status
- Handle long texts
- Transcript Source Formats
- Support text display at both page and document/sequence levels
- display attribution and license data of canvases in metadata panel
- Title attribute for thumbnails in contents pane
- Disable certain unused modules on build