Comments (4)
(The above issue is superseded by UniversalViewer/universalviewer#548.)
The original had some UI changes to zoom behaviour that could be made prior to UVCON.
Issue 548 has the remaining (and significant) UI changes.
from user-stories.
I have prepared some comments; they are in the "Search" section of this document:
from user-stories.
What is the relationship between this user story, UV issue UniversalViewer/universalviewer#548, and user story #4?
Current state of play:
https://gist.github.com/tomcrane/8ca89f971d6571acab1016ba34c9dc85
Putting the mechanism of accessing Search (panel/popup etc) to one side for a moment, what does this icon do?
I think it reveals the non-painting content of the object, and the means to interact with it, which very often involves a search box. A search box is one way of interacting with the non-painting content of the object; it's the only way the UV has at the moment, but in future the UV could have other ways, such as:
- viewing the running transcript of AV in real time
- the text content of canvases, for reading, and copying OCR, transcription
- possible future complex interactions with the textual content of canvases, like UniversalViewer/universalviewer#424
The first phase is to make search happen here, because it belongs here and is a rearrangement of current UV functionality. It's only available when a search service is available. But that 'panel' becomes the home for other textual-content interactions as well, which aren't separable from search. So later it may be available whenever textual content is available, even if no formal IIIF search service is; you could still have an interaction that searches whatever text the UV can see directly.
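As a sketch of "only available when a search service is available": assuming IIIF Presentation 2 manifests and the Content Search 1.0 service profile (the simplified interfaces and example endpoint below are illustrative, not a full IIIF type model), detection might look like:

```typescript
// Sketch: decide whether the search panel should be offered, assuming a
// IIIF Presentation 2 manifest advertising a Content Search 1.0 service.
const SEARCH_PROFILE = "http://iiif.io/api/search/1/search";

interface IIIFService {
  "@id": string;
  profile?: string | string[];
}

interface IIIFManifest {
  service?: IIIFService | IIIFService[];
}

// Returns the search service endpoint if the manifest declares one, else null.
function findSearchService(manifest: IIIFManifest): string | null {
  const services = manifest.service === undefined
    ? []
    : Array.isArray(manifest.service) ? manifest.service : [manifest.service];
  for (const svc of services) {
    const profiles = Array.isArray(svc.profile) ? svc.profile : [svc.profile];
    if (profiles.indexOf(SEARCH_PROFILE) !== -1) return svc["@id"];
  }
  return null;
}

// A query against that endpoint is a single GET with a `q` parameter.
function buildSearchUrl(endpoint: string, query: string): string {
  return `${endpoint}?q=${encodeURIComponent(query)}`;
}
```

The panel would only be shown when `findSearchService` returns a non-null endpoint; `buildSearchUrl("https://example.org/search", "report")` then yields `https://example.org/search?q=report`.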
from user-stories.
Addendum to the above comment. For running transcripts, you might wish to configure the UV to have the "text" content visible by default (that panel open, that dialogue visible, TBC), which is a requirement for this:
https://user-images.githubusercontent.com/1443575/44481517-de718e80-a63d-11e8-85df-87773cb12079.PNG
(the UV needs to provide similar functionality)
Note that "Filter" in that image above is search... in the sense that it's likely to be hitting a IIIF search service.
By extension and if that facility arrives later, you might want the UV configured to present the textual panel open by default for image-based content - so I can see the text of a page of a book, and easily select that text. The kind of thing that happens outside of the UV here:
https://wellcomelibrary.org/moh/report/b18251341/2
(that is just a rendering of the anno list that the UV has access to from the manifest)
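Presenting the text panel open by default, for AV transcripts and page text alike, would naturally be a configuration concern. A hypothetical sketch — the key names below are illustrative, not the UV's actual config schema:

```typescript
// Hypothetical configuration sketch: these key names are illustrative and
// are NOT the UV's real config schema.
interface TextPanelConfig {
  openByDefault: boolean;   // show the text/transcript panel on load
  followPlayback: boolean;  // AV: auto-scroll the running transcript
  selectableText: boolean;  // allow selecting and copying the text
}

// A book-like object where the publisher wants page text visible alongside
// the image, as in the MOH report example above.
const pageTextDefaults: TextPanelConfig = {
  openByDefault: true,
  followPlayback: false,
  selectableText: true,
};

// An AV object with a running transcript open and following playback.
const transcriptDefaults: TextPanelConfig = {
  openByDefault: true,
  followPlayback: true,
  selectableText: true,
};
```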
My issue... in this UI (and it's not necessarily the UI that we have to adopt) the textual content display occupies the same part of the UI as the search, and the presentation of search results shares a great deal with the presentation of the content that can be searched.

After all, that content and search results are identical things as far as the UV is concerned: they are both lists of textual annotations. In some contexts they are very clearly the same thing to the user too, where whole annotations are returned as search results. Where the search service generates dynamic annotations for search results on the fly, it's not so clear (the differences between NLW search and Wellcome search, for example). But the UV can't tell how the publisher generates search results from queries, or how big the annotation result text is likely to be (from a single word to an entire page transcript in one annotation).
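Concretely, under the Open Annotation shape used by Presentation 2 and Content Search 1.0, a canvas's transcript and a search-result list can both be reduced to one display type (`TextLine` and `toTextLine` are illustrative names, not UV code):

```typescript
// Both a transcript annotation list and a search-result annotation list can
// be normalised to one shape for the panel to render. Open Annotation /
// IIIF Presentation 2 field names assumed.
interface OAAnnotation {
  resource: { chars: string };  // the annotation's text body
  on: string;                   // canvas URI, possibly with a #xywh= fragment
}

interface TextLine {
  text: string;
  canvas: string;
  region: string | null;  // "x,y,w,h" on the canvas, if spatially targeted
}

function toTextLine(anno: OAAnnotation): TextLine {
  const hashIndex = anno.on.indexOf("#");
  const canvas = hashIndex === -1 ? anno.on : anno.on.slice(0, hashIndex);
  const fragment = hashIndex === -1 ? "" : anno.on.slice(hashIndex + 1);
  const match = /xywh=([\d,]+)/.exec(fragment);
  return {
    text: anno.resource.chars,
    canvas,
    region: match ? match[1] : null,
  };
}
```

Whether the list came from the manifest's annotation lists or from a search response, the panel would render the same `TextLine[]`; only the provenance differs.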
How important is consistent treatment of presentation of text, and search within the textual content, for image based media (where the text is a transcription from an image) and time-based media (where the text is a transcription of the audio or video*)?
Should these all be aspects of the same part of the viewer, as my comment above?
I think I feel that there should be a common search treatment for time-based media and spatial media, and that if the UV is to show the textual content of the object (#4), then that is somehow tied to search of that textual content. Which means there is a link between how the UV displays the running transcript of a radio or TV news broadcast, and how you search a digitised book. There is a common thread of UI; they feel part of the same cluster of actions.
*There is another possibility here that we can maybe ignore for now... transcription of spatially targeted text in video; that is, text visible in a scene rather than audible.
from user-stories.
Related Issues (20)
- View search results of generated text
- View highlighted search results
- View a modern interpretation of the generated text
- Accessible display of generated text
- Copy generated text
- Download generated text
- View options for text and/or image display
- Clickable links within the generated text
- View information from an external source
- View citation source
- Cite text
- Adjust text size
- Swap text colour and background
- View copyright status
- Handle long texts
- Transcript Source Formats
- Support text display at both page and document/sequence levels
- display attribution and license data of canvases in metadata panel
- Title attribute for thumbnails in contents pane
- Disable certain unused modules on build