
Comments (4)

tomcrane commented on June 24, 2024

(The above issue is superseded by UniversalViewer/universalviewer#548.)

The original covered some UI changes to zoom behaviour that could be made prior to UVCON; issue 548 has the remaining (and significant) UI changes.


tomcrane commented on June 24, 2024

I have prepared some comments; they are in the "Search" section of this document:

https://docs.google.com/document/d/1YJGne0JK4t_5ygC7wHvZrngw2TjTKHt8-Iq__L4Xklo/edit#heading=h.phxwno6lkxjv


tomcrane commented on June 24, 2024

What is the relationship between this user story, UniversalViewer/universalviewer#548, and user story #4?

Current state of play:
https://gist.github.com/tomcrane/8ca89f971d6571acab1016ba34c9dc85

Putting the mechanism of accessing Search (panel/popup etc) to one side for a moment, what does this icon do?

[image: the icon in question]

I think it reveals the non-painting content of the object, and the means to interact with it, which very often involves a search box. A search box is one way of interacting with that non-painting content; it's the only way the UV has at the moment, but in future the UV could have other ways, such as:

  1. viewing the running transcript of AV in real time
  2. the text content of canvases, for reading and copying (OCR, transcription)
  3. possible future complex interactions with the textual content of canvases, like UniversalViewer/universalviewer#424

The first phase is to make search happen here, because it belongs here and is a rearrangement of current UV functionality; it's only available when a search service is available. But that 'panel' becomes the home for other textual-content interactions as well, which aren't cleanly separable from search. So later it may be available whenever textual content is available, even if no formal IIIF search service is: you could still have an interaction that searches whatever text the UV can see directly.
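Roughly, in code, the phase-one availability rule might look like this. This is a minimal sketch assuming a IIIF Presentation 2.x manifest; the types and function names are illustrative, not the UV's actual internals:

```typescript
// Minimal sketch: show the panel when the manifest declares a IIIF
// Content Search service. Types and names are illustrative only.

interface Service {
  '@context'?: string;
  '@id': string;
  profile?: string;
}

interface Manifest {
  service?: Service | Service[];
  // ...other manifest properties omitted
}

function getSearchService(manifest: Manifest): Service | null {
  const services = ([] as Service[]).concat(manifest.service ?? []);
  return (
    services.find(
      (s) =>
        (s['@context'] ?? '').includes('iiif.io/api/search') ||
        (s.profile ?? '').includes('iiif.io/api/search')
    ) ?? null
  );
}

// Phase one: the panel is available only when getSearchService() finds one.
// Later: the panel could also open whenever textual annotations are present,
// even with no formal search service declared.
```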


tomcrane commented on June 24, 2024

Addendum to the above comment. For running transcripts, you might wish to configure the UV to have the "text" content visible by default (that panel open, that dialogue visible, TBC), which is a requirement for this:

https://user-images.githubusercontent.com/1443575/44481517-de718e80-a63d-11e8-85df-87773cb12079.PNG

(the UV needs to provide similar functionality.)

Note that "Filter" in the image above is search... in the sense that it's likely to be hitting a IIIF search service.

By extension, if that facility arrives later, you might want the UV configured to present the textual panel open by default for image-based content too, so I can see the text of a page of a book and easily select that text. This is the kind of thing that happens outside of the UV here:

https://wellcomelibrary.org/moh/report/b18251341/2

(that is just a rendering of the anno list that the UV has access to from the manifest)
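For illustration, pulling that page text out of a Presentation 2.x annotation list might look something like this. A sketch only, not the UV's code; it assumes text annotations carrying their text in `resource.chars`:

```typescript
// Sketch: fetch a canvas's otherContent annotation list (IIIF Presentation
// 2.x) and join its text, the same data the MoH page above is rendering.

interface AnnotationList {
  resources: Array<{
    resource: { chars?: string };
    on: string; // target canvas, possibly with an #xywh fragment
  }>;
}

async function canvasText(annotationListId: string): Promise<string> {
  const response = await fetch(annotationListId);
  const list: AnnotationList = await response.json();
  return list.resources
    .map((anno) => anno.resource.chars ?? '')
    .filter((chars) => chars.length > 0)
    .join(' ');
}
```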

My issue: in this UI (and it's not necessarily the UI that we have to adopt) the textual content display occupies the same part of the UI as the search, and the presentation of search results shares a great deal with the presentation of the content that can be searched. After all, that content and those search results are identical things as far as the UV is concerned: they are both lists of textual annotations.

In some contexts they are very clearly the same thing to the user too, where whole annotations are returned as search results. Where the search service generates dynamic annotations for search results on the fly, it's not so clear (the differences between NLW search and Wellcome search, for example). But the UV can't tell how the publisher generates search results from queries, or how big the text of a result annotation is likely to be (anything from a single word to an entire page transcript in one annotation).
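To make that concrete, here is a sketch assuming the IIIF Search API's `q` parameter (names are illustrative): a search query comes back as an annotation list in essentially the same shape as the content annotations, whatever the publisher does server-side, so one renderer could plausibly serve both.

```typescript
// Sketch: search results arrive as a list of textual annotations, the same
// shape the content-rendering code already consumes.

interface TextAnnotation {
  resource: { chars?: string };
  on: string; // target: canvas URI, possibly with a fragment
}

async function searchAnnotations(
  searchServiceId: string,
  query: string
): Promise<TextAnnotation[]> {
  const response = await fetch(`${searchServiceId}?q=${encodeURIComponent(query)}`);
  const list = await response.json();
  return (list.resources ?? []) as TextAnnotation[];
}
```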

How important is consistent treatment of the presentation of text, and of search within that text, for image-based media (where the text is a transcription from an image) and time-based media (where the text is a transcription of the audio or video*)?

Should these all be aspects of the same part of the viewer, as in my comment above?

I feel that there should be a common search treatment for time-based media and spatial media, and that if the UV is to show the textual content of the object (#4) then that is somehow tied to search, which is search of textual content. That means there is a link between how the UV displays the running transcript of a radio or TV news broadcast and how you search a digitised book. There is a common thread of UI; they feel part of the same cluster of actions.
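One way of seeing that common thread in code (hypothetical types, not a proposal for UV internals): a single normalizer for annotation targets, so spatially anchored and temporally anchored text can flow through the same panel.

```typescript
// Sketch: normalize an annotation target, whether it uses a spatial
// (#xywh=) or temporal (#t=) media fragment, into one Anchor type.

type Anchor =
  | { kind: 'spatial'; x: number; y: number; w: number; h: number }
  | { kind: 'temporal'; start: number; end: number }
  | { kind: 'whole' }; // whole canvas, no fragment

function parseAnchor(target: string): Anchor {
  const xywh = /#xywh=(\d+),(\d+),(\d+),(\d+)/.exec(target);
  if (xywh) {
    const [x, y, w, h] = xywh.slice(1).map(Number);
    return { kind: 'spatial', x, y, w, h };
  }
  const t = /#t=([\d.]+)(?:,([\d.]+))?/.exec(target);
  if (t) {
    return { kind: 'temporal', start: Number(t[1]), end: Number(t[2] ?? t[1]) };
  }
  return { kind: 'whole' };
}
```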

*There is another possibility here that we can maybe ignore for now... transcription of spatially targeted text in video; that is, text visible in a scene rather than audible.

