
cognitive-services-sample-data-files's Introduction

Cognitive Services sample data files

These sample files are used to build models, update models, run tests, and import data. They are samples of files you can generate yourself and use with the associated service.

LUIS

| Type of file | Purpose |
| --- | --- |
| Apps | Import a complete app into LUIS. These are small but fully defined, importable apps. See Create an app for more information. |
| Batch tests | Examples of batch tests with 1,000 labeled utterances. See Batch testing with a set of example utterances for more information. |
| List entities | Import large lists of items with synonyms. See Add list entities for more information. |
| Phrase lists | Improve entity prediction by listing words important to the app domain. See Use phrase lists to boost signal of word list for more information. |

QnA Maker

| Type of file | Purpose |
| --- | --- |
| Data sources | Knowledge bases (KBs) can add questions and answers from a variety of file types. See Data sources for QnA Maker content for more information. |

Computer Vision

| Type of file | Purpose |
| --- | --- |
| Images | Sample images for the Computer Vision SDK and REST API quickstarts. |
| Spatial analysis | Sample deployment manifest for the spatial analysis container. |

Text Analytics and Language

| Type of file | Purpose |
| --- | --- |
| Flow diagrams | Example flow diagrams for using Power Automate with Text Analytics. |
| Sample data | Example data files for Text Analytics tutorials. |
| Email app | Example data files for the custom text classification quickstart. |
| Movie summaries | Example data files for the custom text classification quickstart. |
| Web of Science | Example data files for the custom text classification quickstart. |
| Example loan agreements | Example data files for the custom NER quickstart. |
| Example customer reviews | Example files for the custom sentiment analysis quickstart. |
| Example summarization articles | Example data files for the custom summarization quickstart. |

Azure OpenAI

| Type of file | Purpose |
| --- | --- |
| Example benefits document | Example document for Azure OpenAI On Your Data. |

cognitive-services-sample-data-files's People

Contributors

aahill, baherabdullah, chrishmsft, diberry, erhopf, eric-urban, harishkrishnav, jboback, kbrowne8, laujan, leareai, magrefaat, microsoftopensource, mrbullwinkle, msftgits, naghazal, patrickfarley, roy-har, sahithikkss-zz, srasool, teddmcadams, v-jaswel, wiazur, yungshinlintw


cognitive-services-sample-data-files's Issues

criteria for frameId

I completed a spatial analysis deployment by referencing the following doc:
https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest-NonASE.json
For the video URL and polygon settings, I referred to the following link:
https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json

An output stream of JSON messages was generated and sent to my Azure Blob storage. I was trying to visualize the bounding-box coordinates from the output on the recorded video without using the .debug operation. The output, especially the 'detections' part, can be visualized as a video only if frameId is captured at regular time intervals.

Here are the problems I ran into.

  1. Looking at the events in the JSON, the frameId range used for each event type was different:
    ('personDistanceEvent' : 1 to 820,
    'personCountEvent' : 1 to 782,
    'personZoneDwellTimeEvent' : 1375 to 9066,
    'personZoneEnterExitEvent' : 1389 to 4285,
    'personLineEvent' : 1756, 2443,
    'cameraCalibrationEvent' : 7784, 8143)

  2. Even though I set the 'trigger' to 'interval', the output was not truly periodic. The timestamps were not spaced uniformly, and neither were the frameId values.
    This is part of the timestamps of personCountEvent:
    (screenshot omitted)
    This is part of the frameId values of personCountEvent. Sometimes the interval is 1, sometimes 2, sometimes more:
    (screenshot omitted)

Here are my questions.

  1. I would like to know whether there are any criteria for how frames are selected for each event type.
  2. Is there another method to display the detection results on the video without using the .debug operation?
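As an aside, the uneven per-event frameId gaps described above can be measured with a short script. This is a minimal sketch, assuming each output message carries a `sourceInfo.frameId` field and an `events` array whose items have a `type` field (field names inferred from typical spatial analysis output; adjust them to your actual JSON):

```python
from collections import defaultdict

def frame_id_gaps(messages):
    """Group spatial-analysis output messages by event type and return,
    for each type, the gaps between consecutive frameId values."""
    frames = defaultdict(list)
    for msg in messages:
        frame_id = int(msg["sourceInfo"]["frameId"])
        for event in msg.get("events", []):
            frames[event["type"]].append(frame_id)
    # Sort the frame ids per type, then diff adjacent pairs.
    return {
        etype: [b - a for a, b in zip(ids, ids[1:])]
        for etype, ids in ((t, sorted(v)) for t, v in frames.items())
    }
```

Running this over the Blob-stored messages makes it easy to see which event types fire at irregular frame intervals.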

Help in face api

I am using the Face API to compare two images in Python via the Verify operation. I cannot get it to work with files stored in my Google Colab environment. Any help?
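For reference, a minimal sketch of the detect-then-verify flow against the Face REST API, using only the Python standard library. The endpoint and key placeholders are assumptions you must replace with your own resource values; files saved in Colab can be passed by their local path (e.g. `/content/photo1.jpg`):

```python
import json
import urllib.request

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder: your Face resource endpoint
KEY = "<your-face-api-key>"  # placeholder: your subscription key

def _post(path, query, headers, body):
    # Small helper around urllib for authenticated POSTs to the Face API.
    url = f"{ENDPOINT}{path}" + (f"?{query}" if query else "")
    req = urllib.request.Request(
        url, data=body, method="POST",
        headers={"Ocp-Apim-Subscription-Key": KEY, **headers},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def detect_face_id(image_path):
    """Upload a local image (e.g. a file saved in Colab) and return its faceId."""
    with open(image_path, "rb") as f:
        faces = _post("/face/v1.0/detect", "returnFaceId=true",
                      {"Content-Type": "application/octet-stream"}, f.read())
    if not faces:
        raise ValueError(f"no face detected in {image_path}")
    return faces[0]["faceId"]

def verify_body(face_id1, face_id2):
    """Build the JSON request body for the /verify call."""
    return json.dumps({"faceId1": face_id1, "faceId2": face_id2}).encode()

def verify(face_id1, face_id2):
    """Returns a dict with 'isIdentical' and 'confidence' fields."""
    return _post("/face/v1.0/verify", "",
                 {"Content-Type": "application/json"},
                 verify_body(face_id1, face_id2))
```

Usage: `verify(detect_face_id("/content/photo1.jpg"), detect_face_id("/content/photo2.jpg"))`. Note that faceIds returned by detect expire after a short time, so detect and verify should run close together.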

QnA Maker many:many samples

Would it be possible for someone to build a template for QnA Maker that works for many-questions-to-many-answers relationships? Are table formats parsed correctly? Alternatively, documentation on what the parser looks for to support many:many questions would help.

The sample label JSON file has a format issue

The sample label JSON file has a format issue. When I import it, it fails with: "InvalidArgument: Invalid project json object. Cannot deserialize the current JSON array (e.g. [1,2,3]) into type 'ExportedDocumentClass' because the type requires a JSON object (e.g. {"name":"value"}) to deserialize correctly. To fix this error either change the JSON to a JSON object (e.g. {"name":"value"}) or change the deserialized type to an array or a type that implements a collection interface (e.g. ICollection, IList) like List that can be deserialized from a JSON array. JsonArrayAttribute can also be added to the type to force it to deserialize from a JSON array. Path 'assets.documents[0].class', line 1, position 352."

I noticed someone raised the same issue a few days ago, and I don't think it has been resolved yet.
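Judging from the error path (`assets.documents[0].class` is a JSON array where an object is expected), a workaround is to unwrap the array before importing. This sketch assumes the importer expects a single object in each document's `class` field (an assumption based only on the error message, not on the documented schema):

```python
def fix_class_fields(project):
    """Walk assets.documents and, wherever 'class' is a JSON array,
    replace it with its first element (or None if empty).
    Returns the number of documents changed."""
    fixed = 0
    for doc in project.get("assets", {}).get("documents", []):
        cls = doc.get("class")
        if isinstance(cls, list):
            doc["class"] = cls[0] if cls else None
            fixed += 1
    return fixed
```

Load the label file with `json.load`, run `fix_class_fields`, and write it back with `json.dump` before retrying the import.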

Duplicates in sample for multi-label classification

There are a few duplicate elements in the .json file containing the labels (movieLabels.json) for multi-label classification in this tutorial: https://learn.microsoft.com/en-us/azure/cognitive-services/language-service/custom-text-classification/quickstart?tabs=multi-classification&pivots=language-studio#upload-sample-data-to-blob-container. If you follow it to the letter using Language Studio, it will throw an error mentioning these duplicates.


If you delete the duplicates, which are the final 11 elements in the list, it works.
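Rather than deleting the trailing entries by hand, the duplicates can be stripped programmatically. A minimal sketch, assuming the labels file follows the exported-project layout with an `assets.documents` array and that each document is uniquely identified by a `location` field (both assumptions; adjust the key to whatever uniquely names a document in movieLabels.json):

```python
def dedupe_documents(project):
    """Drop duplicate entries in assets.documents, keyed by each
    document's 'location' field, keeping the first occurrence.
    Returns the number of entries removed."""
    docs = project["assets"]["documents"]
    seen = set()
    unique = []
    for doc in docs:
        key = doc.get("location")
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    project["assets"]["documents"] = unique
    return len(docs) - len(unique)
```

Load movieLabels.json with `json.load`, run `dedupe_documents`, and write the cleaned project back before uploading it to the blob container.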

Importing multi-turn.docx does not result in multi-turn behavior from QnA Bot

When I imported, trained, and deployed the document here:

https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/multi-turn.docx

The resulting chat was not multi-turn. None of the multi-turn buttons were present.

It wasn't clear how the document was structured. Is there a Word template for formatting the bot headings for multi-turn? I wasn't sure how closely I needed to mimic the formatting in this document to get multi-turn behavior. Since it wasn't working for me with the standard Word Heading 1, Heading 2, etc., I loaded this document to see whether it would demonstrate multi-turn conversations, and it did not.

My bot was created using the steps from this Quickstart: https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/Quickstarts/create-publish-knowledge-base#create-a-bot
