elvis-project / vis-framework
Thoroughly modern symbolic musical data analysis suite.
Home Page: http://elvisproject.ca/
I want to have multiple voice pairs in a single LilyPond score. I'll need to revise vis_these_parts() and everything that calls it.
It's a good question, and we both need to answer it, but GitHub's issue tracker is not so good.
For a given piece and fixed n, we want a method which will:
I'm not sure whether the "graph" option works at all as it stands, but either way, we need to have a purpose-built form for the output from matplotlib. Search for "matplotlib pyqt" to find help.
Users should be able to save their settings and load them automatically. Also, when you "reset" V_I_S, the settings shouldn't be reset; the settings object could be made independent of it.
Currently very clumsy.
Something is wrong with this file in the test_corpus, and it causes vis to fail. I tried in the built-in music21 version, and the same thing happens. Use .show() and look at the end of the file--that's not right, is it?!
Voice crossing is currently shown with positive intervals, and it must be modified to use negative intervals. Considering the size of vis, this is a large problem that will involve minor revision of many methods, major revision of many unit tests, and the addition of more unit tests.
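A minimal sketch of the convention being asked for, in plain Python. This is not the real vis_these_parts() code; the `diatonic_interval` helper and the (letter, octave) note representation are assumptions for illustration. The idea is simply that when the nominally upper voice sounds below the lower one, the interval number comes out negative.

```python
LETTERS = "CDEFGAB"

def diatonic_interval(upper, lower):
    """Signed diatonic interval number between two notes given as
    (letter, octave) pairs, e.g. ('C', 4). Positive when the upper
    voice really is above; negative when the voices have crossed.
    Hypothetical helper, not part of vis itself."""
    def steps(note):
        letter, octave = note
        return octave * 7 + LETTERS.index(letter)
    diff = steps(upper) - steps(lower)
    if diff >= 0:
        return diff + 1   # unison = 1, third = 3, ...
    return diff - 1       # crossed voices: negative interval number

# E4 over C4 is a third (3); A3 under C4 is a crossed third (-3).
```

Propagating that sign through every method and unit test is what makes the change large, but the underlying rule is only this one negation.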
I'm not sure if this is a problem with vis or with music21... or with music21 that vis can overcome... but the "ABC" notation files seem to be very different from other formats when imported to music21. Figure out what's going on and try to support analyzing those files.
Note to Jamie: for the files you modify, please add your name to the copyright notice.
We need to import MEI files to music21. We can use the Python bindings of libmei, but it's still going to take significant work.
First we need to ask ourselves whether it's worth doing this in a music21-official way. Our LilyPond output module isn't intended for upstreaming because (1) upstream is already working on improved LilyPond output, and (2) the long-term plan is to use our module for highly-specialized, unconventional output. But MEI-->music21 is firstly something that doesn't exist yet, secondly something important for the long term, and thirdly extremely complicated.
I guess the real question is whether we should try to collaborate with upstream so that our "just enough to import MEI from the JRP" module can be extended by them later.
Figure out how to put it into the test_corpus directory.
Python convention (PEP 8) is to write variable and method names with underscores and class names in CapWords, but I wrote them in camel case.
We want a method which will, for each n-gram in a piece, compute the relative (ratio of) frequencies of an n-gram and its retrograde and take the average of all of these relative frequencies, in hopes that some interesting trends will be observed.
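A sketch of that computation, assuming n-grams are represented as plain tuples (the actual vis representation may differ) and skipping n-grams whose retrograde never occurs:

```python
from collections import Counter

def retrograde_ratio(ngrams):
    """Average, over n-gram types, of freq(g) / freq(retrograde(g)).
    n-grams are plain tuples here; n-grams whose retrograde does not
    occur in the piece are skipped. Illustrative only."""
    freq = Counter(ngrams)
    ratios = []
    for gram, count in freq.items():
        retro = tuple(reversed(gram))
        if retro in freq:
            ratios.append(count / freq[retro])
    return sum(ratios) / len(ratios) if ratios else None
```

For example, with one (3, 5) and two (5, 3) occurrences, the ratios are 1/2 and 2, averaging 1.25; a piece where every n-gram matched its retrograde's frequency would average 1.0.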
We want a method which will take two different pieces, find all the n-grams in each (probably heeding quality), find the difference in the frequency of each n-gram and add all these up, giving a "metric" for distinguishing two pieces, in the hopes that pieces in the same key/mode will have smaller "differences".
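That description amounts to an L1 distance over the two pieces' n-gram frequency distributions. A sketch, under the assumption that relative (not raw) frequencies are what we want, so pieces of different lengths remain comparable:

```python
from collections import Counter

def ngram_difference(ngrams_a, ngrams_b):
    """Sum of absolute differences in relative n-gram frequency between
    two pieces -- an L1 distance over their n-gram distributions.
    A sketch; vis might weight or normalise differently."""
    freq_a, freq_b = Counter(ngrams_a), Counter(ngrams_b)
    total_a, total_b = sum(freq_a.values()), sum(freq_b.values())
    keys = set(freq_a) | set(freq_b)
    return sum(abs(freq_a[g] / total_a - freq_b[g] / total_b) for g in keys)
```

Identical distributions score 0, completely disjoint n-gram vocabularies score 2, so smaller values would support the same-key/mode hypothesis.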
For robustness, correctness, and future-proof-ness, I think we should include metadata in our PDF-format results output. This isn't trivial.
It's not possible to do this through LilyPond (see the discussion starting here). We'll have to use another external program.
A fairly straightforward problem: because of the vis_these_parts() algorithm, every interval is counted once for each value of 'n' used for n-grams.
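A toy illustration of how the double counting arises and what the fix looks like. The list-of-interval-names input shape is an assumption; the point is just that the interval tally must happen in its own single pass, outside the per-n loop:

```python
from collections import Counter

def count_intervals_once(intervals, ns=(2, 3, 4)):
    """Shows the bug and the fix. Tallying intervals inside the n-gram
    loop counts each interval len(ns) times; the fix is one separate
    pass. Illustrative sketch, not the vis code."""
    buggy = Counter()
    for n in ns:                  # how the over-counting happens
        for ivl in intervals:
            buggy[ivl] += 1
    correct = Counter(intervals)  # one pass, one tally per interval
    return buggy, correct
```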
Two people isn't enough to work on two interfaces that do the same thing. The GUI has a ways to go before it's reliable, and especially before it's useful for testing and troubleshooting, but I feel it's better for us to focus on enhancing and improving the GUI rather than both the GUI and TUI--or worse, having a GUI "for non-computer people" and a TUI "for those who can deal with it."
Thoughts? What do we need to do to get there?
As per Issue #15.
The user interface should never crash, which means we need more robust input/output methods and unit testing. Consider using Okaara, which is packaged for Fedora, if nothing else.
We should eventually make an independent user guide of some sort, to explain what our program does and so on.
We can't rely on attaching our score annotations to a particular voice because that voice may not have a note at the particular offset we need to attach a label. Therefore, we need a purpose-built LilyPond context for annotations. We'll be able to move this between any staves we wish, and more importantly we'll be able to use any offset we wish.
NGrams must:
Looks like GitHub automatically closes a milestone when all the Issues are complete. That's a problem for me, because I haven't necessarily added all the issues for a milestone before I complete the existing ones. This Issue exists to keep the milestone from closing prematurely.
Produce summary results in a PDF with LilyPond. This will collect the most frequent n-grams and intervals, showing the pieces in which they are the most popular. This is only relevant after analyzing multiple pieces/movements.
-sort by frequency of interval/n-gram
-pay attention to whether to print interval quality
-pay attention to whether to use simple or compound intervals
-for n-grams, only output the values of n requested
Make it work. Lots of work to do... it's barely started.
There are some pieces that I have already chosen for large-section unit testing. We need to finish these tests, so we know our program overall produces accurate results.
When we get a list of integers from theSettings, we'll assume that each is a valid value for n for which to search, and be able to search both parts for all the values of n at once.
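A sketch of collecting n-grams for every requested n in one pass over the events, rather than re-walking the parts once per n. The generic `events` sequence stands in for whatever interval/offset records vis actually produces:

```python
def all_ngrams(events, ns):
    """Collect n-grams for every requested value of n in a single pass.
    'events' is any sequence; vis would use interval records.
    Illustrative only."""
    found = {n: [] for n in ns}
    for i in range(len(events)):
        for n in ns:
            if i + n <= len(events):
                found[n].append(tuple(events[i:i + n]))
    return found
```

Each position in the piece is visited once, and every window length is taken from it, so adding more values of n costs almost nothing extra.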
Also uncomment the tests for it and make sure they work.
In the score-processing portion, I want to catch all exceptions. They should be handled intelligently where possible, but at minimum the program should be able to make a note of which scores were not processed/analyzed, and simply move on to the next score. The point is, I don't want to get 400 scores into a 600-score analysis project, then realize one of the scores doesn't work for some reason, and lose all the results because of a poorly-handled crash.
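The shape of that loop, sketched with a stand-in `analyze` callable (the real per-score analysis function isn't shown in this issue): one failure is recorded and the batch continues.

```python
def analyze_batch(scores, analyze):
    """Run 'analyze' on every (name, score) pair, never letting one
    failure abort the batch. Returns (results, failures).
    'analyze' is a placeholder for whatever vis does per score."""
    results, failures = [], []
    for name, score in scores:
        try:
            results.append((name, analyze(score)))
        except Exception as exc:  # deliberate catch-all, per the issue
            failures.append((name, repr(exc)))
    return results, failures
```

The `failures` list is exactly the "note of which scores were not processed" the issue asks for, and it can be reported at the end of the run.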
Do we even really need NonsensicalInputError? Maybe we could use other, default exceptions? Maybe we could use our own, more specific exceptions?
vis should be able to call MrJob by itself.
Find some LilyPond way to dynamically resize the triangles, so the n-gram digits fit inside the triangle shapes.
As part of Issue #27, for Milestone 2 we have to ensure that all the pieces in the test_corpus directory do not produce errors.
Must be able to parse and verify an argument to 'set lookForTheseNs'. Maybe other things.
"The lowest sounding note at all times" must be a voice against which to compare. Write this into vis_these_parts(), depending on a VIS_Settings() switch.
Some should-be-valid files fail to import, and some fail to be imported correctly. Sometimes vis can solve this and sometimes it can't. There are also certainly many bugs that remain to be found, and many of them can be worked out by testing against a much larger range of pieces. In particular, we need to test against the Palestrina corpus. We also need to figure out why some pieces take much longer than others, and try to fix it. It'll be better to do this on a more powerful computer after MrElvis is implemented.
We'll use this Issue to collect all the Issues that arise out of larger-corpus testing.
I don't fully understand the requirements of n-gram inversion, so I need to find out, then implement it.
Find a better way to provide in-program help, and to make it translatable, just in case.
We could use a "show" command in the UI to easily access things in the Vertical_Interval_Statistics object that we have. This is complicated by the fact our V_I_S object are currently destroyed at the end of analyze_this().
We have to be able to process multiple files with one database. This is related to Issue #7, which will either help or complicate the issue in the short term.
Users should be able to give us a directory pathname, and we'll use the same settings for all of the files in that directory. We'll also compile the statistics for all scores into one Vertical_Interval_Statistics() instance.
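The directory side of that is a small amount of code; the extension list below is a guess at what music21 would accept and is not taken from vis itself:

```python
import os

def scores_in_directory(path, extensions=('.krn', '.xml', '.mid')):
    """Collect every score file directly under 'path', so one settings
    object and one Vertical_Interval_Statistics instance can drive the
    whole batch. The extension filter is an assumption."""
    return sorted(
        os.path.join(path, name)
        for name in os.listdir(path)
        if name.lower().endswith(extensions)
    )
```

Each returned path would then be analyzed with the shared settings, accumulating into a single statistics instance.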
Currently called propertySet() and propertyGet() but it makes more sense to be set_setting() and get_setting(). Must also update the unit tests.
So people can decide for themselves what their indentation should look like.
Currently, you can only analyze "all" voices/parts in a score.