Comments (6)
Example:
Result from test case 1 with current numpy/scipy (1.12.0, 0.19.0)
[[-49.02129835 -69.14604136]
[-47.79420165 -74.84795598]
[-47.44534636 -72.89867019]
...,
[-46.38570684 -77.84945309]
[-46.95308516 -78.21196059]
[-53.87246341 -75.88541763]]
Result from old numpy/scipy:
[[-49.03116378 -69.13140778]
[-47.78351041 -74.82051528]
[-47.43848092 -72.93956067]
...,
[-46.39025455 -77.83772359]
[-46.91766086 -78.20293629]
[-53.86249229 -75.91799936]]
from sms-tools.
Perhaps related (not sure): in week 3 I had some tests where most of the samples in the final array were identical to the test case, but the odd value differed in its first decimal place. What was strange is that most of the values were correct, and I'm not doing anything specific per sample, so it felt like a library difference. I use automated tests to validate my results, and I had to relax the tolerance to get them to pass. I'll see whether the same thing happens in Week 4 shortly.
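To illustrate the "relax the tolerance" workaround: a minimal sketch using `np.allclose`, with hypothetical values mirroring the kind of first-decimal-place discrepancy described above (not the actual week 3 data).

```python
import numpy as np

# Hypothetical arrays that agree except for isolated values
# differing in the first decimal place.
mine = np.array([-263.744744041, -261.270455034, 51.1260500153])
model = np.array([-263.238935638, -261.335834625, 51.1260500153])

# The default tolerances (rtol=1e-5, atol=1e-8) reject the comparison...
assert not np.allclose(mine, model)

# ...but relaxing the absolute tolerance to ~1 dB lets the test pass.
assert np.allclose(mine, model, atol=1.0)
```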
If it helps, for A3Part1 I calculate the first 10 samples of test case 2 as:
000 = {float64} -274.888101294
001 = {float64} -265.470359476
002 = {float64} -263.744744041
003 = {float64} 51.1260500153
004 = {float64} -260.647437918
005 = {float64} -265.407851699
006 = {float64} -264.019595082
007 = {float64} -261.270455034
008 = {float64} 49.1878497552
009 = {float64} -273.164812274
Whereas the model answer gives:
000 = {float64} -274.888101294
001 = {float64} -265.470359476
002 = {float64} -263.238935638
003 = {float64} 51.1260500153
004 = {float64} -260.647437918
005 = {float64} -265.407851699
006 = {float64} -264.019595082
007 = {float64} -261.335834625
008 = {float64} 49.1878497552
009 = {float64} -273.164812274
All samples are identical except for samples 002 and 007, which differ in the first decimal place. These samples do not correspond to the 3rd and 8th bins, which match the input frequencies, but they are directly adjacent to them. Yet the two prominent bins are identical, so there's no leakage to account for the error, and it appears on only one side of each prominent bin. I can't explain this. Could it be an unpickling issue?
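For what it's worth, a quick way to confirm exactly which indices disagree is `np.isclose` over the two arrays (values copied from the listings above):

```python
import numpy as np

# First 10 samples: my run vs. the model answer (from the listings above).
mine = np.array([-274.888101294, -265.470359476, -263.744744041,
                 51.1260500153, -260.647437918, -265.407851699,
                 -264.019595082, -261.270455034, 49.1878497552,
                 -273.164812274])
model = np.array([-274.888101294, -265.470359476, -263.238935638,
                  51.1260500153, -260.647437918, -265.407851699,
                  -264.019595082, -261.335834625, 49.1878497552,
                  -273.164812274])

# Indices where the two runs disagree beyond a tight tolerance.
bad = np.where(~np.isclose(mine, model, atol=1e-6))[0]
print(bad)  # -> [2 7]
```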
$ python --version
Python 2.7.13
$ pip freeze
# ...
Cython==0.25.2
matplotlib==2.0.0
numpy==1.12.1
scipy==0.19.0
I'm using Mac OS X 10.11.6 (not officially supported by the course, I understand).
I'm running Ubuntu as my main OS, but I generally use Conda to manage my Python environments (I usually use Python 3), so when I installed via Conda I got the most recent numpy/scipy. However, I can choose versions, and if I choose the earlier versions then my code passes.
It's not an unpickling issue, because I get different results for the calculations with the two different versions of scipy while the test data stay the same. It's most likely a change in scipy, but I don't have the time to work out exactly where. I don't think the numpy version matters, but I can't be sure, as changing the scipy version (in Conda at least) requires a change in numpy version.
0.19.0 and 1.12.1 were the versions I was using when I couldn't get the code to pass, so it will be interesting to see whether you get the same result.
As the submissions appear to run the code locally for the tests, the versions you have installed affect the marking: you can submit the same code twice and get different results depending on the versions of scipy/numpy you have installed.
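Since the grader runs the submitted code with whatever versions it happens to have installed, it may help to log the versions actually in use. A small diagnostic sketch (standard module attributes only):

```python
import numpy as np
import scipy

# Log the library versions the current interpreter is actually using,
# so the grading environment can be compared against the local one.
print("numpy", np.__version__)
print("scipy", scipy.__version__)

# To reproduce a specific environment locally, pin the versions, e.g.:
#   conda install numpy=1.11.0 scipy=0.17.0
```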
I downgraded to numpy==1.11.0 and scipy==0.17.0; however, it didn't affect my results:
My calculation:
000 = {float64} -274.888101294
001 = {float64} -265.470359476
002 = {float64} -263.744744041
003 = {float64} 51.1260500153
004 = {float64} -260.647437918
005 = {float64} -265.407851699
006 = {float64} -264.019595082
007 = {float64} -261.270455034
008 = {float64} 49.1878497552
009 = {float64} -273.164812274
Model answer (same as earlier):
000 = {float64} -274.888101294
001 = {float64} -265.470359476
002 = {float64} -263.238935638
003 = {float64} 51.1260500153
004 = {float64} -260.647437918
005 = {float64} -265.407851699
006 = {float64} -264.019595082
007 = {float64} -261.335834625
008 = {float64} 49.1878497552
009 = {float64} -273.164812274
I didn't have any issues with Assignment 3, so I'm not surprised those results didn't change. Assignment 4 is the first place where I've found the differences matter - in particular parts 2 and 3.
Part 2 results with current scipy:
(67.540185986036676, 86.357263422471334)
(89.510506656299285, 306.15434282915095)
(74.54538266065947, 92.905493637786222)
with old scipy:
(67.57748352378475, 304.6840486621561)
(89.510506656299285, 306.18696743762951)
(74.631476225366825, 304.26970909967974)
Submitting with the current scipy gives a partial pass for this part.