
acoular's Introduction


Acoular

Acoular is a Python module for acoustic beamforming that is distributed under the new BSD license.

It is aimed at applications in acoustic testing. Multichannel data recorded by a microphone array can be processed and analyzed in order to generate mappings of sound source distributions. The maps (acoustic photographs) can then be used to locate sources of interest and to characterize them using their spectra.

Features

  • frequency domain beamforming algorithms: delay & sum, Capon (adaptive), MUSIC, functional beamforming, eigenvalue beamforming
  • frequency domain deconvolution algorithms: DAMAS, DAMAS+, Clean, CleanSC, orthogonal deconvolution
  • frequency domain inverse methods: CMF (covariance matrix fitting), general inverse beamforming, SODIX
  • time domain methods: delay & sum beamforming, CleanT deconvolution
  • time domain methods applicable for moving sources with arbitrary trajectories (linear, circular, arbitrarily curved in 3D)
  • frequency domain methods for rotating sources via virtual array rotation for arbitrary arrays and with different interpolation techniques
  • 1D, 2D and 3D mapping grids for all methods
  • gridless option for orthogonal deconvolution
  • four different built-in steering vector formulations
  • arbitrary stationary background flow can be considered for all methods
  • efficient cross spectral matrix computation
  • flexible modular time domain processing: n-th octave band filters, fast, slow, and impulse weighting, A-, C-, and Z-weighting, filter bank, zero delay filters
  • time domain simulation of array microphone signals from fixed and arbitrarily moving sources in arbitrary flow
  • fully object-oriented interface
  • lazy evaluation: processing blocks can be set up at any time, while (expensive) computations are only performed when needed
  • intelligent and transparent caching: computed results are automatically saved and loaded on the next run to avoid unnecessary re-computation
  • parallel (multithreaded) implementation with Numba for most algorithms
  • easily extendable with new algorithms

License

Acoular is licensed under the BSD 3-clause license. See LICENSE.

Citing

If you use Acoular for academic work, please consider citing both our publication:

Sarradj, E., & Herold, G. (2017).
A Python framework for microphone array data processing.
Applied Acoustics, 116, 50–58.
https://doi.org/10.1016/j.apacoust.2016.09.015

and our software:

Sarradj, E., Herold, G., Kujawski, A., Jekosch, S., Pelling, A. J. R., Czuchaj, M., Gensch, T., & Oertwig, S.
Acoular – Acoustic testing and source mapping software.
Zenodo. https://zenodo.org/doi/10.5281/zenodo.3690794

Dependencies

Acoular runs under Linux, Windows and macOS and requires the Numpy, Scipy, Traits, scikit-learn, pytables and Numba packages. Matplotlib is needed for some of the examples.

If you want to use input from a soundcard, you will also need to install the sounddevice package. Some solvers for the CMF method need Pylops.

Installation

Acoular can be installed via conda, which is included in the Anaconda Python distribution. It is recommended to install into a dedicated conda environment. After activating this environment, run

conda install -c acoular acoular

This will install Acoular in your Anaconda Python environment and make the Acoular library available from Python. In addition, this will install all dependencies (those other packages mentioned above) if they are not already present on your system.

A second option is to install Acoular via pip. It is recommended to use a dedicated virtual environment and then run

pip install acoular

For more detailed installation instructions, see the documentation.

Documentation and help

Documentation is available here with a getting started section and examples.

The Acoular blog contains some tutorials.

If you discover problems with the Acoular software, please report them using the issue tracker on GitHub. Please use the Acoular discussions forum for practical questions, discussions, and demos.

Example

This example reads data from 64 microphone channels and computes a beamforming map for the 8 kHz third octave band:

from os import path
import acoular
from matplotlib.pylab import figure, plot, axis, imshow, colorbar, show

# this file contains the microphone coordinates
micgeofile = path.join(path.split(acoular.__file__)[0],'xml','array_64.xml')
# set up object managing the microphone coordinates
mg = acoular.MicGeom( from_file=micgeofile )
# set up object managing the microphone array data (usually from measurement)
ts = acoular.TimeSamples( name='three_sources.h5' )
# set up object managing the cross spectral matrix computation
ps = acoular.PowerSpectra( time_data=ts, block_size=128, window='Hanning' )
# set up object managing the mapping grid
rg = acoular.RectGrid( x_min=-0.2, x_max=0.2, y_min=-0.2, y_max=0.2, z=0.3, \
increment=0.01 )
# set up the steering vector; this implicitly also contains the standard quiescent
# environment with the standard speed of sound
st = acoular.SteeringVector( grid = rg, mics=mg )
# set up the object managing the delay & sum beamformer
bb = acoular.BeamformerBase( freq_data=ps, steer=st )
# request the result in the 8 kHz third octave band from the appropriate FFT lines
# this starts the actual computation (data intake, FFT, Welch CSM, beamforming)
pm = bb.synthetic( 8000, 3 )
# compute the sound pressure level
Lm = acoular.L_p( pm )
# plot the map
imshow( Lm.T, origin='lower', vmin=Lm.max()-10, extent=rg.extend(), \
interpolation='bicubic')
colorbar()
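As an optional sanity check of the setup, the microphone arrangement can be drawn in a second figure before calling show(). This is a small addition to the example above; it assumes that mg.mpos holds the 3 x N array of microphone coordinates, as also used in the issue reports further below.

# optional: plot the microphone arrangement in a second figure
figure(2)
plot( mg.mpos[0], mg.mpos[1], 'o' )
axis('equal')
# display the sound map and the array layout
show()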

Resulting source map (figure)


acoular's Issues

Not Py3 compatible

More analysis and scientific packages, especially new packages, are becoming Py3 only. It would be nice to be able to use acoular with them.

One path forward would be to have 2.7/3.x dual compatibility via the tools at python-future.org.

Wrong coordinates in supplied UMA16 mic layout file

Hi and thank you for acoular

I did some simple tests and I suspect the UMA16 mic layout file minidsp_uma16.xml,
which comes with acoular has the wrong coordinates or the wrong coordinate system orientation.

I am assuming that the sound pressure map is supposed to show the sources as seen when looking from the microphone array onto the sources, like a camera (A below)?
However, when testing acoular with the UMA16, I get a result which looks like a map of the sources as seen when looking into the array (B below).

(A) Me -> Array -> Src
(B) Array <- Src <- Me

This is how I tested the UMA16 array.

The UMA16 looks like a 3x3 tic-tac-toe game with a mic placed on each crossing of lines.

 - - -
| | | |
 - - -
| | | |
 - - -
| | | |
 - - -

For my test I placed a small speaker at the top-right corner of the UMA16 array
(looking direction as in (A): Me -> Array -> Src), and I drove it with a 3.5 kHz sine wave.
When I run acoular (code below), the source appears as an image in the top-left corner.

 Test         Estimate
 - - -         - - - 
| | |O|       |O| | |
 - - -         - - - 
| | | |       | | | |
 - - -         - - - 
| | | |       | | | |
 - - -         - - - 

I can only explain this result, if for some reason, acoular implements the mapping as in (B): Array <- Src <- Me.
Is this how it is supposed to work?

To clarify things, I looked into the minidsp_uma16.xml.
Assuming the coordinates in the xml-file are in an XY coordinate system,
where X(left->right); Y(bottom->up) the arrangement of the microphones is as follows:

8-7-10-9
| | |  |
6-5-12-11
| | |  |
4-3-14-13
| | |  |
2-1-16-15

On the physical array this corresponds to looking into the array as in (B) Array <- Src <- Me.
When I use the array, however, I rotate the array 180 deg around the vertical axis
to point it to the source. This corresponds to configuration (A): Me -> Array -> Src.
When I look from behind the array, I see the microphones arranged like this:

9--10-7-8
|  |  | |
11-12-5-6
|  |  | |
13-14-3-4
|  |  | |
15-16-1-2

Going back from the physical world to the world of arbitrary coordinate system choices,
this physical situation can still be described by the original XY coordinate system,
considering it being rotated 180 deg around the vertical Y-axis.
The coordinates system now has an inverse x-axis: X(left<-right); Y(bottom->up).

However, in an actual test, this produced the flipped image of the sound source as described above.
Flipping the microphone position matrix left-to-right solves the problem.

So this makes me think that the UMA16 mic layout file minidsp_uma16.xml has mistaken mic coordinates.

Would you please clarify this point?

My code is below:

# Note: filenames and paths may need re-adjustment
import acoular
import tables
import matplotlib.pyplot as plt

temp_h5_datafile = 'uma16_test_quadrant1.h5'
micgeofile = 'minidsp_uma16.xml'
mg = acoular.MicGeom(from_file=micgeofile)
ts = acoular.TimeSamples(name=temp_h5_datafile)
ps = acoular.PowerSpectra(time_data=ts, block_size=128, window='Hamming')
rg = acoular.RectGrid(x_min=-0.2, x_max=0.2, y_min=-0.2, y_max=0.2, z=0.12,
                      increment=0.01)
st = acoular.SteeringVector(grid=rg, mics=mg)
bb = acoular.BeamformerBase(freq_data=ps, steer=st)
pm = bb.synthetic(3500, 3)
Lm = acoular.L_p(pm)

# Generate the figure
plt.imshow(Lm.T, origin='lower', vmin=Lm.max()-10, extent=rg.extend(),
       interpolation='bicubic')
plt.colorbar()
plt.figure(1)

plt.plot(mg.mpos[0], mg.mpos[1], 'o')
# for i in range(0, mg.mpos[0].size):
for i in range(0,5):  # label only the first 5 of the points
    x = mg.mpos[0][i]
    y = mg.mpos[1][i]
    plt.text(x+0.001, y+0.005, i+1, fontsize=9, color='yellow')
plt.axis('equal')
plt.show()

# Close all open .h5 files if any and remove the temporary data file
tables.file._open_files.close_all()

How to feed my data into acoular?

Hi,

What is the simplest way to feed my own data into acoular time series?

I was tinkering with the basic example of 64 microphones and 3 sources and I did not find a way to feed my own data as numpy arrays. So I had to manually create .xml and .h5 files with my own time series and microphone positions. However, I keep having an issue with my h5 file: I get

AttributeError: Attribute 'sample_freq' does not exist in node: '/time_data'

I use h5py to create my .h5 file and feed my array into it. However, h5py does not create an additional attribute by default, so I can't specify the sample frequency, and I have been trying to figure out how to feed this sample frequency into the h5 file. I can't seem to do it in h5; does this mean I have to go into the acoular source code to override it? Is there an easier way for me to feed my own data into acoular without having to go through these hoops? I wonder why this isn't anywhere in the tutorials.

Thank you.
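One minimal sketch of a way around this, assuming h5py is used to write the file: acoular expects a node called time_data (shape: samples x channels) carrying a sample_freq attribute, which is exactly what the error message above complains about. The node and attribute names are taken from that message and from the pytables snippet in a later issue; treat the remaining details as assumptions rather than an official recipe.

import h5py
import numpy as np

data = np.random.randn(51200, 16)   # (samples, channels) -- placeholder data
fs = 25600.0                        # sampling frequency in Hz

with h5py.File('my_measurement.h5', 'w') as f:
    ds = f.create_dataset('time_data', data=data)  # node name acoular looks for
    ds.attrs['sample_freq'] = fs                   # attribute acoular looks for

# afterwards: ts = acoular.TimeSamples( name='my_measurement.h5' )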

Beamformer Output

Hi,

I'm trying to do beamforming with Acoular. I successfully plotted the rectangular grid, which correctly inferred the potential location of my signal source, but what I'm interested in is beamforming the raw microphone-array signals into a single-channel signal and converting it into a wav file for higher SNR and better sound quality. I'm wondering whether Acoular supports this as well, or whether it only supports calculating the source location in a specific frequency range.

https://groups.google.com/forum/#!topic/acoular-users/bp5Xd9n8fJQ
My question is extremely similar to this, but I didn't see any useful answer in this discussion either. Therefore I decided to post this out anyway.

Thank you for reviewing my issue!!

Best,
Nana Chang
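A rough sketch of one possible route to a single-channel time signal: use a time-domain beamformer and write only the output channel belonging to the grid point of interest. It assumes ts (TimeSamples) and st (SteeringVector) are set up as in the README example, focus_index is a placeholder for the flat grid index, and whether this matches the API of your installed acoular version should be checked against the documentation.

import acoular

# time domain delay & sum beamformer; one output channel per grid point
bt = acoular.BeamformerTime( source=ts, steer=st )

# write only the channel that corresponds to the grid point of interest
ww = acoular.WriteWAV( source=bt, channels=[focus_index] )
ww.save()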

Questions about example 2

Dear sir or madam
I have some questions I would like to discuss with you. First, my English is not very good, so I am very grateful that you are reading this message.
One question: example 2 has a moving source and a fixed source. When I change the location of the fixed source, no matter how I change it, the map does not show the fixed location; it only shows the trace of the moving source. Why?
Another question: in figure 2 (moving focus map) and figure 3 (time domain fixed focus and time domain moving focus), why is the location (0,0) marked? What does this mean?
Finally, I hope you can answer my questions.

Beamform

Hi.
I wonder how I can get the output beamforming signal as a one-channel wav file.

I have a multichannel signal that I want to beamform and save the output as a single-channel wav file.

Please advise.

DOA estimation

Hi all.

I have a multichannel signal that was recorded at some coordinates and I want to estimate the DOA. I'm using the MUSIC beamformer but I get incorrect coordinates. Is there any explanation of how to estimate the correct direction?

failing tests

While packaging this for openSUSE/Factory I have four tests (well, it is actually one test) failing:

[  111s] ======================================================================
[  111s] FAIL: test_beamformer_freq_results (test_beamformer_results.acoular_beamformer_test) [BeamformerCMF global_caching = none]
[  111s] ----------------------------------------------------------------------
[  111s] Traceback (most recent call last):
[  111s]   File "/home/abuild/rpmbuild/BUILD/acoular-21.05/acoular/tests/test_beamformer_results.py", line 91, in test_beamformer_freq_results
[  111s]     np.testing.assert_allclose(actual_data, ref_data, rtol=1e-5, atol=1e-8)
[  111s]   File "/usr/lib64/python3.9/site-packages/numpy/testing/_private/utils.py", line 1530, in assert_allclose
[  111s]     assert_array_compare(compare, actual, desired, err_msg=str(err_msg),
[  111s]   File "/usr/lib64/python3.9/site-packages/numpy/testing/_private/utils.py", line 844, in assert_array_compare
[  111s]     raise AssertionError(msg)
[  111s] AssertionError:
[  111s] Not equal to tolerance rtol=1e-05, atol=1e-08
[  111s]
[  111s] Mismatched elements: 81 / 338 (24%)
[  111s] Max absolute difference: 0.00439613
[  111s] Max relative difference: 55.84388
[  111s]  x: array([[[-4.597212e-04,  0.000000e+00,  0.000000e+00,  0.000000e+00,
[  111s]           0.000000e+00,  0.000000e+00,  0.000000e+00,  0.000000e+00,
[  111s]           0.000000e+00,  0.000000e+00,  0.000000e+00,  0.000000e+00,...
[  111s]  y: array([[[0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00,
[  111s]          0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00,
[  111s]          0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00,...
[  111s]
[  111s] ======================================================================
[  111s] FAIL: test_beamformer_freq_results (test_beamformer_results.acoular_beamformer_test) [BeamformerCMF global_caching = none]
[  111s] ----------------------------------------------------------------------
[  111s] Traceback (most recent call last):
[  111s]   File "/home/abuild/rpmbuild/BUILD/acoular-21.05/acoular/tests/test_beamformer_results.py", line 100, in test_beamformer_freq_results
[  111s]     np.testing.assert_allclose(actual_data, ref_data, rtol=1e-5, atol=1e-8)
[  111s]   File "/usr/lib64/python3.9/site-packages/numpy/testing/_private/utils.py", line 1530, in assert_allclose
[  111s]     assert_array_compare(compare, actual, desired, err_msg=str(err_msg),
[  111s]   File "/usr/lib64/python3.9/site-packages/numpy/testing/_private/utils.py", line 844, in assert_array_compare
[  111s]     raise AssertionError(msg)
[  111s] AssertionError:
[  111s] Not equal to tolerance rtol=1e-05, atol=1e-08
[  111s]
[  111s] Mismatched elements: 81 / 338 (24%)
[  111s] Max absolute difference: 0.00439613
[  111s] Max relative difference: 55.84388
[  111s]  x: array([[[-4.597212e-04,  0.000000e+00,  0.000000e+00,  0.000000e+00,
[  111s]           0.000000e+00,  0.000000e+00,  0.000000e+00,  0.000000e+00,
[  111s]           0.000000e+00,  0.000000e+00,  0.000000e+00,  0.000000e+00,...
[  111s]  y: array([[[0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00,
[  111s]          0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00,
[  111s]          0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00,...
[  111s]
[  111s] ======================================================================
[  111s] FAIL: test_beamformer_freq_results (test_beamformer_results.acoular_beamformer_test) [BeamformerCMF global_caching = none]
[  111s] ----------------------------------------------------------------------
[  111s] Traceback (most recent call last):
[  111s]   File "/home/abuild/rpmbuild/BUILD/acoular-21.05/acoular/tests/test_beamformer_results.py", line 109, in test_beamformer_freq_results
[  111s]     np.testing.assert_allclose(actual_data, ref_data, rtol=1e-5, atol=1e-8)
[  111s]   File "/usr/lib64/python3.9/site-packages/numpy/testing/_private/utils.py", line 1530, in assert_allclose
[  111s]     assert_array_compare(compare, actual, desired, err_msg=str(err_msg),
[  111s]   File "/usr/lib64/python3.9/site-packages/numpy/testing/_private/utils.py", line 844, in assert_array_compare
[  111s]     raise AssertionError(msg)
[  111s] AssertionError:
[  111s] Not equal to tolerance rtol=1e-05, atol=1e-08
[  111s]
[  111s] Mismatched elements: 81 / 338 (24%)
[  111s] Max absolute difference: 0.00439613
[  111s] Max relative difference: 55.84388
[  111s]  x: array([[[-4.597212e-04,  0.000000e+00,  0.000000e+00,  0.000000e+00,
[  111s]           0.000000e+00,  0.000000e+00,  0.000000e+00,  0.000000e+00,
[  111s]           0.000000e+00,  0.000000e+00,  0.000000e+00,  0.000000e+00,...
[  111s]  y: array([[[0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00,
[  111s]          0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00,
[  111s]          0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00,...
[  111s]
[  111s] ======================================================================
[  111s] FAIL: test_beamformer_freq_results (test_beamformer_results.acoular_beamformer_test) [BeamformerCMF global_caching = overwrite]
[  111s] ----------------------------------------------------------------------
[  111s] Traceback (most recent call last):
[  111s]   File "/home/abuild/rpmbuild/BUILD/acoular-21.05/acoular/tests/test_beamformer_results.py", line 124, in test_beamformer_freq_results
[  111s]     np.testing.assert_allclose(actual_data, ref_data, rtol=1e-5, atol=1e-8)
[  111s]   File "/usr/lib64/python3.9/site-packages/numpy/testing/_private/utils.py", line 1530, in assert_allclose
[  111s]     assert_array_compare(compare, actual, desired, err_msg=str(err_msg),
[  111s]   File "/usr/lib64/python3.9/site-packages/numpy/testing/_private/utils.py", line 844, in assert_array_compare
[  111s]     raise AssertionError(msg)
[  111s] AssertionError:
[  111s] Not equal to tolerance rtol=1e-05, atol=1e-08
[  111s]
[  111s] Mismatched elements: 81 / 338 (24%)
[  111s] Max absolute difference: 0.00439613
[  111s] Max relative difference: 55.84388
[  111s]  x: array([[[-4.597212e-04,  0.000000e+00,  0.000000e+00,  0.000000e+00,
[  111s]           0.000000e+00,  0.000000e+00,  0.000000e+00,  0.000000e+00,
[  111s]           0.000000e+00,  0.000000e+00,  0.000000e+00,  0.000000e+00,...
[  111s]  y: array([[[0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00,
[  111s]          0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00,
[  111s]          0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00,...
[  111s]
[  111s] ----------------------------------------------------------------------
[  111s] Ran 20 tests in 93.292s
[  111s]

Complete build log with all packages versions and steps taken.

hardware: sinus apollo typhoon tornado uma16

Hello, I'm using SpectAcoular. I want to use the SINUS devices you mentioned and the UMA16. I found that you use several Python packages, sinus and acuma16, but I couldn't find them. Could you provide the website of the devices, or explain how I can use them to make full use of SpectAcoular for signal processing?
Can I use a self-made microphone array that meets the data input format of SpectAcoular? I can transmit and receive data through serial port, USB and network cable.
Thanks!

HDF5 and xml file interface

I want to use my own array to locate a noise source, but I do not know how to convert my test data from Excel to HDF5. I have created an HDF5 file (using HDFView) and an XML file (using XMLpy), but acoular cannot open them. I am not a computer science student and not good at programming, so reading the source code and changing it to meet my needs is a bit difficult for me. Can you tell me how to create the correct files?
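For the XML side, here is a minimal sketch of how a microphone geometry file could be generated from plain Python. It mimics the layout of the files shipped with acoular (one <pos .../> element per microphone with x, y, z attributes); the root element name and the Name attribute are copied from those shipped files and should be treated as assumptions, not as a documented specification.

import numpy as np

# microphone coordinates in metres, shape (N, 3) -- placeholder values
positions = np.array([[0.0, 0.0, 0.0],
                      [0.1, 0.0, 0.0],
                      [0.0, 0.1, 0.0]])

lines = ['<?xml version="1.0" encoding="utf-8"?>', '<MicArray name="my_array">']
for i, (x, y, z) in enumerate(positions, start=1):
    lines.append('  <pos Name="Point %d" x="%f" y="%f" z="%f"/>' % (i, x, y, z))
lines.append('</MicArray>')

with open('my_array.xml', 'w') as f:
    f.write('\n'.join(lines))

# afterwards: mg = acoular.MicGeom( from_file='my_array.xml' )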

SteeringVector not properly utilizing reference point

The ref parameter of SteeringVector is not used in SteeringVector._get_r0() when ref is a 3-D position.

I believe line 162 of fbeamform.py should be something akin to
return self.env._r(self.grid.pos(), self._ref[:,newaxis])

Update GitHub CI action

After changing from a setup.py based install to Hatch, CI on GitHub is broken.
.github/workflows/python-package.yaml needs to be changed for the new install.

All packages installed but ImportError("packages h5py and pytables are missing!")

Hello, I'm new using Acoular and I'm having some issues with it.
One of them is that yesterday I ran the example "Airfoil in open jet – CMF" and everything went fine, but today I ran the exact same code again and it gives the weird error that I quote below.
And I do have all the packages that Acoular requires installed; I don't know what is happening.
But the weirdest thing is that when I run the code in Spyder (using Anaconda) it magically works again.
I wish that someone can help me figure it out.
Thanks in advance for your attention.

Traceback (most recent call last):
  File "D:\Softwares\Python\lib\site-packages\acoular\configuration.py", line 88, in _assert_h5library
    import tables
  File "D:\Softwares\Python\lib\site-packages\tables\__init__.py", line 99, in <module>
    from .utilsextension import (
  File "__init__.pxd", line 206, in init tables.utilsextension
  File "D:\Softwares\Python\lib\site-packages\numpy\__init__.py", line 152, in <module>
    from . import random
  File "D:\Softwares\Python\lib\site-packages\numpy\random\__init__.py", line 181, in <module>
    from . import _pickle
  File "D:\Softwares\Python\lib\site-packages\numpy\random\_pickle.py", line 1, in <module>
    from .mtrand import RandomState
  File "_bit_generator.pxd", line 14, in init numpy.random.mtrand
  File "_bit_generator.pyx", line 255, in init numpy.random._bit_generator
AttributeError: type object 'numpy.random._bit_generator.SeedSequence' has no attribute '__reduce_cython__'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Softwares\Python\lib\site-packages\acoular\configuration.py", line 92, in _assert_h5library
    import h5py
  File "D:\Softwares\Python\lib\site-packages\h5py\__init__.py", line 34, in <module>
    from . import version
  File "D:\Softwares\Python\lib\site-packages\h5py\version.py", line 19, in <module>
    import numpy
  File "D:\Softwares\Python\lib\site-packages\numpy\__init__.py", line 152, in <module>
    from . import random
  File "D:\Softwares\Python\lib\site-packages\numpy\random\__init__.py", line 181, in <module>
    from . import _pickle
  File "D:\Softwares\Python\lib\site-packages\numpy\random\_pickle.py", line 1, in <module>
    from .mtrand import RandomState
  File "_bit_generator.pxd", line 14, in init numpy.random.mtrand
  File "_bit_generator.pyx", line 255, in init numpy.random._bit_generator
AttributeError: type object 'numpy.random._bit_generator.SeedSequence' has no attribute '__reduce_cython__'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Workspace\Python\acoular testing\acoular_example_2.py", line 19, in <module>
    import acoular
  File "D:\Softwares\Python\lib\site-packages\acoular\__init__.py", line 59, in <module>
    from .configuration import config
  File "D:\Softwares\Python\lib\site-packages\acoular\configuration.py", line 99, in <module>
    config = Config()
  File "D:\Softwares\Python\lib\site-packages\acoular\configuration.py", line 37, in __init__
    self._assert_h5library()
  File "D:\Softwares\Python\lib\site-packages\acoular\configuration.py", line 95, in _assert_h5library
    raise ImportError("packages h5py and pytables are missing!")
ImportError: packages h5py and pytables are missing!

Tags for releases

Currently there are no tags for any release. Tags are helpful since they allow people to download the full, unaltered version of a release, rather than just what is chosen to be included in an sdist.

acoular install error

I'm trying to install Acoular on a freshly installed Ubuntu 14.04.

I've followed the installation steps, but at the end I am stuck with the following error.


user@user-System-Product-Name:~$ pip install acoular
Collecting acoular
/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:318: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/security.html#snimissingwarning.
  SNIMissingWarning
/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
  Using cached acoular-16.5.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-Wi1Ipw/acoular/setup.py", line 21, in <module>
        import scipy.weave
    ImportError: No module named weave
    
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-Wi1Ipw/acoular/

The installation of Scipy and Weave seems fine; they can be imported, but I cannot install Acoular.

Plotting beampattern

Hi,
is there any visualization tool to plot the beam pattern of a linear/rectangular array?

Thanks!
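There is no dedicated beampattern plot shown in this thread, but a small self-contained sketch (plain NumPy and Matplotlib, not an acoular API call) of the conventional delay & sum beampattern of a linear array could look like this; the array layout, frequency and steering angle are arbitrary placeholders.

import numpy as np
import matplotlib.pyplot as plt

c = 343.0                            # speed of sound in m/s
f = 4000.0                           # frequency in Hz
k = 2 * np.pi * f / c                # wavenumber
x_m = np.linspace(-0.3, 0.3, 16)     # 16 mics on a 0.6 m line (placeholder layout)
theta0 = 0.0                         # steering angle in rad

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
# conventional delay & sum response: coherent sum of phase-shifted mic signals
response = np.exp(1j * k * np.outer(np.sin(theta) - np.sin(theta0), x_m)).sum(axis=1)
level = 20 * np.log10(np.abs(response) / len(x_m))

plt.plot(np.degrees(theta), level)
plt.xlabel('angle / deg')
plt.ylabel('normalized level / dB')
plt.title('delay & sum beampattern (sketch)')
plt.show()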

Missing documentation on coordinate system and units used for MicGeom

Documentation is not available on how to define the microphone geometry.
Pieces of information from forum posts and comments in the provided examples are not a substitute for documenting the fact that acoular uses a left-handed coordinate system to define the microphone geometry.
The documentation is also the place to explain the reasons for such a choice, when most of our world (and some other phased array software) is defined in right-handed coordinates.
Finally, the documentation could be improved with an example of how to convert between different coordinate systems when placing a test object and microphone array in a room. It can be assumed that most users would not be interested in describing the room in the coordinate system of the mic array, but would prefer a regular right-handed coordinate system with its origin at one of the corners of the room in which to place the test object and the array.

What changes are needed to support streaming/real-time operation?

Hello! I am looking for a library to perform analysis on a live stream from a microphone. Based on the example scripts, Acoular seems to do generally what I need, but it seems like it is only designed for prerecorded data. And in fact, in #12 you mentioned that it isn't designed for real-time use.

What changes would need to be made to do live processing? I am a programmer and can make modifications, but I would like to get a general idea about the scale of the work involved.
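For orientation, block-wise (streaming-style) processing in acoular is built around generators: every processing object exposes a result(num) method that yields blocks of num samples. A rough sketch of consuming soundcard input this way follows, assuming the sounddevice package is installed; SoundDeviceSamplesGenerator is mentioned in other issues and in the documentation, but the exact constructor arguments used here are assumptions, and whether this is fast enough for hard real-time use is exactly what this issue asks.

import acoular

# soundcard input; device index and channel count are placeholders
mic_input = acoular.SoundDeviceSamplesGenerator( device=0, numchannels=16 )

for i, block in enumerate( mic_input.result(num=256) ):
    # block is a (num, numchannels) array of the latest samples
    print(block.shape)
    if i == 10:      # stop after a few blocks in this sketch
        break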

Cannot connect to X server

Fresh install with conda 2.7, running on a remote machine, raises this error upon import acoular. Does this require one to be running acoular locally?

Wrong line division in LineGrid

The grid points positions returned by LineGrid are wrong.

For a line along x-axis, from -1.0 m to +1.0 m (i.e. length = 2 m ), when placing 3 points on a grid, I expect to find them at x = -1.0, 0, +1.0.

Acoular disagrees:

>>>lg = acoular.LineGrid(loc=(-1.0, 0, 1), direction=(1.0, 0.0, 0.0), length=2, numpoints=3) 
>>>lg.pos()[0,:]
array([-1.        , -0.33333333,  0.33333333])

The length between the end coordinates is only 1.33 m.
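For reference, the expected behaviour, numpoints points spread over the full length including both end points, corresponds to numpy.linspace:

import numpy as np

# 3 points over a 2 m line, including both end points
print(np.linspace(-1.0, 1.0, 3))   # -> [-1.  0.  1.]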

Error when running acoular_demo.py

acoular is installed successfully through conda.
However, when I try to run the demo, it shows:

Traceback (most recent call last):
  File "D:\ProgramFiles\anaconda2\Scripts\acoular_demo-script.py", line 27, in <module>
    from acoular import __file__ as bpath, MicGeom, WNoiseGenerator, PointSource,
  File "D:\ProgramFiles\anaconda2\lib\site-packages\acoular\__init__.py", line 80, in <module>
    from .spectra import PowerSpectra, EigSpectra, synthetic
  File "D:\ProgramFiles\anaconda2\lib\site-packages\acoular\spectra.py", line 27, in <module>
    from .fastFuncs import calcCSM
  File "D:\ProgramFiles\anaconda2\lib\site-packages\acoular\fastFuncs.py", line 196, in <module>
    '(m,m),(),(m),(),()->()', nopython=True, target=parallelOption, cache=cachedOption)
  File "D:\ProgramFiles\anaconda2\lib\site-packages\numba\npyufunc\decorators.py", line 165, in wrap
    guvec.add(fty)
  File "D:\ProgramFiles\anaconda2\lib\site-packages\numba\npyufunc\ufuncbuilder.py", line 170, in add
    self.nb_func, targetoptions, sig)
  File "D:\ProgramFiles\anaconda2\lib\site-packages\numba\npyufunc\ufuncbuilder.py", line 92, in _compile_element_wise_function
    cres = nb_func.compile(sig, **targetoptions)
  File "D:\ProgramFiles\anaconda2\lib\site-packages\numba\npyufunc\ufuncbuilder.py", line 63, in compile
    self.targetdescr.options.parse_as_flags(flags, topt)
  File "D:\ProgramFiles\anaconda2\lib\site-packages\numba\targets\options.py", line 26, in parse_as_flags
    opt.from_dict(options)
  File "D:\ProgramFiles\anaconda2\lib\site-packages\numba\targets\options.py", line 19, in from_dict
    raise KeyError(fmt % k)
KeyError: "Does not support option: 'cache'"

From wav files to h5

I am wondering how I can go from a bunch of audio files from the microphones to an h5 file.

Thank you !

Pierre

New SoundDeviceMaskedSamplesGenerator class?

The class acoular.sdinput.SoundDeviceSamplesGenerator has acoular.tprocess.SamplesGenerator as its base. Would it be better to have a similar new class acoular.sdinput.SoundDeviceMaskedSamplesGenerator based on acoular.tprocess.MaskedSamplesGenerator? It is not strictly an issue, but it would be more convenient if the acoustic array has invalid channels or invalid samples.

Calculation result becomes NaN for large gamma in functional beamforming

eva = array(self.freq_data.eva[i], dtype='float64') ** (1.0 / self.gamma)

When gamma is large, the precision may not be enough, and eva will then contain some NaNs.
This causes the final beamforming result to be wrong.

Changing the NaNs to zero fixes the problem and gives the right beamforming result:
eva[isnan(eva)] = 0

PyQt5 incorrect API

Hello,

I've created a specific anaconda environment for acoular.
In this environment PyQt5 is installed as PyQt, on top of Python 2.7.13 (screenshot omitted).

However, when I launch python acoular_demo.py I get an error (screenshot omitted).

I've found that pyface 4.4.0 is installed, and even the latest pyface version does not support Qt5,
so I guess I should not use PyQt5.

EDIT: FYI, I also tried to recreate a clean anaconda 2.7 environment and only install conda install -c acoular acoular. Then I ran acoular_demo.py and got exactly the same error.

Could anyone advise?

Thanks in advance.

PS: I'm on Windows 10.

issue installing with conda

Hi!

When using conda install -c acoular acoular I get 3 subsequent SafetyErrors:

SafetyError: The package for acoular located at ... appears to be corrupted. The path X has a sha256 mismatch.

with X:

  • 'Scripts/CalibHelper-script.py'
  • 'Scripts/ResultExplorer-script.py'
  • 'Scripts/acoular_demo-script.py'

followed by 4 ClobberErrors

ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::jupyter_core-4.4.0-py_0, conda-forge::jupyter-1.0.0-py_1
path: 'lib/site-packages/jupyter.py'

ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::jupyter_core-4.4.0-py_0, conda-forge::jupyter-1.0.0-py_1
path: 'lib/site-packages/__pycache__/jupyter.cpython-36.pyc'

ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::ipykernel-5.1.0-py36h39e3cac_1001, conda-forge::widgetsnbextension-3.4.2-py36_1000
path: 'scripts/jupyter-kernel-script.py'

ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::ipykernel-5.1.0-py36h39e3cac_1001, conda-forge::widgetsnbextension-3.4.2-py36_1000
path: 'scripts/jupyter-kernel.exe'

I've tried removing the cached package, and also just installing on another system that didn't have acoular from before. This didn't help, so I wonder if something is wrong with the latest Acoular library on conda.

Not building under OS X

Python: Anaconda
Several errors related to gcc:

acoular/beamformer.cpp:3022:36: error: variable length array of non-POD element type 'std::complex<double>'
std::complex<double> e1[nc];

181 warnings and 16 errors generated.
error: Command "gcc -fno-strict-aliasing -I/anaconda/include -arch x86_64 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DMAJOR_VERSION=16 -DMINOR_VERSION=5 -I/anaconda/lib/python2.7/site-packages/scipy/weave -I/anaconda/lib/python2.7/site-packages/scipy/weave/scxx -I/anaconda/lib/python2.7/site-packages/scipy/weave/blitz -I//anaconda/lib/python2.7/site-packages/numpy/core/include -I/anaconda/include/python2.7 -c acoular/beamformer.cpp -o build/temp.macosx-10.6-x86_64-2.7/acoular/beamformer.o -O3 -ffast-math -msse3 -Wno-write-strings" failed with exit status 1

Try to run demo but raise error about QT

I am following the instruction in the doc to install the acoular package in my anaconda. However, when I reach the final step:

import acoular
acoular.demo.acoular_demo.run()

I got the error below:
'''
RuntimeError: The environment variable QT_API has the unrecognized value 'pyqt'; valid values are {'pyqt6', 'pyqt5', 'pyside6', 'pyside2'}
'''
Does anyone know how to solve that? THX!

nbi and nbc files differ between builds

While working on the reproducible builds effort, I found that
when building the python-acoular package for openSUSE Linux, there were slight differences between each build:

--- filter/RPMS.2/usr/lib/python3.8/site-packages/acoular/__pycache__/guf-fastFuncs.damasSolverGaussSeidel-875.py38.1.nbc
+++ filter/RPMS.1/usr/lib/python3.8/site-packages/acoular/__pycache__/guf-fastFuncs.damasSolverGaussSeidel-875.py38.1.nbc
@@ -1,9 +1,9 @@
 00000000: 8005 95da 2700 0000 0000 008c 413c 6e75  ....'.......A<nu
 00000010: 6d62 612e 6e70 2e75 6675 6e63 2e77 7261  mba.np.ufunc.wra
 00000020: 7070 6572 732e 5f47 7566 756e 6357 7261  ppers._GufuncWra
 00000030: 7070 6572 206f 626a 6563 7420 6174 2030  pper object at 0
-00000040: 7837 6666 6665 6131 6664 3064 303e 948c  x7fffea1fd0d0>..
+00000040: 7837 6666 6665 6136 3434 3136 303e 948c  x7fffea644160>..
 00000050: 066f 626a 6563 7494 42d0 0f00 007f 454c  .object.B.....EL
 00000060: 4602 0101 0000 0000 0000 0000 0001 003e  F..............>
 00000070: 0001 0000 0000 0000 0000 0000 0000 0000  ................
 00000080: 0000 0000 0010 0d00 0000 0000 0000 0000  ................

--- filter/RPMS.2/usr/lib/python3.8/site-packages/acoular/__pycache__/guf-fastFuncs.damasSolverGaussSeidel-875.py38.nbi
+++ filter/RPMS.1/usr/lib/python3.8/site-packages/acoular/__pycache__/guf-fastFuncs.damasSolverGaussSeidel-875.py38.nbi
@@ -1,6 +1,6 @@
 00000000: 8005 950a 0000 0000 0000 008c 0630 2e34  .............0.4
 00000010: 392e 3094 2e80 0595 3305 0000 0000 0000  9.0.....3.......
-00000020: 4741 d7aa 7c17 0000 004d c7f9 8694 7d94  GA..|....M....}.
+00000020: 4741 d7aa 7c29 c000 004d c7f9 8694 7d94  GA..|)...M....}.
 00000030: 8c1b 6e75 6d62 612e 636f 7265 2e74 7970  ..numba.core.typ
 00000040: 696e 672e 7465 6d70 6c61 7465 7394 8c09  ing.templates...
 00000050: 5369 676e 6174 7572 6594 9394 2981 9428  Signature...)..(

See https://reproducible-builds.org/ for why this matters.

The diff even occurs when trying to make 2 builds as similar as possible.

Input time data value

Hello! Which unit do the HDF5 files with time_data store the sound pressure values in? Is it Pascal or Pascal RMS?

macOS build steps

Hi!
Thanks for the update of acoular - looks super nice!

Managed to build from source on macOS and wanted to share my steps (sorry for opening a non-issue)

Here's what I needed to do on macOS 10.13.1 with an up-to-date anaconda distribution:

  • brew install swig
  • pip install pyqt5
  • git clone https://github.com/acoular/acoular.git
  • Goto /acoular/setup.py
  • Find install_requires and uncomment like so:
      install_requires = [
      'setuptools',
      #'pyqt5',  # <- had issues with setuptools, comment.
      'numpy>=1.10.2',
      'numba >=0.21.0',
      'scipy>=0.13',
      'scikit-learn>=0.15',
      'tables>=3.1', # <- notice it should be 'tables' and not 'pytables'
      'traits>=4.4.0',
      'traitsui>=4.4.0',
      'chaco>=4.4'
	],
      setup_requires = [
      'setuptools',
      #'pyqt5', # <- see above
      'numpy>=1.10.2',
      'numba >=0.21.0',
      'scipy>=0.13',
      'scikit-learn>=0.15',
      'tables>=3.1', # <- see above
      'traits>=4.4.0',
      'traitsui>=4.4.0',
      'chaco>=4.4'
	],
  • Inside /acoular :pip install -e .

Hope this is useful for others!

conda info

Current conda install:

               platform : osx-64
          conda version : 4.3.30
       conda is private : False
      conda-env version : 4.3.30
    conda-build version : 3.0.29
         python version : 3.6.3.final.0
       requests version : 2.18.4
       root environment : /anaconda3  (writable)
    default environment : /anaconda3
       envs directories : /anaconda3/envs
                          /Users/oliver/.conda/envs
          package cache : /anaconda3/pkgs
                          /Users/oliver/.conda/pkgs
           channel URLs : https://repo.continuum.io/pkgs/main/osx-64
                          https://repo.continuum.io/pkgs/main/noarch
                          https://repo.continuum.io/pkgs/free/osx-64
                          https://repo.continuum.io/pkgs/free/noarch
                          https://repo.continuum.io/pkgs/r/osx-64
                          https://repo.continuum.io/pkgs/r/noarch
                          https://repo.continuum.io/pkgs/pro/osx-64
                          https://repo.continuum.io/pkgs/pro/noarch
            config file : /Users/oliver/.condarc
             netrc file : None
           offline mode : False
             user-agent : conda/4.3.30 requests/2.18.4 CPython/3.6.3 Darwin/17.2.0 OSX/10.13.1
                UID:GID : 501:20

Dealing with namespaces and cache files is very confusing (as a consequence of lazy evaluation)

As someone coming from procedural programming, I find it very confusing to create a function (or a set of functions) that creates a beamformer, and then to loop through a list of filenames to apply it to.

Conceptually, I would like to achieve:

my_beamformer = create_my_beamformer()
for input_file in file_list:
     output_file = my_beamformer(input_file)

The catch is that instead of converting all my measurement files to *.h5, I want to convert them one by one to a temporary *.h5 file, which then gets overwritten every time I call the beamformer function with a newly converted measurement file.

I tried two approaches.

Approach 1

def create_temp_h5_file(h5fname_out, data, fs, series_title):
    import tables

    acoularh5 = tables.open_file(h5fname_out, mode="w", title=series_title)
    # Note that we create 'EArray'
    acoularh5.create_earray('/', 'time_data', atom=None, title='',
                            filters=None, expectedrows=100000,
                            chunkshape=[256, 64], byteorder=None,
                            createparents=False, obj=data)
    acoularh5.set_node_attr('/time_data', 'sample_freq', fs)
    acoularh5.close()  # <- here the *.h5 file is closed after creation
    return

def create_my_beamformer(layout_file, datafile):
    mg = acoular.MicGeom(from_file=layout_file)
    data, fs, series_title = read_from_datafile(datafile)  # <- pseudocode to return the data
    create_temp_h5_file(temp_h5_datafile, data, fs, series_title)
    ts = acoular.TimeSamples(name=temp_h5_datafile)  # This leaves the .h5 file open, despite closing it at creation
    ts.calib = acoular.Calib(from_file=calib_fullfile)

    ps = acoular.PowerSpectra(time_data=ts, block_size=128, window='Hamming')
    rg = acoular.RectGrid(x_min=-0.2, x_max=0.2, y_min=-0.2, y_max=0.2, z=0.235,
                          increment=0.01)
    st = acoular.SteeringVector(grid=rg, mics=mg)
    bb = acoular.BeamformerBase(freq_data=ps, steer=st)
    return bb

With this approach, the returned beamformer knows nothing of the TimeSamples object ts on which it operates by lazy evaluation.
In order for it to work, do I need to return everything to the caller: ts, ps, st, rg, bb?
If that is the case putting it in a separate function is not very meaningful. So, I tried

Approach 2

In a second approach, I tried putting everything in one big function, which:

  1. takes as input argument only the filename (of a file, which is not *.h5)
  2. converts to temporary *.h5 file
  3. creates ts, ps, st, rg, bb
  4. evaluates the result of bb

On the first filename the function works OK, on the second run I get

'H5CacheFileTables' object has no attribute 'root'

In this approach cleaning-up the cache files is impossible, unless I restart the kernel.
I tried to find the blocking process using:

proc = psutil.Process()
open_fname_list = proc.open_files()
for opfile in open_fname_list:
    if Path(opfile.path).suffix == '.h5':
        print(opfile.path)
        opfile.kill()
        os.close(opfile.fd)

however, although I can find the open files, I cannot close them, because they are blocked by some process.
I also can't find the blocking process to kill it.
Finally, I am not sure killing the process is the best course of action.

Any advice?
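One possible pattern, offered only as a sketch: keep a single set of processing objects and only swap the file they point to, optionally switching the cache off globally. The global_caching values appear in the test log of another issue above ('none', 'overwrite'); whether reassigning TimeSamples.name is enough to invalidate everything downstream in your acoular version is an assumption worth verifying.

import acoular

acoular.config.global_caching = 'none'   # avoid stale cache files between runs

mg = acoular.MicGeom(from_file='my_array.xml')
ts = acoular.TimeSamples(name='first_file.h5')
ps = acoular.PowerSpectra(time_data=ts, block_size=128, window='Hamming')
rg = acoular.RectGrid(x_min=-0.2, x_max=0.2, y_min=-0.2, y_max=0.2, z=0.235,
                      increment=0.01)
st = acoular.SteeringVector(grid=rg, mics=mg)
bb = acoular.BeamformerBase(freq_data=ps, steer=st)

for h5file in ['first_file.h5', 'second_file.h5']:   # placeholder file names
    ts.name = h5file                 # lazy evaluation picks up the new data
    pm = bb.synthetic(3500, 3)       # recomputed for the new file on request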

Question regarding example 1

As far as I see the simple FFT with delay is used for the data in combination with steering vector type 1. This implicitly assumes a monopole source, however trailing edge noise is known to be a dipole source. Are the results still valid after considering the above?

readthedocs build fails

The build fails because the sphinx extensions require at least traits.
This could possibly be solved by using a virtual environment with a pip-style requirements.txt.

Acoular now depends on kiwisolver

Because new versions of enable (from 4.5?) depend on kiwisolver without actually installing it, chaco seems to need it too. A workaround could be to add the dependency on kiwisolver to Acoular itself.

numpy RandomState.standard_normal needs int input

Hello,

I found a bug, which affects newer numpy versions (at least since 14.2 and 15.1):

The method standard_normal from RandomState class needs an integer input, otherwise a TypeError is thrown: TypeError: 'float' object cannot be interpreted as an integer.

s[ind:] += repeat( rnd_gen.standard_normal(nums / dind+1 ), dind)[:lind]

A possible fix is:

 s[ind:] += repeat( rnd_gen.standard_normal(int(round(nums / dind+1))), dind)[:lind]

Thanks for your great work on beamforming!

Andreas

Realization of acoustic camera

Hello, I think acoular is gradually becoming more and more exciting. I want to make an acoustic camera based on acoular. Because I am not an expert in this field, I have encountered some problems. I want to ask for your help.

  1. Can acoular support real-time acquisition of a microphone array data stream? I read the documentation and learned that there is a class SoundDeviceSamplesGenerator and a time calibration function, but I don't know whether these can solve my problem.

  2. I have learned that acoustic cameras are generally based on beamforming technology, and that the process generally has four steps: array signal processing, array beamforming, image contouring and image mapping. I want to implement this based on acoular. I ran the 2D and 3D beamforming examples according to the documentation. However, I found that the 2D beamforming example contains rg = RectGrid(x_min=-30.0, x_max=30.0, y_min=-30.0, y_max=30.0, z=20.0, increment=0.5). The value of z is specified here, but an acoustic camera does not know the distance from the sound source to the microphones in advance. How should this be solved? Can 3D beamforming be used to solve this problem?

  3. Acoular is written in Python, so I'm worried about its calculation speed. If I use 2D beamforming, can I achieve the effect of a real-time acoustic camera?

  4. Have you ever made an acoustic camera? If so, can you give me some suggestions? I am very interested in acoustic cameras and look forward to your reply. I would appreciate it if you could answer me.

readthedocs builds incomplete docs

autosummary does not work because of:
WARNING: [autosummary] failed to import u'acoular.calib': no module named acoular.calib
The path in conf.py may be wrong.

Where is the fixed point source in example2.py's results?

Hello, developer team! I want to use a moving source and a fixed point source. After reading the example2.py code, I cannot understand the meaning of its results. I tried to modify the code p1 = PointSource(signal=n1, mpos=m, loc=(0,2.5,4)), changing loc=(0,2.5,4) to loc=(0,1,4). But I find that the position of the fixed point source has not changed, and the results are the same as those given for example2.py at http://www.acoular.org/.
Is this the effect of the cache mechanism, and if so, how can I disable it?
In any case, I cannot find the location of the fixed sound source in the example2.py results. Can you tell me where the fixed point source is in the three output pictures of example2.py? Thank you again for your generosity!

SineGenerator uses variable called rms as peak amplitude

Using acoular for simulating virtual array data, my results differed from what I would have expected.

Investigating the code, I quickly realised that the SineGenerator().rms trait is used as the peak amplitude of the created signal:

return self.rms*sin(2*pi*self.freq*t+self.phase)

I was able to fix it for my code by actually providing the peak amplitude (multiplying by a factor of sqrt(2)).

If this behaviour isn't intentional (and/or I am missing something) I suggest two possible fixes to prevent confusion for other users:

  • Rename the trait to peak_amplitude, p_amp or something similar, and adding it to the documentation for SineGenerator. This might not be feasible, as rms is inherited from SignalGenerator.
  • better (IMHO): Add a factor of sqrt(2) to the line mentioned above. This way, the value provided by rms actually is used as the root mean squared.

I don't know if the latter breaks any example, but the ones I saw were mostly using the WNoiseGenerator. Also, the former would break even more code.
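For reference, the second suggestion would amount to scaling by sqrt(2); a self-contained numerical check of that proposal (placeholder values, not the maintainers' decision) is:

from numpy import pi, sqrt, sin, linspace

rms, freq, phase = 1.0, 1000.0, 0.0             # placeholder values for the traits
t = linspace(0.0, 0.01, 480, endpoint=False)    # 10 full periods at 48 kHz
signal = rms * sqrt(2) * sin(2 * pi * freq * t + phase)
print(signal.std())                             # ~1.0, i.e. rms really acts as the RMS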

Undocumented units of the factor in the mic array calibration file.

It is great to be able to account for the variability of microphone sensitivities in the mic array by means of the calib.xml file.
Still, it is unclear if the individual calibratation factor for each microphone in the XML file should be in [Pa/unit] or [unit/Pa].

I had to make a few experiments, sound level meter in hand, to determine that the correct dimension of the factor is [Pa/unit].
Please, add this to the documentation.

Beamformed single channel output and choosing location to filter for

Hi,

I read the original "Beamformer Output" issue thread and am still confused about its implementation. Specifically, when choosing grid points using the "channels" parameter as indices in the WriteWAV function.

For example, if I have a 1 x 1 x 1 room split up by .1 increments and want to filter for the (x = .1, y = .2, z = .3) voxel, the original thread seems like it would have me use:

acoular.WriteWAV(source=bt, channels=[1, 2, 3])

which then exports a 3 channel wav file.

How could I export a single channel filtered signal at that specified location?

Thanks
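One way to end up with a single channel is to pass exactly one grid point index to channels. A hedged sketch follows; the grid, the time-domain beamformer bt and the target point are placeholders, and instead of assuming a particular index ordering of the grid, the nearest grid point is simply searched for.

import numpy as np
import acoular

# a 3D grid covering the 1 x 1 x 1 room with 0.1 increments (placeholder)
rg3d = acoular.RectGrid3D(x_min=0, x_max=1, y_min=0, y_max=1,
                          z_min=0, z_max=1, increment=0.1)
target = np.array([0.1, 0.2, 0.3])

# flat index of the grid point closest to the target location
idx = int(np.argmin(np.sum((rg3d.pos() - target[:, np.newaxis]) ** 2, axis=0)))

# export only that one beamformer output channel as a mono wav file
# (bt: a time-domain beamformer set up beforehand, see the sketch further above)
ww = acoular.WriteWAV(source=bt, channels=[idx])
ww.save()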

Not building with MSVC for Python 2.7: "no-write-strings" option incorrect

Hello,
First, this is a great project

I tried to install this on my PC running Windows 10.
I was asked to install the Windows compiler for Python 2.7, which I did.
The option '-Wno-write-strings' seems to be a GCC option that is passed to the Windows compiler, which cannot work; therefore the build fails.

Thanks

    copying acoular\xml\array_84_10_9.xml -> build\lib.win-amd64-2.7\acoular\xml
    copying acoular\xml\array_84_bomb_v3.xml -> build\lib.win-amd64-2.7\acoular\xml
    copying acoular\xml\calib_vw_ring32.xml -> build\lib.win-amd64-2.7\acoular\xml
    copying acoular\xml\gfai_ring32.xml -> build\lib.win-amd64-2.7\acoular\xml
    running build_ext
    building 'acoular.beamformer' extension
    creating build\temp.win-amd64-2.7
    creating build\temp.win-amd64-2.7\Release
    creating build\temp.win-amd64-2.7\Release\acoular
    C:\Users\adrie\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DMAJOR_VERSION=16 -DMINOR_VERSION=5 -IC:\Users\adrie\Anaconda2\envs\olive\lib\site-packages\scipy\weave -IC:\Users\adrie\Anaconda2\envs\olive\lib\site-packages\scipy\weave\scxx -IC:\Users\adrie\Anaconda2\envs\olive\lib\site-packages\scipy\weave\blitz -IC:\Users\adrie\Anaconda2\envs\olive\lib\site-packages\numpy\core\include -IC:\Users\adrie\Anaconda2\envs\olive\include -IC:\Users\adrie\Anaconda2\envs\olive\PC /Tpacoular\beamformer.cpp /Fobuild\temp.win-amd64-2.7\Release\acoular\beamformer.obj -O3 -ffast-math -msse3 -Wno-write-strings
    Found executable C:\Users\adrie\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\cl.exe
    cl : Command line error D8021 : invalid numeric argument '/Wno-write-strings'
    error: Command "C:\Users\adrie\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DMAJOR_VERSION=16 -DMINOR_VERSION=5 -IC:\Users\adrie\Anaconda2\envs\olive\lib\site-packages\scipy\weave -IC:\Users\adrie\Anaconda2\envs\olive\lib\site-packages\scipy\weave\scxx -IC:\Users\adrie\Anaconda2\envs\olive\lib\site-packages\scipy\weave\blitz -IC:\Users\adrie\Anaconda2\envs\olive\lib\site-packages\numpy\core\include -IC:\Users\adrie\Anaconda2\envs\olive\include -IC:\Users\adrie\Anaconda2\envs\olive\PC /Tpacoular\beamformer.cpp /Fobuild\temp.win-amd64-2.7\Release\acoular\beamformer.obj -O3 -ffast-math -msse3 -Wno-write-strings" failed with exit status 2

    ----------------------------------------
Command "C:\Users\adrie\Anaconda2\envs\olive\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\adrie\\appdata\\local\\temp\\pip-build-gj9h_a\\acoular\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record c:\users\adrie\appdata\local\temp\pip-ds7skb-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\adrie\appdata\local\temp\pip-build-gj9h_a\acoular\

How to import the .mat file format into acoular? A question about the fileimport.py code

Hello, developer! I use the .mat data format for sound source localization. At present, I am working on converting the .mat data format to the .hdf5 file format. I checked the bk_mat_import class that you wrote in fileimport.py and tried to call this class, but I failed! Can you give an example of importing .mat files? If you can, thank you very much!
