
liblsl's People

Contributors

arthurbiancarelli, borismansencal, cboulay, chausner, chkothe, dmedine, gisogrimm, jchen-dawnscene, jfrey-xx, kyucrane, mesca, mgrivich, morningf, noah-roberts, phfix, pmaanen, samuelpowell, tobiasherzke, tstenner, xloem

liblsl's Issues

Raise minimum VC++ version to VC2015 for liblsl compilation

As part of the multicast PR (#31) I've encountered several problems with MSVC2008 that all boil down to the lack of standard integer types (e.g. int32_t), the inability to have two typedefs for the same type (i.e. int32_t in lsl_c.h and lslboost::int32_t), and the need for the internal files to include lsl_c.h. Visual Studio 2010 ships with <cstdint> and therefore has none of these problems.
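
For illustration, a minimal sketch of the change this enables (my own example, not the actual liblsl code):

// Instead of lsl_c.h and lslboost each carrying their own int32_t fallback
// typedef (which MSVC 2008 rejects when internal files include both), from
// VS2010 on everything can defer to the standard header:
#include <cstdint>
int32_t sample_count = 0; // one canonical definition, no duplicate typedefs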

End users will still be able to compile against liblsl with Visual Studio 2008, but compiling liblsl itself will require Visual Studio 2010 or later.

Does anyone (especially @mgrivich) have any objections?

Remove 127.0.0.1 from multicast addresses

multicast.MachineAddresses is set to {127.0.0.1} by default. In some cases, this is exactly what an end user wants (i.e. a local stream can be discovered even when the network isn't there).
This doesn't use multicast, however, so the first socket to open the default port will be the only one to receive the discovery queries and reply to them, even when multicast is enabled on the loopback interface:

$ sudo ip netns add lsl_localmc
$ sudo ip -n lsl_localmc link set lo up
$ sudo ip -n lsl_localmc link set lo up multicast on
$ sudo ip netns exec lsl_localmc build/testing/lsl_test_exported "resolve multiple streams"
-------------------------------------------------------------------------------
resolve multiple streams
-------------------------------------------------------------------------------
../testing/test_ext_discovery.cpp:7
...............................................................................

../testing/test_ext_discovery.cpp:14: FAILED:
  REQUIRE( found_stream_info.size() == n )
with expansion:
  1 == 3

So even though everything seems to work with a single stream, stream discovery is broken. I therefore suggest removing 127.0.0.1 from the local addresses and adding an explanation of how to fix it to the documentation (short version: add a multicast route to the device, i.e. sudo ip route add multicast 224.0.0.0/4 dev lo).

floating point conversion is locale dependent

Hi there,
I have encountered a rather persistent issue with the liblsl-library on Ubuntu 16.04 LTS, compiling with GCC 5.4.0:
While other hosts in the network (including the localhost) can detect outgoing streams, there seems to be a problem with the input parsing: every field of the stream_info object that is returned by the lsl::resolve_streams function is empty (except for the IPv4 and IPv6 addresses). My Ubuntu machine can receive data perfectly fine; this issue seems to concern outgoing connections only. This behavior is not influenced by the receiving machine: I get the same error on Ubuntu and Windows 10. When trying to build a stream_inlet from this faulty stream_info, it says:

Note: The stream named '(invalid: bad lexical cast: source type value could not be interpreted as target)' could not be recovered automatically if its provider crashed because it does not specify a unique data source ID.

The part mentioning the failed cast seems to originate in the boost-backend.
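
For what it's worth, a minimal standalone sketch of that failure mode: a number formatted under a comma-decimal locale (compare the commas in the <version> and <created_at> fields of the XML below) cannot be parsed back under the default "C" locale. The locale name here is an assumption and must be installed on the system:

#include <iostream>
#include <locale>
#include <sstream>

int main() {
    std::ostringstream out;
    out.imbue(std::locale("de_DE.UTF-8")); // comma as decimal separator
    out << 1.5;                            // produces "1,5"

    std::istringstream in(out.str());      // default "C" locale for parsing
    double value = 0;
    in >> value;                           // stops at the ',' -> value == 1
    std::cout << out.str() << " parsed back as " << value << '\n';
    return 0;
}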

This is an excerpt from the sender code:

// the issue persists when I change data format or frequency
lsl::stream_info info("SimpleStream", "EEG", 90, 1000.0f, lsl::channel_format_t::cf_double64, "DUMMY_ID");
lsl::stream_outlet outlet(info);
outlet.wait_for_consumers(600);

When I print the XML-info for the outlet on the sending machine, everything seems to be correct:

<?xml version="1.0"?>
<info>
    <name>SimpleStream</name>
    <type>EEG</type>
    <channel_count>90</channel_count>
    <nominal_srate>1000</nominal_srate>
    <channel_format>double64</channel_format>
    <source_id>DUMMY_ID</source_id>
    <version>1,1000000000000001</version>
    <created_at>340,01528632399999</created_at>
    <uid>c7a69651-42af-4749-ae12-706dbae31731</uid>
    <session_id>default</session_id>
    <hostname>schrottpad</hostname>
    <v4address></v4address>
    <v4data_port>16572</v4data_port>
    <v4service_port>16572</v4service_port>
    <v6address></v6address>
    <v6data_port>16573</v6data_port>
    <v6service_port>16573</v6service_port>
    <desc />
</info>

The XML for the detected stream_info on the receiving machine is completely useless:

<?xml version="1.0"?>
<info>
    <name></name>
    <type></type>
    <channel_count>0</channel_count>
    <nominal_srate>0</nominal_srate>
    <channel_format>undefined</channel_format>
    <source_id></source_id>
    <version>0</version>
    <created_at>0</created_at>
    <uid></uid>
    <session_id></session_id>
    <hostname></hostname>
    <v4address>192.168.178.120</v4address>
    <v4data_port>0</v4data_port>
    <v4service_port>0</v4service_port>
    <v6address>2003:d1:ef0c:5700:d42c:6e43:2908:3998</v6address>
    <v6data_port>0</v6data_port>
    <v6service_port>0</v6service_port>
    <desc />
</info>

I have already played around with the lsl_api.cfg file, but I was not able to resolve the issue. I remember that during the compilation of liblsl, GCC printed a warning about several outdated headers, including one dealing with endianness.

I am running out of ideas, so I hope you can help me.
Thanks in advance, sheinke.

Build the lib under x64

Hi, it seems we only have a CMake config for building x86. Is there a plan to update it for x64, or did I miss something?

New distribution channels

It would be great if liblsl could be installed simply with a one-liner on Mac and Linux platforms, and with an installer on Windows.

  • Debian: apt-get install liblsl
  • Mac: brew install labstreaminglayer/tap/lsl
  • Windows: chocolatey? NuGet? winget? Just a good installer? Please not vcpkg.
  • conda: conda install -c conda-forge liblsl
  • Android - pydroid needs "python-for-android" recipe

Debian / Ubuntu

The ultimate goal is to get liblsl into the official list of managed Debian packages. But that might take some time and requires mentorship and sponsorship.

In the interim, we can set up our own PPA that users can add; then they can install and update liblsl like any other package. What's the equivalent for Raspbian?

Mac

We now have our own Tap: https://github.com/labstreaminglayer/homebrew-tap. This has a formula for liblsl and LabRecorder. We can add other formulae too. It would be nice if LabRecorder was a cask so it was added to /Applications, but that can come later.

Would we ever want to request to move the liblsl formula to homebrew/core? They've deprecated homebrew/science. They have brewsci/science but I'm not sure this is better for us than labstreaminglayer/tap.

Windows

The benefits of putting LSL into a package manager on Windows are small relative to the other platforms because the available package managers (in their simplest use case) don't put libraries on the PATH. So this would only benefit developers. That isn't a bad goal, but liblsl is already pretty easy to get for any Windows developer, so the gains here are small(er).

Discussion on NuGet package: labstreaminglayer/liblsl-Csharp#17

I think we should work toward making an installer (.exe or .msi) and install to Program Files. Once that's in place then we can think about distribution via chocolatey or winget, both of which can be used as an installer delivery service.

Conda

There's already a conda-build file thanks to @tstenner. I think Tristan has a conda channel set up, but we should probably make a labstreaminglayer org conda channel; then we can manage multiple applications there without hijacking Tristan's credentials.

Or even better would be to get it in conda-forge: conda-forge/staged-recipes#15252

Windows, some configs produce liblsl64.dll, but most produce lsl.dll

Fresh clone.

Visual Studio Version                          CMake                                        Result
Visual Studio 2017 Version 15.9.18             integrated cmake                             liblsl64.dll
x64 Native Tools Command Prompt for VS 2017    cmake .. -G "Visual Studio 15 2017 Win64"    lsl.dll
Visual Studio 2019 Version 16.4.4              integrated cmake                             lsl.dll
x64 Native Tools Command Prompt for VS 2019    cmake .. -G "Visual Studio 16 2019" -A x64   lsl.dll

Windows, cmake `--target package` puts package in {root}/package

I realize that there have been a bunch of changes to the cmake files to help with CI builds. It's an iterative process and I'm just making a note of what's currently broken.

D:\Tools\Neurophys\labstreaminglayer\LSL\liblsl\build_win>cmake --build . --config Release -j --target package

CPack: - package: /package/liblsl-1.14.0-Win64.zip generated.
CPack: - package: /package/lsltests-1.14.0-Win64.zip generated.

This ends up in my D:\package folder.

I guess we should have a more sane package directory.

[For Discussion] SPoT for clock synchronization?

During the BCI unConference today, Pierre Clisson gave a presentation on Timeflux.io. At the end, under future work, he mentioned that he thinks SPoT, a clock synchronization approach for IoT, might be better than the NTP-like approach used in LSL. For those of you who are used to looking at these kinds of things, here is its paper.

I don't know how long the talk will stay online, but for now you can watch it here. Pierre Clisson's section starts at 1:25:45.

If you watch all the way to the end, you can see that I asked him about data formats. I didn't ask explicitly about XDF, but he brought it up, unable to remember the name, and said that he didn't like it because it depended on XML. 🤷 Of all the reasons to not like XDF, that one surprised me.

Error using chol Matrix must be positive definite on liblsl 1.14 + LabRecorderCLI

Environment Info

debian@sr-imx8:~/liblsl-build/liblsl/build/install/bin$ ./lslver
LSL version: 114
git:v1.14.0b4-2-g1eaaf08c/branch:master/build:/compiler:GNU-10.2.0/link:shared

Tried both LabRecorderCLI tagged v1.13.1 and the latest from [master] branch.

Issue

When using LSL v1.14 with LabRecorder (master branch from https://github.com/labstreaminglayer/App-LabRecorder), the following error occurred in 1 out of 5 XDF recordings. It is really hard to reproduce, and a Google search points me to labstreaminglayer/App-LabRecorder#15, which mentioned a clock offset bug introduced in liblsl 1.13. Is it still unsolved, and did it somehow creep into 1.14?

I'm running the consumer against the sample code SendDataC. I also make sure to press 'Enter' to correctly close the file, as follows:

debian@sr-imx8:~$ LabRecorderCLI SendDataC4.xdf 'type="EEG"'
Found SendDataC@sr-imx8 matching 'type="EEG"'
Starting the recording, press Enter to quit
2020-10-16 00:09:39.348 (   1.003s) [        AE0941C0]             common.cpp:50    INFO| git:v1.14.0b4-2-g1eaaf08c/branch:master/build:/compiler:GNU-10.2.0/link:shared
Opened the stream SendDataC.
Received header for stream SendDataC.
Started data collection for stream SendDataC.

Offsets thread is finished
Wrote footer for stream SendDataC.
Closing the file.

Not sure if I should downgrade my liblsl to 1.12 or even earlier.

Release planning for 1.13

The last release was over two years ago, and even though there was nothing groundbreaking (see the changelog and add anything I missed), there are several small improvements and fixes, without which compilation breaks on ARM devices.

So I'd propose the following plan:

Release 1.13

  • several bugfixes and smaller improvements
  • last version to support VS2008 and retain ABI compatibility with 1.12 and below
  • hopefully more robust clock offset handling :-)

The release after that

  • require C++14
  • drop support for the old Boost subset and gradually remove Boost dependencies wherever possible
  • not ABI/API compatible (#6) with 1.13 and below on 64-bit Windows

That leaves two questions: 1) is there anything else that has to be included in 1.13, and 2) what kind of integration tests can everyone provide?

StreamInfo ctor does not always return a handle on Win 64 with C#

For many days I have been looking at a strange problem which I suspect has something to do with threading. The setup is this: liblsl 1.13 64-bit, Windows 10, source code in C# on .NET 3.5 with Visual Studio 2008.
In our Windows Forms application, an LSL outlet is to be created on a background thread with a Single-Threaded Apartment (STA). We use this method (simplified for the purpose of this description):

private void DoConnect()
{
	// we keep a global reference to the outlet object
	if (mOutlet != null)
	{
		// we have added the IDisposable interface to the C# API
		mOutlet.Dispose();
		mOutlet = null;
	}
	
	// TEST MARK 1
	
	// Note: we have renamed StreamInfo to LSLStreamInfo in the LSL API
	liblsl.LSLStreamInfo outStreamInfo = new liblsl.LSLStreamInfo(
		mUniqueStreamName,
		mStreamType,
		mChannelCount,
		mSampleRate,
		liblsl.channel_format_t.cf_double64,
		mSourceID);
	
	// TEST MARK 2
	
	// Sometimes creating the outlet fails. It appeared, that 'outStreamInfo' did
	// not have a valid Handle. Therefore test if the object handle is valid.
	if (outStreamInfo.Handle != IntPtr.Zero)
	{
		// internal helper to construct the XML header
		LSLHelpers.SignalInfoToXMLElement(outStreamInfo.desc(), mMyChannels);
		Trace.WriteLine("LSL: create new outlet:\n" + outStreamInfo.as_xml());
		mOutlet = new liblsl.StreamOutlet(outStreamInfo);

		// Success!				
	}
	else
	{
		Trace.WriteLine("LSL Outlet: PROBLEM: Handle StreamInfo null");
		
		// Fail... :-(
	}
}

The first call to DoConnect() for the lifetime of the application runs okay. On some computers, the second call fails. The third time it always fails. What happens is that the LSL StreamInfo constructor returns without a handle. In the LSL framework, an exception must have occurred, so that no object handle is returned. Strangely enough, looking at the LSL source code, the StreamInfo ctor parameters should not lead to an exception.

But why is the object handle null? Why does it work once or twice, and after that no StreamInfo can be created anymore...?

I have tried a number of things. Firstly, I put the above code and references in a simple console app. From the main thread, but also from a background thread, I try to create multiple outlets. No problem. That works!

Then I tried to call DoConnect() in the above code in the intended application from a ThreadPool thread. Now something strange happens: if I call DoConnect() from 4 ThreadPool threads that are started shortly after each other (I mean within microseconds), I get 4 outlets! If I try to add another one some time later in the same application instance, the call fails.

In the debugger, something unexpected happens as well: if I put a breakpoint directly after TEST MARK 1 and from there step to the next line (after TEST MARK 2), I always get a handle in the StreamInfo (so outlet creation succeeds). However, if I break on TEST MARK 1 and then continue normal program execution from there, often no handle is returned.

I assume there is some threading problem, but I cannot find any hints as to what is wrong there.

Maybe one of you guys can help! Keep up the good work!

inlet max_chunklen and pull_chunk max_samples not working as described

I'm using the pylsl wrapper, but there's nothing Python-specific about this problem (unless it's passing arguments out of order?)

Here's the description of max_chunklen in the pylsl documentation:

max_chunklen -- Optionally the maximum size, in samples, at which 
                        chunks are transmitted (the default corresponds to the 
                        chunk sizes used by the sender). Recording programs  
                        can use a generous size here (leaving it to the network 
                        how to pack things), while real-time applications may 
                        want a finer (perhaps 1-sample) granularity. If left 
                        unspecified (=0), the sender determines the chunk 
                        granularity. (default 0)

and the kwargs for pull_chunk

timeout -- The timeout of the operation; if passed as 0.0, then only 
                   samples available for immediate pickup will be returned. 
                   (default 0.0)
max_samples -- Maximum number of samples to return. (default 
                       1024)

For a real-time application, I want to get new samples as fast as possible so I'm setting max_chunklen=nextpow2(srate/60) which for the 44 kHz audio app turns out to be 1024.

But, in case my real-time application has a hiccup, I want to make sure I can pull as much data as are waiting so I can catch up quickly, so I'm setting max_samples=8192.

My expectation is that if my application is running quickly then most of my pull_chunk operations will return 0 or 1024 samples. If there's a hiccup then I might get anywhere between 1024 and 8192.

What I end up getting is mostly 0, with the occasional 4096 chunk (sometimes the 4096 chunk is split across 2 pull_chunk operations), e.g.:

4096, 0,0,0,0,0,0,4096,0,0,0,3940,156,0,0,4096

I take this to mean that max_chunklen isn't working as I expected. This happens for max_chunklen values of 8, 16, and up to 8192. If I go much higher (e.g. 16384), then I end up getting clusters of 8192 (max_samples) with 0s in between.

If I set max_chunklen=1, then it does work closer to what I expected:

9,0,1047,1271,1539,239,0,0,434,0,1285,

Here is where max_chunklen_ actually does anything:
server_stream << "Max-Chunk-Length: " << max_chunklen_ << "\r\n";

Received on the other end here.

for (auto &c : hdrline) c = ::tolower(c);
std::string type = trim(hdrline.substr(0, colon)),
		            rest = trim(hdrline.substr(colon + 1));
if (type == "max-chunk-length") chunk_granularity_ = std::stoi(rest);

And ultimately:
if (chunk_granularity_) samp->pushthrough = (((++seqn) % (uint32_t)chunk_granularity_) == 0);

This all looks OK. So what's going on?
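
To make the quoted flush logic concrete, here is a toy model of it (my own sketch, not library code):

#include <cstdint>
#include <vector>

// Mirrors `samp->pushthrough = (((++seqn) % chunk_granularity_) == 0)` for a
// non-zero granularity: only every chunk_granularity-th sample is flagged to
// be pushed through, so data leaves the outlet in bursts of that size.
std::vector<bool> pushthrough_pattern(uint32_t chunk_granularity, uint32_t n_samples) {
    std::vector<bool> flush(n_samples, false);
    uint32_t seqn = 0;
    for (uint32_t i = 0; i < n_samples; ++i)
        flush[i] = ((++seqn) % chunk_granularity) == 0;
    return flush;
}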

macOS fails to join IPv6 multicast groups

Output of ifconfig:

lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
        options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
        inet 127.0.0.1 netmask 0xff000000 
        inet6 ::1 prefixlen 128 
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 
        nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
EHC250: flags=0<> mtu 0
EHC253: flags=0<> mtu 0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=b<RXCSUM,TXCSUM,VLAN_HWTAGGING>
        ether c8:bc:c8:96:ec:2a 
        inet6 fe80::1c83:5086:dc41:c9ec%en0 prefixlen 64 secured scopeid 0x6 
        inet6 2001::1 prefixlen 72 
        inet XXX.X.XX.40 netmask 0xfffffc00 broadcast 172.22.15.255
        nd6 options=201<PERFORMNUD,DAD>
        media: autoselect (100baseTX <full-duplex,flow-control>)
        status: active

With the default setting of 0 for ipv6mr_interface in the IPV6_JOIN_GROUP request struct, joining the group(s) fails with error 49 (Can't assign requested address) and no IPv6 multicast packets are received.

After changing it to 6 (the interface index), the multicast join works without a hitch and the multicast packets are received.
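
In C/C++ terms, the fix amounts to something like this sketch (the interface name is just the one from this machine; liblsl would need to pick or configure the right index):

#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <sys/socket.h>

// Join an IPv6 multicast group on a specific interface instead of index 0.
bool join_ipv6_group(int sock, const char *group, const char *ifname) {
    ipv6_mreq mreq{};
    if (inet_pton(AF_INET6, group, &mreq.ipv6mr_multiaddr) != 1) return false;
    mreq.ipv6mr_interface = if_nametoindex(ifname); // e.g. 6 for "en0" above
    return setsockopt(sock, IPPROTO_IPV6, IPV6_JOIN_GROUP, &mreq, sizeof(mreq)) == 0;
}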

Test code:

import asyncio
import socket
import struct

addrs = {socket.AF_INET: ['239.0.0.183'],
        socket.AF_INET6: ['FF02:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2']
}
class MulticastListener(asyncio.DatagramProtocol):
    def __init__(self, name):
        self.transport = None
        self.name = name

    def datagram_received(self, data, addr):
        if data.startswith(b'LSL:shortinfo'):
            _, query, returnaddr = data.splitlines()
            returnaddr, queryid = returnaddr.split(b' ')
            print(f'{self.name} received {query} from {addr} -> {returnaddr}:')

    def error_received(self, exc):
        print('Error received:', exc)

    def connection_lost(self, exc):
        asyncio.get_event_loop().stop()

    @staticmethod
    def server_socket(family):
        mcastaddrs = addrs[family]
        # (_, _, _, _, sockaddr) = socket.getaddrinfo(mcastaddrs[0], MCastHelper.port, family, socket.SOCK_DGRAM)[0]

        s = socket.socket(family, socket.SOCK_DGRAM)
        # set SO_REUSEADDR once (guarded in case the platform lacks the constant)
        if hasattr(socket, 'SO_REUSEADDR'):
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        if family == socket.AF_INET:
            s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
        elif family == socket.AF_INET6:
            s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_HOPS, 2)

        s.bind(('', 16571))
        for group in mcastaddrs:
            try:
                binaddr = socket.inet_pton(family, group)
                if family == socket.AF_INET:
                    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, binaddr + struct.pack('=I', socket.INADDR_ANY))
                elif family == socket.AF_INET6:
                    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, binaddr + struct.pack('@I', 0))
                print(f'Bound IP family {family} socket to {group}')
            except OSError as e:
                print(f'Error joining socket to group {group}: {e.errno}, {e.strerror}')
        return s


async def main():
    loop = asyncio.get_running_loop()
    tasks = []
    for name, family in [('IPv4', socket.AF_INET), ('IPv6', socket.AF_INET6)]:
        tasks.append(await loop.create_datagram_endpoint(lambda: MulticastListener(name=name),
                                                         sock=MulticastListener.server_socket(family)))
    try:
        await asyncio.sleep(60)
    finally:
        for transport, protocol in tasks:
            transport.close()

if __name__ == '__main__':
    asyncio.run(main())

[REQ] Incorporate clock device<->host offsets

There's been a review of LSL in a discussion here that pointed out one problem: the latency between the inlet and outlet is well accounted for, but there's also some varying latency between the actual device and the host.

For directly connected devices, this latency is typically quite small and has almost no jitter. Wireless connections to devices (i.e. Bluetooth and WiFi), on the other hand, can add a significant amount of jitter. In an ideal world, LSL would run directly on the device, but the requirement to store up to 6 minutes of data per client in case it reconnects is quite steep, and (other than on some ESP32 devices?) the C++/ASIO support is limited at best.

One way to work around this would be a separate channel that transmits the round-trip times from the host to the device and (if present) offsets (the TobiiPro devices calculate their own clock offsets that, on Windows and Linux at least, use the same OS clock as LSL, but others might reference an entirely different hardware clock). That way some of the variance in the timestamps could be accounted for, and if the device has its own timestamps, these could be used with LSL.
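
To make the idea concrete, a rough sketch of such a side channel using the existing C++ API (stream name, type and layout are entirely ad hoc, not a proposed standard):

#include <lsl_cpp.h>
#include <vector>

int main() {
    // Irregular-rate stream carrying (host_time, device_time) pairs measured
    // whenever the acquisition app gets a round-trip estimate from the device.
    lsl::stream_info info("MyDevice_offsets", "ClockOffset", 2, lsl::IRREGULAR_RATE,
                          lsl::cf_double64, "mydevice_offsets_001");
    lsl::stream_outlet out(info);

    double host_time = lsl::local_clock(); // LSL/host clock
    double device_time = 0.0;              // placeholder: read from the device SDK
    out.push_sample(std::vector<double>{host_time, device_time});
    return 0;
}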

Github Actions Ubuntu 18.04 produces broken .deb packages

AppVeyor is using a new compiler that has dependencies that don't exist on Ubuntu 18.04 by default.

(Reading database ... 338602 files and directories currently installed.)
Preparing to unpack liblsl.deb ...
Unpacking liblsl (1.14.0) over (1.14.0) ...
dpkg: dependency problems prevent configuration of liblsl:
 liblsl depends on libgcc-s1 (>= 3.0); however:
  Package libgcc-s1 is not installed.
dpkg: error processing package liblsl (--install):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 liblsl

This can be worked around by installing the more modern dependencies on my older OS:

sudo apt update
sudo apt --fix-broken install
cd ~/Desktop/
wget http://mirrors.kernel.org/ubuntu/pool/main/g/gcc-10/gcc-10-base_10-20200411-0ubuntu1_amd64.deb http://mirrors.kernel.org/ubuntu/pool/main/g/gcc-10/libgcc-s1_10-20200411-0ubuntu1_amd64.deb
sudo apt install ./gcc-10-base_10-20200411-0ubuntu1_amd64.deb ./libgcc-s1_10-20200411-0ubuntu1_amd64.deb

Then sudo dpkg -i liblsl.deb works as expected.

A better medium-term solution, as suggested by @tstenner, might be to specify a more compatible compiler.

For reference, I recently set up libxdf to build on GitHub Actions and the resulting .deb file didn't have any problems on my Ubuntu 18.04.

Recover a stream with the same uid

Minimal non-working example:

lsl::stream_outlet outlet(lsl::stream_info("resolvetest", "from_streaminfo"));
lsl::stream_inlet in(outlet.info());
in.open_stream(2);

At this point, the stream info has the service ports set, but not the IP address (after all, the host could have multiple IP addresses) so the recovery kicks in and searches for a stream named "resolvetest".

This fails because the stream's uid is known, so the recovery process is aborted as soon as the stream is found:

for (std::size_t k = 0; k < infos.size(); k++)
    if (infos[k].uid() == host_info_.uid())
        return; // in this case there is no need to recover (we're still fine)

Possible solutions:

  • unset the UID when constructing an inlet from a stream info
  • (re)connect to the stream anyway; not a good idea in case the between-sample interval is bigger than the watchdog interval
  • add a force_reconnect parameter to try_recover
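
Independent of which option is picked, a user-level workaround sketch for the example above (assuming the stream can be resolved by name):

#include <lsl_cpp.h>
#include <vector>

void open_by_name() {
    // Resolve by name instead of reusing outlet.info(), so the inlet starts
    // from a fully populated stream_info (address included) and never needs
    // the recovery path shown above.
    std::vector<lsl::stream_info> results = lsl::resolve_stream("name", "resolvetest", 1, 2.0);
    if (!results.empty()) {
        lsl::stream_inlet in(results[0]);
        in.open_stream(2);
    }
}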

Where is the "LSLTargets.cmake"?

When I tried to build LSL, I got a CMake error.
(I followed this instruction: https://github.com/sccn/labstreaminglayer/wiki/INSTALL, in-tree builds.)

CMake Error at LSL/liblsl/cmake/LSLConfig.cmake:1 (include):
include could not find load file:

D:/labstreaminglayer/LSL/liblsl/cmake/LSLTargets.cmake

But there is no LSLTargets.cmake in liblsl.

Here's the code in LSLConfig.cmake.

include("${CMAKE_CURRENT_LIST_DIR}/LSLTargets.cmake")

What can I do to solve it?

Thanks,

samples_available does not return the number of samples, only 1 or 0.

According to the docstring, samples_available() is supposed to return the number of samples in the buffer.

/**
* Query the current size of the buffer, i.e. the number of samples that are buffered.
* Note that this value may be inaccurate and should not be relied on for program logic.
*/
std::size_t samples_available() { return (std::size_t)(!data_receiver_.empty()); }

But it actually returns (std::size_t)(!data_receiver_.empty()), and data_receiver_.empty() just returns the boolean sample_queue_.empty():

bool empty() { return sample_queue_.empty(); }

So samples_available() will only ever return 0 or 1.
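
Until the implementation (or the docstring) is changed, callers can only rely on the boolean meaning, e.g.:

#include <lsl_cpp.h>

// Treat the return value as a flag: non-zero means at least one sample is
// buffered; the exact count is not available.
bool has_data(lsl::stream_inlet &inlet) { return inlet.samples_available() > 0; }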

Drop 32/64 postfix from library names

The distinction between liblsl32 and liblsl64 was useful when there were only two platforms (32- and 64-bit), but nowadays it's more complicated: liblsl32.so could be any x86/ARM/MIPS binary compiled for Android, for Linux with a static libc embedded, or for a specific distribution.

On Windows, there are fewer ARM devices, but both the traditional Windows SDK and UWP binaries get the same name.

On the other hand, every language binding has a block of code to determine the bitness and the correct library name. For Python, we're starting to ship liblsl in the Python package for a specific version and platform (e.g. Python 3.6 on an x86_64 distribution newer than CentOS 6), so we already bundle the liblsl binary with the package we need.

Since we're planning an ABI break for the version after 1.13, we might as well name all libraries liblsl.so / lsl.dll / liblsl.dylib, or go overboard and put the target system in the filename, e.g. liblsl-x86-win32.dll, liblsl-arm7-android-ndk17.so and so on.

(Edit by Chad to change liblsl.dll to lsl.dll, following comment below)

MinGW builds require additional libraries

Additional libraries are required to enable the build of liblsl, with unit tests, under MinGW. The resulting library and executables test fine on x64, but not on x86.

I do not really grok CMakeLists, and I note that you prefer an inline test for MinGW rather than a long form if( MINGW ) in your build tooling, so I am attaching the diffs (which are against the 1.13 release branch) rather than opening a PR.

liblsl_mingw.diff.txt
lsl_test_internal_mingw.diff.txt

7zip is annoying

I was helping someone via remote control. I went to get the latest liblsl and was faced with a 7zip file that I couldn't open on their computer. Asking them to install 7zip because we're too cool for zip is annoying.

Word size clarification

I am currently finishing up an LSL wrapper for Julia (https://github.com/samuelpowell/LSL.jl) which is based on an autogenerated wrapper of your C API. The library is cross compiled for lots of different architectures (https://github.com/samuelpowell/LSLBuilder) and things are working well on Linux (x64) and Mac, and we're about to test ARMv7 and ARMv8 support.

I have a query regarding data types: I note that liblsl uses int and long (e.g. lsl_push_sample_i and lsl_push_sample_l).

  1. On 32-bit UNIX-like architectures, both data types are 32 bits wide. Does this mean that you do not support 64-bit integer channels on 32-bit architectures?

  2. On 32- and 64-bit Windows, both data types are 32 bits wide. Does this mean there is no support whatsoever for 64-bit integer channels on Windows?

It may be that you are doing some sort of preprocessor magic to redefine the _l-suffixed functions to use int64_t, but I couldn't find it. Any clarification would be helpful so that I can guard the wrapper functions by architecture.
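
For context, a small compile-time sketch of the distinction I'd like to guard against in the wrapper (my own illustration, not liblsl code):

#include <climits>
#include <cstdint>

// lsl_push_sample_i takes int (32 bits on the platforms in question), while
// lsl_push_sample_l takes long, whose width depends on the data model:
// LP64 (Linux/macOS x64, 64-bit ARM Linux) has a 64-bit long, whereas
// LLP64 (32-/64-bit Windows) and ILP32 (32-bit UNIX) have a 32-bit long.
constexpr bool long_is_64bit = sizeof(long) * CHAR_BIT == 64;
static_assert(sizeof(int) * CHAR_BIT == 32, "wrapper assumes a 32-bit int");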

[macOS] 100% cpu load when running sample app.

I'm using the latest release https://github.com/sccn/liblsl/releases/tag/1.13.0-b13 and include the .dylib in my Xcode project as an embedded framework.

I created a bridging header in order to be able to use the C library from Swift code.

I can use the library and invoke lsl_* functions so I assume the lib is linked and included properly in my app target.

However, when I invoke lsl_resolve_all, the function does not return and the app consumes 100% CPU.

I understand that the latest release targets macOS 10.12. I'm running 10.15(.1). Is that the problem? If so, why do some functions of the lib work just fine while lsl_resolve_all hangs the app? Any idea on how to resolve this?
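
For reference, this is roughly how I understand lsl_resolve_all is meant to be called through the C API (buffer size and cleanup here are my own choices):

#include <cstdint>
#include <lsl_c.h>

// Resolve all currently visible streams, waiting at most ~2 seconds.
int count_streams() {
    lsl_streaminfo results[32];                        // arbitrary upper bound
    int32_t found = lsl_resolve_all(results, 32, 2.0); // wait_time in seconds
    for (int32_t i = 0; i < found; ++i)
        lsl_destroy_streaminfo(results[i]);            // free the returned handles
    return found;
}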

Possible to use localhost/127.0.0.1 as KnownPeers in lsl_api.cfg?

Hello,

I'm trying to use the settings in lsl_api.cfg; however, I'm not sure that they are actually read.
Is there a way to check whether the settings are being loaded?

Finally, I want to be able to run LSL without any real network interfaces up.
Is it possible to use only the localhost address 127.0.0.1 to send and receive packets?

Thank you in advance

Outlet connection fails ("received test-pattern samples do not match the specification")

Hi SCCN,
I'm just following up on this thread, even though the problem is a bit different.
When connecting from macOS Catalina (10.15.4) to a Windows 10 based LSL stream, I get the following error:
data_receiver.cpp:340 ERR| Stream transmission broke off (The received test-pattern samples do not match the specification. The protocol formats are likely incompatible.); re-connecting...

This only happens if I connect across operating systems.
Any idea what I'm doing wrong?

Originally posted by @MichaelUM in sccn/labstreaminglayer#57 (comment)

lsl_continuous_resolver failing to find streams that resolve_streams could find.

After debugging an issue in Slack with users of LSL4Unity, we figured out that the lsl_continuous_resolver was failing to find a marker stream that was easily found with resolve_streams. This only happened when the resolver was trying to find a stream from a different computer on the network; there was no issue when everything was on the local computer.
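
For reference, a minimal sketch of the two resolution paths being compared (the "Markers" predicate is just an example):

#include <lsl_cpp.h>
#include <vector>

void compare_resolvers() {
    // One-shot resolution: this found the remote marker stream.
    std::vector<lsl::stream_info> all = lsl::resolve_streams(1.0);

    // Continuous resolution: polled like this (after giving it a moment),
    // it failed to report the same stream when it lived on another machine.
    lsl::continuous_resolver resolver("type", "Markers");
    std::vector<lsl::stream_info> found = resolver.results();
}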

Add support for building a liblsl binary for the new architecture of Notion 2

Hi Liblsl team

We've had success running LSL natively and enabling it with a toggle button on the Notion 1. We're ramping up to ship the Notion 2 but don't have access to the proper binary.

We're using a 64-bit ARM core and need a .so.

What do we need to do to get the binary automatically built by liblsl?

Thanks,

AJ

[REQ] TCP-based service discovery + WebSockets

Hello,

I'm attempting to build a browser plugin (WASM) for LSL for my research work, but I've encountered a few issues that are preventing me from doing so.

  1. It appears that LSL uses UDP+multicast for service discovery. Unless I go the WebRTC route, I'm stuck implementing this at browser level. Is there any work to build TCP-based service discovery into LSL?

  2. LSL communicates data through plain sockets and currently does not support WebSockets. Is there any work on adding WebSocket support to it?

Unless I'm mistaken, it's not possible to use LSL in a browser environment without these features. (For WebSockets, however, we could temporarily use the WebSockets proxy included in Emscripten, but it seems very hacky.)

Any ideas / suggestions around this?

Require CMake 3.12

Right now, there's an option to also build a static liblsl, compiling each source file twice, once for the shared library and once for the static library.

Starting with CMake 3.12, we could build an object library and use that to create both a shared and a static library.
Homebrew packages CMake 3.13, Ubuntu 18.10 has CMake 3.12, and Windows users will install the latest CMake anyway.

The CI systems and Linux users will need to install CMake manually, e.g. like this:

CMAKE_URL="https://cmake.org/files/v3.12/cmake-3.12.4-Linux-x86_64.tar.gz"
mkdir cmakebin && wget --quiet -O - ${CMAKE_URL} | tar --strip-components=1 -xz -C cmakebin
export CMAKE_EXE=$PWD/cmakebin/bin/cmake
$CMAKE_EXE --version

In order to make it easier for new Linux developers, I'd revive the build.sh script I once had in the repository so that it checks the installed CMake version and, if it's too old, downloads and uses a recent CMake build in the background.

We already have to do this on Linux for the Python and the Ubuntu 16.04 builds, so we might as well include Ubuntu 18.04 and simplify the whole shared / static situation in one go.

CMake export - "LSLConfig.cmake" or "liblslConfig.cmake"?

CMake's find_package in config mode looks for a file named <PackageName>Config.cmake.

In the template application, cmake/Findliblsl.cmake calls find_package(LSL ...).

So it's looking for LSLConfig.cmake

In liblsl, the name of the exported file is set here:
https://github.com/sccn/liblsl/blob/master/CMakeLists.txt#L230-L233

which becomes liblslConfig.cmake

Note that the template application's parent find_package call is find_package(liblsl).

For consistency, I think we should make the template application call find_package(LSL), rename the find module to FindLSL.cmake, and also change the export name to LSLConfig.cmake.

Thoughts?

manylinux binaries - how to make them available to python setup?

Sorry, I can't remember what the latest status was on the manylinux shared objects necessary for pylsl.

We have an azure-pipelines file that is building them:
https://github.com/sccn/liblsl/blob/master/azure-pipelines.yml

But the .so files aren't being attached to releases. The last release to have them is 1.13.0-b13. Since then, the manylinux .so files are missing from 1.13.0-b14, 1.13.1, and the 1.14-bXXs. The latter is maybe because they are 'pre-releases'. The 1.13.1 release maybe didn't upload because the Azure build failed due to an old macOS version.

I just re-triggered a pipeline build on release 1.13.1; this failed due to the macOS version, so I downloaded the manylinux artifacts. Edit: it turns out I can upload the .so files.

I've published a new pylsl release. I hope it doesn't break anything!
labstreaminglayer/pylsl#6

iOS support in CMake

A user in the Slack channel indicated that they tried the old unmaintained Xcode project and that it didn't work.
I suggested trying the CMake build.
They made some progress here but had to hack inside the generated files to fix some settings.

I edited the project.pbxproj and made lslver product-type.library.dynamic, and that succeeded, although I'm not sure that's the right way to do it

There was a bunch of stuff in the build settings under linking that had "x86_64" in the path name, so I replaced those with arm64 and it appears to have succeeded in building the library. I'm trying to test it in a single-view iPhone app now

LSLConfig.cmake forces default install folder to be ${CMAKE_BINARY_DIR}/install

Hi,

I just faced an issue with LSLConfig.cmake, which always includes LSLCMake.cmake, which in turn forces CMAKE_INSTALL_PREFIX to point to ${CMAKE_BINARY_DIR}/install when no value is passed to it. This behavior affects my project too, which imports LSL using the find_package utility of CMake, and thus the default installation folder becomes ${CMAKE_BINARY_DIR}/install for my project as well.

Would it be possible to leave CMAKE_INSTALL_PREFIX pointing to its default value when no other value is assigned to it?

Enable IPv6 for macOS

In api_config.cpp, IPv6 is disabled on Macs by default:

		std::string ipv6_str = pt.get("ports.IPv6",
#ifdef __APPLE__
		"disable"); // on Mac OS (10.7) there's a bug in the IPv6 implementation that breaks LSL when it tries to use both v4 and v6

In the Git history, this file was changed twice: first when @dmedine copied the source code to GitHub, and then again when the formatting was changed automatically.
So I have no idea what the problem was, whether it still happens on more recent macOS versions, or whether we can remove the workaround.

Replace src/endian with boost endian

Duplicate of https://github.com/sccn/labstreaminglayer/issues/289

Endian conversion is done by conversion functions in src/endian, copied from an older Boost version.
Newer Boost versions use intrinsics for better performance, especially on weaker devices (e.g. ARM).

Benchpress shows the following results (on a recent i5):

endian conversion src/endian		500000000           3 ns/op
endian conversion src/endian inplace	100000000          10 ns/op
endian conversion boost			1000000000           0 ns/op
endian conversion boost inplace		1000000000           0 ns/op

The benchmark code:

#define BENCHPRESS_CONFIG_MAIN
#include "../src/endian/conversion.hpp"
#include "benchpress.hpp"
#include <boost/endian/conversion.hpp>
#include <iostream>

/// Measure the endian conversion performance of this machine.
BENCHMARK("endian conversion src/endian\t\t", [](benchpress::context* ctx) {
	double data = 12335.5+ctx->num_iterations();
	for (size_t i = 0; i < ctx->num_iterations(); ++i) 
		lslboost::endian::reverse_value(data);
	benchpress::escape(&data);
})
BENCHMARK("endian conversion src/endian inplace\t", [](benchpress::context* ctx) {
	double data = 12335.5+ctx->num_iterations();
	for (size_t i = 0; i < ctx->num_iterations(); ++i) 
		lslboost::endian::reverse(data);
	benchpress::escape(&data);
})

BENCHMARK("endian conversion boost\t\t", [](benchpress::context* ctx) {
	double data = 12335.5+ctx->num_iterations();
	for (size_t i = 0; i < ctx->num_iterations(); ++i) 
		boost::endian::endian_reverse_inplace((uint64_t&) data);
	benchpress::escape(&data);
})

BENCHMARK("endian conversion boost inplace\t\t", [](benchpress::context* ctx) {
	double data = 12335.5+ctx->num_iterations();
	for (size_t i = 0; i < ctx->num_iterations(); ++i) 
		boost::endian::endian_reverse_inplace(*((uint64_t*) &data));
	benchpress::escape(&data);
})
//double measure_native_endian_performance() {

//}

BENCHMARK("correctness test", [](benchpress::context* ctx) {
	double data = 12335.5;
	std::cout << std::hex
	          << data << '\t';
	boost::endian::endian_reverse_inplace(*((uint64_t*) &data));
	std::cout << data << '\t';
	lslboost::endian::reverse(*((uint64_t*) &data));
	std::cout << data << '\t';
	*((uint64_t*) &data) = lslboost::endian::reverse_value(*((uint64_t*) &data));
	std::cout << data << '\t';
	*((uint64_t*) &data) = boost::endian::endian_reverse(*((uint64_t*) &data));
	std::cout << std::endl;
})

Don't build as a shared library if LSL_BUILD_STATIC is set

Currently, enabling LSL_BUILD_STATIC allows building LSL as a static library, but the names of the generated libraries are different (-static suffix) and shared builds still get created. The latter is an issue for the vcpkg port because vcpkg poses quite strong restrictions on what should be part of the package, and it does not allow shared libraries to be part of it when static builds are enabled. Also, simply deleting the shared library files does not work because there are references to them in the generated CMake targets. For this reason, the vcpkg port only supports LSL as a shared library at the moment.

Also, for Conan, builds are usually either static or shared, and there is no real use case for building both at the same time.

My suggestion is to change the behavior of LSL_BUILD_STATIC to disable building shared libraries completely (or simply use the pre-defined BUILD_SHARED_LIBS variable in CMake). Then we also don't need the -static suffix, and the CMakeLists.txt should become simpler overall. It would allow me to support static builds in vcpkg and remove one of the hurdles to creating a Conan package.

Require C++11 for the C++ API

The C++ API has wrapper types with lots of custom / disabled copy operations / destructors for the opaque C API handles.
With C++11, it's possible to store the handles in shared_ptrs with custom deleters, so copying, moving and destroying objects is all handled by the compiler.

Another benefit is that handing out raw handles (see #105) is perfectly safe:

shared_ptr<lsl_struct_outlet_> raw_handle = stream_outlet(…).get_raw_handle();
// the stream outlet goes out of scope
lsl_push_foo(raw_handle.get());
// the handle to the outlet is still valid, until raw_handle goes out of scope, too.
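
A minimal sketch of the wrapper pattern this enables (the C type and destroy-function names follow my reading of lsl_c.h and should be treated as assumptions):

#include <lsl_c.h>
#include <memory>

// The shared_ptr owns the opaque C handle and invokes the matching C destroy
// function; copying, moving and destroying the wrapper needs no custom code.
class outlet_wrapper {
public:
    explicit outlet_wrapper(lsl_outlet raw) : handle_(raw, &lsl_destroy_outlet) {}
    std::shared_ptr<lsl_outlet_struct_> get_raw_handle() const { return handle_; }

private:
    std::shared_ptr<lsl_outlet_struct_> handle_;
};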

Stream transmission broke off

So I have like 6 different outlets in Unity interfacing through LSL.cs from the liblsl-Csharp repo.

Everything works fine when I use the 1.13 DLL; however, when using the 1.14 DLL instead, I get the following error when trying to read the streams in Python or with LabRecorder:
Stream transmission broke off (The received test-pattern samples do not match the specification. The protocol formats are likely incompatible.); re-connecting...

This error then repeats itself over and over.

Linking via qMake lsl.lib vs liblsl64.lib

I am trying to link against the LSL library via qmake, see here. I am building on Windows 10 and downloaded liblsl-1.14.0-Win64. During compilation I get the error that linking against liblsl64.lib is not possible. The prebuilt binaries include lsl.lib, without the 64 or 32 suffix. I also tried building LSL from source, but all I got was lsl.lib and not liblsl64.lib. Am I missing some steps here? lsl_cpp.h also states that liblsl32.lib or liblsl64.lib needs to be linked.

Default liblsl search path: build folder vs. install folder

With find_package(LSL), CMake searches the path set in LSL_INSTALL_ROOT first and then some app-defined paths.

I prefer the liblsl build path, but an alternative is the install path.

The build path has several advantages:

  • the corresponding install paths (build/ -> build/install, out/arch/ -> out/install/arch) will be empty unless liblsl has been built and installed locally. In any case (except where someone unzips an archive to build/install) the build folder will be newer than the install folder
  • rebuilds of liblsl will be picked up immediately, rather than needing a separate installation step
  • make clean / cmake --build build --target clean cleans the build folder, but not the install folder

Provide vcpkg package for Windows

vcpkg is a package manager that greatly facilitates building and installing C++ libraries on Windows. Providing a package for liblsl should be relatively easy as both vcpkg and liblsl are based on CMake. I started by forking the vcpkg repo and adding a preliminary package definition for the latest beta release 1.13-b4. I am currently testing it and, if successful, will submit a PR to the main vcpkg repo.

vcpkg intends packages to be installable as either shared or static libraries. I encountered some issues while trying to add support for the latter, which is why it's currently disabled in my fork. In particular, the current CMakeLists.txt in liblsl only installs the shared target but not the static one. I would welcome changes in liblsl that provide better support for both shared and static targets, as this would make it easier for me to adapt it for vcpkg. Also, please let me know if anybody has already worked on this or wants to help.
