
genmc's Issues

LKMM dependency tracking issues

Here is a small example that seems to expose issues in GenMC's LKMM dependency tracking.

#include <pthread.h>
#include <assert.h>
#include <stdbool.h>
#include <lkmm.h>
void __VERIFIER_spin_start(void);
void __VERIFIER_spin_end(bool);
#define await_while(cond)                                        \
	for (int tmp = 0; __VERIFIER_spin_start(), tmp = (cond), \
			__VERIFIER_spin_end(!tmp), tmp;)
/******************************************************************************/
int locked;
int *next;
void *alice(void *arg)
{
	WRITE_ONCE(locked, 1);
	smp_store_release(&next, &locked);
	await_while (READ_ONCE(locked));
	return 0;
}
void *bob(void * arg)
{
	int *i;
	await_while((i = READ_ONCE(next)) == 0);
	WRITE_ONCE(*i, 0);
	return 0;
}
/******************************************************************************/
int main()
{
	pthread_t ta, tb;
	pthread_create(&ta, 0, alice, 0);
	pthread_create(&tb, 0, bob, 0);
	pthread_join(ta, 0);
	pthread_join(tb, 0);
	assert(locked == 0);
	return 0;
}

GenMC detects the following liveness violation:

user@d0d1f66d6eaf:~$ genmc -check-liveness -disable-spin-assume -disable-load-annotation -lkmm test-mp.c 
WARNING: LKMM support is still at an experimental stage!
Error detected: Liveness violation!
Event (2, 8) in graph:
<-1, 0> main:
	(0, 0): B
	(0, 1): TC [forks 1] L.40
	(0, 2): TC [forks 2] L.41
	(0, 3): TJ L.42
<0, 1> alice:
	(1, 0): B
	(1, 1): M
	(1, 2): Wrlx (locked, 1) L.19
	(1, 3): Wna (, 0x39720e0) L.20
	(1, 4): Rna (, 60236000) [(1, 3)] L.20
	(1, 5): Wrel (next, 60236000) L.20
	(1, 6): SPIN_START L.21
	(1, 7): Rrlx (locked, 1) [(1, 2)] L.21
<0, 2> bob:
	(2, 0): B
	(2, 1): M
	(2, 2): SPIN_START L.29
	(2, 3): Rrlx (next, 60236000) [(1, 5)] L.29
	(2, 4): Wna (, 60236000) L.29
	(2, 5): Rna (, 0x39720e0) [(2, 4)] L.29
	(2, 6): Wrlx (locked, 0) L.31
	(2, 7): D L.32
	(2, 8): E

Non-terminating spinloop
Number of complete executions explored: 0
Number of blocked executions seen: 1
Total wall-clock time: 0.02s

However, Alice should not hang. As I understand it, (2,6) must happen after (1,2) because:

  1. (1,2) happens before (1,5)
  2. (1,5) is release
  3. (2,3) reads from (1,5)
  4. there is an address dependency between (2,3) and (2,6)

Interestingly, if I run the same example with the flags -check-consistency-point=step and -check-consistency-type=full, then GenMC does not detect the liveness violation:

genmc -check-liveness -disable-spin-assume -disable-load-annotation -lkmm -check-consistency-point=step -check-consistency-type=full test-mp.c 
WARNING: LKMM support is still at an experimental stage!
Number of complete executions explored: 1
Number of blocked executions seen: 2
Total wall-clock time: 0.03s

Also, if I use the automatic spin-assume transformation (instead of await_while+disable flags), the presumably incorrect liveness violation is still detected even with the -check-consistency flags above.

Is my understanding of the address dependency/LKMM incorrect or is this an issue with the dependency tracking in GenMC?

Which flags should I use to be sure I won't get false positives?

Bug in liveness-violation report

GenMC 0.7 has a small bug in the reporting of liveness violations. It seems to always point to thread 1.

Here is a simple example:

#include <pthread.h>

int x,y;

void* run(void *arg) {
	if (y) 
		x = 1;
	return 0;
}

int main()
{
	pthread_t t;
	pthread_create(&t, 0, run, 0);
	y = 1;
	while (!x);  // thread 0 (main thread) may hang here
	pthread_join(t, 0);
	return 0;
}

With GenMC 0.7 I get:

$ genmc -check-liveness -mo -imm bug-liveness.c 
Error detected: Liveness violation!
Event (1, 2) in graph:
<-1, 0> main:
        (0, 0): B
        (0, 1): TC [forks 1] L.14
        (0, 2): Wna (y, 1) L.16
        (0, 3): LOOP_BEGIN
        (0, 4): SPIN_START
        (0, 5): Rna (x, 0) [INIT] L.17
<0, 1> run:
        (1, 0): B
        (1, 1): Rna (y, 0) [INIT] L.6
        (1, 2): E

Non-terminating spinloop: thread 1     <==== HERE
Number of complete executions explored: 1
Number of blocked executions seen: 2
Total wall-clock time: 0.02s

But the thread that hangs is thread 0.

With GenMC 0.6.1 the report was correct:

$ genmc -check-liveness -mo -imm  bug-liveness.c 
Error detected: Liveness violation!
Event (1, 2) in graph:
<-1, 0> main:
        (0, 0): B
        (0, 1): TC [forks 1] L.14
        (0, 2): Wna (y, 1) L.16
        (0, 3): LOOP_BEGIN
        (0, 4): SPIN_START
        (0, 5): Rna (x, 0) [INIT] L.17
<0, 1> run:
        (1, 0): B
        (1, 1): Rna (y, 0) [INIT] L.6
        (1, 2): E

Non-terminating spinloop: thread 0
Number of complete executions explored: 0
Number of blocked executions seen: 1
Total wall-clock time: 0.02s

I also found the meaning of "Event (1, 2) in graph" a bit confusing. Is this correct?

free(NULL)

free(NULL) is a no-op in C, but segfaults in GenMC.
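A minimal reproducer (my reconstruction; the report does not include one):

#include <stdlib.h>

int main(void)
{
	free(NULL); /* a well-defined no-op per the C standard */
	return 0;
}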

Failure at EventLabel.cpp:202/operator<<()! with Linux mcs_spinlock.h (LKMM)

I'm having a weird error with GenMC 0.6.1 (-mo -lkmm) when checking the mcs_spinlock.h from Linux. The output is incomplete and terminates with a "bug" message:

BUG: Failure at EventLabel.cpp:202/operator<<()!

Note that I manually annotate the spinloops because spin-assume doesn't seem to work here. I also disable load-annotation because that crashes GenMC with another error; my issue is not related to those. In fact, I am disabling everything I can, like this:

genmc -check-liveness -mo -lkmm \
    -disable-race-detection \
    -disable-load-annotation \
    -disable-cast-elimination \
    -disable-spin-assume \
    -- -DSPIN_ANNOTATION -I/usr/share/genmc/include mcs-check.c

The full output is this:

WARNING: LKMM support is still at an experimental stage!
WARNING: Atomic xchg support is experimental under dependency-tracking models!
Error detected: Safety violation!
Event (0, 12) in graph:
<-1, 0> main:
        (0, 0): B
        (0, 1): M
        (0, 2): TC [forks 1] L.114
        (0, 3): Wna (t[0], 1) L.114
        (0, 4): TC [forks 2] L.114
        (0, 5): Wna (t[1], 2) L.114
        (0, 6): Rna (t[0], 1) [(0, 3)] L.115
        (0, 7): TJ L.115
        (0, 8): Rna (t[1], 2) [(0, 5)] L.115
        (0, 9): TJ L.115
        (0, 10): Rna (x, 1) [(2, 25)] L.116
        (0, 11): Rna (y, 1) [(2, 27)] L.116
        (0, 12): Rna (x, 1) [(2, 25)] L.116
<0, 1> run:
        (1, 0): B
        (1, 1): M
        (1, 2): M
        (1, 3): M
        (1, 4): Wna (nodes[0].locked, 0) L.70: mcs_spinlock.h
        (1, 5): Wna (nodes[0].next, 0x0) L.71: mcs_spinlock.h
        (1, 6): FBUG: Failure at EventLabel.cpp:202/operator<<()!

The problem seems to be related to the smp_mb__{before,after}_atomic fences in the xchg() macro. If I replace these with smp_mb(), GenMC does not crash, but then I get an execution claiming that mcs_spinlock.h is buggy. I don't understand how this execution would be possible.

WARNING: LKMM support is still at an experimental stage!
WARNING: Atomic xchg support is experimental under dependency-tracking models!
Error detected: Safety violation!
Event (0, 12) in graph:
<-1, 0> main:
        (0, 0): B
        (0, 1): M
        (0, 2): TC [forks 1] L.155
        (0, 3): Wna (t[0], 1) L.155
        (0, 4): TC [forks 2] L.155
        (0, 5): Wna (t[1], 2) L.155
        (0, 6): Rna (t[0], 1) [(0, 3)] L.156
        (0, 7): TJ L.156
        (0, 8): Rna (t[1], 2) [(0, 5)] L.156
        (0, 9): TJ L.156
        (0, 10): Rna (x, 1) [(2, 25)] L.157
        (0, 11): Rna (y, 1) [(2, 27)] L.157
        (0, 12): Rna (x, 1) [(2, 25)] L.157
<0, 1> run:
        (1, 0): B
        (1, 1): M
        (1, 2): M
        (1, 3): M
        (1, 4): Wna (nodes[0].locked, 0) L.70: mcs_spinlock.h
        (1, 5): Wna (nodes[0].next, 0x0) L.71: mcs_spinlock.h
        (1, 6): Fmb L.79: mcs_spinlock.h
        (1, 7): Wna (, 0x1b76990) L.79: mcs_spinlock.h
        (1, 8): Rna (, 28797328) [(1, 7)] L.79: mcs_spinlock.h
        (1, 9): Urlx (lock, 0) [INIT] L.79: mcs_spinlock.h
        (1, 10): Urlx (lock, 28797328) L.79: mcs_spinlock.h
        (1, 11): Wna (, 0) L.79: mcs_spinlock.h
        (1, 12): Rna (, 0x0) [(1, 11)] L.79: mcs_spinlock.h
        (1, 13): Fmb L.79: mcs_spinlock.h
        (1, 14): D L.95: mcs_spinlock.h
        (1, 15): D L.95: mcs_spinlock.h
        (1, 16): D L.95: mcs_spinlock.h
        (1, 17): Rna (x, 0) [INIT] L.147
        (1, 18): Wna (x, 1) L.147
        (1, 19): Rna (y, 0) [INIT] L.147
        (1, 20): Wna (y, 1) L.147
        (1, 21): M
        (1, 22): M
        (1, 23): M
        (1, 24): M
        (1, 25): Rrlx (nodes[0].next, 0) [(1, 5)] L.104: mcs_spinlock.h
        (1, 26): Wna (, 0) L.104: mcs_spinlock.h
        (1, 27): Rna (, 0x0) [(1, 26)] L.104: mcs_spinlock.h
        (1, 28): Wna (_o_, 0x1b76990) L.110: mcs_spinlock.h
        (1, 29): Wna (, 0x0) L.110: mcs_spinlock.h
        (1, 30): Rna (_o_, 28797328) [(1, 28)] L.110: mcs_spinlock.h
        (1, 31): Rna (, 0) [(1, 29)] L.110: mcs_spinlock.h
        (1, 32): Crel (lock, 28797344) [(2, 10)] L.110: mcs_spinlock.h
        (1, 33): Wna (_o_, 28797344) L.110: mcs_spinlock.h
        (1, 34): Rna (_o_, 0x1b769a0) [(1, 33)] L.110: mcs_spinlock.h
        (1, 35): LOOP_BEGIN L.113: mcs_spinlock.h
        (1, 36): SPIN_START L.113: mcs_spinlock.h
        (1, 37): Rrlx (nodes[0].next, 28797344) [(2, 16)] L.113: mcs_spinlock.h
        (1, 38): Wna (, 28797344) L.113: mcs_spinlock.h
        (1, 39): Rna (, 0x1b769a0) [(1, 38)] L.113: mcs_spinlock.h
        (1, 40): Wrel (nodes[1].locked, 1) L.118: mcs_spinlock.h
        (1, 41): D L.119: mcs_spinlock.h
        (1, 42): D L.119: mcs_spinlock.h
        (1, 43): D L.119: mcs_spinlock.h
        (1, 44): D L.119: mcs_spinlock.h
        (1, 45): E
<0, 2> run:
        (2, 0): B
        (2, 1): M
        (2, 2): M
        (2, 3): M
        (2, 4): Wna (nodes[1].locked, 0) L.70: mcs_spinlock.h
        (2, 5): Wna (nodes[1].next, 0x0) L.71: mcs_spinlock.h
        (2, 6): Fmb L.79: mcs_spinlock.h
        (2, 7): Wna (, 0x1b769a0) L.79: mcs_spinlock.h
        (2, 8): Rna (, 28797344) [(2, 7)] L.79: mcs_spinlock.h
        (2, 9): Urlx (lock, 28797328) [(1, 10)] L.79: mcs_spinlock.h
        (2, 10): Urlx (lock, 28797344) L.79: mcs_spinlock.h
        (2, 11): Wna (, 28797328) L.79: mcs_spinlock.h
        (2, 12): Rna (, 0x1b76990) [(2, 11)] L.79: mcs_spinlock.h
        (2, 13): Fmb L.79: mcs_spinlock.h
        (2, 14): Wna (, 0x1b769a0) L.91: mcs_spinlock.h
        (2, 15): Rna (, 28797344) [(2, 14)] L.91: mcs_spinlock.h
        (2, 16): Wrlx (nodes[0].next, 28797344) L.91: mcs_spinlock.h
        (2, 17): LOOP_BEGIN L.94: mcs_spinlock.h
        (2, 18): SPIN_START L.94: mcs_spinlock.h
        (2, 19): Rrlx (nodes[1].locked, 1) [(1, 40)] L.94: mcs_spinlock.h
        (2, 20): Frmb L.94: mcs_spinlock.h
        (2, 21): D L.95: mcs_spinlock.h
        (2, 22): D L.95: mcs_spinlock.h
        (2, 23): D L.95: mcs_spinlock.h
        (2, 24): Rna (x, 0) [INIT] L.147
        (2, 25): Wna (x, 1) L.147
        (2, 26): Rna (y, 0) [INIT] L.147
        (2, 27): Wna (y, 1) L.147
        (2, 28): M
        (2, 29): M
        (2, 30): M
        (2, 31): M
        (2, 32): Rrlx (nodes[1].next, 0) [(2, 5)] L.104: mcs_spinlock.h
        (2, 33): Wna (, 0) L.104: mcs_spinlock.h
        (2, 34): Rna (, 0x0) [(2, 33)] L.104: mcs_spinlock.h
        (2, 35): Wna (_o_, 0x1b769a0) L.110: mcs_spinlock.h
        (2, 36): Wna (, 0x0) L.110: mcs_spinlock.h
        (2, 37): Rna (_o_, 28797344) [(2, 35)] L.110: mcs_spinlock.h
        (2, 38): Rna (, 0) [(2, 36)] L.110: mcs_spinlock.h
        (2, 39): Crel (lock, 28797344) [(2, 10)] L.110: mcs_spinlock.h
        (2, 40): Crel (lock, 0) L.110: mcs_spinlock.h
        (2, 41): Rna (_o_, 0x1b769a0) [(2, 35)] L.110: mcs_spinlock.h
        (2, 42): D L.119: mcs_spinlock.h
        (2, 43): D L.119: mcs_spinlock.h
        (2, 44): D L.119: mcs_spinlock.h
        (2, 45): D L.119: mcs_spinlock.h
        (2, 46): E

Assertion violation: x == y && x == N
Number of complete executions explored: 1
Number of blocked executions seen: 2
Total wall-clock time: 0.03s

Here is a gist to reproduce the issue: https://gist.github.com/db7/0ceb91058ac64d7eb0027d53f50f6c52

Support pointer alignment

Some concurrent data structures rely on pointer alignment to ensure that certain bits in the binary representation of an address are always 0 (e.g., stamped pointers, marked pointers).

While not portable, this feature works on many (all?) compilers for most modern architectures, and supporting it (possibly with an optional flag) would be useful.

Right now, GenMC always generates word-aligned addresses and consequently detects "bugs" in code that uses the lower-order bits.
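For illustration, a minimal sketch of the kind of code this breaks: a "marked pointer" that stashes a flag in the low bit of an address (names are illustrative):

#include <stdint.h>

typedef struct node { struct node *next; } node_t;

/* These helpers assume node_t pointers are at least 2-byte aligned,
   so bit 0 of the address is always 0 and can carry a mark flag. */
static inline node_t *mark(node_t *p)      { return (node_t *)((uintptr_t)p | 1u); }
static inline node_t *unmark(node_t *p)    { return (node_t *)((uintptr_t)p & ~(uintptr_t)1); }
static inline int     is_marked(node_t *p) { return (int)((uintptr_t)p & 1u); }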

Feature request: support for argc, argv

How hard would it be to pass arguments to the tested program, letting main() do its job with argc and argv? I imagine that getting those values in the interpreter and passing them to the function is simple, since lli also supports it, but I wonder whether there would be any implications for verification.
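For concreteness, a sketch of the kind of test this would enable (my example, not from the report):

#include <stdlib.h>

int main(int argc, char **argv)
{
	/* With argc/argv support, the same test could be driven with
	   different inputs, e.g. a thread count: */
	int nthreads = (argc > 1) ? atoi(argv[1]) : 1;
	(void)nthreads;
	return 0;
}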

Segmentation fault with IMM

We have a family of experiments that often cause GenMC 0.7 with IMM to crash with a segfault. I can't pinpoint the reason. Depending on the barriers used, it may or may not work. Here is a gist to reproduce the crash.

To crash genmc, just run genmc -imm example.c.

It does not crash with RC11, but we cannot use that model instead of IMM.

Instruction does not dominate all uses

This small piece of code causes GenMC to fail:

typedef struct {
	int val;
} node_t;

void foo(node_t *node, int mask) {
	if (node->val == 3)
		node->val &= (mask | 0x01);
	else
		node->val &= mask;
}

int main() {
	node_t n = {.val = 1};
	foo(&n, 2);
	return 0;
}

GenMC (also older versions) gives me weird errors such as:

user@35b78984b84b:~$ genmc test.c 
WARNING: Memory intrinsic found! Attempting to promote it...
Instruction does not dominate all uses!
  %or = or i32 %mask, 1, !dbg !20
  %and = and i32 %2, %or, !dbg !22
genmc: LLVMModule.cpp:149: bool LLVMModule::transformLLVMModule(llvm::Module &, const Config *, ModuleInfo &): Assertion `!llvm::verifyModule(mod, &llvm::dbgs())' failed.
Aborted (core dumped)

Compiling it with clang and running with lli works.

LLVM-6 build is much faster than LLVM-14 build

I built GenMC with LLVM 6 and ran it on a test; the total running time is less than a second. Then I built the same GenMC version with LLVM 14, and the running time is more than 430 seconds.

Is this a known issue? Is it a regression, or a side effect of a bugfix?

Command line: ./src/genmc <test>
GenMC was built according to the instructions:

autoreconf --install
./configure
make

Unexpected failure of cmpxchg

Here is what seems to be a regression in version 0.7 or 0.8, which has broken a few test cases in our pipeline :-D
It definitely works with GenMC 0.6.1 and definitely fails with 0.8. Both compiled with LLVM 8.

#include <pthread.h>
#include <assert.h>
#define K (1<<31)

unsigned int x = K;
int main() {
    unsigned int c = K;
    assert(__atomic_compare_exchange_n(&x, &c, 0,
        0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST));
    return 0;
}

GenMC 0.8 tells me:

$ genmc bug.c
Error detected: Safety violation!
Event (0, 1) in graph:
<-1, 0> main:
        (0, 1): CRsc (x, -2147483648) [INIT] L.8

Assertion violation: __atomic_compare_exchange_n(&x, &c, 0, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)
Number of complete executions explored: 0
Number of blocked executions seen: 1
Total wall-clock time: 0.02s

That makes no sense; the cmpxchg should succeed. GenMC 0.6.1 handles this one fine.

Interestingly, if I move the declaration of the variable x inside the scope of main(), GenMC 0.8 is also happy.

bug: Anomaly with seq_cst and assert

GenMC 0.8 allows store-buffering behavior even when seq_cst atomics or fences are used.

sb-access.c
#include <stdlib.h>
#include <pthread.h>
#include <stdatomic.h>
#include <assert.h>

atomic_int x = 0;
atomic_int y = 0;

void *thread_1(void *unused)
{
  x = 1;
  int a = y;
  return NULL;
}

void *thread_2(void *unused)
{
  y = 1;
  int b = x;
  return NULL;
}

int main()
{
  pthread_t t1, t2;
  if (pthread_create(&t1, NULL, thread_1, NULL)) abort();
  if (pthread_create(&t2, NULL, thread_2, NULL)) abort();
  return 0;
}
$ genmc --print-exec-graphs sb-access.c
...
<-1, 0> main:
        (0, 1): THREAD_CREATE [thread 1]
        (0, 2): THREAD_CREATE [thread 2]
        (0, 3): THREAD_END
<0, 1> thread_1:
        (1, 1): Wsc (x, 1)
        (1, 2): Rsc (y, 0) [INIT]
        (1, 3): THREAD_END
<0, 2> thread_2:
        (2, 1): Wsc (y, 1)
        (2, 2): Rsc (x, 0) [INIT]
        (2, 3): THREAD_END

No errors were detected.
Number of complete executions explored: 4
sb-fence.c
#include <stdlib.h>
#include <pthread.h>
#include <stdatomic.h>
#include <assert.h>

atomic_int x = 0;
atomic_int y = 0;

void *thread_1(void *unused)
{
  atomic_store_explicit(&x, 1, memory_order_relaxed);
  atomic_thread_fence(memory_order_seq_cst);
  int a = atomic_load_explicit(&y, memory_order_relaxed);
  return NULL;
}

void *thread_2(void *unused)
{
  atomic_store_explicit(&y, 1, memory_order_relaxed);
  atomic_thread_fence(memory_order_seq_cst);
  int b = atomic_load_explicit(&x, memory_order_relaxed);
  return NULL;
}

int main()
{
  pthread_t t1, t2;
  if (pthread_create(&t1, NULL, thread_1, NULL)) abort();
  if (pthread_create(&t2, NULL, thread_2, NULL)) abort();
  return 0;
}
$ genmc --print-exec-graphs sb-fence.c
...
<-1, 0> main:
        (0, 1): THREAD_CREATE [thread 1]
        (0, 2): THREAD_CREATE [thread 2]
        (0, 3): THREAD_END
<0, 1> thread_1:
        (1, 1): Wrlx (x, 1)
        (1, 2): Fsc
        (1, 3): Rrlx (y, 0) [INIT]
        (1, 4): THREAD_END
<0, 2> thread_2:
        (2, 1): Wrlx (y, 1)
        (2, 2): Fsc
        (2, 3): Rrlx (x, 0) [INIT]
        (2, 4): THREAD_END

No errors were detected.
Number of complete executions explored: 4

But if I join the threads and add an assert in main that rules out the problematic execution, that execution is blocked, as if an assume were used.

sb-assert.c
#include <stdlib.h>
#include <pthread.h>
#include <stdatomic.h>
#include <assert.h>

atomic_int x = 0;
atomic_int y = 0;

int a = -1;
int b = -1;

void *thread_1(void *unused)
{
  x = 1;
  a = y;
  return NULL;
}

void *thread_2(void *unused)
{
  y = 1;
  b = x;
  return NULL;
}

int main()
{
  pthread_t t1, t2;
  if (pthread_create(&t1, NULL, thread_1, NULL)) abort();
  if (pthread_create(&t2, NULL, thread_2, NULL)) abort();
  if (pthread_join(t1, NULL)) abort();
  if (pthread_join(t2, NULL)) abort();
  assert(a == 1 || b == 1);
  return 0;
}
$ genmc sb-assert.c
No errors were detected.
Number of complete executions explored: 3
Number of blocked executions seen: 1

malloc + pthread_mutex_init does not seem to work

#include <stdio.h>
#include <pthread.h>
#include <stdlib.h>

typedef struct {
  pthread_mutex_t m;
} lock;

int main(void) 
{
 lock *t = malloc(sizeof(lock));
 pthread_mutex_init(&t->m, NULL);
 pthread_mutex_lock(&t->m);
 return 0;
}

GenMC considers this an access to uninitialized memory.
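For contrast, a variant with a statically initialized mutex (my sketch; whether GenMC handles this case is an assumption, since the report only covers the malloc'd one):

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

int main(void)
{
	pthread_mutex_lock(&m);
	pthread_mutex_unlock(&m);
	return 0;
}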

segmentation fault on array initialization when the array has size 9 or more

The following code causes a segmentation fault:

#include <stdio.h>
int
main(void)
{
    char buff[9] = {'a'};
}

core-dump

#0  0x00007f72c8d30d62 in llvm::PointerType::get(llvm::Type*, unsigned int) () from /usr/lib/x86_64-linux-gnu/libLLVM-8.so.1
#1  0x00007f72c8c52d9a in ?? () from /usr/lib/x86_64-linux-gnu/libLLVM-8.so.1
#2  0x00007f72c8c51ba1 in llvm::ConstantFoldGetElementPtr(llvm::Type*, llvm::Constant*, bool, llvm::Optional<unsigned int>, llvm::ArrayRef<llvm::Value*>) () from /usr/lib/x86_64-linux-gnu/libLLVM-8.so.1
#3  0x00007f72c8c6809e in llvm::ConstantExpr::getGetElementPtr(llvm::Type*, llvm::Constant*, llvm::ArrayRef<llvm::Value*>, bool, llvm::Optional<unsigned int>, llvm::Type*) () from /usr/lib/x86_64-linux-gnu/libLLVM-8.so.1
#4  0x000055f5ec5fc504 in llvm::ConstantExpr::getInBoundsGetElementPtr (IdxList=..., C=0x55f5ee0a7c58, Ty=0x0) at /usr/lib/llvm-8/include/llvm/IR/Constants.h:1194
#5  llvm::ConstantFolder::CreateInBoundsGetElementPtr (IdxList=..., C=0x55f5ee0a7c58, Ty=0x0, this=0x7fff9daca9a0) at /usr/lib/llvm-8/include/llvm/IR/ConstantFolder.h:174
#6  llvm::IRBuilder<llvm::ConstantFolder, llvm::IRBuilderDefaultInserter>::CreateInBoundsGEP (this=this@entry=0x7fff9daca960, Ty=Ty@entry=0x0, Ptr=0x55f5ee0a7c58, IdxList=..., Name=...) at /usr/lib/llvm-8/include/llvm/IR/IRBuilder.h:1491
#7  0x000055f5ec5fae22 in promoteMemCpy (builder=..., dst=0x55f5ee0a15d8, src=<optimized out>, args=std::vector of length 2, capacity 2 = {...}, typ=0x55f5ee00eb08) at /usr/include/c++/10/bits/stl_vector.h:918
#8  0x000055f5ec5fba20 in promoteMemIntrinsic<PromoteMemIntrinsicPass::tryPromoteMemCpy(llvm::MemCpyInst*, llvm::Module&)::<lambda(llvm::Type*, const std::vector<llvm::Value*, std::allocator<llvm::Value*> >&)> >(llvm::Type *, std::vector<llvm::Value*, std::allocator<llvm::Value*> > &, struct {...} &&) (typ=0x55f5ee026c10, args=std::vector of length 2, capacity 2 = {...}, promoteFun=...) at /usr/lib/llvm-8/include/llvm/IR/DerivedTypes.h:360
#9  0x000055f5ec5fbda3 in PromoteMemIntrinsicPass::tryPromoteMemCpy (this=this@entry=0x55f5ee0acb20, MI=MI@entry=0x55f5ee0a82f8, M=...) at /usr/lib/llvm-8/include/llvm/IR/DerivedTypes.h:486
#10 0x000055f5ec5fc051 in PromoteMemIntrinsicPass::tryPromoteMemCpy (M=..., MI=0x55f5ee0a82f8, this=0x55f5ee0acb20) at PromoteMemIntrinsicPass.cpp:139
#11 PromoteMemIntrinsicPass::runOnModule (M=..., this=0x55f5ee0acb20) at PromoteMemIntrinsicPass.cpp:215
#12 PromoteMemIntrinsicPass::runOnModule (this=0x55f5ee0acb20, M=...) at PromoteMemIntrinsicPass.cpp:203
#13 0x00007f72c8cf5b4e in llvm::legacy::PassManagerImpl::run(llvm::Module&) () from /usr/lib/x86_64-linux-gnu/libLLVM-8.so.1
#14 0x000055f5ec5e4c8c in LLVMModule::transformLLVMModule (mod=..., conf=0x55f5ee00d5e0, MI=...) at LLVMModule.cpp:131
#15 0x000055f5ec5b49a0 in GenMCDriver::GenMCDriver (this=0x55f5ee09f640, conf=..., mod=std::unique_ptr<llvm::Module> = {...}, start=<optimized out>) at /usr/include/c++/10/bits/unique_ptr.h:173
#16 0x000055f5ec60463d in RC11Driver::RC11Driver (this=0x55f5ee09f640, conf=..., mod=..., start=<optimized out>) at /usr/include/c++/10/bits/unique_ptr.h:172
#17 0x000055f5ec598d4d in DriverFactory::create (conf=std::unique_ptr<Config> = {...}, mod=std::unique_ptr<llvm::Module> = {...}, start=start@entry=18427) at DriverFactory.cpp:32
#18 0x000055f5ec58aed2 in main (argc=<optimized out>, argv=0x7fff9dacd0f8) at /usr/include/c++/10/bits/unique_ptr.h:172

If I move the array declaration outside the main function

#include <stdio.h>

char buff[9] = {'a'};
int
main(void)
{
    return 0;
}

I get the following output:

WARNING: Inadequate naming info for variable buff.
Please submit a bug report to [email protected]
Number of complete executions explored: 1
Total wall-clock time: 0.01s

GenMC crashes on .ll if debug info uses absolute path

Currently GenMC checks that there is both a filename and a directory when reading debug metadata.
However, if the filename is an absolute path, the directory is empty.
The fix is to move the empty-directory check into a later branch, in which the filename is not an absolute path.
A pull request will be submitted.

Unexpected behaviour under the unroll option.

If I understand correctly, the -unroll=X option makes the loop execute at most X times. So, if a loop does not finish by itself after the X-th iteration, it would be logical for GenMC to abandon such an execution path.

However, it seems that in such a case GenMC finishes this execution path and reports it as a complete execution without any errors (see the call with -unroll=3 below), which does not seem right, as it may mask errors that appear later (as in this example).

I have observed the same result with -unroll=4, which is strange because the loop should have finished in this case. Only with -unroll=5 was the result the same as without the unroll flag.

Is this behavior correct?

#include <assert.h>

int main()
{
    for (int i = 0; i < 4; i++);
    assert(0);

    return 0;
}
user@df6e8e48578a:~$ genmc -version
GenMC (https://plv.mpi-sws.org/genmc):
  GenMC v0.8
  Built with LLVM 8.0.0 (RelWithDebInfo)

user@df6e8e48578a:~$ genmc -disable-spin-assume -mo skipped_assert.c
Error detected: Safety violation!
Event (0, 0) in graph:
<-1, 0> main:

Assertion violation: 0
Number of complete executions explored: 0
Number of blocked executions seen: 1
Total wall-clock time: 0.01s

user@df6e8e48578a:~$ genmc -disable-spin-assume -mo -unroll=3 skipped_assert.c
No errors were detected.
Number of complete executions explored: 1
Total wall-clock time: 0.01s

user@0d4f7144cd01:~$ genmc -disable-spin-assume -mo -unroll=4 skipped_assert.c
No errors were detected.
Number of complete executions explored: 1
Total wall-clock time: 0.01s

user@0d4f7144cd01:~$ genmc -disable-spin-assume -mo -unroll=5 skipped_assert.c
Error detected: Safety violation!
Event (0, 0) in graph:
<-1, 0> main:

Assertion violation: 0
Number of complete executions explored: 0
Number of blocked executions seen: 1
Total wall-clock time: 0.01s


BUG: Failure at PromoteMemIntrinsicPass.cpp:78/promoteMemSet()!

#include <stdio.h>
#include <string.h>

typedef struct my_struct_s {
    struct my_struct_s *prev;
    struct my_struct_s *next;
} my_struct;

typedef struct {
    int x;
    int y;
} my_struct_1;

int
main(void)
{
    my_struct s = {0}; // triggers BUG: Failure at
                       // PromoteMemIntrinsicPass.cpp:78/promoteMemSet()!
    my_struct_1 s1 = {0}; // ok
}

fprintf(stderr,...) triggers a crash in genmc

Crash message:

genmc: /usr/lib/llvm-14/include/llvm/IR/GlobalVariable.h:136: const llvm::Constant* llvm::GlobalVariable::getInitializer() const: Assertion `hasInitializer() && "GV doesn't have initializer!"' failed.

Code to reproduce the issue:

#include <stdio.h>

int
main(void)
{    
    fprintf(stderr, "hi %d\n", 1);
    return 0;
}

GenMC and LLVM version

GenMC (https://plv.mpi-sws.org/genmc):
  GenMC v0.8
  Built with LLVM 14.0.0 (Release)

Strange case with compare and swap

I ran GenMC on a test, and it reports that all executions are blocked; there are no complete executions. I investigated the problem and found that GenMC stops on the following code (slightly simplified):

while (!done) {
    done = atomic_compare_exchange_weak_explicit(object, expected, desired,
            memory_order_release, memory_order_relaxed /* failure ordering */);
}

If I add an assert before the loop, GenMC reports an error; if I add the assert after the loop, GenMC reports that all executions are blocked.

Note that, per the specification of atomic_compare_exchange_weak_explicit: if the comparison result is false, this function updates the value pointed to by expected with the value pointed to by object.
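For reference, a minimal sketch of the usual retry pattern (my reconstruction; note the explicit failure ordering, which the simplified snippet above omitted):

#include <stdatomic.h>
#include <stdbool.h>

void set_value(atomic_int *object, int desired)
{
	int expected = atomic_load_explicit(object, memory_order_relaxed);
	bool done = false;
	while (!done) {
		/* on failure, expected is refreshed with the current value
		   of *object, so a later iteration can succeed */
		done = atomic_compare_exchange_weak_explicit(
			object, &expected, desired,
			memory_order_release, memory_order_relaxed);
	}
}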

The loop is part of a read-write lock implementation. The lock is used in many places, all carefully tested dynamically, so it is unlikely that this is a real hang in the target code.

I have attached rwlock-test.zip with the .ll output.

Best regards,
Pavel

segmentation fault for initializers on stack

Repeating the problem from memory: when used as initializers for locals inside a function,

 int x[4] = {0};
 my_struct s = { .a = 3, .y = 2 };

and friends all crash.
Let me know if you need a full test case.

Some confusion about memory models in GenMC

I have a few questions about GenMC. Consider this toy example (store buffering):

int x = 0, y = 0;

t1: {
    y = 1;
    m = x;
}

t2: {
    x = 1;
    n = y;
}

assert(!(m == 0 && n == 0));
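For reference, a runnable version of the sketch (thread and variable names are mine; note that the accesses are plain non-atomics, so they race, and GenMC may flag that race on its own):

#include <pthread.h>
#include <assert.h>

int x = 0, y = 0;
int m, n;

void *t1(void *unused) { y = 1; m = x; return NULL; }
void *t2(void *unused) { x = 1; n = y; return NULL; }

int main()
{
	pthread_t a, b;
	pthread_create(&a, NULL, t1, NULL);
	pthread_create(&b, NULL, t2, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	assert(!(m == 0 && n == 0));
	return 0;
}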

This assertion is correct under SC, but wrong under TSO or weaker memory models. Verification tools such as CBMC and Nidhugg can set the memory model explicitly (e.g., --mm tso in CBMC; Nidhugg is similar) and return different results under different memory models.

If I use GenMC to check this program, it reports a safety-property violation, so:

  1. What is the default memory model that GenMC supports inherently? SC, TSO, PSO, or some other weak memory model?
  2. Is it possible to compare GenMC with, for example, CBMC or Nidhugg under TSO? Should I adapt the example, or just set some GenMC parameters explicitly?

I would appreciate it if you could clear up my confusion. Thanks.

smp_acquire__after_ctrl_dep missing

I'm trying some LKMM-based code and some functions are missing. Most of them I could implement myself, e.g., atomic_try_cmpxchg_release, but I am not sure what to do with smp_acquire__after_ctrl_dep. To what should I map it? To __LKMM_FENCE(aa), as in "after atomic"?
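For what it's worth, the kernel's generic barrier.h defines it as smp_rmb(), so one candidate mapping is the stub below; whether this matches GenMC's LKMM semantics is exactly my question:

/* Candidate mapping, mirroring include/asm-generic/barrier.h in the
   kernel; this is an assumption, not a confirmed GenMC answer. */
#ifndef smp_acquire__after_ctrl_dep
#define smp_acquire__after_ctrl_dep() smp_rmb()
#endif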

sprintf causes a segfault when the buffer is defined on the stack

The following code causes a segmentation fault in GenMC

version

genmc --version
GenMC (https://plv.mpi-sws.org/genmc):
  GenMC v0.6.1 (commit #ae1ff5c)
  Built with LLVM 8.0.1 (RelWithDebInfo)
#include <stdio.h>
int main(void)
{
    char buff[10] = {0};
    int x         = 10;
    sprintf(buff, "%d", x);
}

output

genmc sprintf-genmc-bug.c 
sprintf-genmc-bug.c:7:5: warning: implicitly declaring library function 'sprintf' with type 'int (char *, const char *, ...)' [-Wimplicit-function-declaration]
    sprintf(buff, "%d", x);
    ^
sprintf-genmc-bug.c:7:5: note: include the header <stdio.h> or explicitly provide a declaration for 'sprintf'
1 warning generated.
WARNING: Memory intrinsic found! Attempting to promote it...
Segmentation fault (core dumped)

core-dump details

Core was generated by `genmc sprintf-genmc-bug.c'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x0000563426b15aa9 in memcpy (__len=3, __src=0x7ffea2e6f200, __dest=0x2849b7a3) at /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34
34	  return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
(gdb) bt
#0  0x0000563426b15aa9 in memcpy (__len=3, __src=0x7ffea2e6f200, __dest=0x2849b7a3) at /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34
#1  lle_X_sprintf (FT=<optimized out>, Args=...) at ExternalFunctions.cpp:451
#2  0x0000563426b1823a in llvm::Interpreter::callExternalFunction (this=this@entry=0x56342849c3e0, F=F@entry=0x56342847bb98, ArgVals=std::vector of length 3, capacity 3 = {...}) at ExternalFunctions.cpp:297
#3  0x0000563426b011c1 in llvm::Interpreter::callFunction (this=0x56342849c3e0, F=0x56342847bb98, ArgVals=std::vector of length 123589140548868101, capacity 2931908988897 = {...}) at Execution.cpp:4371
#4  0x0000563426b018ce in llvm::Interpreter::callFunction (ArgVals=std::vector of length 3, capacity 3 = {...}, F=0x56342847bb98, this=0x56342849c3e0) at /usr/lib/llvm-8/include/llvm/Support/Casting.h:255
#5  llvm::Interpreter::visitCallInstWrapper (this=0x56342849c3e0, CS=...) at Execution.cpp:1633
#6  0x0000563426b02497 in llvm::Interpreter::run (this=this@entry=0x56342849c3e0) at Execution.cpp:4484
#7  0x0000563426a75d83 in llvm::Interpreter::runFunction (this=0x56342849c3e0, F=0x5634283d79a8, ArgValues=...) at Interpreter.cpp:750
#8  0x00007f8f7a7c69de in llvm::ExecutionEngine::runFunctionAsMain(llvm::Function*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, char const* const*) () from /usr/lib/x86_64-linux-gnu/libLLVM-8.so.1
#9  0x0000563426a552fa in GenMCDriver::explore (this=this@entry=0x563428473700) at GenMCDriver.cpp:633
#10 0x0000563426a55469 in GenMCDriver::run (this=0x563428473700) at GenMCDriver.cpp:464
#11 0x0000563426a31f28 in main (argc=<optimized out>, argv=0x7ffea2e70d18) at main.cpp:194

I also don't understand the warning about the implicit declaration of sprintf, since I already include stdio.h.

genmc vs IMM paper (Remark 2)

According to the IMM paper, the following behaviour should not be allowed by IMM (Remark 2)

[y]_rlx := 1     || F_rel                           || b := [x]_acq // 3
[x]_rel := 1     || a := FADD_acq,rlx(x, 1) // 1    || c := [y]_rlx // 0
                 || [x]_rlx := 3                    ||

I think the following code is a correct translation of the litmus test to C. Notice that the litmus correctness condition is global but refers to registers; in the C code I model this by loading into local variables and then writing those values to global variables that the main thread can assert on after joining. The fence in each thread before propagating the register values to global variables guarantees that those writes are not reordered with the core of the litmus behaviour.

#include <stdlib.h>
#include <pthread.h>
#include <assert.h>
#include <stdatomic.h>

atomic_int x, y;

int a, b, c;

void *thread_1(void *unused)
{
	atomic_store_explicit(&y, 1, memory_order_relaxed);
	atomic_store_explicit(&x, 1, memory_order_release);
	return NULL;
}

void *thread_2(void *unused)
{
        atomic_thread_fence(memory_order_release);
	int r0 = atomic_fetch_add_explicit(&x, 1, memory_order_acquire);
	atomic_store_explicit(&x, 3, memory_order_relaxed);
        // End of litmus behaviour
	atomic_thread_fence(memory_order_seq_cst);
	a = r0;
	return NULL;
}


void *thread_3(void *unused)
{
	int r0 = atomic_load_explicit(&x, memory_order_acquire);
	int r1 = atomic_load_explicit(&y, memory_order_relaxed);
        // End of litmus behaviour
	atomic_thread_fence(memory_order_seq_cst);
	b = r0;
	c = r1;
	return NULL;
}

int main()
{
	pthread_t t1, t2, t3;

	pthread_create(&t1, NULL, thread_1, NULL);
	pthread_create(&t2, NULL, thread_2, NULL);
	pthread_create(&t3, NULL, thread_3, NULL);

	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	pthread_join(t3, NULL);

	assert(!(a == 1 && b == 3 && c == 0));

	return 0;
}

However, GenMC (v0.8, commit #9cfca6d) reports that the behaviour is allowed:

genmc -imm paper-R2-alt.c 
Error detected: Safety violation!
Event (0, 9) in graph:
<-1, 0> main:
	(0, 1): THREAD_CREATE [thread 1] L.42
	(0, 2): THREAD_CREATE [thread 2] L.43
	(0, 3): THREAD_CREATE [thread 3] L.44
	(0, 4): THREAD_JOIN [thread 1] L.46
	(0, 5): THREAD_JOIN [thread 2] L.47
	(0, 6): THREAD_JOIN [thread 3] L.48
	(0, 7): Rna (a, 1) [(2, 6)] L.50
	(0, 8): Rna (b, 3) [(3, 4)] L.50
	(0, 9): Rna (c, 0) [(3, 5)] L.50
<0, 1> thread_1:
	(1, 1): Wrlx (y, 1) L.12
	(1, 2): Wrel (x, 1) L.13
	(1, 3): THREAD_END
<0, 2> thread_2:
	(2, 1): Frel L.19
	(2, 2): URacq (x, 1) [(1, 2)] L.20
	(2, 3): UWacq (x, 2) L.20
	(2, 4): Wrlx (x, 3) L.21
	(2, 5): Fsc L.22
	(2, 6): Wna (a, 1) L.23
	(2, 7): THREAD_END
<0, 3> thread_3:
	(3, 1): Racq (x, 3) [(2, 4)] L.30
	(3, 2): Rrlx (y, 0) [INIT] L.31
	(3, 3): Fsc L.32
	(3, 4): Wna (b, 3) L.33
	(3, 5): Wna (c, 0) L.34
	(3, 6): THREAD_END
Coherence:
x: [ (1, 2) (2, 3) (2, 4) ]

Assertion violation: !(a == 1 && b == 3 && c == 0)
Number of complete executions explored: 1
Number of blocked executions seen: 1
Total wall-clock time: 0.01s

The acquire tag in "(2, 3): UWacq (x, 2) L.20" looks a little suspicious, but I would assume GenMC treats it as relaxed (since W+acq makes no sense).

Avoid blocked executions when reading with __VERIFIER_assume

Reading values that immediately lead to blocked executions results in an exponential number of blocked executions.

When a __VERIFIER_assume condition depends only on one remaining atomic read, and no other memory operations are generated between the atomic read and the __VERIFIER_assume, we do not need to explore the graphs in which the atomic read results in a blocked execution.

The code below should result in 1 execution but results in 2^NTHREADS executions (of which all but 1 are blocked).

#include <pthread.h>
#include <assert.h>

#include <stdatomic.h>

#ifndef NTHREADS
#define NTHREADS 3
#endif

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#include <genmc.h>

struct task {
    atomic_bool run_worker;
} TASK[NTHREADS];

void *run(void * task_v) {
    struct task *task = task_v;
    __VERIFIER_assume(task->run_worker);
    return 0;
}

int main()
{
        pthread_t t[NTHREADS];
        for (size_t i = 0; i < NTHREADS; i++)
        {
            TASK[i].run_worker = false;
            pthread_create(&t[i], 0, run, &TASK[i]);
            TASK[i].run_worker = true;
        }

        for (uintptr_t i = 0; i < NTHREADS; i++)
                pthread_join(t[i], NULL);

        return 0;
}

Macro `__CONFIG_GENMC_INODE_DATA_SIZE` not defined

When compiling C code to LLVM IR using the GenMC v0.8 include directory, compilation fails because the __CONFIG_GENMC_INODE_DATA_SIZE macro is not defined. This happens for every program that includes GenMC's version of the pthread.h header file.

Example:

$ clang  -I ~/src/genmc/include -Xclang -disable-O0-optnone -g -emit-llvm -S test.c 
In file included from test.c:2:
In file included from /home/lucky/src/genmc/include/pthread.h:30:
/home/lucky/src/genmc/include/genmc_internal.h:33:3: error: "Internal error: inode size not defined!"
# error "Internal error: inode size not defined!"
  ^
/home/lucky/src/genmc/include/genmc_internal.h:50:12: error: use of undeclared identifier '__CONFIG_GENMC_INODE_DATA_SIZE'
        char data[__CONFIG_GENMC_INODE_DATA_SIZE]; // <-- used as data block mapping
                  ^
2 errors generated.

False positive in LKMM race detection?

GenMC 0.7 detects a race in the following code with LKMM. I am unsure whether that is a real race or something else:

#include <lkmm.h>
int spin, lock;

T0() {
	spin = 0;
	smp_wmb();
	WRITE_ONCE(lock, 1);
}

T1() {
	if (READ_ONCE(lock) != 0) {
		smp_mb();
		spin = 1;
	}
}

GenMC detects a race between the two updates of spin (line numbers won't match, sorry):

$ genmc -lkmm test.c 
WARNING: LKMM support is still at an experimental stage!
Error detected: Non-Atomic race!
Event (1, 3) conflicts with event (0, 2) in graph:
<-1, 0> main:
        (0, 0): B
        (0, 1): TC [forks 1] L.39
        (0, 2): Wna (spin, 0) L.9
        (0, 3): Fwmb L.11
        (0, 4): Wrlx (lock, 1) L.12
        (0, 5): TJ L.41
<0, 1> run1:
        (1, 0): B
        (1, 1): Rrlx (lock, 1) [(0, 4)] L.25
        (1, 2): Fmb L.26
        (1, 3): Wna (spin, 1) L.27

Number of complete executions explored: 0
Number of blocked executions seen: 1
Total wall-clock time: 0.01s

The issue can be fixed if T0 writes to lock with smp_store_release, or if we use smp_mb() instead of smp_wmb(); but I thought smp_wmb() should guarantee the order of the two writes. A sketch of the first variant follows.
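For concreteness, the first fix spelled out (a sketch of the variant, not the original code):

T0() {
	spin = 0;
	/* the release store keeps the plain write to spin ordered before it */
	smp_store_release(&lock, 1);
}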

Here is a gist with the complete code and a few other variants.

PS: The data race is also detected in GenMC 0.6.1.

BUG: Failure at Execution.cpp:242/getStaticAddr()! in GenMC 0.7, IMM and RC11

I am getting a weird error in rather simple code with mallocs.

BUG: Failure at Execution.cpp:242/getStaticAddr()!

And sometimes

The allocating operation (malloc()) does not happen-before the memory access!

Here is a gist that reproduces these cases.

Interestingly, the behavior of IMM changes with -disable-race-detection. I thought race detection was already completely disabled with IMM by default.

Problems with while(x--)

If the program being verified uses the value of x-- directly as a condition, GenMC raises an error. However, if the decrement operation and the condition are separated, the code is correctly verified by GenMC.
This is suspicious, because both approaches are equivalent (the decremented variable is local).

Consider the following example:

atomic_int data;
void *thread()
{
#ifdef USE_DEC_WHILE1
    int lmt = N_ITER;
    while (lmt--) data++;
#elif USE_LONG_WHILE
    int lmt = N_ITER;
    while (lmt) {
        data++;
        lmt--;
    }
#endif
}

The first variant produces an error and the other does not:

user@8976719904df:~/cds/test/scq$ genmc --version
GenMC (https://plv.mpi-sws.org/genmc):
  GenMC v0.8
  Built with LLVM 8.0.0 (RelWithDebInfo)
user@8976719904df:~/cds/test/scq$ genmc -- -DUSE_DEC_WHILE1 minimal.c 
Only PHI nodes may reference their own value!
  %dec = add nsw i32 %dec, -1, !dbg !50
genmc: LLVMModule.cpp:202: bool LLVMModule::transformLLVMModule(llvm::Module &, ModuleInfo &, const std::shared_ptr<const Config> &): Assertion `!llvm::verifyModule(mod, &llvm::dbgs())' failed.
Aborted (core dumped)
user@8976719904df:~/cds/test/scq$ genmc -- -DUSE_LONG_WHILE minimal.c 
No errors were detected.
Number of complete executions explored: 1
Total wall-clock time: 0.01s

This is the full code of the program producing the issue (with additional examples):

#include <stdlib.h>
#include <pthread.h>
#include <stdatomic.h>
#include <assert.h>

atomic_int data;

#define N_ITER 2

void *thread(void *unused)
{	
#ifdef USE_DEC_WHILE1
	int lmt = N_ITER;
	while (lmt--) data++;

	// error

#elif USE_DEC_WHILE2
	int lmt = N_ITER + 1;
	while (--lmt) data++;

	// error

#elif USE_LONG_WHILE
	int lmt = N_ITER;
	while (lmt) {
		data++;
		lmt--;
	}

	// ok

#elif USE_INC_WHILE
	int lmt = 0;
	while (lmt++ < N_ITER) data++;

	// error

#elif USE_FOR
	for (int i = 0; i < N_ITER; i++) data++;

	// ok

#elif USE_IF
	int lmt = N_ITER;
	while (1) {
		if (lmt--) data++;
		else break;
	}

	// error

#else
	DO NOT COMPILE -- missing variant
#endif

	return NULL;
}

int main()
{
	pthread_t t;
	data = 0;

	pthread_create(&t, NULL, thread, NULL);

	pthread_join(t, NULL);

	assert(data == N_ITER);
}

Bug with -imm (dependency broken after revisit?)

The following program triggers the bug. The bug appears only with IMM; with RC11 the program is considered correct. @JonasOberhauser helped debug it; maybe he can elaborate more.

#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <pthread.h>
#include <stdint.h>
#include <stdatomic.h>

#define N 2

typedef struct {
    void *data;    
} student;


typedef struct {
    uintptr_t key;
} data_t;


atomic_int count = 0;

static inline void *
alloc(student **p)
{
    data_t *data = malloc(sizeof(data_t));
    if (p) { /* the if is necessary for the bug to appear*/
        *p = malloc(sizeof(student));
        assert(*p);        
    }
    /* relaxed or acquire access is also necessary for the bug to happen */
    atomic_fetch_add_explicit(&count, 1, memory_order_relaxed);
    return data;
}


void *
run(void *arg)
{
    int x        = 0;
    student *p   = NULL;
    data_t *data = alloc(&p);
    p->data      = data;
    (void)arg;
    return NULL;
}

int
main(void)
{
    pthread_t t[N];

    for (size_t i = 0; i < N; i++) {
        pthread_create(&t[i], NULL, run, NULL);
    }

    for (size_t i = 0; i < N; i++) {
        pthread_join(t[i], NULL);
    }

    return 0;
}

It complains about accessing non-allocated memory, although the memory has been allocated.

Error detected: Attempt to access non-allocated memory!
Event (2, 10) in graph:
<-1, 0> main:
        (0, 1): MALLOC t
        (0, 2): THREAD_CREATE [thread 1] L.53
        (0, 3): Wna (t[0], 1) L.53
        (0, 4): THREAD_CREATE [thread 2] L.53
        (0, 5): Wna (t[1], 2) L.53
        (0, 6): Rna (t[0], 1) [(0, 3)] L.57
        (0, 7): THREAD_JOIN [thread 1] L.57
        (0, 8): Rna (t[1], 2) [(0, 5)] L.57
        (0, 9): THREAD_JOIN [thread 2] L.57
<0, 1> run:
        (1, 1): MALLOC p
        (1, 2): Wna (p, 0x0) L.40
        (1, 3): MALLOC  L.25
        (1, 4): MALLOC  L.27
        (1, 5): Wna (p, 0x20) L.27
        (1, 6): Rna (p, 0x20) [(1, 5)] L.28
        (1, 7): URrlx (count, 1) [(2, 8)] L.31
        (1, 8): UWrlx (count, 2) L.31
        (1, 9): Rna (p, 0x20) [(1, 5)] L.42
        (1, 10): Wna (, 0x10) L.42
        (1, 11): FREE L.44
        (1, 12): THREAD_END
<0, 2> run:
        (2, 1): MALLOC p
        (2, 2): Wna (p, 0x0) L.40
        (2, 3): MALLOC  L.25
        (2, 4): MALLOC  L.27
        (2, 5): Wna (p, 0x50) L.27
        (2, 6): Rna (p, 0x50) [(2, 5)] L.28
        (2, 7): URrlx (count, 0) [INIT] L.31
        (2, 8): UWrlx (count, 1) L.31
        (2, 9): Rna (p, 0x50) [(2, 5)] L.42
        (2, 10): Wna (???, 0x60) L.42
Coherence:
p: [ (2, 2) (2, 5) ]
count: [ (2, 8) (1, 8) ]
p: [ (1, 2) (1, 5) ]

Number of complete executions explored: 1
Number of blocked executions seen: 1
Total wall-clock time: 0.01s

LAPOR implementation is not complete

GenMC generates one execution without LAPOR, and zero with -lapor:

#include <pthread.h>
#include <assert.h>

#include <stdatomic.h>

#define NTHREADS 2

#include <stdint.h>
#include <stdio.h>

static atomic_int x = 0, y = 0;
static pthread_mutex_t mylockA,mylockB,mylockC;

#define CS {                                        \
        iter++;                                     \
        int t = x+1;                                \
        /*printf("%dT %ld: t = %d\n", iter, id, t); */  \
        x = t;                                      \
    }

static void *run(void *arg)
{
    intptr_t id = (intptr_t)arg;
    int iter = 0;
    pthread_mutex_lock(&mylockA);
    CS
    if (id == 1)
        pthread_mutex_unlock(&mylockA);
    pthread_mutex_lock(&mylockB);
    CS
    if (id == 0)
        pthread_mutex_unlock(&mylockB);    

    return NULL;
}

int main()
{
        pthread_t t[NTHREADS];
    //printf("===\n");

        for (intptr_t i = 0; i < NTHREADS; i++)
                pthread_create(t+i, 0, run, (void*)i);
        for (intptr_t i = 0; i < NTHREADS; i++)
                pthread_join(t[i], NULL);


        return 0;
}

Incorrect detection of Non-termination with two reads to the same location

Unfortunately I don't have a minimal example but the graph on which the liveness violation is detected looks something like this:

// do { 
R(x, 0)
R(y, ...) // some more reads actually
R_CAS(x,1) //fail, reads from mo-maximal store to x
// } while (...);

After the R_CAS, GenMC reports a liveness violation.
However, if the thread were to continue running, it would read (x, 1) (and possibly then CAS x from 1 to 2) and eventually leave the loop.

The loop condition does not directly depend on the result of the CAS, but reading x = 1 or x = 2 would eventually break out of the loop.

Using __VERIFIER_spin_{start,end}

I found these two markers here, but it seems that __VERIFIER_spin_end takes a boolean as argument. I have a few questions about them.

  1. What is the correct usage of these markers? Is it something like this (see also the macro sketch after this list)?
__VERIFIER_spin_start();
while (atomic_load(&me->next) == NULL) {
   // do something
   __VERIFIER_spin_end(0);
}
__VERIFIER_spin_end(1);
  2. Does it work if I use non-atomic accesses in the loop?
  3. Does it work if I wait on multiple variables (e.g., as in Peterson's lock)?
  4. Does it work with nested loops, each calling their own __VERIFIER_spin_* ?
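For comparison, other reports in this tracker use an await_while macro that places both markers around every evaluation of the condition (copied from those reports; I don't know whether it answers questions 2-4):

#include <stdbool.h>

void __VERIFIER_spin_start(void);
void __VERIFIER_spin_end(bool);

/* Each iteration: mark the spin start, evaluate the condition, and
   pass true to the end marker only when the loop is about to exit. */
#define await_while(cond)                                        \
	for (int tmp = 0; __VERIFIER_spin_start(), tmp = (cond), \
			__VERIFIER_spin_end(!tmp), tmp;)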

LKMM xchg as a full barriers

In LKMM, xchg should work as a full barrier according to this paper, table 3:

Primitive        Event
xchg_relaxed()   R[once],W[once]
xchg_acquire()   R[acquire],W[once]
xchg_release()   R[once],W[release]
xchg()           F[mb],R[once],W[once],F[mb]

A use of that "feature" can be seen in Linux mcs_spinlock.h:

static inline
void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
{
	struct mcs_spinlock *prev;

	/* Init node */
	node->locked =  0;
	node->next = NULL;

	/*
	 * We rely on the full barrier with global transitivity implied by the
	 * below xchg() to order the initialization stores above against any
	 * observation of @node. And to provide the ACQUIRE ordering associated
	 * with a LOCK primitive.
	 */
	prev = xchg(lock, node);
	if (likely(prev == NULL)) {
		return;
	}
	WRITE_ONCE(prev->next, node);

But in lkmm.h, xchg() maps only to a seq_cst exchange with its implicit barriers:

#define xchg(p, v)         atomic_exchange_explicit(p, v, memory_order_seq_cst)

If that mapping were correct, mcs_spinlock.h would be wrong, and a release barrier would be necessary between the initialization of node->locked and WRITE_ONCE(prev->next, node) to avoid deadlocks.

I replaced xchg() in lkmm.h with the following, which fixes the issue:

#ifdef FIX_XCHG
#define xchg(p, v) ({                                                            \
	__typeof__((v)) _v_ = (v); /* evaluate v once */                         \
	(smp_mb(), _v_ = atomic_exchange_explicit(p, _v_, memory_order_seq_cst), \
	 smp_mb(), _v_);})
#else
#define xchg(p, v)         atomic_exchange_explicit(p, v, memory_order_seq_cst)
#endif

GenMC crashes if built against assertions-enabled LLVM

It seems like GenMC crashes when it's compiled against an assertions-enabled build of LLVM with the following message:

genmc: /home/magnus/repos/llvm-monorepo/llvm/lib/Support/CommandLine.cpp:1344: bool {anonymous}::CommandLineParser::ParseCommandLineOptions(int, const char* const*, llvm::StringRef, llvm::raw_ostream*, bool): Assertion `hasOptions() && "No options specified!"' failed.

This happens with all the different LLVM versions I have lying around, but this particular one is configured with the following cmake flags, in case it helps with reproduction: -DLLVM_OPTIMIZED_TABLEGEN=y -DLLVM_LINK_LLVM_DYLIB=y -DLLVM_USE_SPLIT_DWARF=y -DLLVM_TARGETS_TO_BUILD="X86" -DPython_ADDITIONAL_VERSIONS=2 -DLLVM_BUILD_MODE=RelWithDebInfo -DCMAKE_BUILD_TYPE=RelWithDebInfo -DLLVM_ENABLE_ASSERTIONS=y -DCMAKE_INSTALL_PREFIX=/scratch/magnus/llvm/11-rc2 -DLLVM_ENABLE_PROJECTS="compiler-rt;clang" -DCMAKE_CXX_STANDARD=14. I am using Arch Linux.

Here's a GDB backtrace when running genmc tests/correct/data-structures/mutex/variants/main0.c

#0  0x00007ffff3e74d22 in raise () from /usr/lib/libc.so.6
#1  0x00007ffff3e5e862 in abort () from /usr/lib/libc.so.6
#2  0x00007ffff3e5e747 in __assert_fail_base.cold () from /usr/lib/libc.so.6
#3  0x00007ffff3e6d616 in __assert_fail () from /usr/lib/libc.so.6
#4  0x00007ffff49d3873 in llvm::cl::ParseCommandLineOptions(int, char const* const*, llvm::StringRef, llvm::raw_ostream*, char const*, bool) () at /home/magnus/repos/llvm-monorepo/llvm/lib/Support/CommandLine.cpp:325
#5  0x0000555555ae2dfc in setCommandLineOpts(clang::CodeGenOptions const&) ()
    at /home/magnus/repos/llvm-monorepo/llvm/include/llvm/ADT/SmallVector.h:152
#6  0x0000555555afcc52 in (anonymous namespace)::EmitAssemblyHelper::EmitAssembly(clang::BackendAction, std::unique_ptr<llvm::raw_pwrite_stream, std::default_delete<llvm::raw_pwrite_stream> >) ()
    at /home/magnus/repos/llvm-monorepo/clang/lib/CodeGen/BackendUtil.cpp:861
#7  0x0000555555afe867 in clang::EmitBackendOutput(clang::DiagnosticsEngine&, clang::HeaderSearchOptions const&, clang::CodeGenOptions const&, clang::TargetOptions const&, clang::LangOptions const&, llvm::DataLayout const&, llvm::Module*, clang::BackendAction, std::unique_ptr<llvm::raw_pwrite_stream, std::default_delete<llvm::raw_pwrite_stream> >) () at /usr/include/c++/10.2.0/bits/move.h:76
#8  0x0000555555a943c6 in clang::BackendConsumer::HandleTranslationUnit(clang::ASTContext&) ()
    at /home/magnus/repos/llvm-monorepo/clang/include/clang/Basic/TargetInfo.h:1066
#9  0x00005555562e1db9 in clang::ParseAST(clang::Sema&, bool, bool) ()
    at /home/magnus/repos/llvm-monorepo/clang/lib/Parse/ParseAST.cpp:171
#10 0x0000555555f7e259 in clang::FrontendAction::Execute() ()
    at /home/magnus/repos/llvm-monorepo/clang/lib/Frontend/FrontendAction.cpp:950
#11 0x0000555555f35d96 in clang::CompilerInstance::ExecuteAction(clang::FrontendAction&) ()
    at /home/magnus/repos/llvm-monorepo/clang/lib/Frontend/CompilerInstance.cpp:984
#12 0x0000555555989e75 in main (argc=<optimized out>, argv=0x7fffffffe5d8) at /usr/include/c++/11.1.0/bits/unique_ptr.h:407

misdetection of spinloop in GenMC v0.6

We have a few examples where GenMC v0.6 cannot properly detect the spinloops. Below is a simple MCS lock test case using stdatomic with explicit notation.

Depending on how the following loop is implemented, GenMC either hangs or terminates:

#ifdef EXPLICIT
	while (atomic_load_explicit(&me->next, memory_order_seq_cst) == NULL);
#else
	while (me->next == NULL);
#endif
genmc -- -DEXPLICIT mcs-sc.c ## hangs
genmc -- mcs-sc.c ## terminates

Note that the two are equivalent with respect to the barrier mode (me->next is an atomic pointer). Also, whether or not I use -mo, -imm, or -check-liveness is irrelevant; it hangs regardless. The same problem occurs if we use the __atomic builtins.

Is there a way to help GenMC know what to monitor as a spinloop?

Here is the test case:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdint.h>
#include <assert.h>
#include <stdatomic.h>
/*******************************************************************************
 * MCS lock implementation
 ******************************************************************************/
typedef struct mcs_node_s {
	_Atomic(struct mcs_node_s*) next;
	atomic_int spin;
} mcs_node_t;

typedef struct {
	_Atomic(struct mcs_node_s*) tail;
} mcs_lock_t;

static inline void
mcslock_acquire(mcs_lock_t *m, mcs_node_t *me)
{
	mcs_node_t *tail;

	atomic_store_explicit(&me->next, NULL, memory_order_seq_cst);
	atomic_store_explicit(&me->spin, 1, memory_order_seq_cst);

	tail = atomic_exchange_explicit(&m->tail, me, memory_order_seq_cst);
	if (!tail)
		return;

	atomic_store_explicit(&tail->next, me, memory_order_seq_cst);
	while (atomic_load_explicit(&me->spin, memory_order_seq_cst));
}

static inline void
mcslock_release(mcs_lock_t *m, mcs_node_t *me)
{
	if (!atomic_load_explicit(&me->next, memory_order_seq_cst)) {
		struct mcs_node_s* old = me;
		if (atomic_compare_exchange_strong_explicit(
					&m->tail, &old, NULL,
					memory_order_seq_cst,
					memory_order_seq_cst)) {
			return;
		}
#ifdef EXPLICIT
		while (atomic_load_explicit(&me->next, memory_order_seq_cst) == NULL);
#else
		while (me->next == NULL);
#endif
	}
	struct mcs_node_s * next = atomic_load_explicit(&me->next,
			memory_order_seq_cst);
	atomic_store_explicit(&next->spin, 0, memory_order_seq_cst);
}
/*******************************************************************************
 * test
 ******************************************************************************/
#define NTHREADS 3
static mcs_lock_t __lock;
static mcs_node_t __nodes[NTHREADS];
static unsigned int x;
static void* run(void *arg) {
	uint32_t cpu = (intptr_t) arg;
	mcslock_acquire(&__lock, &__nodes[cpu]);
	x++;
	mcslock_release(&__lock, &__nodes[cpu]);
	return NULL;
}
int main() {
	pthread_t t[NTHREADS];
	for (intptr_t i = 0; i < NTHREADS; i++) pthread_create(t+i, 0, run, (void*)i);
	for (intptr_t i = 0; i < NTHREADS; i++) pthread_join(t[i], NULL);
	assert(x == NTHREADS && "unexpected sum");
	return 0;
}

LKMM atomic_andnot and non-returning atomics

I'm trying to understand why the following piece fails with __VERIFIER_atomicrmw_noret but not without it. Is this behavior expected?

#include <pthread.h>
#include <assert.h>
#include <stdint.h> /* for intptr_t */
#include <lkmm.h>
#define NTHREADS 3

int x;
atomic_t y;

#if 0
	#define atomic_andnot(i, v) do {						\
		(void) atomic_fetch_and_explicit(&(v)->counter, ~(i), memory_order_relaxed);	\
	} while(0)
#else
	#define atomic_andnot(i, v) do {						\
		(void) __VERIFIER_atomicrmw_noret();					\
		atomic_fetch_and_explicit(&(v)->counter, ~(i), memory_order_relaxed);	\
	} while(0)
#endif

void *run(void *arg)
{
    int tid = ((intptr_t) arg);
    switch (tid) {
    case 0:
        x = 1;
        smp_store_release(&y.counter, 0x3);
        break;
    case 1:
        atomic_andnot(0x2, &y);
        break;
    case 2:
        if (smp_load_acquire(&y.counter) == 0x1) {
            assert (x == 1);
        }
        break;
    }
    return NULL;
}
int main()
{
    pthread_t t[NTHREADS];
    for (intptr_t i = 0; i < NTHREADS; i++) pthread_create(&t[i], NULL, run, (void *) i);
    for (intptr_t i = 0; i < NTHREADS; i++) pthread_join(t[i], 0);
    return 0;
}

The error I get with GenMC 0.8 is this:

 genmc -lkmm weird-behavior.c 
WARNING: LKMM support is still at an experimental stage!
Error detected: Safety violation!
Event (3, 2) in graph:
<-1, 0> main:
        (0, 1): MALLOC t
        (0, 2): THREAD_CREATE [thread 1] L.42
        (0, 3): Wna (t[0], 1) L.42
        (0, 4): THREAD_CREATE [thread 2] L.42
        (0, 5): Wna (t[1], 2) L.42
        (0, 6): THREAD_CREATE [thread 3] L.42
        (0, 7): Wna (t[2], 3) L.42
        (0, 8): Rna (t[0], 1) [(0, 3)] L.43
        (0, 9): THREAD_JOIN [thread 1] L.43
        (0, 10): Rna (t[1], 2) [(0, 5)] L.43
        (0, 11): THREAD_JOIN [thread 2] L.43
        (0, 12): Rna (t[2], 3) [(0, 7)] L.43
        (0, 13): THREAD_JOIN [thread 3] L.43
<0, 1> run:
        (1, 1): Wna (x, 1) L.25
        (1, 2): Wrel (y.counter, 3) L.26
        (1, 3): THREAD_END
<0, 2> run:
        (2, 1): URrlx (y.counter, 3) [(1, 2)] L.29
        (2, 2): UWrlx (y.counter, 1) L.29
        (2, 3): THREAD_END
<0, 3> run:
        (3, 1): Racq (y.counter, 1) [(2, 2)] L.32
        (3, 2): Rna (x, 0) [INIT] L.33
Coherence:
y.counter: [ (1, 2) (2, 2) ]

Assertion violation: x == 1
Number of complete executions explored: 1
Number of blocked executions seen: 1
Total wall-clock time: 0.02s

Stack variable and printf cause crash

#include <assert.h>
#include <stdio.h>

#define LEN 4

int main()
{ 
    int i = 6;
    printf("%d",i); // works
    char str[LEN] = {'%', 'd', 0};
    printf("%s",str); // crash
    printf(str, i); // would also crash
}

The crash also occurs without the initializer (using some other way to set up str).
It does not occur if str has static storage duration (I've tried both the static keyword and moving the declaration outside the function).
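
For completeness, here is a sketch of the non-crashing variant just described (only change: str is static; this reflects the reported behavior and is not otherwise verified):

#include <stdio.h>

#define LEN 4

int main()
{
    int i = 6;
    static char str[LEN] = {'%', 'd', 0}; /* static storage duration */
    printf("%s", str); /* no crash */
    printf(str, i);
}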

Print a non-termination warning if the graph becomes too large

The spin-assume transformation does not apply to all blocking loops.
Such loops get unrolled over and over without any execution graph being produced.
Inexperienced users of GenMC are confused by the lack of output and have no idea what is happening.
My suggestion: if the execution graph grows too large due to repeated unrolling (e.g., 1000 iterations) of a loop containing atomic events, GenMC should print a single warning that a possibly non-terminating blocking loop in lines XXX-YYY may prevent GenMC from terminating on this program, and that the user can truncate the search space with "__VERIFIER_assume(false);" or "--unroll=XXX".
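
To illustrate the manual workaround, here is a minimal sketch (flag is a made-up variable; __VERIFIER_assume is assumed to be the declaration shipped in GenMC's genmc.h header):

#include <stdatomic.h>
#include <genmc.h>

atomic_int flag;

void consumer(void)
{
	/* A blocking loop the spin-assume pass might miss:
	 *   while (atomic_load_explicit(&flag, memory_order_acquire) == 0);
	 * Manual replacement: block the execution instead of unrolling it. */
	__VERIFIER_assume(atomic_load_explicit(&flag, memory_order_acquire) != 0);
}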

GenMC crash with -check-liveness and -disable-spin-assume

Here is a small example that crashes GenMC

#include <stdbool.h>

void __VERIFIER_spin_start(void);
void __VERIFIER_spin_end(bool);

#define await_while(cond)                                        \
	for (int tmp = 0; __VERIFIER_spin_start(), tmp = (cond), \
			__VERIFIER_spin_end(!tmp), tmp;)

int *ptr;
int main()
{
	await_while(!__atomic_load_n(&ptr, __ATOMIC_RELAXED));
	return 0;
}

It uses an await_while macro as discussed in #16, but causes GenMC to give this error message:

user@d0d1f66d6eaf:~$ genmc -check-liveness -disable-spin-assume test.c
genmc: /usr/lib/llvm-8/include/llvm/Support/Casting.h:255: typename cast_retty<X, Y *>::ret_type llvm::cast(Y *) [X = llvm::IntegerType, Y = const llvm::Type]: Assertion `isa<X>(Val) && "cast<Ty>() argument of incompatible type!"' failed.
Aborted (core dumped)
user@d0d1f66d6eaf:~$ 

I know that for this small example the automatic spin-assume transformation works, but in the original example we started from it does not. Is there something wrong with our use of __VERIFIER_spin_*, or is there a bug in GenMC?
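
For reference, the await_while above expands to roughly the following, so each evaluation of the condition is bracketed by the two annotation calls:

for (int tmp = 0;
     __VERIFIER_spin_start(),
     tmp = (!__atomic_load_n(&ptr, __ATOMIC_RELAXED)),
     __VERIFIER_spin_end(!tmp),
     tmp;)
	; /* empty loop body */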

Access to freed struct not detected

#include <stdio.h>
#include <stdlib.h>

typedef struct {
	int x;
	int y;
} obj;

int main(void) {

	obj *o = malloc(sizeof(obj));
	o->x = 1;
	o->y = 2;
	free(o);
	/* this access to freed memory is not detected */
	o->y = 10;
	/* this access to freed memory is detected */
	o->x = 10;
}

Failure at GenMCDriver.cpp:2017/visitMalloc()! with no malloc

I keep getting this error (BUG: Failure at GenMCDriver.cpp:2017/visitMalloc()!) with LKMM in a piece of code that has no malloc calls. Do you have any suggestions on how to figure out where this is happening in my code?

I tried -disable-stop-on-system-error, but this seems to be an unrelated option.

IMM release operations do not correctly update PPoRf

The following test case crashes GenMC, whether USE_STORE is defined or not. The root cause is that the second release store in runA does not include the PPoRf prefix of the first release store.
As a result, the store "x=1" of runB is not included in G|_{{a} ∪ prefix(a)} on the revisit of the read R(y) in runB that reads from the second release store.

I will submit a fix once I have an issue number.

#include <stdatomic.h>
#include <pthread.h>

atomic_int x, y, z;

void* runA(void *p) {
    int a = atomic_load_explicit(&x,memory_order_relaxed);
#ifdef USE_STORE
    atomic_store_explicit(&z, 1, memory_order_release);
#else
    atomic_exchange_explicit(&z, 1, memory_order_release);
#endif
    atomic_store_explicit(&y, 1, memory_order_release);
    return 0;
}

void* runB(void *q) {
    int b = atomic_load_explicit(&y, memory_order_relaxed);
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    return 0;
}

pthread_t t[2];

int main() {
  pthread_create(&t[1], NULL, &runB, NULL);
  pthread_create(&t[0], NULL, &runA, NULL);
}

Incorrect liveness violation detection

As far as I understand, this small example should not hang (no liveness violation).

#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

int x;
atomic_int lock = 1;

void *run_t0(void *arg) {
    /* initial lock owner */
    x = 1; // critical section 
   
    /* release lock (SC) */
    lock = 0; 
    return NULL;
}

static void *run_t1(void *arg) {
    /* acquire lock */
    do {
        while (lock != 0);
    } while (atomic_exchange(&lock, 1));

    /* because atomic_exchange is acquire, and lock = 0 is release, the
     * following assertion must be true */
    assert(x == 1 && "unexpected value");

    return NULL;
}

int main() {
    pthread_t t1;
    pthread_create(&t1, 0, run_t1, 0);
    run_t0(NULL);
    pthread_join(t1, NULL);
    return 0;
}

Running this with -check-liveness gives me the following error:

user@94ae0962b2ea:~/genmc-0.6$ genmc -mo -imm --check-liveness liveness-bug.c 
Error detected: Liveness violation!
Event (1, 3) in graph:
<-1, 0> main:
	(0, 0): B
	(0, 1): TC [forks 1] L.33
	(0, 2): Wna (x, 1) L.11
	(0, 3): Wsc (lock, 0) L.14
	(0, 4): TJ L.35
<0, 1> run_t1:
	(1, 0): B
	(1, 1): LOOP_BEGIN
	(1, 2): SPIN_START
	(1, 3): SPIN_START

Non-terminating spinloop
Number of complete executions explored: 0
Number of blocked executions seen: 1
Total wall-clock time: 0.02s

Could this be caused by the nested loops?
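
One way to test that hypothesis would be to flatten the acquisition into a single loop; the plain test-and-set sketch below should acquire the lock equivalently for this test (my assumption, not verified):

    /* flat acquire: one loop, no nesting */
    while (atomic_exchange(&lock, 1));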

installation of header files

When running make install, some header files are not installed. It seems that the current rule only picks up files with the .h extension, ignoring cassert and atomic as well as the files in the sys and bits subdirectories. Below is a patch that fixes the problem for me. I am not sure it is the best solution, so I won't open a pull request.

Another thing: the cassert file is a symbolic link pointing to an absolute path on a specific machine. The patch below also fixes that by making the symlink relative.

diff --git a/Makefile.am b/Makefile.am
index 1411d809..97ee94fc 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -1,4 +1,4 @@
-SUBDIRS = src
+SUBDIRS = src include
 
 .PHONY: test ftest
 test:
diff --git a/configure.ac b/configure.ac
index 105fea21..8ea60139 100644
--- a/configure.ac
+++ b/configure.ac
@@ -729,5 +729,5 @@ AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[
 
 
 AC_CONFIG_HEADERS([config.h])
-AC_CONFIG_FILES([Makefile src/Makefile])
+AC_CONFIG_FILES([Makefile src/Makefile include/Makefile])
 AC_OUTPUT
diff --git a/include/Makefile.am b/include/Makefile.am
new file mode 100644
index 00000000..5c12415b
--- /dev/null
+++ b/include/Makefile.am
@@ -0,0 +1,2 @@
+genmcdir = $(includedir)/genmc
+nobase_genmc_HEADERS = *.h cassert atomic bits/*.h sys/*.h
diff --git a/include/cassert b/include/cassert
index 8089145f..d2dc72e2 120000
--- a/include/cassert
+++ b/include/cassert
@@ -1 +1 @@
-/home/michalis/Documents/genmc-tool/include/assert.h
\ No newline at end of file
+assert.h
\ No newline at end of file

cmpxchg in lkmm.h

The cmpxchg in lkmm.h does not have the same signature as in Linux.

According to this file and to its usage (e.g., in the MCS lock), cmpxchg should return the value previously stored instead of a bool, which is what lkmm.h (following C11) returns. That is probably a bit tricky to express as a macro, though.
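
For the record, a value-returning variant can be sketched with a GNU statement expression (assumes GCC/Clang extensions; illustration only, not a proposal for lkmm.h):

#define cmpxchg(p, o, n) ({						\
	__typeof__(*(p)) __old = (o);					\
	atomic_compare_exchange_strong_explicit((p), &__old, (n),	\
			memory_order_seq_cst, memory_order_seq_cst);	\
	__old; /* value found at *p; equals (o) iff the CAS succeeded */ \
})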

Besides that, the cmpxchg_{acquire, release} variants in lkmm.h take the wrong argument (x instead of p).

We are not actually using LKMM, so this is not super important; I just wanted to keep a record of it.
