
MemTrace

MemTrace is a dynamic binary analysis tool designed to report memory overlaps, i.e. uninitialized memory reads together with all the memory write accesses that overlap the same memory location.

MemTrace makes use of Dynamic Binary Instrumentation (DBI) to analyze all the instructions executed during a program's run and keep track of every memory access. At the end of the execution, the tool generates a report containing all the detected overlaps.

The main goal of MemTrace is to help the user identify instructions that can possibly lead to information disclosure (i.e. leaks). Indeed, by inspecting the generated report, the user knows where an uninitialized memory read is performed and which instructions previously wrote to the same memory location, and can thus understand whether it is possible to control what or where to read.

Supported platforms

Currently, MemTrace only supports 64-bit x86 machines running a Linux distribution. The tool works as is if the operating system uses glibc; otherwise, a custom malloc handler header must be implemented. More info can be found in section "Malloc handlers".

Intel PIN, which is used as the DBI framework, only supports the x86 and x86_64 architectures. Support for more operating systems will be added in the future.

Pre-requisites

  • Install from the system package manager (e.g. apt in Ubuntu) the following packages:

    • ninja-build
    • libglib2.0-dev
    • make
    • gcc
    • g++
    • pkg-config
    • python3
    • python3-pip
    • gdb

    In Ubuntu, this is done running the command "sudo apt-get install ninja-build libglib2.0-dev make gcc g++ pkg-config python3 python3-pip gdb".

  • Using pip, install the packages listed in requirements.txt. This can be easily done by running "python3 -m pip install -r requirements.txt".

Building

  1. Clone the MemTrace repository: 'git clone https://github.com/kristopher-pellizzi/MemTraceThesis'

  2. Enter the repository folder: 'cd MemTraceThesis'

  3. Run 'make'. Note: this operation may take a while.

Usage

After building, MemTrace and all the bundled third-party tools (i.e. AFL++ and Intel PIN) are compiled, and the tool is ready to run.

The launcher script, memTracer.py, can be found in folder bin. There are 2 modes of launching MemTrace:

  • As a standalone tool: run 'memTracer.py -x -- path/to/executable arg1 arg2 --opt1 optarg1 --opt2'.

    This way, MemTrace will analyze the given executable using the passed arguments and will generate a report containing all the memory overlaps detected during that specific execution. The generated report will be in a custom binary format. To generate the human-readable version, launch script binOverlapsParser.py from folder bin passing the binary report as a parameter.

  • Combined with AFL++: this mode will launch some instances of the fuzzer to generate an input corpus for the given executable. The generated inputs will be used by some parallel processes of MemTrace to analyze the executable. There are many options available to configure this mode. The format of the command to use this mode is the following: memTracer.py [OPTIONS] -- path/to/executable [FIXED_ARGS] [@@].

    • FIXED_ARGS is an optional list of arguments to pass to the executable which are always the same over all the executions
    • @@ is an input file placeholder. When it is used, the tool will replace it with the path of a generated input, thus changing the content of the input file at each execution. NOTE: if @@ is not used, the tool will implicitly infer that the executable reads from stdin. So, both AFL++ and MemTrace will use the generated inputs as stdin for the executable.

    To launch MemTrace with this mode, it is required to manually create an output folder and, inside that, another folder containing all the files to be used as initial testcases for the fuzzer. By default, MemTrace will look for the following structure:

    • out
      • in
        • init_testcase1
        • init_testcase2
        • [...]
        • init_testcaseN
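As an illustration, the layout above can be prepared with a short script like the following sketch (the function name is hypothetical; 'out' and 'in' are the default folder names, configurable via the -d and -i options):

```python
import os
import shutil

def prepare_fuzz_dirs(fuzz_dir="out", fuzz_in="in", testcases=()):
    """Create the out/in structure MemTrace looks for by default and
    copy the initial testcases into the input folder."""
    in_dir = os.path.join(fuzz_dir, fuzz_in)
    os.makedirs(in_dir, exist_ok=True)
    for tc in testcases:
        shutil.copy(tc, in_dir)
    return in_dir
```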

    After completion, the output folder will contain several files and folders. The most important ones are called (by default) fuzzer_out and tracer_out. fuzzer_out is created automatically by the fuzzer and will contain all the inputs generated during fuzzing. tracer_out is created automatically by MemTrace, and will contain a folder for each launched fuzzer instance, in turn containing all the information about the executed commands for each input generated by the fuzzer.

    The generated report will be stored in the current working directory. To generate it again, it is possible to use script merge_reports.py, passing as parameter the path of the tracer output folder (tracer_out by default).

    It is also possible to re-execute the tracer for every generated input. This is usually useful if the user needs to perform the same analysis again, but with different parameters for the tracer (e.g. disabling the string optimizations heuristic).

    Finally, it is possible to disable argv fuzzing. This is useful if the user knows the program does not accept any command-line argument or wants to test some specific arguments. Moreover, argv fuzzing is not always feasible; in those cases, MemTrace must be launched with argv fuzzing disabled. See section "Command-line arguments fuzzing" for the cases in which it cannot be enabled.

Manual

Usage: memTracer.py [-h] [--disable-argv-rand] [--single-execution] [--keep-ld] [--unique-access-sets] [--disable-string-filter] [--str-opt-heuristic {OFF,ON,LIBS}] [--fuzz-out FUZZ_OUT] [--out TRACER_OUT] [--fuzz-dir FUZZ_DIR] [--fuzz-in FUZZ_IN] [--backup OLDS_DIR] [--admin-priv] [--time EXEC_TIME] [--slaves SLAVES] [--processes PROCESSES] [--ignore-cpu-count] [--experimental] [--no-fuzzing] [--stdin] [--store-tracer-out] [--dict DICTIONARY] -- /path/to/executable [EXECUTABLE_ARGS]

optional arguments:

  • -h, --help: show this help message and exit

  • --disable-argv-rand, -r: flag used to disable command line argument randomization (default: True)

  • --single-execution, -x: flag used to specify that the program should be executed only once with the given parameters. This may be useful in case the program does not read any input (it is useless to run the fuzzer in these cases) and it requires to run with very specific argv arguments (e.g. utility cp from coreutils requires correctly formatted, existing paths to work). Options -f, -d, -i, -b, -a, -t, -s, -p, --ignore-cpu-count, -e and --no-fuzzing (i.e. all options related to the fuzzing task) will be ignored. (default: False)

  • --keep-ld: flag used to specify the tool should not ignore instructions executed from the loader's library (ld.so in Linux). By default, they are ignored because there may be a degree of randomness which makes the instructions from that library always change between executions of the same command. This may cause some confusion in the final report. For this reason they are ignored. (default: False)

  • --unique-access-sets, -q: this flag is used to also report those uninitialized read accesses which have a unique access set. Indeed, by default, the tool only reports read accesses that have more than one access set, i.e. those that can read different bytes according to the input. This behaviour is meant to report only those uninitialized reads that are more likely to be interesting. By enabling this flag, even the uninitialized read accesses that always read from the same set of write accesses are reported in the merged report. (default: False)

  • --disable-string-filter: this flag allows disabling the filter which removes all the uninitialized read accesses coming from a string function (e.g. strcpy, strcmp...). This filter has been designed because string functions are optimized and, because of that, very often read uninitialized bytes, but those uninitialized reads are not relevant. A heuristic is already used to try and reduce this kind of false positive. However, when we use the fuzzer and merge all the results, the final report may still contain a lot of them. (default: False)

  • --str-opt-heuristic {OFF,ON,LIBS}, -u {OFF,ON,LIBS}: option used to specify whether to enable the heuristic designed to remove the high number of irrelevant uninitialized reads due to the optimized versions of string operations (e.g. strcpy). The heuristic can be either disabled, enabled or partially enabled. If disabled, it is never applied. If enabled, it is always applied. If partially enabled, it is applied only to accesses performed by code from libraries (e.g. libc). By default, the heuristic is partially enabled. Possible choices are: OFF => Disabled; ON => Enabled; LIBS => Partially enabled. WARNING: when the heuristic is enabled, many false negatives may arise. If you want to avoid false negatives due to the application of the heuristic, you can simply disable it, but at a cost. Indeed, a single program may contain many string operations, and almost all of them will generate uninitialized reads due to the optimizations (e.g. strcpy usually loads 32 bytes at a time, but then checks are performed on the loaded bytes to avoid copying junk). So, executing the program with the heuristic disabled may generate a huge number of uninitialized reads. Those reads actually load uninitialized bytes from memory to registers, but they may not be relevant. Example: strcpy(dest, src), where |src| is a 4-byte string, may load 32 bytes from memory to SIMD registers. Then, after some checks performed on the loaded bytes, only the 4 bytes belonging to |src| are copied to |dest|. (default: LIBS)

  • --fuzz-out FUZZ_OUT, -f FUZZ_OUT: Name of the folder containing the results generated by the fuzzer (default: fuzzer_out)

  • --out TRACER_OUT, -o TRACER_OUT: Name of the folder containing the results generated by the tracer (default: tracer_out)

  • --fuzz-dir FUZZ_DIR, -d FUZZ_DIR: Name of the folder containing all the requirements to run the fuzzer. This folder must already exist and contain all the files/directories required by the fuzzer. (default: out)

  • --fuzz-in FUZZ_IN, -i FUZZ_IN: Name of the folder containing the initial testcases for the fuzzer. This folder must already exist and contain the testcases (default: in)

  • --backup OLDS_DIR, -b OLDS_DIR: Name of the folder used to move old results of past executions before running the tracer again. (default: olds)

  • --admin-priv, -a: Flag used to specify the user has administration privileges (e.g. can use sudo on Linux). This can be used by the launcher in order to execute a configuration script that, according to the fuzzer's manual, should speed up the fuzzing task. It may require the admin password to execute. (default: False)

  • --time EXEC_TIME, -t EXEC_TIME: Specify fuzzer's execution time. By default, the value is measured in seconds. The following modifiers can be used: 's', 'm', 'h', 'd', 'w', 'M', to specify time respectively in seconds, minutes, hours, days, weeks, Months (intended as 30 days months) (default: 60)
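As an illustration only (the names and structure below are hypothetical, not taken from memTracer.py itself), the -t modifier semantics described above amount to:

```python
# Unit multipliers for the -t modifiers described above ('M' is a
# 30-day month, as stated in the manual).
TIME_UNITS = {'s': 1, 'm': 60, 'h': 3600,
              'd': 86400, 'w': 7 * 86400, 'M': 30 * 86400}

def parse_exec_time(value: str) -> int:
    """Convert a -t value like '90', '10m' or '2h' to seconds."""
    if value and value[-1] in TIME_UNITS:
        return int(value[:-1]) * TIME_UNITS[value[-1]]
    return int(value)  # no modifier: the value is already in seconds
```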

  • --slaves SLAVES, -s SLAVES: Specify the number of slave fuzzer instances to run. The fuzzer always launches at least the main instance. Launching more instances uses more resources, but allows finding more inputs in a shorter time span. It is advisable to use this option combined with -p, if possible. Note that the total amount of launched processes won't be higher than the total number of available cpus, unless the --ignore-cpu-count flag is enabled. However, it is strongly advisable to launch at least 1 slave instance. (default: 0)

  • --processes PROCESSES, -p PROCESSES: Specify the number of processes executing the tracer. Using more processes allows launching the tracer with more inputs in the same time span. It is useless to use many processes for the tracer if the fuzzer finds new inputs very slowly. If there are few resources available, it is therefore advisable to dedicate them to fuzzer instances rather than to tracer processes. Note that the total amount of launched processes won't be higher than the total number of available cpus, unless the --ignore-cpu-count flag is enabled. (default: 1)

  • --ignore-cpu-count: Flag used to ignore the number of available cpus and force the number of processes specified with -s and -p to be launched even if they are more than that. (default: False)

  • --experimental, -e: Flag used to specify whether or not to use experimental power schedules (default: False)

  • --no-fuzzing: Flag used to avoid launching the fuzzer. WARNING: this flag assumes a fuzzing task has already been executed and generated the required folders and files. It is possible to use a fuzzer different from the default one. However, the structure of the output folder must be the same as the one generated by AFL++. NOTE: this flag discards any fuzzer-related option (i.e. -s, -e, -t, -a). The other options may be still used to specify the paths of specific folders. (default: False)

  • --stdin: Flag used to specify that the input file should be read as stdin, and not as an input file. Note that this is meaningful only when '--no-fuzzing' is enabled. If this flag is used, but '--no-fuzzing' is not, it is simply ignored. (default: False)

  • --store-tracer-out: This option allows the tracer thread to redirect both stdout and stderr of every spawned tracer process to a file saved in the same folder where the input file resides. (default: False)

  • --dict DICTIONARY: Path of the dictionary to be used by the fuzzer in order to produce new inputs (default: None)

After the arguments for the script, the user must pass '--' followed by the executable path and the arguments that should be passed to it. If it reads from an input file, write '@@' instead of the input file path. It will be automatically replaced by the fuzzer.

Example: ./memTracer.py -- /path/to/the/executable arg1 arg2 --opt1 @@

Remember to NOT PASS the input file as an argument for the executable, but use @@. It will be automatically replaced by the fuzzer starting from the initial testcases and followed by the generated inputs.

Command-line arguments fuzzing

Command-line arguments fuzzing has been implemented by extending an example that can be found in AFL++'s repository (https://github.com/AFLplusplus/AFLplusplus/tree/stable/utils/argv_fuzzing). As such, it has the same limitations. Namely:

AFL++ supports fuzzing file inputs or stdin. When source is available, argv-fuzz-inl.h can be used to change main() to build argv from stdin. argvfuzz tries to provide the same functionality for binaries. When loaded using LD_PRELOAD, it will hook the call to __libc_start_main and replace argv using the same logic of argv-fuzz-inl.h. A few conditions need to be fulfilled for this mechanism to work correctly:

  1. As it relies on hooking the loader, it cannot work on static binaries.
  2. If the target binary does not use the default libc's _start implementation (crt1.o), the hook may not run.
  3. The hook will replace argv with pointers to .data of argvfuzz.so. If the target binary expects argv to be living on the stack, things may go wrong.
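The argv-building logic described above can be sketched as follows (a Python illustration of the idea only; the real argvfuzz hook is C code loaded via LD_PRELOAD, see AFL++'s argv-fuzz-inl.h):

```python
def build_argv_from_stdin(data: bytes, max_args: int = 32):
    """Split a NUL-delimited stdin buffer into argv entries, stopping
    at an empty entry (two consecutive NUL bytes). The real hook fills
    a static buffer in .data, which is why binaries expecting argv on
    the stack may misbehave (limitation 3 above)."""
    argv = []
    for part in data.split(b"\0"):
        if part == b"" or len(argv) >= max_args:
            break
        argv.append(part.decode("latin-1"))
    return argv
```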

If you get a segmentation fault while trying to use MemTrace with argv fuzzing enabled, try to launch it again disabling command-line arguments fuzzing.

Extending MemTrace

There are a few ways to extend the tool easily, if needed.

System call handlers

MemTrace uses a system call manager to keep track of memory accesses performed during the execution of system calls.

Since system calls are defined by the platform ABI, the system call manager in turn requires knowledge about the system calls available in the underlying operating system and about their behavior. So, a system call handler is implemented for each available system call that performs any memory access.

The system call handlers for Linux x86-64 machines are already implemented (at least most of them) into the header file src/x86_64_linux_syscall_handlers.h. If the user requires to add new syscall handlers or modify any of the existing ones, it is sufficient to modify that header file.

If instead the user wants to run MemTrace on an unsupported platform, the required system call handlers must be implemented manually. To do this, it is enough to copy src/x86_64_linux_syscall_handlers.h, remove the existing handlers and implement the handlers for the platform in use. Then, it is necessary to include the created header file in src/SyscallHandler.h according to the platform in use (see src/Platform.h to check the macros defined when a certain architecture or OS is detected, if required).

Of course, after all the changes have been done, it is required to re-build MemTrace to apply them.

Malloc handlers

Similarly to what we said for system calls, MemTrace also requires some specific information about dynamic allocation functions like malloc, realloc, free, etc. More specifically, MemTrace requires information about the layout of allocated chunks and about how those functions store chunk metadata.

Dynamic allocation functions are not standard, so every operating system may have its own implementation, and it is also possible to create a custom one. For this reason, src/x86_64_linux_malloc_handlers.h implements some functions whose goal is to return to MemTrace the information it needs, when it requires it.

Currently, the malloc handlers for x86_64 machines using glibc-2.31 are implemented. If needed, it is possible to copy that header file and implement the handlers for different implementations of the dynamic allocation functions. After the required functions have been implemented, it is necessary to include the header file into src/MallocHandler.h according to the platform in use.

MemTrace must be re-built to apply the changes.

If a custom implementation of malloc is used, MemTrace must be built with the following command to define macro CUSTOM_ALLOCATOR: 'make tool_custom_malloc'.

Instruction handlers

MemTrace performs a taint analysis to keep track of the transfers of uninitialized bytes.

Each instruction transferring bytes may have a different way of performing the data transfer. For this reason, MemTrace makes use of instruction handlers which are meant to manage transfers performed by the executed instructions.

Instruction handlers are divided in 2 types:

  • Memory instruction handlers (set/instructions/mem): they manage instructions performing at least 1 memory access (either reading or writing)
  • Register instruction handlers (set/instructions/reg): they manage instructions transferring bytes from some src registers to some dst registers

For each type of instruction handler, there's a default handler designed to handle most of the instructions. For those instructions not correctly handled by the default ones, a specific instruction handler has been implemented.

Of course, it is not feasible to implement a specific handler for all the instructions of the ISA. So, only the most frequent instructions requiring a specific handler have been implemented.

If it is required to add more instruction handlers, it is sufficient to copy one of the handlers in the corresponding folder, according to the type, and implement the handler so that it mimics the transfer of bytes performed by the instruction itself.
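As a rough illustration of what a register instruction handler models (a hypothetical Python sketch, not the tool's actual handler code), per-byte initialization status is propagated from the src register to the dst register:

```python
def default_reg_transfer(src_status, dst_size):
    """Propagate per-byte initialization flags (True = initialized)
    from a src register to a dst register: truncate to the dst size,
    and mark any extended bytes as initialized (as a zero or sign
    extension would make them)."""
    status = list(src_status[:dst_size])
    status += [True] * (dst_size - len(status))
    return status
```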

After that, it is sufficient to re-build MemTrace.

Docker

The MemTrace repository also contains a Dockerfile which allows easily creating a container configured to use MemTrace immediately.

The Dockerfile has also been used to create a testing environment, and therefore perform all the tests for the validation of the tool in a reproducible way.

In order to create the container, it is of course required to have Docker installed (https://www.docker.com/). Then, move into the root folder of the repository (where the Dockerfile is stored) and run 'docker build -t memtrace .'. NOTE: this operation may take a while to complete.

Finally, simply run 'docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --rm -it memtrace'.

  • --cap-add=SYS_PTRACE is required to allow Intel PIN to call ptrace to instrument binaries
  • --security-opt seccomp=unconfined is required to allow AFL++ to fuzz binaries
  • --rm will remove the container when it is closed
  • -it allows the container to use the terminal as stdin/stdout

Known Issues

Missing syscall handlers

This issue will be used as a thread to list all the missing system call handlers to be implemented

brk to decrease program's data segment not managed

On some implementations, the main heap is allocated by using the brk system call.
This type of allocation is correctly managed by MemTrace.
However, although it happens quite rarely, it is sometimes possible that a program uses brk also during a call to free, to deallocate memory and therefore reduce the program's data segment.

If, after such a call to free, the program allocates memory again using brk and an uninitialized read accesses some location that was previously released, the reported overlap will be incoherent.
Indeed, MemTrace will add to the access set of the uninitialized read the overlapping writes that happened before the use of brk deallocated the memory.
But when brk is used to allocate some memory again, it is allocated filled with zeros.

The only way to fix this would be to remove all the writes accessing memory beyond the reduced data segment, so that when brk is used to increase it again, the memory it allocates is considered uninitialized and never written before.

Syscall handlers tweaks

Some system calls use structs as arguments.
For simplicity and speed, the corresponding syscall handlers consider the whole struct as read.
However, those structs may have holes due to field padding.
These paddings are not actually used by the struct, and it is therefore highly probable that they are uninitialized when the system call is executed, thus causing MemTrace to report uninitialized reads which must be considered false positives.
It is possible to slightly change the syscall handlers to make them read only the used parts of the structs passed as arguments.

This issue will be used as a thread to report syscall handlers that caused some false positives during testing, and possibly also the layout of the struct they use as a parameter.
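The proposed tweak amounts to marking as read only the fields' byte ranges instead of the whole struct; a minimal sketch (mark_struct_read, mark_read and field_layout are illustrative names, not the tool's API):

```python
def mark_struct_read(mark_read, base_addr, field_layout):
    """Mark as read only the byte ranges of the struct's actual fields,
    skipping padding holes. field_layout lists (offset, size) pairs for
    each field; mark_read(addr, size) records a read access."""
    for offset, size in field_layout:
        mark_read(base_addr + offset, size)
```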

merge_reports function can't merge access sets if their executions' libc bases are different

Problem:
In some cases, even if ASLR is disabled, the base address of libc may change (due to the input file itself? or to arguments allocated on the stack?). If this happens, it is possible that 2 different executions have identical MemoryAccess sets, but with different actualIp for the entries. In this case, since the comparison function simply checks for equality of every field of the MemoryAccess objects, the 2 sets are considered to be NOT EQUAL, and they are not merged together.

Possible Solution:
Maybe, to solve this, it is possible to slightly change the comparison function to compute the offset from the library base.
Of course, in order to do this, we must pass the library base to the function, and try to understand which library the particular instruction belongs to (if more than 1 is loaded)
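The proposed change boils down to comparing library-relative offsets instead of absolute instruction pointers; a minimal sketch of the idea (function and parameter names are hypothetical):

```python
def same_instruction(ip_a, base_a, ip_b, base_b):
    """Compare two actualIp values as offsets from each execution's
    library base, so that a different libc base between executions
    does not break the equality check used when merging access sets."""
    return ip_a - base_a == ip_b - base_b
```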

Mallocs performed before entry point execution

Sometimes, it is possible that some mallocs happen before the entry point is executed (probably due to some library initialization).
The tool starts tracing memory accesses from the entry point, until program termination.
This caused mainly 2 problems:

  • If the program accessed some of the heap areas allocated before the entry point, those accesses were always considered as uninitialized, regardless of the real state, thus causing many false positives.
  • If many allocations were performed before the entry point, it was possible that some heap accesses did not have a corresponding shadow memory. So, when the program tried to access it, a segmentation fault was raised.

In order to solve these problems, the tool now always keeps track of mallocs and frees, but heap areas allocated before the entry point are only partially traced. Indeed, the tool still does not trace accesses performed there before the entry point is executed, but it simply assumes that any malloc performed before that initialized the whole chunk, and that frees de-initialize it. This way, we solve the occasional segmentation faults and mitigate the false positives problem.
The latter is not completely solved: if some chunk is freed before the entry point, the program may later reallocate it through a call to malloc. In glibc's malloc, the function reads from inside the chunk to be allocated to check if there's any metadata left by the previous free. Since the free preceding the entry point simply resets the whole chunk as uninitialized, this access will generate an uninitialized read in the reports. However, this will not have any overlapping write.

Better duplicate overlap group detection

Currently, to check if a certain overlap group is already stored, we compare memory accesses by comparing the content of their corresponding shadow memory.
In some cases (e.g. if the access size is 4 B), this also compares the status of some unrelated bytes (i.e. the status of the 4 bytes stored in the same shadow byte, but not related to the memory access).
So, if they are different, the algorithm does not match them, thus creating a new overlap group, even if the hash and the memory accesses are actually the same.

To solve this, we could compute the uninitialized interval and compare that.
Of course, this may consume much more time for each detected uninitialized read.

Check if the solution is worth the additional overhead.
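The proposed comparison could look like this sketch: derive the uninitialized intervals restricted to the access itself and compare those, ignoring the unrelated bits packed in the same shadow bytes (assumed representation: one flag per application byte, 1 = uninitialized; names are hypothetical):

```python
def uninit_intervals(shadow_bits, offset, size):
    """Extract the uninitialized intervals covered by the access
    itself (bytes [offset, offset + size) of the shadow flags),
    as (start, end) pairs relative to the access, end exclusive."""
    intervals, start = [], None
    for i in range(offset, offset + size):
        if shadow_bits[i] and start is None:
            start = i - offset
        elif not shadow_bits[i] and start is not None:
            intervals.append((start, i - offset))
            start = None
    if start is not None:
        intervals.append((start, size))
    return intervals
```

Two accesses whose shadow bytes differ only in the unrelated bits would then yield equal interval lists and be matched as duplicates.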

Not relevant uninitialized reads due to glibc heap management

Since chunk headers may be reused without being rewritten, they are never re-initialized.
As for the chunk's payload, instead, there are mainly 2 ways to re-initialize the shadow memory corresponding to a heap chunk being freed:

  1. Re-initialize before the actual execution of function free
  2. Re-initialize after function free has been executed and returned

Both of them have some drawbacks.

If we apply strategy 1, the free itself will generate some uninitialized reads, because it checks whether a certain portion of the payload (i.e. <payload_ptr + 8>) already contains a certain pointer.

If we apply strategy 2, instead, an uninitialized read may be generated by the execution of a new malloc which decides to reallocate a previously freed chunk. Indeed, in this case, malloc reads the first 16 bytes of the chunk's payload in order to check the value of the pointers stored there by the previous free.

So, either of the mentioned strategies will lead to additional irrelevant uninitialized reads.

However, since we already require a platform-specific header to gather some information about heap management functions, we can take advantage of that again, and add to the header's interface a function that returns the set of address ranges that really need to be re-initialized when free is called. Of course, this requires some prior knowledge of how malloc and free manage the chunks.

In this way, we can apply strategy 2, but re-initialize the chunk's payload only starting from <payload_ptr + 16>, so that what free wrote in the first 16 B will still be considered initialized by successive instructions and, therefore, by successive mallocs.

Commit 5d886cc was meant to partially fix the aforementioned behavior, as it sometimes happens that not only the first 16 bytes are written and then read again by free, but also other bytes in the same chunk.
To fix this, we simply keep track of heap write accesses performed during a call to free and, after the shadow memory has been reset, we'll set again as initialized the portion of shadow memory corresponding to parts of the chunk written during the execution of free. This way, we are able to effectively re-initialize the shadow memory on free, while also avoiding the false positives of strategy 1 and keeping the writes performed by the free itself, so that we can also reduce the false positives of strategy 2.

Note, however, that there is still a small chance that free generates some uninitialized reads.
Indeed, if we don't write at least 16 bytes (or don't write at all) in an allocated chunk, the call to free will still read from uninitialized memory, thus reporting it. However, this is not usually the case, and the number of reported reads is highly reduced.
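The strategy adopted since commit 5d886cc can be sketched as follows (a simplified Python model, assuming a shadow map from address to an "initialized" flag; the actual implementation works on packed shadow memory):

```python
def reinit_on_free(shadow, chunk_start, chunk_size, writes_during_free):
    """After free returns, reset the chunk's shadow memory to
    uninitialized, then mark as initialized again the ranges that
    free itself wrote, so successive mallocs reading that metadata
    do not produce false-positive uninitialized reads.
    shadow maps addr -> True (initialized); absent = uninitialized."""
    for addr in range(chunk_start, chunk_start + chunk_size):
        shadow.pop(addr, None)          # whole payload back to uninitialized
    for start, size in writes_during_free:
        for addr in range(start, start + size):
            shadow[addr] = True         # keep free's own writes initialized
    return shadow
```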

Heuristic to reduce the number of reported accesses

Commit 3a31a3a implements a heuristic useful to reduce the number of reported uninitialized memory accesses.
It is designed to recognize uninitialized accesses due to the usage of optimized versions of string operations.

The main idea is to avoid reporting an uninitialized memory access when these conditions are simultaneously true:

  • Access size is greater than or equal to 16 bytes
  • The instruction is not a syscall (e.g. a call to write() may be incorrectly recognized as an access due to some string operation)
  • The accessed memory area contains at least 1 (initialized) null byte (i.e. '\0')
  • One of the following conditions holds:
  1. There's more than 1 uninitialized interval
  2. There's 1 interval, and it begins at index 0 and ends before the position of the null byte
  3. There's 1 interval, starting after the null byte

Condition 1 means that the string pointer has an alignment different from that required by the SIMD extension instruction used and the initialized portion ends before the end of the access. In this case, the layout should be something like this: UNINITIALIZED - INITIALIZED - UNINITIALIZED. This usually happens when we are doing something on some short string.

Condition 2 means that again the string pointer has an alignment different from the one required by the instruction. However, this time there's only 1 uninitialized interval, meaning that probably we have again a short string, which has the terminator in its last byte. However, in memory, there's something adjacent to the string, which has been initialized.

Condition 3 means that we are probably managing the end of a long string, whose size is not a multiple of the access size. In this case, the memory area is usually initialized from index 0 up to the null byte (included), and after that there is at least 1 byte not initialized.

Note that indexes are meant to be relative to the access boundaries.
In practice, we are assuming that every time there is an uninitialized read where some bytes are initialized while some others are not, we are handling a string (as it is composed of a sequence of bytes).

These instructions may also be used by the compiler to optimize operations on arrays of numeric data. In that case, however, a number is usually either fully initialized or fully uninitialized, thus not falling into any of our conditions. Still, the developer may have managed the numeric data byte by byte (probably performing some casts); in that case, some false negatives are expected.
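Putting the conditions above together, the heuristic's decision can be sketched like this (a simplified Python model, not the tool's actual C++ code; intervals are (start, end) pairs with end exclusive, and null_byte_idx is the position of the first initialized '\0' inside the access, or None):

```python
def is_string_op_access(size, is_syscall, null_byte_idx, intervals):
    """Return True if the access matches the heuristic's conditions
    for an optimized string operation and should not be reported."""
    if size < 16 or is_syscall or null_byte_idx is None or not intervals:
        return False
    if len(intervals) > 1:                    # condition 1: several intervals
        return True
    start, end = intervals[0]
    if start == 0 and end <= null_byte_idx:   # condition 2: ends before '\0'
        return True
    if start > null_byte_idx:                 # condition 3: starts after '\0'
        return True
    return False
```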

Another source of false negatives may be the usage of memory management functions (e.g. memcpy).
In glibc-2.31, memcpy is implemented using 8-byte integer moves, so it can't generate false negatives due to our heuristic. However, this merely depends on the implementation of the function, and the heuristic is therefore expected to generate some false negatives with implementations that use SIMD extension instructions as well.
