Yali
Table of Contents
Introduction
Let D be a deep learning model that classifies programs according to the problem they solve. This project evaluates how D behaves on obfuscated code: we want to know how much obfuscation affects D's accuracy.
The top of the image above shows the histogram produced by a specific strategy for program 292. This program belongs to class 11 of the POJ-104 dataset. The bottom of the image shows how each model classifies the variations of program 292.
Getting Started
This section describes the steps to reproduce our experiments.
Prerequisites
You need to install the following packages to run this project:
- Docker and Docker Compose, to run our experiments
- Python 3, to plot the results in the project's Jupyter Notebooks
- wget, tar, and sed, to run the initial scripts that configure the repository
Setup
First, copy the `.env.example` file and rename it to `.env` (e.g., `cp .env.example .env`).

You can now set environment variables in the `.env` file at the project's root. You can change the following variables:
Variable | Description | Value |
---|---|---|
REPRESENTATION | Program embedding that will be used to represent a program. This variable is required. | |
MODEL | Selected machine learning model. This variable is required. If REPRESENTATION is `cfg`, `cfg_compact`, `cdfg`, `cdfg_compact`, `cdfg_plus`, or `programl`, the model must be `dgcnn` or `gcn`. | |
TRAINDATASET / TESTDATASET | Dataset that will be used in the training/testing phase. TRAINDATASET is required; leave TESTDATASET empty to use the same dataset for both training and testing. | |
OPTLEVELTRAIN / OPTLEVELTEST | Optimization level applied to the training/testing dataset. OPTLEVELTRAIN is required; OPTLEVELTEST must be empty if TESTDATASET is empty. | |
NUMCLASSES | The number of classes in the dataset. This variable is required. | |
ROUNDS | The number of rounds to run the model. This variable is required. | |
MEMORYPROF | Indicates whether a memory profiler will be used. This variable is required. | |
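As an illustration, a filled-in `.env` might look like the fragment below. The accepted values are not listed in the table above, so every value here (dataset identifier, optimization level, flag format) is a hypothetical placeholder; only the `programl`/`dgcnn` pairing is taken from the MODEL rule above.

```sh
# Hypothetical .env: the values below are placeholders, not a verified configuration.
REPRESENTATION=programl   # graph representation named in the MODEL rule
MODEL=dgcnn               # required model family for graph representations
TRAINDATASET=poj104       # hypothetical dataset identifier
TESTDATASET=              # empty: test on the same dataset used for training
OPTLEVELTRAIN=O0          # hypothetical optimization-level identifier
OPTLEVELTEST=             # empty because TESTDATASET is empty
NUMCLASSES=104            # POJ-104 has 104 classes
ROUNDS=3                  # hypothetical number of rounds
MEMORYPROF=no             # hypothetical flag format
```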
After that, you need to prepare the environment to run our experiments. Run the following command:
$ ./setup.sh
This will download the datasets, build the Docker image, and create the folders the project needs.
Running
Now you can run the following command:
$ ./run.sh MODE
MODE accepts the following values:
- all: Run all games, the resources analysis and embedding analysis
- speedup: Run the speedup analysis with the benchmark game
- embeddings: Run the embedding analysis
- resources: Run only the resources analysis
- malware: Run the experiment to detect classes of malware
- game0: Run Game 0 (we will add the link later)
- game1: Run Game 1 (we will add the link later)
- game2: Run Game 2 (we will add the link later)
- game3: Run Game 3 (we will add the link later)
- discover: Run the Discover Game (we will add the link later)
This will run the Docker container with the configuration in the `.env` file.
Statistics
The Statistics folder contains Jupyter Notebooks that plot the data generated by the experiments. Each notebook describes its charts and the steps used to build them. The notebooks are:
- EmbeddingResults: Presents information about the accuracy of the dgcnn and cnn models with different representations
- GameResults: Presents information about the four games proposed in our work (we will add the link later)
- ResourceResults: Presents information about resource consumption (memory and time) of each model
- StrategiesResults: Presents the distance between the histograms of the original programs and the histograms generated by the obfuscators
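The StrategiesResults notebook compares histograms, but the distance metric is not stated here. As a minimal illustration, assuming histograms are fixed-length NumPy vectors, the sketch below uses Euclidean distance; the function name and metric choice are ours, not necessarily the notebook's.

```python
import numpy as np

def histogram_distance(original, obfuscated):
    """Euclidean distance between two equally sized instruction histograms.

    Illustrative only: the metric actually used by StrategiesResults may differ.
    """
    original = np.asarray(original, dtype=float)
    obfuscated = np.asarray(obfuscated, dtype=float)
    return float(np.linalg.norm(original - obfuscated))

# An unchanged program has distance 0 to itself; a one-instruction
# change in a single bucket yields distance 1.
h_original = np.array([3, 0, 5, 2])
h_obfuscated = np.array([3, 1, 5, 2])
print(histogram_distance(h_original, h_original))    # 0.0
print(histogram_distance(h_original, h_obfuscated))  # 1.0
```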
Structure
The repository has the following organization:
|-- Classification: "Scripts for the classification process"
|-- Compilation: "Scripts for the compilation process"
|-- Docs: "Repository documentation"
|-- Entrypoint: "Container setup"
|-- Extraction: "Script to extract a program representation and convert CSV to NumPy"
|-- HistogramPass: "LLVM pass to get the histograms"
|-- MalwareDataset: "Malware dataset to support experiments in the project"
|-- Representations: "Scripts to extract different program representations"
|-- Statistics: "Jupyter notebooks"
|-- Volume: "Volume of the container"
|-- Csv: "CSVs with the histograms"
|-- Embeddings: "Different representations of programs in the Source folder"
|-- Histograms: "Histograms in the NumPy format"
|-- Irs: "LLVM IRs of the programs"
|-- Results: "Results of the training/testing phase"
|-- Source: "Source code of the programs"
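Since the histograms under Volume/Histograms are stored in the NumPy format, they can be read back with `np.load`. The sketch below round-trips a dummy histogram through a temporary file; the real file names are not documented here, so the name used is hypothetical.

```python
import os
import tempfile

import numpy as np

# Sketch: Volume/Histograms stores histograms as .npy files. The file name
# "histogram_292.npy" is hypothetical, used only for this round-trip demo.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "histogram_292.npy")
    np.save(path, np.array([3, 0, 5, 2]))  # write a dummy histogram
    hist = np.load(path)                   # read it back as an ndarray
    print(hist.tolist())  # [3, 0, 5, 2]
```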
To Do
We are working on the following improvements to the repository:
- Put the paper link in this README