
pytm: A Pythonic framework for threat modeling

pytm logo

Introduction

Traditional threat modeling too often comes late to the party, or sometimes not at all. In addition, creating manual data flows and reports can be extremely time-consuming. The goal of pytm is to shift threat modeling to the left, making threat modeling more automated and developer-centric.

Features

Based on your input and definition of the architectural design, pytm can automatically generate the following items:

  • Data Flow Diagram (DFD)
  • Sequence Diagram
  • Relevant threats to your system

Requirements

  • Linux/MacOS
  • Python 3.x
  • Graphviz package
  • Java (OpenJDK 10 or 11)
  • plantuml.jar

Getting Started

The file tm.py is an example model. You can run it to generate the report and diagram image files that it references:

mkdir -p tm
./tm.py --report docs/basic_template.md | pandoc -f markdown -t html > tm/report.html
./tm.py --dfd | dot -Tpng -o tm/dfd.png
./tm.py --seq | java -Djava.awt.headless=true -jar $PLANTUML_PATH -tpng -pipe > tm/seq.png

There's also an example Makefile that wraps all of these into targets that can easily be shared across multiple models. If you have GNU make installed (available by default on Linux distributions, but not on macOS), simply run:

make MODEL=the_name_of_your_model_minus_.py

You should either have plantuml.jar in the same directory as your model, or set PLANTUML_PATH.

To avoid installing all the dependencies, such as pandoc or Java, the script can be run inside a container:

# do this only once
export USE_DOCKER=true
make image

# call this after every change in your model
make

Usage

All available arguments:

usage: tm.py [-h] [--sqldump SQLDUMP] [--debug] [--dfd] [--report REPORT]
             [--exclude EXCLUDE] [--seq] [--list] [--colormap]
             [--describe DESCRIBE] [--list-elements] [--json JSON]
             [--levels LEVELS [LEVELS ...]] [--stale_days STALE_DAYS]

optional arguments:
  -h, --help            show this help message and exit
  --sqldump SQLDUMP     dumps all threat model elements and findings into the
                        named sqlite file (erased if exists)
  --debug               print debug messages
  --dfd                 output DFD
  --report REPORT       output report using the named template file (sample
                        template file is under docs/template.md)
  --exclude EXCLUDE     specify threat IDs to be ignored
  --seq                 output sequential diagram
  --list                list all available threats
  --colormap            color the risk in the diagram
  --describe DESCRIBE   describe the properties available for a given element
  --list-elements       list all elements which can be part of a threat model
  --json JSON           output a JSON file
  --levels LEVELS [LEVELS ...]
                        Select levels to be drawn in the threat model (int
                        separated by comma).
  --stale_days STALE_DAYS
                        checks if the delta between the TM script and the code
                        described by it is bigger than the specified value in
                        days

The stale_days argument tries to determine how far apart in days the model script (which you are writing) is from the code that implements the system being modeled. Ideally, they should be pretty close in most cases of an actively developed system. You can run this periodically to measure the pulse of your project and the 'freshness' of your threat model.
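The check can be sketched as a comparison of file modification times. The following is a hypothetical re-implementation of the idea (pytm reads the sourceCode paths declared on elements, e.g. web.sourceCode in the sample model), not the tool's actual code:

```python
import os
import tempfile
import time

def staleness_days(model_path, source_paths):
    """Days between the model's last change and the newest source change.

    A hypothetical sketch of the idea behind --stale_days, not pytm's code.
    """
    newest_src = max(os.path.getmtime(p) for p in source_paths)
    return max(newest_src - os.path.getmtime(model_path), 0) / 86400

# Demo with throwaway files whose mtimes we control explicitly.
with tempfile.TemporaryDirectory() as d:
    model = os.path.join(d, "tm.py")
    src = os.path.join(d, "web.cc")
    for p in (model, src):
        open(p, "w").close()
    now = time.time()
    os.utime(model, (now - 10 * 86400,) * 2)    # model last touched 10 days ago
    os.utime(src, (now, now))                   # code touched today
    print(round(staleness_days(model, [src])))  # 10
```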

Currently available elements are: TM, Element, Server, ExternalEntity, Datastore, Actor, Process, SetOfProcesses, Dataflow, Boundary and Lambda.

The available properties of an element can be listed by using --describe followed by the name of an element:


./tm.py --describe Element
Element class attributes:
  OS
  definesConnectionTimeout        default: False
  description
  handlesResources                default: False
  implementsAuthenticationScheme  default: False
  implementsNonce                 default: False
  inBoundary
  inScope                         Is the element in scope of the threat model, default: True
  isAdmin                         default: False
  isHardened                      default: False
  name                            required
  onAWS                           default: False

The colormap argument, used together with --dfd, outputs a color-coded DFD where the elements are painted red, yellow or green depending on their risk level (as identified by running the rules).
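As an illustration, the mapping from an element's findings to a color might look like the following sketch. The thresholds here are invented for the example; pytm's actual cutoffs may differ:

```python
def risk_color(findings):
    """Pick a DFD fill color from the worst finding severity on an element.

    Illustrative only; the severity-to-color thresholds are assumptions,
    not pytm's actual rules.
    """
    severities = {f.get("severity", "Low") for f in findings}
    if severities & {"Very High", "High"}:
        return "red"
    if "Medium" in severities:
        return "yellow"
    return "green"

print(risk_color([{"severity": "High"}]))    # red
print(risk_color([{"severity": "Medium"}]))  # yellow
print(risk_color([]))                        # green
```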

Creating a Threat Model

The following is a sample tm.py file that describes a simple application where a User logs into the application and posts comments on the app. The app server stores those comments into the database. There is an AWS Lambda that periodically cleans the Database.

#!/usr/bin/env python3

from pytm.pytm import TM, Server, Datastore, Dataflow, Boundary, Actor, Lambda, Data, Classification

tm = TM("my test tm")
tm.description = "another test tm"
tm.isOrdered = True

User_Web = Boundary("User/Web")
Web_DB = Boundary("Web/DB")

user = Actor("User")
user.inBoundary = User_Web

web = Server("Web Server")
web.OS = "CloudOS"
web.isHardened = True
web.sourceCode = "server/web.cc"

db = Datastore("SQL Database (*)")
db.OS = "CentOS"
db.isHardened = False
db.inBoundary = Web_DB
db.isSQL = True
db.inScope = False
db.sourceCode = "model/schema.sql"

comments = Data(
    name="Comments", 
    description="Comments in HTML or Markdown",  
    classification=Classification.PUBLIC,  
    isPII=False,
    isCredentials=False,  
    # credentialsLife=Lifetime.LONG,  
    isStored=True, 
    isSourceEncryptedAtRest=False, 
    isDestEncryptedAtRest=True 
)

results = Data(
    name="results", 
    description="Results of insert op",  
    classification=Classification.SENSITIVE,  
    isPII=False, 
    isCredentials=False,  
    # credentialsLife=Lifetime.LONG,  
    isStored=True, 
    isSourceEncryptedAtRest=False, 
    isDestEncryptedAtRest=True 
)

my_lambda = Lambda("cleanDBevery6hours")
my_lambda.hasAccessControl = True
my_lambda.inBoundary = Web_DB

my_lambda_to_db = Dataflow(my_lambda, db, "(λ)Periodically cleans DB")
my_lambda_to_db.protocol = "SQL"
my_lambda_to_db.dstPort = 3306

user_to_web = Dataflow(user, web, "User enters comments (*)")
user_to_web.protocol = "HTTP"
user_to_web.dstPort = 80
user_to_web.data = comments

web_to_user = Dataflow(web, user, "Comments saved (*)")
web_to_user.protocol = "HTTP"

web_to_db = Dataflow(web, db, "Insert query with comments")
web_to_db.protocol = "MySQL"
web_to_db.dstPort = 3306

db_to_web = Dataflow(db, web, "Comments contents")
db_to_web.protocol = "MySQL"
db_to_web.data = results

tm.process()

You also have the option of using pytmGPT to create your models from prose!

Generating Diagrams

Diagrams are output as Dot and PlantUML.

When the --dfd argument is passed to the above tm.py file, it generates output on stdout, which is fed to Graphviz's dot to generate the Data Flow Diagram:

tm.py --dfd | dot -Tpng -o sample.png

Generates this diagram:

dfd.png

Adding a ".levels = [1,2]" attribute to an element causes it (and its associated Dataflows, when both endpoints are in the same DFD level) to render or not, depending on the "--levels 1 2" command argument.
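The rule above can be sketched in a few lines. This is illustrative stand-in code, not pytm's implementation: an element renders when its levels intersect the requested levels, and a dataflow renders only when both of its endpoints do:

```python
def visible_elements(elements, requested):
    """Names of elements whose levels intersect the requested levels."""
    return {e["name"] for e in elements if set(e["levels"]) & set(requested)}

def visible_flows(flows, shown):
    """Keep only flows whose source and sink both render."""
    return [(s, t) for s, t in flows if s in shown and t in shown]

# Hypothetical model: user exists only at level 1, db only at level 2.
elements = [
    {"name": "user", "levels": [1]},
    {"name": "web", "levels": [1, 2]},
    {"name": "db", "levels": [2]},
]
flows = [("user", "web"), ("web", "db")]

shown = visible_elements(elements, [1])   # as if run with --levels 1
print(sorted(shown))                      # ['user', 'web']
print(visible_flows(flows, shown))        # [('user', 'web')]
```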

The following command generates a Sequence diagram.

tm.py --seq | java -Djava.awt.headless=true -jar plantuml.jar -tpng -pipe > seq.png

Generates this diagram:

seq.png

Creating a Report

The diagrams and findings can be included in the template to create a final report:

tm.py --report docs/basic_template.md | pandoc -f markdown -t html > report.html

The templating format used in the report template is very simple:


# Threat Model Sample
***

## System Description

{tm.description}

## Dataflow Diagram

![Level 0 DFD](dfd.png)

## Dataflows

Name|From|To |Data|Protocol|Port
----|----|---|----|--------|----
{dataflows:repeat:{{item.name}}|{{item.source.name}}|{{item.sink.name}}|{{item.data}}|{{item.protocol}}|{{item.dstPort}}
}

## Findings

{findings:repeat:* {{item.description}} on element "{{item.target}}"
}

To group findings by elements, use a more advanced, nested loop:

## Findings

{elements:repeat:{{item.findings:if:
### {{item.name}}

{{item.findings:repeat:
**Threat**: {{{{item.id}}}} - {{{{item.description}}}}

**Severity**: {{{{item.severity}}}}

**Mitigations**: {{{{item.mitigations}}}}

**References**: {{{{item.references}}}}

}}}}}

All items inside a loop must be escaped by doubling the braces, so {item.name} becomes {{item.name}}. The example above uses two nested loops, so items in the inner loop must be escaped twice, which is why they use four braces.

Overrides

You can override attributes of findings (threats matching the model assets and/or dataflows), for example to set a custom CVSS score and/or response text:

user_to_web = Dataflow(user, web, "User enters comments (*)", protocol="HTTP", dstPort="80")
user_to_web.overrides = [
    Finding(
        # Overflow Buffers
        threat_id="INP02",
        cvss="9.3",
        response="""**To Mitigate**: run a memory sanitizer to validate the binary""",
        severity="Very High",
    )
]

If you are adding a Finding, make sure to add a severity: "Very High", "High", "Medium", "Low", "Very Low".

Threats database

For the security practitioner, you may supply your own threats file by setting TM.threatsFile. It should contain entries like:

{
   "SID":"INP01",
   "target": ["Lambda","Process"],
   "description": "Buffer Overflow via Environment Variables",
   "details": "This attack pattern involves causing a buffer overflow through manipulation of environment variables. Once the attacker finds that they can modify an environment variable, they may try to overflow associated buffers. This attack leverages implicit trust often placed in environment variables.",
   "Likelihood Of Attack": "High",
   "severity": "High",
   "condition": "target.usesEnvironmentVariables is True and target.controls.sanitizesInput is False and target.controls.checksInputBounds is False",
   "prerequisites": "The application uses environment variables.An environment variable exposed to the user is vulnerable to a buffer overflow.The vulnerable environment variable uses untrusted data.Tainted data used in the environment variables is not properly validated. For instance boundary checking is not done before copying the input data to a buffer.",
   "mitigations": "Do not expose environment variable to the user.Do not use untrusted data in your environment variables. Use a language or compiler that performs automatic bounds checking. There are tools such as Sharefuzz [R.10.3] which is an environment variable fuzzer for Unix that support loading a shared library. You can use Sharefuzz to determine if you are exposing an environment variable vulnerable to buffer overflow.",
   "example": "Attack Example: Buffer Overflow in $HOME A buffer overflow in sccw allows local users to gain root access via the $HOME environmental variable. Attack Example: Buffer Overflow in TERM A buffer overflow in the rlogin program involves its consumption of the TERM environmental variable.",
   "references": "https://capec.mitre.org/data/definitions/10.html, CVE-1999-0906, CVE-1999-0046, http://cwe.mitre.org/data/definitions/120.html, http://cwe.mitre.org/data/definitions/119.html, http://cwe.mitre.org/data/definitions/680.html"
 }

The target field lists the classes of model elements to match this threat against. These can be assets: Actor, Datastore, Server, Process, SetOfProcesses, ExternalEntity, Lambda, or Element, which is the base class and matches any of them. The target can also be a Dataflow that connects two assets.

All other fields (except condition) are available for display and can be used in the template to list findings in the final report.

WARNING

The threats.json file contains strings that run through eval(). Make sure the file has correct permissions or risk having an attacker change the strings and cause you to run code on their behalf.

The logic lives in the condition, where members of target can be logically evaluated. Returning true means the rule generates a finding; otherwise, it does not. A condition may compare attributes of target and/or the control attributes of target.controls, and can also call one of these methods:

  • target.oneOf(class, ...) where class is one or more: Actor, Datastore, Server, Process, SetOfProcesses, ExternalEntity, Lambda or Dataflow,
  • target.crosses(Boundary),
  • target.enters(Boundary),
  • target.exits(Boundary),
  • target.inside(Boundary).

If target is a Dataflow, remember you can access target.source and/or target.sink along with other attributes.

Conditions on assets can analyze all incoming and outgoing Dataflows by inspecting the target.inputs and target.outputs attributes. For example, to match a threat only against servers with incoming traffic, use any(target.inputs). A more advanced example, matching elements connecting to SQL datastores, would be any(f.sink.oneOf(Datastore) and f.sink.isSQL for f in target.outputs).
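A minimal stand-in (not pytm's classes) shows how a condition string from the threats file is evaluated with eval() against a target, using the INP01 condition quoted above:

```python
class Target:
    """Stand-in for a pytm element; attribute names follow the INP01 entry."""
    usesEnvironmentVariables = True

    class controls:
        sanitizesInput = False
        checksInputBounds = False

condition = (
    "target.usesEnvironmentVariables is True "
    "and target.controls.sanitizesInput is False "
    "and target.controls.checksInputBounds is False"
)

target = Target()
print(eval(condition))  # True -> the rule would generate a finding
```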

Making slides!

Once a threat model is done and ready, the dreaded presentation stage comes in - and now pytm can help you there as well, with a template that expresses your threat model in slides, using the power of [reveal-md](https://github.com/webpro/reveal-md)! Just use the template docs/revealjs.md and you will get some pretty slides, fully configurable, that you can present and share from your browser.

revealjs.mov

Currently supported threats

INP01 - Buffer Overflow via Environment Variables
INP02 - Overflow Buffers
INP03 - Server Side Include (SSI) Injection
CR01 - Session Sidejacking
INP04 - HTTP Request Splitting
CR02 - Cross Site Tracing
INP05 - Command Line Execution through SQL Injection
INP06 - SQL Injection through SOAP Parameter Tampering
SC01 - JSON Hijacking (aka JavaScript Hijacking)
LB01 - API Manipulation
AA01 - Authentication Abuse/ByPass
DS01 - Excavation
DE01 - Interception
DE02 - Double Encoding
API01 - Exploit Test APIs
AC01 - Privilege Abuse
INP07 - Buffer Manipulation
AC02 - Shared Data Manipulation
DO01 - Flooding
HA01 - Path Traversal
AC03 - Subverting Environment Variable Values
DO02 - Excessive Allocation
DS02 - Try All Common Switches
INP08 - Format String Injection
INP09 - LDAP Injection
INP10 - Parameter Injection
INP11 - Relative Path Traversal
INP12 - Client-side Injection-induced Buffer Overflow
AC04 - XML Schema Poisoning
DO03 - XML Ping of the Death
AC05 - Content Spoofing
INP13 - Command Delimiters
INP14 - Input Data Manipulation
DE03 - Sniffing Attacks
CR03 - Dictionary-based Password Attack
API02 - Exploit Script-Based APIs
HA02 - White Box Reverse Engineering
DS03 - Footprinting
AC06 - Using Malicious Files
HA03 - Web Application Fingerprinting
SC02 - XSS Targeting Non-Script Elements
AC07 - Exploiting Incorrectly Configured Access Control Security Levels
INP15 - IMAP/SMTP Command Injection
HA04 - Reverse Engineering
SC03 - Embedding Scripts within Scripts
INP16 - PHP Remote File Inclusion
AA02 - Principal Spoof
CR04 - Session Credential Falsification through Forging
DO04 - XML Entity Expansion
DS04 - XSS Targeting Error Pages
SC04 - XSS Using Alternate Syntax
CR05 - Encryption Brute Forcing
AC08 - Manipulate Registry Information
DS05 - Lifting Sensitive Data Embedded in Cache
SC05 - Removing Important Client Functionality
INP17 - XSS Using MIME Type Mismatch
AA03 - Exploitation of Trusted Credentials
AC09 - Functionality Misuse
INP18 - Fuzzing and observing application log data/errors for application mapping
CR06 - Communication Channel Manipulation
AC10 - Exploiting Incorrectly Configured SSL
CR07 - XML Routing Detour Attacks
AA04 - Exploiting Trust in Client
CR08 - Client-Server Protocol Manipulation
INP19 - XML External Entities Blowup
INP20 - iFrame Overlay
AC11 - Session Credential Falsification through Manipulation
INP21 - DTD Injection
INP22 - XML Attribute Blowup
INP23 - File Content Injection
DO05 - XML Nested Payloads
AC12 - Privilege Escalation
AC13 - Hijacking a privileged process
AC14 - Catching exception throw/signal from privileged block
INP24 - Filter Failure through Buffer Overflow
INP25 - Resource Injection
INP26 - Code Injection
INP27 - XSS Targeting HTML Attributes
INP28 - XSS Targeting URI Placeholders
INP29 - XSS Using Doubled Characters
INP30 - XSS Using Invalid Characters
INP31 - Command Injection
INP32 - XML Injection
INP33 - Remote Code Inclusion
INP34 - SOAP Array Overflow
INP35 - Leverage Alternate Encoding
DE04 - Audit Log Manipulation
AC15 - Schema Poisoning
INP36 - HTTP Response Smuggling
INP37 - HTTP Request Smuggling
INP38 - DOM-Based XSS
AC16 - Session Credential Falsification through Prediction
INP39 - Reflected XSS
INP40 - Stored XSS
AC17 - Session Hijacking - ServerSide
AC18 - Session Hijacking - ClientSide
INP41 - Argument Injection
AC19 - Reusing Session IDs (aka Session Replay) - ServerSide
AC20 - Reusing Session IDs (aka Session Replay) - ClientSide
AC21 - Cross Site Request Forgery



pytm's People

Contributors

archen, avhadpooja, chadeckles, colesmj, danieldavidson, eskilandreen, finestmaximus, izar, izar-sqsp, jharnois4512, jnk22, lobuhi, lojikil, mikejarrett, nineinchnick, nozmore, per-oestergaard, raphaelahrens, redshiftzero, shambho, snyk-bot, xee5ch


pytm's Issues

Creation of example seq diagram fails.

Hi, I just pulled the new changes and faced some problems with creating the example sequence diagram:

PowerShell -Command tm.py --seq | java -Djava.awt.headless=true -jar plantuml.jar -tpng -pipe > seq.png
ERROR
2
Syntax Error?
Some diagram description contains errors

Do you have any idea why it fails? I can't find out what's wrong. With an older version it works fine.

Initial TM creation is a pain and requires a lot of typing

... well it was anyway.

9bccd8f

I added a python script to take a CSV with pairs of elements. It then creates a generic Element definition for each unique name and a dataflow for each pair.

After editing the file to replace Element with Actor, Server, Process, etc I can generate a basic TM DFD then start to annotate each element and add boundaries as needed.

Before I do any more with this, take a look and let's discuss. Initially I wanted the CSV to be as lightweight as possible, but we could have it contain more data, like variableName, displayName, element type, or various annotations.

I've committed the geneate.py file, a sample CSV, the generated sample.py and sample.png, and then a modified (Element->Actor, Process, etc.) .py and .png so you can see what it's doing.

AC03 - condition is too limiting

"AC03": {
    "description": "The Data Store Could Be Corrupted",
    "source": (Process, Element),
    "target": Datastore,
    "condition": "target.isShared is True or target.hasWriteAccess is True",
},

If a Datastore is shared and allows write access, it may be corrupted, which is True. But what is missing from this logic is whether the sharing Processes/Elements are granted write access: an Element:Datastore relationship need not be symmetric or universal. This requires some additional logic, and goes to the complexity of such things.

Consider:

Datastore A
Process A
Process B

A.isShared is True
A.hasWriteAccess (from Process A) is True
A.hasWriteAccess (from Process B) is False

Threat?

Problem: we can't represent this currently - it requires Source:Target:Condition relationships that cannot be represented given the current object model. Note the Object Model I posted to the wiki can represent this relationship, but may be too complex for some.

Proposal: isHardened should not be a property on its own

Considering the example you provide in the readme, and thinking logically about what hardening of a system means, "is not hardened" should be a threat on its own, with conditions that indicate the hardening state (which could be none, partial, or "complete"). This would also require/allow hardening detection for individual object types (e.g. web server vs database server). Consider something like this condition for a web server being not hardened:

condition : "target.RunsAsRoot is True and target.exposesHTTP is True"
(obviously pseudo-conditions)

DS01: Weak credential storage - condition too broad

"DS01": {
    "description": "Weak Credential Storage",
    "source": (Process, Element),
    "target": Datastore,
    "condition": "(target.storesPII is True or target.storesSensitiveData is True) and (target.isEncrypted is False or target.providesConfidentiality is False or target.providesIntegrity is False)",
},

Condition includes storesPII, which would not include credentials (at least not for the target or source); it also includes storesSensitiveData (same comment applies). A better test would be source.hasAccessControl or source.authenticatedWith - these conditions suggest the datastore holds credentials, and the target checks then make sense.

Can not make sequence diagrams

It appears that there is an issue with making sequence diagrams with the latest release.

./tm.py --seq
@startuml
actor cbdcbcbdaaddaadbdfadadfaaae as "User"
Traceback (most recent call last):
File "./tm.py", line 64, in <module>
tm.process()
File "/opt/pytm/pytm/pytm.py", line 255, in process
self.seq()
File "/opt/pytm/pytm/pytm.py", line 234, in seq
print('entity {0} as "{1}"'.format(_uniq_name(e.name), e.name))
TypeError: _uniq_name() missing 1 required positional argument: 'obj_uuid'

bidirectional dataflows

When an element can be both a source and a sink with respect to a given data flow - for example, a user interacting with a web application where they can both download and upload data - it would be nice to define a single Dataflow object (or BidirectionalDataflow, if another class is preferred) that renders as a bidirectional dataflow in the DFD and elicits the threats in both directions. Is there interest in supporting that?
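For illustration, the proposal could be approximated by a helper that expands one declaration into the two directed flows the current model expects. These are stand-in classes, not pytm's:

```python
class Dataflow:
    """Minimal stand-in for pytm's Dataflow, for illustration only."""
    def __init__(self, source, sink, name):
        self.source, self.sink, self.name = source, sink, name

def bidirectional(a, b, name):
    """Expand one logical two-way flow into the two directed flows
    that today's threat rules evaluate; naming is hypothetical."""
    return [Dataflow(a, b, name), Dataflow(b, a, name + " (response)")]

flows = bidirectional("user", "webapp", "file transfer")
print([(f.source, f.sink) for f in flows])
# [('user', 'webapp'), ('webapp', 'user')]
```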

Graph Ordering is not proper

Region ordering gets messed up when there is a bi-directional flow; LR rankdir is not working as expected. There is also no option for invisible nodes.

Crash on checking threat attributes that are not in current object while generating report

What does the problem look like, and what steps reproduce it?

The issue can easily be reproduced when trying to generate a report using the provided threat library, the sample tm.py (both the one in the repo and the slightly different one in README.md) and the template.
Traceback using tm.py from the repo:

Exception has occurred: AttributeError
'Actor' object has no attribute 'providesIntegrity'
  File "/root/pytm/pytm/pytm.py", line 445, in apply
    return eval(self.condition)
  File "/root/pytm/pytm/pytm.py", line 547, in resolve
    if not t.apply(e):
  File "/root/pytm/pytm/pytm.py", line 721, in process
    self.resolve()
  File "/root/pytm/tm.py", line 91, in <module>
    tm.process()

Threat being checked is AC05 with condition '((not target.source.providesIntegrity or not target.sink.providesIntegrity) and not target.isEncrypted) or (target.source.inScope and not target.isResponse and (not target.authenticatesDestination or not target.checksDestinationRevocation))'. As we know Actor object doesn't have any providesIntegrity attribute, but it's being checked.

Can you reproduce it using the latest master?

Yes. That's what I used.

What is your running environment?

OS: SLES 15/python:alpine-3.8 image
Python version: 3.6.10/3.8.6
Your model file, if possible: sample tm.py from repo and another one from README.md

What have you already tried to solve the problem?

Not yet. I'm not proficient in Python and still poking the code.
EDIT: I think a simple exception handler can be added to handle such attribute issues in a non-elegant way:

    def apply(self, target):
        if not isinstance(target, self.target):
            return None
        try:
            return eval(self.condition)
        except AttributeError:
            return None

Automatic trust boundary detection?

I was wondering if we might make Boundary identification a capability of the tool, rather than letting a user define them in their object definitions. In other words, a user may decide to place a trust boundary based on particular characteristics, like team organizational units, or areas of control by teams, or based on a misunderstanding of the ability to enable trust relationships. But it should be possible for us to detect strong relationships between entities to establish, or at least hint at, trust boundaries, as a feature to the user.

DE01 Data Flow Sniffing - condition needs improvement

"DE01": {
    "description": "Data Flow Sniffing",
    "source": (Process, Element, Datastore),
    "target": Dataflow,
    "condition": "target.protocol == 'HTTP' and target.isEncrypted is False",
},

In this threat, it checks to see if the protocol is HTTP and if the channel is unencrypted. A user by error may set the protocol but not the flag, or vice versa, unless there is code somewhere which makes the connection automatically. Instead, it may be best to make this an OR condition - either http or unencrypted will trigger the threat.

DO01 - wrong condition check?

"DO01": {
    "description": "Potential Excessive Resource Consumption",
    "source": Element,
    "target": (Process, Server),
    "condition": "target.handlesResourceConsumption is False",
},

Without knowing when handlesResourceConsumption should be set to True, the check here appears to not do what is intended. Excessive resource consumption is subjective, and would be a result of memory or resource leaks, and even managed code (e.g. JVM or .NET based Processes) can still run out of file descriptors. There is not enough information in the dataflows to know if resource consumption will be excessive, imho. It should be possible to know if resource consumption could be an issue if e.g. the Process is multi-tenant/multi-user e.g. a RESTful web server or a database.

Boundary.inBoundary does not render in the DFD

Per the model I can define a Boundary inside another Boundary. I expected all items in the childBoundary, and the childBoundary itself, to be visible within the parentBoundary. In this case, a Server Boundary within a DataCenter Boundary.

Instead both Boundaries are completely separate.

Objects should be generic, with roles

Today, we have object types:

  • Element
  • Server
  • Client
  • Process
  • Asset
  • Lambda

It seems that a Server, Client, and Lambda are all specializations of Process or Asset, and really represent the "role" of each; role is really determined by the specific use - a server is the sink for a dataflow, the source is a client. But when describing an object, until the dataflows are determined, why force users to know ahead of time which one they need? Also, a client or server may be a server or a client, based on other data flows...

An alternative suggestion: create a generic "node" (Asset may be the right object already available), and allow assignment of properties that are generic. If roles are needed, assigning a role may add attributes specific to the role(s) added at runtime.
This approach helps with constructing models based on less-than-perfect knowledge of the system.
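A toy sketch of the proposal (hypothetical, not pytm code): a generic node that only gains role-specific attributes when a role is assigned at runtime. The role names and attribute defaults below are invented for the example:

```python
# Hypothetical role -> default-attribute tables, invented for this sketch.
ROLE_ATTRS = {
    "server": {"isHardened": False, "OS": ""},
    "client": {"handlesResources": False},
}

class Node:
    """Generic model element that acquires attributes per assigned role."""
    def __init__(self, name):
        self.name = name
        self.roles = set()

    def add_role(self, role):
        self.roles.add(role)
        for attr, default in ROLE_ATTRS[role].items():
            # Don't clobber values the user already set.
            if not hasattr(self, attr):
                setattr(self, attr, default)

web = Node("Web")
web.add_role("server")
print("server" in web.roles, web.isHardened)  # True False
```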

Expanding seq diagrams

First apologies for the long winded issue...

tl;dr - Are there any plans in extending out some plantuml sequence diagram functionality?

I've been using pytm for about a week now and have found it to be a pretty good tool. Coming more from the security side, I am more focused on the dataflow diagram and threat modelling report; however, I had a lot of requests to add bits and pieces to the seq diagram as well (e.g. lifelines, queue participants, dividers, arrow style).

I'm happy to make a pull request with a few suggested changes (still wrapping my head around how it all ties together)

Queues

The easiest being:

class Queue(Datastore):
    pass


class TM:
    ...
    def seq(self):
        for e in TM._elements:
            ...
            elif isinstance(e, Queue):  # this would need to go before the `Datastore` check
                value = 'queue {0} as "{1}"'.format(e._uniq_name(), e.display_name())

I think the more robust, and probably more extensible way would be another attribute and, ideally, a new method to handle the line that will be formatted / printed in seq()

class Element:
    def seq_line(self):  # naming things is difficult
        return 'entity {} as "{}"'.format(self._uniq_name(), self.display_name())

class Actor(Element):
    def seq_line(self):
        return 'actor {} as "{}"'.format(self._uniq_name(), self.display_name())

class Datastore(Element):
    def seq_line(self):
        if self.isQueue:
            puml_participant = "queue"
        else:
            puml_participant = "database"
        return '{} {} as "{}"'.format(puml_participant, self._uniq_name(), self.display_name())

class TM:
    def seq(self):
        participants = [e.seq_line() for e in TM._elements if not isinstance(e, (Dataflow, Boundary))]
        ...
...

my_queue = Datastore("my_queue", isQueue=True)

Dataflow arrows

I think the simplest would be to add a new attribute, arrowStyle, implement the suggested seq_line from above, and instantiate a dataflow the following way to get a dotted blue, open-arrow line in the sequence diagram:

class Dataflow(Element):
    ...
    arrowStyle = varString("->")
    ...

    def seq_line(self):
        note = "\nnote left\n{}\nend note".format(self.note) if self.note else ""
        return "{source} {arrow} {sink}: {display_name}{note}".format(
            source=self.source._uniq_name(),
            arrow=self.arrowStyle,
            sink=self.sink._uniq_name(),
            display_name=self.display_name(),
            note=note,
        )

...
df = Dataflow(source, sink, "My dataflow", arrowStyle="-[#blue]->>")

Lifelines

This one I am still exploring and am not confident in any implementation yet, but maybe something like:

class TM:
    ...
    includeSeqLifelines = varBool(False)
    ...

    def seq(self):
        ...
        messages = []
        for e in TM._flows:
            if e.response and self.includeSeqLifelines:  # at the start of the loop
                messages.append('activate {}\n'.format(e.sink._uniq_name()))
            ... all the other flow stuff here ...
            if e.responseTo and self.includeSeqLifelines:  # at the end of the loop
                messages.append('deactivate {}\n'.format(e.responseTo.sink._uniq_name()))

Or introduce a concept of SeqLifelines; again, I'm not happy with the exploration I've done so far, but here is some quick back-of-a-napkin code:

class _Dummy:
    """A temporary dummy class that allows me to insert SeqLifelines in the flows portion of TM.

    This is where a lot of my uncertainty comes in. Obviously, if I implement
    this I would fix up SeqLifeline to work properly.
    """

    data = []
    levels = {0}
    overrides = []
    protocol = "HTTPS"
    port = 443
    authenticatesDestination = True
    checksDestinationRevocation = True
    name = "Dummy"

class Lifeline(Enum):
    ACTIVATE = "activate"
    DEACTIVATE = "deactivate"
    DESTROY = "destroy"

class varLifeline(var):
    def __set__(self, instance, value):
        if not isinstance(value, Lifeline):
            raise ValueError("expecting a Lifeline, got a {}".format(type(value)))
        super().__set__(instance, value)

class SeqLifeline(Element):
    name = varString("", required=True)
    action = varLifeline(None, required=True)
    participant = varElement(None, required=True)
    color = varString("", required=False, doc="Color variable for the lifeline")
    source = _Dummy
    sink = _Dummy
    data = []
    responseTo = None
    isResponse = False
    response = None
    order = -1

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        TM._flows.append(self)

    def seq_line(self):
        return '{} {}\n'.format(self.action.value, self.participant._uniq_name())

    def dfd(self, **kwargs) -> str:
        """SeqLifeline shouldn't show on dfd's but we do want them to render on seq diagrams."""
        return ""

Other ideas

  • Add a varNote class that allows defining shape, color, and location; see "Notes on messages" in the PlantUML docs for attributes we could declare
  • Introduce SeqNewPage and the ability to write seq() out to separate files
    • Ability to add hide unlinked (would only work with new pages)
  • Introduce SeqDivider; see "Divider or separator" in the PlantUML docs
  • Add the ability to group participants by boundary; see "Encompass participants" in the PlantUML docs

Missing documentation for conditions

Some properties/conditions are obvious and don't really need documentation, such as protocol or isEncrypted. But other properties may not be obvious to everyone, such as Dataflow.authenticatedWith - authenticated with what?

Add logical grouping of elements on DFD

When drawing a DFD I will often group things logically on the diagram: different local file datastores might sit together, as might out-of-scope external services, various AWS services, etc. Currently the diagram is drawn with elements in arbitrary places. Using a Boundary would accomplish this, but in some cases there isn't an actual boundary.

I was thinking it may be useful to have a logical group, similar to a Boundary, assigned via an inGroup property; when drawing the DFD, the Group itself would not be visible.
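One way to render such an invisible group in Graphviz is a cluster subgraph with `style = invis`; a rough sketch (the helper name and layout are hypothetical):

```python
def group_cluster(group_name, member_lines):
    # Assumption: an invisible cluster keeps its members adjacent in the
    # layout without drawing a visible boundary box.
    body = "\n".join("    " + line for line in member_lines)
    return (
        "subgraph cluster_{0} {{\n"
        "    style = invis;\n"
        "{1}\n"
        "}}"
    ).format(group_name, body)
```

The generated subgraph would then be spliced into the DFD's dot source alongside the existing Boundary clusters.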

Boundaries with same name are grouping together

I'm not sure if it's intentional, but if I have two boundaries with the same name, the components are rendered into one boundary.
i.e:

dbVpc = Boundary("VPC")
serviceVpc = Boundary("VPC")

...

db = Datastore("Postgres Aurora")
db.inBoundary = dbVpc
server = Server("Server")
server.inBoundary = serviceVpc

This generates a diagram in which both components are rendered inside a single "VPC" boundary (attached image).

Alternative import mechanism

As suggested in the comments on #38, a proper way to import threat lists would be desirable. The goal would be an easy way to import threat lists based on existing lists of controls. (Many companies already have something like that in Excel or CSV, so it would be a matter of tweaking it into the right layout and off you go.)

Of course, it would also mitigate the dangerous use of eval() in my commit ;)

However, the current dictionary structure makes this challenging. I first tried to output the existing dictionary and analyze what would be suitable to adopt, but so far my experiments haven't been successful:

  • csv.writer - seems to do the job, but gives full class names and throws all values into one big string. I will do some more research.

  • json.dump and json.dumps - crash on output with complaints about the structure.

  • pickle.dump - works, but creates an unreadable file; not suitable as an import mechanism.

  • jsonpickle - works, but adds more meta info.
Not sure what the best solution would be, so I'm open to suggestions.
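As one possible direction (a sketch under the assumption of a flat CSV column layout, which is not a committed format), csv.DictReader avoids both eval() and the serialization problems above:

```python
import csv
import io

# Hypothetical column layout for an imported threat list.
SAMPLE = """SID,description,target,condition
XX01,Example threat,Server,target.isHardened is False
"""

def load_threats(text):
    """Parse a CSV threat list into plain dicts (no eval involved)."""
    return list(csv.DictReader(io.StringIO(text)))
```

The condition column would still need a safe evaluator on the pytm side, but the transport format itself stays readable and diffable.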

DF1 threat assumes source and sink are remote systems

I was mocking up a sample DFD with a Process and some local file datastores, and I am getting a DF1 threat (Dataflow not authenticated), which isn't the best threat here.

I would like to add logic so this threat doesn't apply to local file data stores and maybe introduce a threat about permissions or something.

I've mocked up two places where I could do this:

  • Use the protocol property on the Dataflow:
    "target.authenticatedWith is False and target.protocol is not 'FileSystem'"

  • Add an isLocalFile property to Datastore:
    "target.authenticatedWith is False and ( (type(target.source) is Datastore and target.source.isLocalFile is False) or (type(target.sink) is Datastore and target.sink.isLocalFile is False) )"

Thoughts?

Logic is going to get messy fast, rules engine?

I've been thinking about this, and it relates to a few issues I've added recently.

I think the logic is going to get messy as we add more Threats and Mitigations, and add logic to alter severity while applying mitigations.

Does it make sense to continue building a tightly coupled rules engine here vs using something existing?

I don't know what exists for Python. For Java I've worked with Drools, which would be perfect for this; so much so that I had the fleeting thought of porting this to Java to use it.

README for report template not updated?

In the README, it says {findings:repeat:* ...}

but the template given was:

|{findings:repeat:
<details>
  <summary>   {{item.id}}   --   {{item.description}}</summary>
  <h6> Targeted Element </h6>
  <p> {{item.target}} </p>
  <h6> Severity </h6>
  <p>{{item.severity}}</p>
  <h6>Example Instances</h6>
  <p>{{item.example}}</p>
  <h6>Mitigations</h6>
  <p>{{item.mitigations}}</p>
  <h6>References</h6>
  <p>{{item.references}}</p>
  &nbsp;
  &nbsp;
  &emsp;
</details>
}|

I am trying to change the template but have no idea where I should look to properly build a nested loop in Markdown.

Please assist!

Move DFD writer logic outside pytm

I'd like to move the writer logic outside pytm. Python isn't my first language, so my terms might not be right, but I'd like to create an interface for a TM writer so we can support formats other than Graphviz.

One writer might just be a report covering Threats, Mitigations, Elements and Annotations; another could be the existing Graphviz writer; another could be mxGraph (https://jgraph.github.io/mxgraph/javascript/examples/helloworld.html), or the mxGraph XML format, which can be loaded into Draw.io (Extras->Edit Diagram):

<mxGraphModel dx="1190" dy="727" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="850" pageHeight="1100" background="#ffffff" math="0" shadow="0">
  <root>
    <mxCell id="0"/>
    <mxCell id="1" parent="0"/>
    <mxCell id="2" value="Client" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontColor=#000000;align=center;" parent="1" vertex="1">
      <mxGeometry x="120" y="140" width="80" height="80" as="geometry"/>
    </mxCell>
    <mxCell id="3" value="Server" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontColor=#000000;align=center;" parent="1" vertex="1">
      <mxGeometry x="410" y="110" width="80" height="80" as="geometry"/>
    </mxCell>
    <mxCell id="4" value="" style="endArrow=classic;html=1;fontColor=#000000;exitX=1;exitY=0.5;entryX=0;entryY=0.5;" parent="1" source="2" target="3" edge="1">
      <mxGeometry width="50" height="50" relative="1" as="geometry"/>
    </mxCell>
  </root>
</mxGraphModel>

Python complains about lru_cache (Python version compatibility issue)

What does the problem look like, and what steps reproduce it?

Trying to launch a sample tm.py from root of repository:

Traceback (most recent call last):
  File "./tm.py", line 3, in <module>
    from pytm import TM, Actor, Boundary, Dataflow, Datastore, Lambda, Server, Data, Classification  
  File "/usr/lib/python3.6/site-packages/pytm-1.1.1-py3.6.egg/pytm/__init__.py", line 3, in <module> 
    from .pytm import Element, Server, ExternalEntity, Dataflow, Datastore, Actor, Process, SetOfProcesses, Boundary, TM, Action, Lambda, Threat, Classification, Data 
  File "/usr/lib/python3.6/site-packages/pytm-1.1.1-py3.6.egg/pytm/pytm.py", line 487, in <module>
    class TM():
  File "/usr/lib/python3.6/site-packages/pytm-1.1.1-py3.6.egg/pytm/pytm.py", line 781, in TM
    @lru_cache
  File "/usr/lib64/python3.6/functools.py", line 477, in lru_cache
    raise TypeError('Expected maxsize to be an integer or None')
TypeError: Expected maxsize to be an integer or None

Can you reproduce it using the latest master?

Yes

What is your running environment?

OS: SUSE SLES 15 SP1
Python version: Python 3.6.10

Also reproduced in Docker container python:3.7.9-alpine

What have you already tried to solve the problem?

Changing @lru_cache on line 487 to @lru_cache() fixes it (see this Python bug for background). After the change, it still works in the Docker container python:3.8.6-alpine, which is the current stable release of Python 3.8.
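For reference, the bare-decorator form of functools.lru_cache only became valid in Python 3.8; on 3.2-3.7 the decorator must be called. The portable spelling works everywhere:

```python
from functools import lru_cache

# Portable on Python 3.2+: always call the decorator factory.
# A bare @lru_cache (no parentheses) raises TypeError before Python 3.8.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Using `@lru_cache(maxsize=None)` in pytm would therefore fix both of the reported environments without dropping 3.8 support.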

CR01 - Collision in communications?

"CR01": { "description": "Collision Attacks", "source": Process, "target": Process, "condition": "target.implementsCommunicationProtocol is True", },

A collision attack is usually associated with hash algorithms, not communication protocols (unless you mean protocols pre-1980). Implementing a custom comm protocol does not automatically mean a collision is a security threat, and implementsCommunicationProtocol should not imply a custom one.

DO02 - huh?

"DO02": { "description": "Potential Process Crash or Stop", "source": (Process, Datastore, Element), "target": Process, "condition": "target.handlesCrashes is False", },

What is the thought process behind this threat? Is it that the Process can crash, or that it could crash in some security-relevant way?
It would seem to me that if the concern is that a Process may crash, then the conditions one might check for would be susceptibility to a buffer overflow in unmanaged code, not whether or not it can "handle" crashes, whatever that means...

Issue on sequence diagram

Hi all,

I'm just starting with pytm, following the examples, and when I follow the README instructions I get this using --seq:

kali@kali:~/pytm$ ./tm.py --seq
@startuml
entity cbfcfffaaffdacebab as "Internet"
entity aeebeaeccccdadbfbbed as "Server/DB"
entity acdbbafaebffabebcefad as "AWS VPC"
actor bdafaaafeadfceac as "User"
database afeaeabaaabeaaabcbedeaa as "SQL Database"
bdafaaafeadfceac -> facdfcdecbdebecaeaa: User enters comments (*)
note left
This is a simple web app
that stores and retrieves user comments.
end note
facdfcdecbdebecaeaa -> afeaeabaaabeaaabcbedeaa: Insert query with comments
note left
Web server inserts user comments
into it's SQL query and stores them in the DB.
end note
afeaeabaaabeaaabcbedeaa -> facdfcdecbdebecaeaa: Retrieve comments
facdfcdecbdebecaeaa -> bdafaaafeadfceac: Show comments (*)
bbeedeaacfffcbbbbcba -> afeaeabaaabeaaabcbedeaa: Lambda periodically cleans DB
@enduml

kali@kali:~/pytm$ ./tm.py --seq | java -Djava.awt.headless=true -jar plantuml.jar -tpng -pipe > seq.png
Picked up _JAVA_OPTIONS: -Dawt.useSystemAAFontSettings=on -Dswing.aatext=true

And this is my resulting seq diagram (attached image).

Am I missing something? I've already checked that I have all the requirements installed.

Thx in advance ^_^

Add Exclusion logic for Threats?

Does it make sense to have two conditions, an inclusion and an exclusion? I think this would simplify more complex conditional logic.

Rather than writing one complex expression that handles both inclusion and exclusion, I think there could be value in having two separate conditions, each written to return true. First loop through the elements and apply the inclusion condition, then apply the exclusion condition before returning.
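A minimal sketch of the two-pass idea (function and attribute names here are hypothetical, with conditions as plain callables instead of pytm's string expressions):

```python
def find_threats(threats, elements):
    """Return (threat_id, element) findings: included, then not excluded."""
    findings = []
    for element in elements:
        for threat in threats:
            if not threat["include"](element):
                continue
            # The exclusion runs second, so it can carve out special cases
            # without complicating the inclusion expression.
            if threat["exclude"](element):
                continue
            findings.append((threat["id"], element))
    return findings
```

An absent exclusion condition could default to `lambda e: False`, keeping existing single-condition threats working unchanged.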

Python 3.7.8: "TypeError: Expected maxsize to be an integer or None" for lru_cache in line 780

$ pip install pytm
$ python pytm-example.py
Traceback (most recent call last):
  File "pytm-example.py", line 3, in <module>
    from pytm.pytm import TM, Server, Datastore, Dataflow, Boundary, Actor, Lambda
  File "/home/math/.pyenv/versions/3.7.8/lib/python3.7/site-packages/pytm/__init__.py", line 3, in <module>
    from .pytm import Element, Server, ExternalEntity, Dataflow, Datastore, Actor, Process, SetOfProcesses, Boundary, TM, Action, Lambda, Threat, Classification, Data
  File "/home/math/.pyenv/versions/3.7.8/lib/python3.7/site-packages/pytm/pytm.py", line 486, in <module>
    class TM():
  File "/home/math/.pyenv/versions/3.7.8/lib/python3.7/site-packages/pytm/pytm.py", line 780, in TM
    @lru_cache
  File "/home/math/.pyenv/versions/3.7.8/lib/python3.7/functools.py", line 490, in lru_cache
    raise TypeError('Expected maxsize to be an integer or None')
TypeError: Expected maxsize to be an integer or None

Additional Threat Metadata

I was thinking of adding CWE to the Threat metadata, and I see remediation is already in some of the commented threats. Let's brainstorm about other elements we would like in the Threat metadata.

Existing elements:

  • ID
  • Description
  • CVSS
  • Condition

Possible new elements:

  • Remediation
  • CWE
  • Exclusion Condition (#14)
  • References (blog posts, books, whitepapers, etc.)
  • Severity (Info, Low, Medium, High, Critical); see the CVSS change below.

Changes:

  • CVSS: should this be a specific score or a range?
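Put together, a threat entry carrying the proposed fields might look like this. All values below are illustrative only, and the keys after "condition" are the proposed additions, not fields pytm currently defines:

```python
# Hypothetical threat entry with the proposed extra metadata.
THREAT = {
    "SID": "XX01",
    "description": "Example buffer overflow threat",
    "cvss": "9.8",
    "condition": "target.isHardened is False",
    "remediation": "Validate and bound all external input.",   # proposed
    "cwe": "CWE-120",                                          # proposed
    "exclusion": "target.inScope is False",                    # proposed (#14)
    "references": ["https://cwe.mitre.org/data/definitions/120.html"],  # proposed
    "severity": "Critical",                                    # proposed
}
```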

Improve `--describe` output

Can you please elaborate on each property?
For example:

➜  meetings-service git:(Michael/rank/SCPJM-115) ✗ ./tm.py --describe Datastore
The following properties are available for Datastore
	OS
	authenticatesDestination
	authenticatesSource
	authenticationScheme
	authorizesSource
	check
	definesConnectionTimeout
	description
	dfd
	handlesInterruptions
	handlesResources
	hasAccessControl
	hasWriteAccess
	implementsAuthenticationScheme
	implementsNonce
	inBoundary
	inScope
	isAdmin
	isEncrypted
	isHardened
	isResilient
	isSQL
	isShared
	name
	onAWS
	onRDS
	providesConfidentiality
	providesIntegrity
	storesLogData
	storesPII
	storesSensitiveData

What is the difference between storesSensitiveData and storesPII?
Does onRDS include Aurora?
What are authenticatesDestination/authenticatesSource?
What are the expected values (enum) for authenticationScheme?
And so on.
Thanks

Threat Mitigations

Related to #17

I saw Threat Mitigations in the TODO file and thought it might be useful to start a thread to brainstorm about it.

The primary goals of implementing mitigation logic would be to:

  1. Remove a threat (or mark it as remediated or not applicable)
  2. Alter a threat's CVSS or Severity

Any other goals?

There be dragons here (cvss for weaknesses?)

self.cvss = cvss

Consider using CWSS rather than CVSS for severity scoring, especially where threats != vulnerabilities.
Also consider adding a scoring interface so users can define their own scoring methods (with some pre-canned ones); perhaps this function would take a JSON file describing, similarly to the rules, the conditions for each severity level?

DataSet change breaking report template

The recent DataSet issue (#77) breaks the report templates.
The data column in the report now shows a class __repr__:

DataSet({<pytm.pytm.Data(User ID and SSL Cert.) at 0x105ef5ac0>})

Unless I'm misunderstanding something, I believe adding a __str__ method to the DataSet class (on or about line 191) addresses this issue:

    def __str__(self):
        return ", ".join([d.name for d in self])

Looking forward to making good use of the new DataSet feature.
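A self-contained version of the proposed fix, with a stub Data class for illustration; the sorted() is an addition over the original one-liner to make the output deterministic, since sets are unordered:

```python
class Data:
    """Minimal stand-in for pytm's Data element."""
    def __init__(self, name):
        self.name = name

class DataSet(set):
    def __str__(self):
        # Sorting makes the report output deterministic across runs.
        return ", ".join(sorted(d.name for d in self))
```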

Are container objects missing?

With object types like Server or Asset, can these contain other Servers or Processes? They should be able to. If they already can, better docs are needed.

Setup

The codebase is getting big and will probably get bigger. We need to start looking at a setup framework so we can use the standard tooling and publish to package index sites.

Error when executing tm.py

Hi all,

thanks for sharing this nice tool. I just wanted to explore the sample, but I'm getting an error:

➜  pytm git:(master) ✗ ./tm.py --dfd | dot -Tpng -o sample1.png
2019-03-31 07:24:45.271 dot[27759:12821403] +[__NSCFConstantString length]: unrecognized selector sent to class 0x7fff95b4a8c0
2019-03-31 07:24:45.272 dot[27759:12821403] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '+[__NSCFConstantString length]: unrecognized selector sent to class 0x7fff95b4a8c0'
*** First throw call stack:
(
	0   CoreFoundation                      0x00007fff3df1743d __exceptionPreprocess + 256
	1   libobjc.A.dylib                     0x00007fff69e25720 objc_exception_throw + 48
	2   CoreFoundation                      0x00007fff3df941a5 __CFExceptionProem + 0
	3   CoreFoundation                      0x00007fff3deb6ad0 ___forwarding___ + 1486
	4   CoreFoundation                      0x00007fff3deb6478 _CF_forwarding_prep_0 + 120
	5   CoreFoundation                      0x00007fff3de47f54 CFStringCompareWithOptionsAndLocale + 72
	6   ImageIO                             0x00007fff409b5367 _ZN17IIO_ReaderHandler15readerForUTTypeEPK10__CFString + 53
	7   ImageIO                             0x00007fff4098d527 _ZN14IIOImageSource14extractOptionsEP13IIODictionary + 183
	8   ImageIO                             0x00007fff409ba2e6 _ZN14IIOImageSourceC2EP14CGDataProviderP13IIODictionary + 72
	9   ImageIO                             0x00007fff409ba1bb CGImageSourceCreateWithDataProvider + 172
	10  libgvplugin_quartz.6.dylib          0x0000000107cfcc54 quartz_loadimage_quartz + 224
	11  libgvc.6.dylib                      0x0000000107c59781 gvloadimage + 269
	12  libgvc.6.dylib                      0x0000000107c587e0 gvrender_usershape + 955
	13  libgvc.6.dylib                      0x0000000107c8662e poly_gencode + 2129
	14  libgvc.6.dylib                      0x0000000107c92b7b emit_node + 1030
	15  libgvc.6.dylib                      0x0000000107c91805 emit_graph + 4769
	16  libgvc.6.dylib                      0x0000000107c96d0d gvRenderJobs + 4911
	17  dot                                 0x0000000107c4fd62 main + 697
	18  libdyld.dylib                       0x00007fff6aef3085 start + 1
)
libc++abi.dylib: terminating with uncaught exception of type NSException
[1]    27758 done       ./tm.py --dfd |
       27759 abort      dot -Tpng -o sample1.png

I installed graphviz via brew, using macOS 10.14 and Python 3.7.3.

How should I generate a report?

Looking at the docs, I need to run:

tm.py --report REPORT (output report using the named template file)

What is this template file? Where can I find it?

AA03 - is every implementsAuthenticationScheme an SSO scheme?

"AA03": { "description": "Weakness in SSO Authorization", "source": (Process, Element), "target": (Process, Server), "condition": "target.implementsAuthenticationScheme is False", },

What if the Process implements BasicAuth or uses mutual TLS (neither of which is SSO)?
If the Process uses SAML or OAuth, then maybe.
Maybe authenticationScheme as a string var is necessary?
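A sketch of how a string-valued authenticationScheme could make the SSO check precise (the scheme taxonomy below is an assumption, not a pytm definition):

```python
# Hypothetical scheme taxonomy: only some schemes are SSO.
SSO_SCHEMES = {"SAML", "OAuth2", "OIDC"}
NON_SSO_SCHEMES = {"Basic", "Digest", "mTLS"}

def sso_threat_applies(scheme):
    # AA03 would then fire only for SSO schemes, not for BasicAuth or mTLS.
    return scheme in SSO_SCHEMES
```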

DataFlow object attributes need work (or moves)

Many of the attributes defined for the DataFlow object belong elsewhere:

Correctly assigned:

  • source = varElement(None, required=True)
    
  • sink = varElement(None, required=True)
    
  • order = varInt(-1, doc="Number of this data flow in the threat model")
    
  • note = varString("")
    

Maybe correct:

  • isResponse = varBool(False, doc="Is a response to another data flow") --> Is this a dup with `responseTo`?
    
  • response = varElement(None, doc="Another data flow that is a response to this one") --> If this is non-empty, is either `isResponse` or `responseTo` needed (since it would be detectable as True if non-empty)?
    
  • responseTo = varElement(None, doc="Is a response to this data flow") --> Is this a dup with `isResponse`?
    
  • data = varData([], doc="Default type of data in incoming data flows") --> Does this represent the data sent by the source, or returned by the sink? I think this highlights a challenge in setting data to the flow and not associating the connection to the source as sender of data, and server is replier of data.
    

Should be a property of the Source:

  • srcPort = varInt(-1, doc="Source TCP port")
    
  • isEncrypted = varBool(False, doc="Is the data encrypted") --> Clarification needed - is this data encryption independent of the protocol?
    
  • authenticatesDestination = varBool(False, doc="""Verifies the identity of the destination,
    
  • for example by verifying the authenticity of a digital certificate.""")
  • checksDestinationRevocation = varBool(False, doc="""Correctly checks the revocation status
    
  • of credentials used to authenticate the destination""")

Should be a property of the Sink:

  • usesSessionTokens = varBool(False)
    
  • authorizesSource = varBool(False)
    
  • usesLatestTLSversion = varBool(False) --> This will become out of date (TLS 1.2 to TLS 1.3 to whatever is next), and TLS is not the only option for secure protocol, so maybe this should be a list
    
  • implementsAuthenticationScheme = varBool(False)
    
  • authenticatedWith = varBool(False)
    
  • protocol = varString("", doc="Protocol used in this data flow") --> With this list, it is possible to check for `usesLatestTLSversion` state
    
  • dstPort = varInt(-1, doc="Destination TCP port")
    

Should be a property of either source or sink:

  • usesVPN = varBool(False) --> DataFlows are associated with a source (the initiator) and a sink (the target). It is either the source or the sink that determines whether a VPN is in use. The source may use one, the sink may, or both, but the DataFlow would, if anything, inherit the state of this flag via `source.usesVPN or sink.usesVPN`.
    
  • implementsCommunicationProtocol = varBool(False) --> Sink always determines the protocol to be used by the source, but this may also apply to the source's comm stack
    
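The inheritance idea for usesVPN can be sketched as a derived property (stub classes and names are hypothetical, standing in for pytm's Element and Dataflow):

```python
class Endpoint:
    """Stands in for a pytm Element acting as source or sink."""
    def __init__(self, usesVPN=False):
        self.usesVPN = usesVPN

class Flow:
    def __init__(self, source, sink):
        self.source = source
        self.sink = sink

    @property
    def usesVPN(self):
        # The flow inherits VPN usage from either endpoint rather than
        # carrying its own independent flag.
        return self.source.usesVPN or self.sink.usesVPN
```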
