nautobot / nautobot-app-firewall-models
Model Firewall policies in Nautobot
Home Page: https://docs.nautobot.com/projects/firewall-models/en/latest/
License: Other
Expected: a successful redirect after updating a policy rule index via "Edit Policy Rule Indexes" in the detailed policy view.
Observed: the policy rule index is updated, but an exception is raised after clicking "Submit".
NoReverseMatch at /plugins/firewall/policy/72861d28-8b72-43c3-b487-ddcd37d123bb/
Reverse for 'policy_policyrules' not found. 'policy_policyrules' is not a valid view function or pattern name.
Request Method: POST
Request URL: http://172.28.136.211:8080/plugins/firewall/policy/72861d28-8b72-43c3-b487-ddcd37d123bb/?tab=edit-policy-rule-index
Django Version: 3.2.13
Exception Type: NoReverseMatch
Exception Value:
Reverse for 'policy_policyrules' not found. 'policy_policyrules' is not a valid view function or pattern name.
Exception Location: /usr/local/lib/python3.8/site-packages/django/urls/resolvers.py, line 698, in _reverse_with_prefix
Python Executable: /usr/local/bin/python
Python Version: 3.8.13
Python Path:
['/source',
'/usr/local/lib/python3.8/site-packages/git/ext/gitdb',
'/source',
'/usr/local/bin',
'/usr/local/lib/python38.zip',
'/usr/local/lib/python3.8',
'/usr/local/lib/python3.8/lib-dynload',
'/root/.local/lib/python3.8/site-packages',
'/usr/local/lib/python3.8/site-packages',
'/source',
'/usr/local/lib/python3.8/site-packages/gitdb/ext/smmap']
Currently the CapircaPolicy views are two separate views, with the detail view not inheriting from generic/object_detail.html. Collapsing them into a single view with tabs helps preserve context and navigation.
UX Improvement
Allow the ability to configure the weight of a policy via the UI.
To make it clear in which order Policies should be applied. Consider the following:
The default ordering (or lack thereof) does not match the order one would expect. Yes, it can be adjusted via the API, but many users will not be aware of that.
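To illustrate the intent, explicit weights make the apply order deterministic and visible. A minimal pure-Python sketch, using hypothetical (name, weight) tuples rather than the plugin's actual models:

```python
# Hypothetical stand-in for policies assigned to a device: (name, weight).
# Lower weight = applied first, a common firewall convention.
policies = [
    ("Allow-Internet", 500),
    ("Deny-Bogons", 100),
    ("Default-Deny", 1000),
]

# Sorting by weight (ties broken by name) yields a deterministic apply order
# that a UI could expose instead of relying on insertion order.
apply_order = [name for name, weight in sorted(policies, key=lambda p: (p[1], p[0]))]
print(apply_order)  # ['Deny-Bogons', 'Allow-Internet', 'Default-Deny']
```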
It would be really cool to also add Applications for NGFW firewalls like Palo Alto.
In a Palo Alto you can assign Applications to a Policy Rule (see screenshot):
https://capture.dropbox.com/6KVFT8BFA2tt3Fzk
This would make it possible to automate firewall rules with Nautobot for Palo Alto, Fortinet, and any other firewall that uses application detection.
Add a model to store ACL or policy hit counts. This is operational state and might fit better in SSoT integrations, but the goal is to have a place to store hits per policy and then expose them via capacity metrics, so we can get that data into a telemetry stack.
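As a hypothetical sketch of the last step, stored per-policy hit counts could be rendered as Prometheus-style exposition lines for a telemetry stack to scrape. The metric name and label key below are assumptions, not anything the plugin defines:

```python
def render_hit_metrics(hits):
    """Render a {policy_name: hit_count} mapping as Prometheus exposition lines.

    Hypothetical metric name; a real capacity-metrics integration would define
    its own naming and labels.
    """
    lines = ["# TYPE nautobot_firewall_policy_hits counter"]
    for policy, count in sorted(hits.items()):
        lines.append(f'nautobot_firewall_policy_hits{{policy="{policy}"}} {count}')
    return "\n".join(lines)

output = render_hit_metrics({"Deny-Bogons": 12, "Allow-Internet": 340})
print(output)
```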
In addition to the name of the ServiceObject, also display its port and IP protocol, possibly in parentheses.
A user processing a firewall request might know they want to allow TCP/1234 in a policy rule, but not that TCP/1234 is named $SOME-SERVICE in their environment, so they have to cross-check in the service object view, which adds time and complexity to the process.
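One way this display could look, sketched as a hypothetical helper (not the plugin's actual `__str__` implementation):

```python
def service_display(name, ip_protocol, port):
    """Return a label like 'NAME (TCP/1234)' when protocol and port are set,
    falling back to the bare name otherwise. Hypothetical helper."""
    if ip_protocol and port:
        return f"{name} ({ip_protocol.upper()}/{port})"
    return name

print(service_display("$SOME-SERVICE", "tcp", 1234))  # $SOME-SERVICE (TCP/1234)
print(service_display("ANY", None, None))             # ANY
```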
I created an iptables platform and rules for that platform in the firewall plugin, then started the job "Generate FW Config via Capirca."
The job should generate a Capirca config for iptables.
First I created the iptables platform and assigned it to a device.
Then I created address objects, service objects, zones, and then policy rules and a policy.
After that I started the job and got this error:
Traceback (most recent call last):
File "/opt/nautobot/lib/python3.10/site-packages/nautobot/extras/jobs.py", line 1179, in _run_job
output = job.run(data=data, commit=commit)
File "/opt/nautobot/lib/python3.10/site-packages/nautobot_firewall_models/jobs.py", line 51, in run
CapircaPolicy.objects.update_or_create(device=device_obj)
File "/opt/nautobot/lib/python3.10/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/opt/nautobot/lib/python3.10/site-packages/django/db/models/query.py", line 613, in update_or_create
obj.save(using=self.db)
File "/opt/nautobot/lib/python3.10/site-packages/nautobot_firewall_models/models/capirca_models.py", line 57, in save
cap_obj.get_all_capirca_cfg()
File "/opt/nautobot/lib/python3.10/site-packages/nautobot_firewall_models/utils/capirca.py", line 590, in get_all_capirca_cfg
self.get_capirca_cfg()
File "/opt/nautobot/lib/python3.10/site-packages/nautobot_firewall_models/utils/capirca.py", line 553, in get_capirca_cfg
self.cfg_file = generate_capirca_config(servicecfg, networkcfg, self.pol_file, self.platform)
File "/opt/nautobot/lib/python3.10/site-packages/nautobot_firewall_models/utils/capirca.py", line 74, in generate_capirca_config
return str(import_string(CAPIRCA_MAPPER[platform]["lib"])(pol, 0))
File "/opt/nautobot/lib/python3.10/site-packages/capirca/lib/iptables.py", line 644, in __init__
super().__init__(pol, exp_info)
File "/opt/nautobot/lib/python3.10/site-packages/capirca/lib/aclgenerator.py", line 322, in __init__
self._TranslatePolicy(pol, exp_info)
File "/opt/nautobot/lib/python3.10/site-packages/capirca/lib/iptables.py", line 782, in _TranslatePolicy
new_terms.append(self._TERM(term, filter_name, all_protocols_stateful,
File "/opt/nautobot/lib/python3.10/site-packages/capirca/lib/iptables.py", line 107, in __init__
self.term_name = '%s_%s' % (self.filter[:1], self.term.name)
TypeError: 'NoneType' object is not subscriptable
### Reason for the error
I tried running the steps of this job separately from the nautobot shell, and I got the error when I instantiated the PolicyToCapirca class from capirca.py.
First I ran this in the shell:
from nautobot_firewall_models.models.core_models import Policy, AddressObject, AddressObjectGroup
from nautobot.dcim.models import Platform
from nautobot_firewall_models.utils.capirca import PolicyToCapirca
from nautobot_firewall_models.constants import (
    ALLOW_STATUS,
    CAPIRCA_OS_MAPPER,
    ACTION_MAP,
    LOGGING_MAP,
    CAPIRCA_MAPPER,
    PLUGIN_CFG,
)
pve_policy = Policy.objects.get(name="PVE")
pve_platform = Platform.objects.get(name="iptables")
capirca_policy = PolicyToCapirca(pve_platform.slug, pve_policy)
capirca_policy.get_capirca_cfg()
Then I got this error:
WARNING:absl:Term PVE_Cluster has service UDP-5404-5405 which is not defined with protocol tcp, but will be permitted. Unless intended, you should consider splitting the protocols into separate terms!
WARNING:absl:Term PVE_Cluster has service SSH which is not defined with protocol udp, but will be permitted. Unless intended, you should consider splitting the protocols into separate terms!
WARNING:absl:Term PVE_Cluster has service TCP-3128 which is not defined with protocol udp, but will be permitted. Unless intended, you should consider splitting the protocols into separate terms!
WARNING:absl:Term PVE_Cluster has service TCP-5900-5999 which is not defined with protocol udp, but will be permitted. Unless intended, you should consider splitting the protocols into separate terms!
WARNING:absl:Term PVE_Cluster has service TCP-8006 which is not defined with protocol udp, but will be permitted. Unless intended, you should consider splitting the protocols into separate terms!
WARNING:absl:Filter is generating a non-standard chain that will not apply to traffic unless linked from INPUT, OUTPUT or FORWARD filters. New chain name is: None
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/opt/nautobot/lib/python3.10/site-packages/nautobot_firewall_models/utils/capirca.py", line 553, in get_capirca_cfg
self.cfg_file = generate_capirca_config(servicecfg, networkcfg, self.pol_file, self.platform)
File "/opt/nautobot/lib/python3.10/site-packages/nautobot_firewall_models/utils/capirca.py", line 74, in generate_capirca_config
return str(import_string(CAPIRCA_MAPPER[platform]["lib"])(pol, 0))
File "/opt/nautobot/lib/python3.10/site-packages/capirca/lib/iptables.py", line 644, in __init__
super().__init__(pol, exp_info)
File "/opt/nautobot/lib/python3.10/site-packages/capirca/lib/aclgenerator.py", line 322, in __init__
self._TranslatePolicy(pol, exp_info)
File "/opt/nautobot/lib/python3.10/site-packages/capirca/lib/iptables.py", line 782, in _TranslatePolicy
new_terms.append(self._TERM(term, filter_name, all_protocols_stateful,
File "/opt/nautobot/lib/python3.10/site-packages/capirca/lib/iptables.py", line 107, in __init__
self.term_name = '%s_%s' % (self.filter[:1], self.term.name)
TypeError: 'NoneType' object is not subscriptable
As I understand it, Capirca can't find the direction (INPUT, OUTPUT, or FORWARD) for the rule. Then I checked:
>>> cap_policy
[{'rule-name': 'SSH-to-PVE', 'headers': ['iptables'], 'terms': {'source-address': ['pci-mgmt-z501', 'test'], 'source-port': [], 'destination-address': ['PVE-z501'], 'destination-port': ['SSH'], 'protocol': ['tcp'], 'action': 'accept', 'logging': 'true', 'comment': '""'}}, {'rule-name': 'ICMP', 'headers': ['iptables'], 'terms': {'source-address': [], 'source-port': [], 'destination-address': [], 'destination-port': [], 'protocol': ['icmp'], 'action': 'accept', 'logging': 'true', 'comment': '""'}}, {'rule-name': 'Httpftp', 'headers': ['iptables'], 'terms': {'source-address': [], 'source-port': [], 'destination-address': [], 'destination-port': ['FTP', 'Http-Https'], 'protocol': ['tcp'], 'action': 'accept', 'logging': 'true', 'comment': '""'}}, {'rule-name': 'test-to-http', 'headers': ['iptables'], 'terms': {'source-address': ['pci-mgmt-z501', 'test'], 'source-port': [], 'destination-address': ['PVE-z501', 'test'], 'destination-port': ['HTTP', 'HTTPS'], 'protocol': ['tcp'], 'action': 'accept', 'logging': 'true', 'comment': '""'}}, {'rule-name': 'https', 'headers': ['iptables'], 'terms': {'source-address': [], 'source-port': [], 'destination-address': [], 'destination-port': ['Http-Https'], 'protocol': ['tcp'], 'action': 'accept', 'logging': 'true', 'comment': '""'}}, {'rule-name': 'PVE_Cluster', 'headers': ['iptables'], 'terms': {'source-address': ['PVE-z501'], 'source-port': [], 'destination-address': ['PVE-z501'], 'destination-port': ['SSH', 'TCP-3128', 'TCP-5900-5999', 'TCP-8006', 'UDP-5404-5405'], 'protocol': ['tcp', 'udp'], 'action': 'accept', 'logging': 'true', 'comment': '""'}}]
>>> pol
['header {', ' target:: iptables', '}', '', 'term SSH-to-PVE {', ' action:: accept', ' comment:: ""', ' destination-address:: PVE-z501', ' destination-port:: SSH', ' logging:: true', ' protocol:: tcp', ' source-address:: pci-mgmt-z501', ' source-address:: test', '}', '', 'header {', ' target:: iptables', '}', '', 'term ICMP {', ' action:: accept', ' comment:: ""', ' logging:: true', ' protocol:: icmp', '}', '', 'header {', ' target:: iptables', '}', '', 'term Httpftp {', ' action:: accept', ' comment:: ""', ' destination-port:: FTP', ' destination-port:: Http-Https', ' logging:: true', ' protocol:: tcp', '}', '', 'header {', ' target:: iptables', '}', '', 'term test-to-http {', ' action:: accept', ' comment:: ""', ' destination-address:: PVE-z501', ' destination-address:: test', ' destination-port:: HTTP', ' destination-port:: HTTPS', ' logging:: true', ' protocol:: tcp', ' source-address:: pci-mgmt-z501', ' source-address:: test', '}', '', 'header {', ' target:: iptables', '}', '', 'term https {', ' action:: accept', ' comment:: ""', ' destination-port:: Http-Https', ' logging:: true', ' protocol:: tcp', '}', '', 'header {', ' target:: iptables', '}', '', 'term PVE_Cluster {', ' action:: accept', ' comment:: ""', ' destination-address:: PVE-z501', ' destination-port:: SSH', ' destination-port:: TCP-3128', ' destination-port:: TCP-5900-5999', ' destination-port:: TCP-8006', ' destination-port:: UDP-5404-5405', ' logging:: true', ' protocol:: tcp', ' protocol:: udp', ' source-address:: PVE-z501', '}', '']
There is no direction header.
I checked the Capirca manual, and this is an example from it:
target:: iptables [INPUT|OUTPUT|FORWARD|custom] {ACCEPT|DROP} {truncatenames} {nostate} {inet|inet6}
Then I found the PolicyToCapirca class and checked the get_capirca_cfg function.
On line 482 I found this block of code:
if CAPIRCA_MAPPER[self.platform]["type"] == "zone":
    from_zone = _slugify(pol["from-zone"])
    to_zone = _slugify(pol["to-zone"])
    rule_details["headers"].extend(["from-zone", from_zone, "to-zone", to_zone])
    LOGGER.debug("Zone Logic hit, from-zone: `%s` to-zone: `%s`", from_zone, to_zone)
if CAPIRCA_MAPPER[self.platform]["type"] == "filter-name":
    rule_details["headers"].append(rule_details["rule-name"])
    LOGGER.debug("Filter Name Logic hit for: `%s`", str(rule_details["rule-name"]))
I checked the platform type for iptables, and it was "direction":
>>> CAPIRCA_MAPPER[capirca_policy.platform]["type"]
'direction'
This is the reason I got the error: the function didn't add the right header. I changed the code to this:
if CAPIRCA_MAPPER[self.platform]["type"] == "zone":
    from_zone = _slugify(pol["from-zone"])
    to_zone = _slugify(pol["to-zone"])
    rule_details["headers"].extend(["from-zone", from_zone, "to-zone", to_zone])
    LOGGER.debug("Zone Logic hit, from-zone: `%s` to-zone: `%s`", from_zone, to_zone)
elif CAPIRCA_MAPPER[self.platform]["type"] == "direction":
    from_zone = _slugify(pol["from-zone"])
    rule_details["headers"].extend([from_zone])
    LOGGER.debug("Direction Logic hit, from-zone: `%s`", from_zone)
if CAPIRCA_MAPPER[self.platform]["type"] == "filter-name":
    rule_details["headers"].append(rule_details["rule-name"])
    LOGGER.debug("Filter Name Logic hit for: `%s`", str(rule_details["rule-name"]))
and everything works:
>>> cap_policy
[{'rule-name': 'SSH-to-PVE', 'headers': ['iptables', 'INPUT'], 'terms': {'source-address': ['pci-mgmt-z501', 'test'], 'source-port': [], 'destination-address': ['PVE-z501'], 'destination-port': ['SSH'], 'protocol': ['tcp'], 'action': 'accept', 'logging': 'true', 'comment': '""'}}, {'rule-name': 'ICMP', 'headers': ['iptables', 'INPUT'], 'terms': {'source-address': [], 'source-port': [], 'destination-address': [], 'destination-port': [], 'protocol': ['icmp'], 'action': 'accept', 'logging': 'true', 'comment': '""'}}, {'rule-name': 'Httpftp', 'headers': ['iptables', 'INPUT'], 'terms': {'source-address': [], 'source-port': [], 'destination-address': [], 'destination-port': ['FTP', 'Http-Https'], 'protocol': ['tcp'], 'action': 'accept', 'logging': 'true', 'comment': '""'}}, {'rule-name': 'test-to-http', 'headers': ['iptables', 'INPUT'], 'terms': {'source-address': ['pci-mgmt-z501', 'test'], 'source-port': [], 'destination-address': ['PVE-z501', 'test'], 'destination-port': ['HTTP', 'HTTPS'], 'protocol': ['tcp'], 'action': 'accept', 'logging': 'true', 'comment': '""'}}, {'rule-name': 'https', 'headers': ['iptables', 'INPUT'], 'terms': {'source-address': [], 'source-port': [], 'destination-address': [], 'destination-port': ['Http-Https'], 'protocol': ['tcp'], 'action': 'accept', 'logging': 'true', 'comment': '""'}}, {'rule-name': 'PVE_Cluster', 'headers': ['iptables', 'INPUT'], 'terms': {'source-address': ['PVE-z501'], 'source-port': [], 'destination-address': ['PVE-z501'], 'destination-port': ['SSH', 'TCP-3128', 'TCP-5900-5999', 'TCP-8006', 'UDP-5404-5405'], 'protocol': ['tcp', 'udp'], 'action': 'accept', 'logging': 'true', 'comment': '""'}}]
>>> pol
['header {', ' target:: iptables INPUT', '}', '', 'term SSH-to-PVE {', ' action:: accept', ' comment:: ""', ' destination-address:: PVE-z501', ' destination-port:: SSH', ' logging:: true', ' protocol:: tcp', ' source-address:: pci-mgmt-z501', ' source-address:: test', '}', '', 'header {', ' target:: iptables INPUT', '}', '', 'term ICMP {', ' action:: accept', ' comment:: ""', ' logging:: true', ' protocol:: icmp', '}', '', 'header {', ' target:: iptables INPUT', '}', '', 'term Httpftp {', ' action:: accept', ' comment:: ""', ' destination-port:: FTP', ' destination-port:: Http-Https', ' logging:: true', ' protocol:: tcp', '}', '', 'header {', ' target:: iptables INPUT', '}', '', 'term test-to-http {', ' action:: accept', ' comment:: ""', ' destination-address:: PVE-z501', ' destination-address:: test', ' destination-port:: HTTP', ' destination-port:: HTTPS', ' logging:: true', ' protocol:: tcp', ' source-address:: pci-mgmt-z501', ' source-address:: test', '}', '', 'header {', ' target:: iptables INPUT', '}', '', 'term https {', ' action:: accept', ' comment:: ""', ' destination-port:: Http-Https', ' logging:: true', ' protocol:: tcp', '}', '', 'header {', ' target:: iptables INPUT', '}', '', 'term PVE_Cluster {', ' action:: accept', ' comment:: ""', ' destination-address:: PVE-z501', ' destination-port:: SSH', ' destination-port:: TCP-3128', ' destination-port:: TCP-5900-5999', ' destination-port:: TCP-8006', ' destination-port:: UDP-5404-5405', ' logging:: true', ' protocol:: tcp', ' protocol:: udp', ' source-address:: PVE-z501', '}', '']
PolicyDeivceM2MNestedSerializer should be PolicyDeviceM2MNestedSerializer.
Use pylint: disable=too-many-ancestors, not pylint: disable=R0201.
In 0002_custom_status, is the relative path to nautobot_firewall_models/migrations/services.yml valid in a production installation? Or should it be something like os.path.join(os.path.abspath(__file__), "nautobot_firewall_models/migrations/services.yml")?
Add help_text on all model fields to make them more self-explanatory. """FQDN model""", for example, isn't very informative/helpful.
Should FQDN.name allow up to 254 characters (assuming name is the fqdn value, which I'm not 100% sure on)?
Which models should have a slug field and which ones should not?
For models that have a slug, it could be preferable to have their absolute (detail) URL use the slug rather than the PK as the URL parameter.
Use object_detail.html instead of base.html, which should allow a significant reduction in boilerplate and allow the test_has_advanced_tab view tests to be enabled and pass.
"""I hate writing docs strings easter egg.""" - caught you! :-)
Use nautobot.extras.filters.NautobotFilterSet to reduce boilerplate.
A UUID should be usable in the API when adding a zone to a policy rule; currently the full zone payload is required.
Given a set of address objects for source/destination and/or a set of services, find any policies that apply to exactly these fields.
Trying to find if there is an existing policy that covers a new firewall request.
I have implemented something similar in a job I've been building. It takes in a variable called address_objects, whose values are explained below, and returns all PolicyRule objects that have exactly those sources and destinations in them. It currently looks like this:
from django.db.models import Count, Q

PolicyRule.objects.all().annotate(
    source_matches=Count("source_address", filter=Q(source_address__in=address_objects["source"])),
    destination_matches=Count(
        "destination_address", filter=Q(destination_address__in=address_objects["destination"])
    ),
).filter(
    source_matches=len(address_objects["source"]),
    destination_matches=len(address_objects["destination"]),
).filter(
    source_matches=Count("source_address"),
    destination_matches=Count("destination_address"),
)
where
address_objects = {"source": {AddressObject<10.0.0.0/24>}, "destination": {AddressObject<192.168.0.0/24>}}
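Outside the ORM, the exact-match condition those annotations encode is just set equality on sources and destinations. A pure-Python illustration with hypothetical rule tuples standing in for PolicyRule objects:

```python
# Hypothetical stand-in for PolicyRule: (name, source set, destination set).
rules = [
    ("web-in", {"10.0.0.0/24"}, {"192.168.0.0/24"}),
    ("web-in-wide", {"10.0.0.0/24", "10.1.0.0/24"}, {"192.168.0.0/24"}),
]

def exact_matches(rules, sources, destinations):
    """Return names of rules whose source/destination sets equal the request
    exactly -- the same condition the Count-based ORM query expresses."""
    return [name for name, src, dst in rules if src == sources and dst == destinations]

print(exact_matches(rules, {"10.0.0.0/24"}, {"192.168.0.0/24"}))  # ['web-in']
```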
There are currently both Assigned Devices and Assigned Dynamic Groups; I propose consolidating down to just dynamic groups. Dynamic groups will continue to be used in the Nautobot ecosystem, with more reliance on and knowledge of them moving forward.
There is a complication in that both have weights. Let's explore an issue:
Deny-Bogons is assigned to device=nyc-fw01 with weight 100 and to dynamic_group={site: nyc} with weight 1000.
Allow-Internet is applied to the device with weight 500.
Conceptually, this will not work, and while I understand that we can simply document "operator beware, don't do such a thing", it is still odd.
As a developer writing a job or building configuration management from the system, it is not clear what the intention should be: prefer assigned devices or dynamic groups?
As a developer, there is increased complication in determining, given a set of Policies, which Devices are actually in scope.
As a developer, there is increased complication in determining, given a set of Devices, which Policies are actually in scope.
As an alternative, I believe that assigned devices and dynamic groups should at a minimum be mutually exclusive, but I would prefer to consolidate down to dynamic groups.
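To make the ambiguity concrete, here is a hypothetical weight-resolution sketch: when a policy reaches a device through both an assigned device and a dynamic group, some tie-break rule must be chosen, and nothing in the current data model says which. The `prefer` parameter below is an invented knob, not anything the plugin provides:

```python
def effective_weight(device_weights, group_weights, prefer="device"):
    """Merge per-device and per-dynamic-group policy weights for one device.

    Hypothetical resolution: 'prefer' decides which assignment wins when a
    policy appears in both. The point of the issue is that the plugin leaves
    this conflict undefined today.
    """
    merged = dict(group_weights)
    if prefer == "device":
        merged.update(device_weights)
    else:
        for policy, weight in device_weights.items():
            merged.setdefault(policy, weight)
    return merged

# Deny-Bogons: weight 100 via the device, 1000 via the dynamic group.
weights = effective_weight({"Deny-Bogons": 100, "Allow-Internet": 500},
                           {"Deny-Bogons": 1000})
print(weights)  # {'Deny-Bogons': 100, 'Allow-Internet': 500}
```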
I'm using the OpenAPI docs (i.e., http://localhost:8080/api/docs/#/), and I'm unable to add policy_rules via the /api/plugins/firewall/policy/ endpoint using PUT, PATCH, or POST. I've verified that the policy and policy rule UUIDs are correct. The same operation works correctly via the web interface.
PUT request body to /api/plugins/firewall/policy/:
[
    {
        "id": "5b729dd5-daeb-418c-ba43-51f4acc76fc3",
        "name": "ngen-poc-switch-177",
        "policy_rules": [
            "f03e3ee9-0a84-4609-917d-5af6ca0765c6"
        ]
    }
]
Expected: the policy rules should be added.
Observed: no policy rules are added, and I get a 200 response as follows:
[
    {
        "id": "5b729dd5-daeb-418c-ba43-51f4acc76fc3",
        "display": "ngen-poc-switch-177",
        "tags": [],
        "status": {
            "value": "active",
            "label": "Active"
        },
        "relationships": {},
        "computed_fields": {},
        "custom_fields": {},
        "notes_url": "http://localhost:8080/api/plugins/firewall/policy/5b729dd5-daeb-418c-ba43-51f4acc76fc3/notes/",
        "url": "http://localhost:8080/api/plugins/firewall/policy/5b729dd5-daeb-418c-ba43-51f4acc76fc3/",
        "assigned_devices": [
            {
                "device": "828245e0-ea43-4860-a6b0-d32cc262470f",
                "weight": 100
            }
        ],
        "assigned_dynamic_groups": [],
        "created": "2023-01-08",
        "last_updated": "2023-01-08T23:09:10.969667Z",
        "_custom_field_data": {},
        "description": "",
        "name": "ngen-poc-switch-177",
        "tenant": null,
        "policy_rules": []
    }
]
The policy_rules field is always empty unless I add a policy rule via the web interface, which works properly.
Services (http, https, tcp/8080, etc.) are typically associated with a firewall rule, not with a source or destination address. In the current implementation the service is tied to the SourceDestination model. In my simple use cases it seems to make more sense for Service to be tied to the PolicyRule object.
The following is a pretty standard way to receive information for a firewall change request. The request's rules are defined as action/protocol/source/destination. Depending on the protocol (icmp, udp, tcp etc) there may be optional values for source or destination port or (in the case of icmp) type and code:
request_id: 12345
rules:
  - action: permit
    protocol: tcp
    src:
      addr: 192.168.10.1/32
    dst:
      addr: 192.168.12.0/28
      port: 80
  - action: permit
    protocol: tcp
    src:
      addr: 192.168.10.1/32
    dst:
      addr: 192.168.12.0/28
      port: 80
  - action: permit
    protocol: tcp
    src:
      addr: 192.168.10.1/32
    dst:
      addr: 192.168.12.0/28
      port: 80
It seems like there should be a Service object that is associated with the PolicyRule, not with the SourceDestination objects.
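The request format above maps naturally onto a rule-centric structure where the service (protocol plus optional port) lives on the rule itself. A sketch using hypothetical dataclasses, not the plugin's actual models:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Service:
    """Protocol plus optional port, owned by the rule rather than an endpoint."""
    protocol: str
    port: Optional[int] = None

@dataclass
class Rule:
    """One entry of a change request, modelled rule-centrically."""
    action: str
    src_addr: str
    dst_addr: str
    service: Service

# The first rule from the change request above.
rule = Rule(
    action="permit",
    src_addr="192.168.10.1/32",
    dst_addr="192.168.12.0/28",
    service=Service(protocol="tcp", port=80),
)
print(rule.service.port)  # 80
```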
Expected: the app_use_cases page.
Observed: a 404 Not Found page when clicking the "Using The App" link in the README.
It would be great to model NAT more extensibly within this plugin. The current NAT model and the incoming change to allow 1:many NAT within Nautobot core (nautobot/nautobot#630) should suffice for a lot of use cases, but they don't cover PAT. I discussed this with @lampwins, and a lot of the models required to model NAT/PAT more extensibly already exist within this plugin.
Model NAT/PAT to be able to track and configure on firewalls.
Add the ability to actively relate a policy to a device.
Creating a relationship can accomplish something similar, but I think it is reasonable to build policies for a specific firewall. In theory, once nautobot/nautobot#896 is resolved, this could additionally tie into that.
Expected: the ability to filter policy rules by their name, via either the GUI or the REST API.
Observed: the filter is not valid.
This plugin introduces a new ServiceObject; how does it compare to, or differ from, the existing ipam:service model? It looks like both have port and protocol defined. Would it be possible to reuse the existing model, with or without custom fields, and what is the trade-off if we do that?
class ServiceObject(BaseModel, ChangeLoggedModel):
    """ServiceObject model."""

    description = models.CharField(
        max_length=200,
        blank=True,
    )
    name = models.CharField(max_length=50)
    slug = models.SlugField(max_length=50, editable=False)
    port = models.IntegerField()
    ip_protocol = models.CharField(choices=choices.IP_PROTOCOL_CHOICES, null=True, blank=True, max_length=20)
class Service(PrimaryModel):
    """
    A Service represents a layer-four service (e.g. HTTP or SSH) running on a Device or VirtualMachine. A Service may
    optionally be tied to one or more specific IPAddresses belonging to its parent.
    """

    device = models.ForeignKey(
        to="dcim.Device",
        on_delete=models.CASCADE,
        related_name="services",
        verbose_name="device",
        null=True,
        blank=True,
    )
    virtual_machine = models.ForeignKey(
        to="virtualization.VirtualMachine",
        on_delete=models.CASCADE,
        related_name="services",
        null=True,
        blank=True,
    )
    name = models.CharField(max_length=100)
    protocol = models.CharField(max_length=50, choices=ServiceProtocolChoices)
    ports = JSONArrayField(
        base_field=models.PositiveIntegerField(
            validators=[
                MinValueValidator(SERVICE_PORT_MIN),
                MaxValueValidator(SERVICE_PORT_MAX),
            ]
        ),
        verbose_name="Port numbers",
    )
    ipaddresses = models.ManyToManyField(
        to="ipam.IPAddress",
        related_name="services",
        blank=True,
        verbose_name="IP addresses",
    )
    description = models.CharField(max_length=200, blank=True)
User experience
Add support for port lists
We should allow setting a port range in the Service model.
Some services are set up with a range of ports. For instance, the default port range for RPC on linux is 1024-5000. Some firewalls may handle that use case with some sort of state tracking or application layer gateway, but for simple ACLs this would be handled with a port range. Other port ranges may not have an ALG or state tracking.
If I'm understanding the current implementation correctly, a range of ports would require an individual Service object for each port in the sequence. It would be easier on the operator if the port field could accept ranges.
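A port field that accepts ranges could take strings like "1024-5000" as well as single ports. A hypothetical parser/validator sketch (a real model field would also need serializer and form support):

```python
def parse_port_range(value):
    """Parse '80' or '1024-5000' into an inclusive (start, end) tuple.

    Hypothetical helper illustrating the validation a range-aware port
    field would need; raises ValueError on out-of-range or inverted input.
    """
    parts = value.split("-", 1)
    start = int(parts[0])
    end = int(parts[1]) if len(parts) == 2 else start
    if not (1 <= start <= end <= 65535):
        raise ValueError(f"invalid port range: {value!r}")
    return (start, end)

print(parse_port_range("1024-5000"))  # (1024, 5000)
print(parse_port_range("80"))         # (80, 80)
```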
Would it be possible, and would it make sense, to consolidate the Source and Destination models into a single "endpoint" model or similar? In my experience, having too many models can make it harder to consume the information from the REST API or from GraphQL. I'm not very familiar with security, so sorry if that doesn't make sense.
User experience
Expected: create_status in 0002_custom_status.py only adds the models that need a status to the content types for the applicable Statuses. I'm not sure how big of a problem this is in practice, but I thought I'd report it. If this is intended, please let me know why so I can understand it better.
Observed: create_status in 0002_custom_status.py adds all the M2M through models to the content types for the applicable Statuses.
$ inv build start nbshell
>>> list(Status.objects.get(name="Active").content_types.all())
Settle on one plurality (my preference would be plural) for all many-to-many fields. For example, PolicyRule has its many-to-many field names in the singular, while Policy has them in the plural.
class PolicyRule(PrimaryModel):
    (...)
    source_user = models.ManyToManyField(to=UserObject, through="SrcUserM2M", related_name="policy_rules")
    source_user_group = models.ManyToManyField(
        to=UserObjectGroup, through="SrcUserGroupM2M", related_name="policy_rules"
    )
    source_address = models.ManyToManyField(to=AddressObject, through="SrcAddrM2M", related_name="source_policy_rules")
    source_address_group = models.ManyToManyField(
        to=AddressObjectGroup, through="SrcAddrGroupM2M", related_name="source_policy_rules"
    )
    source_zone = models.ForeignKey(
        to=Zone, null=True, blank=True, on_delete=models.SET_NULL, related_name="source_policy_rules"
    )
    destination_address = models.ManyToManyField(
        to=AddressObject, through="DestAddrM2M", related_name="destination_policy_rules"
    )
    destination_address_group = models.ManyToManyField(
        to=AddressObjectGroup, through="DestAddrGroupM2M", related_name="destination_policy_rules"
    )
    destination_zone = models.ForeignKey(
        to=Zone, on_delete=models.SET_NULL, null=True, blank=True, related_name="destination_policy_rules"
    )
    service = models.ManyToManyField(to=ServiceObject, through="SvcM2M", related_name="policy_rules")
    service_group = models.ManyToManyField(to=ServiceObjectGroup, through="SvcGroupM2M", related_name="policy_rules")


class Policy(PrimaryModel):
    (...)
    policy_rules = models.ManyToManyField(to=PolicyRule, through="PolicyRuleM2M", related_name="policies")
    assigned_devices = models.ManyToManyField(
        to="dcim.Device", through="PolicyDeviceM2M", related_name="firewall_policies"
    )
    assigned_dynamic_groups = models.ManyToManyField(
        to="extras.DynamicGroup", through="PolicyDynamicGroupM2M", related_name="firewall_policies"
    )
Greater level of intuition for field naming.
Expected: nautobot-server post_upgrade runs without problems.
Applying nautobot_firewall_models.0008_renaming_part3...
Traceback (most recent call last):
File "/opt/nautobot/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/opt/nautobot/lib/python3.9/site-packages/django/db/backends/mysql/base.py", line 73, in execute
return self.cursor.execute(query, args)
File "/opt/nautobot/lib/python3.9/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/opt/nautobot/lib/python3.9/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/opt/nautobot/lib/python3.9/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1553, "Cannot drop index 'unique_with_index': needed in a foreign key constraint")
nautobot-server post_upgrade
After manually lifting the foreign key constraint on the nautobot_firewall_models_policyrulem2m table, I was able to let the post_upgrade migration finish successfully. Sadly, I was not able to figure out how to rewrite the offending migration so this error would not occur. I can confirm, though, that installing this firewall plugin from scratch on MariaDB also leads to this error. So this is a breaking bug for users of MariaDB.
These were my manual SQL statements:
MariaDB [nautobot]> show create table nautobot_firewall_models_policyrulem2m;
Table: nautobot_firewall_models_policyrulem2m
Create Table: CREATE TABLE `nautobot_firewall_models_policyrulem2m` (
  `id` char(32) NOT NULL,
  `index` smallint(5) unsigned DEFAULT NULL CHECK (`index` >= 0),
  `policy_id` char(32) NOT NULL,
  `rule_id` char(32) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `unique_with_index` (`policy_id`,`rule_id`,`index`),
  KEY `nautobot_firewall_mo_rule_id_4fdd7827_fk_nautobot_` (`rule_id`),
  CONSTRAINT `nautobot_firewall_mo_policy_id_9591fe9b_fk_nautobot_` FOREIGN KEY (`policy_id`) REFERENCES `nautobot_firewall_models_policy` (`id`),
  CONSTRAINT `nautobot_firewall_mo_rule_id_4fdd7827_fk_nautobot_` FOREIGN KEY (`rule_id`) REFERENCES `nautobot_firewall_models_policyrule` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
1 row in set (0.001 sec)
MariaDB [nautobot]> alter table nautobot_firewall_models_policyrulem2m drop constraint nautobot_firewall_mo_policy_id_9591fe9b_fk_nautobot_;
Query OK, 0 rows affected (0.022 sec)
Records: 0 Duplicates: 0 Warnings: 0
As far as I understand, some models introduced by this plugin, such as IPRange and FQDN, are not security specific. Would it make sense to move them to the IPAM menu instead?
Thoughts?
User experience
Provide a more detailed single view of the policy. This is just a mock up, I tried to blur out parts that did not have realistic data.
Mind mapping such a large data model is difficult; IMHO it is a bit easier, and worth the effort to maintain, when everything is controlled in a centralized view.
When selecting policy rules for a given policy, instead of showing a blank selectable item, generate a default name (real-time, not to be saved to the database) to display.
Policy rules are very hard to select if they lack a name.
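As a sketch of what that real-time default label could look like (the field names here are illustrative, not the actual PolicyRule schema):

```python
def default_rule_name(rule):
    """Build a human-readable fallback label for a PolicyRule with no name.

    `rule` is assumed to be a dict with optional source/destination/service/
    action keys; a real implementation would read the model fields instead,
    and compute the label in the UI rather than persist it.
    """
    parts = [
        rule.get("source", "any"),
        "->",
        rule.get("destination", "any"),
        rule.get("service", "any"),
        rule.get("action", ""),
    ]
    return " ".join(p for p in parts if p)

# Example: an unnamed rule still gets a meaningful dropdown label.
label = default_rule_name({"source": "10.0.0.0/8", "service": "HTTPS", "action": "allow"})
```

The label is derived on the fly from the rule's own attributes, so nothing extra needs to be saved to the database.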
I think having Role and/or Type on as many objects as possible would be very useful
Of course it's not applicable to all of them
Being able to track the Role or the Type of a given object is a key part of the Source of Truth, and it is very helpful when we need to automate that resource.
Allow AddressObjectGroup to source its members from a dynamic group. This could, for example, be a new ForeignKey field on the model pointing to DynamicGroup.
All prefixes with the role user-lan
should have access to a set of services. Instead of manually updating the policy rule (or NAT policy rule) whenever there are changes, we could instead use dynamic groups to automatically accomplish that.
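A minimal sketch of the idea, with plain dicts standing in for Prefix records and for the DynamicGroup filter machinery:

```python
# Hypothetical sketch of resolving AddressObjectGroup members from a
# dynamic-group-style filter instead of a static member list. The
# `prefixes` records and the "role" filter key are illustrative stand-ins
# for the real Nautobot DynamicGroup implementation.

def resolve_members(prefixes, group_filter):
    """Return the prefixes matching every key/value pair in the filter."""
    return [
        p for p in prefixes
        if all(p.get(key) == value for key, value in group_filter.items())
    ]

prefixes = [
    {"prefix": "10.10.0.0/24", "role": "user-lan"},
    {"prefix": "10.20.0.0/24", "role": "server-lan"},
    {"prefix": "10.30.0.0/24", "role": "user-lan"},
]

# A group defined as "role == user-lan" picks up new matching prefixes
# automatically as they are created; no policy rule edit is needed.
members = resolve_members(prefixes, {"role": "user-lan"})
```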
Readthedocs documentation up to date
Docs in https://nautobot-plugin-firewall-models.readthedocs.io/en/latest/ look out of date
When you define a tenant group, you can select another object as parent. That makes it possible to nest these objects.
Service Group would need the same approach. This way the number of objects decreases, and different levels of nesting of Services could be implemented.
SERVICE1
SERVICE2
SERVICE3
SERVICE4
SERVICE5
SERVICE6
GROUP1 = SERVICE1+SERVICE2
GROUP2 = SERVICE3+SERVICE4
GROUP3 = SERVICE4+SERVICE5
GROUP4, parent from GROUP3 + SERVICE4
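The expansion of such nested groups can be sketched as a recursive flatten over the example above (plain Python; a real implementation would walk model relations instead of a dict):

```python
def flatten_group(name, groups):
    """Recursively expand a service group into its leaf services.

    `groups` maps group names to member lists; members that are not
    themselves group names are treated as leaf services. Duplicates
    (e.g. SERVICE4 reachable both via GROUP3 and directly) appear once.
    """
    services = []
    for member in groups.get(name, [name]):
        if member in groups:
            services.extend(flatten_group(member, groups))
        else:
            services.append(member)
    # De-duplicate while preserving order.
    return list(dict.fromkeys(services))

groups = {
    "GROUP1": ["SERVICE1", "SERVICE2"],
    "GROUP2": ["SERVICE3", "SERVICE4"],
    "GROUP3": ["SERVICE4", "SERVICE5"],
    "GROUP4": ["GROUP3", "SERVICE4"],  # GROUP4 has GROUP3 as parent
}

flat = flatten_group("GROUP4", groups)
```

A cycle check would also be needed in practice so that two groups referencing each other cannot recurse forever.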
I am not sure that Status is always required; it is an extra step, and I am not sure that every object needs that level of detail. Firewall models are complex to start with, so I am trying to think of how to make them easiest on the users.
Adding a lot of objects means always having to set a status on each one.
The ability to more closely align PolicyRules to the parent Policy. While it is clear that the current design is more flexible, that flexibility comes at the cost of complexity (in code, mental modeling, and UI) versus the ForeignKey approach.
7c914f7f703200909c6cd045e7e681b8dbf8967a
When using search on the models created by this plugin, it should work similarly to search on the core models.
Instead, the search doesn't seem to filter down the table.
For all the models I've tested, this plays out in a similar fashion.
NA
It looks like the ultimate purpose of this app is to model firewall policies in Nautobot, but that purpose should be stated explicitly.
The docs should have examples of how to use the app: how to create a firewall policy model using the different objects (Services, Users, Zones, etc.).
Given a query like
query ($device_id: ID!) {
device(id: $device_id) {
policies
}
}
I want all Policy objects to be returned for that given device.
Get all rules for a given device; the specific use case is using nautobot-golden-config to generate ACLs as part of the configuration.
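A sketch of issuing that query against Nautobot's standard GraphQL endpoint (`/api/graphql/`); whether `policies` resolves on `device` depends on this feature being implemented, and the device UUID below is just the one from the example:

```python
import json

def build_graphql_payload(device_id):
    """Assemble the GraphQL request body for the query from the report."""
    query = """
    query ($device_id: ID!) {
      device(id: $device_id) {
        policies
      }
    }
    """
    return {"query": query, "variables": {"device_id": device_id}}

payload = build_graphql_payload("d0562f91-37a6-413e-8b45-5756f6d784dc")
body = json.dumps(payload)
# This body would then be POSTed to https://<nautobot>/api/graphql/ with a
# "Authorization: Token <token>" header.
```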
The test data should generate properly, and the command should execute successfully.
Even with the container in a stopped state, I expect the necessary container(s) to be brought up and the test data populated.
I get the following error in my terminal:
(nautobot-firewall-models-py3.11) ➜ nautobot-plugin-firewall-models git:(main) ✗ invoke testdata
Running docker-compose command "ps --services --filter status=running"
Running docker-compose command "run --entrypoint 'nautobot-server create_test_firewall_data' nautobot"
[+] Running 2/0
⠿ Container nautobot_firewall_models-db-1 Created 0.0s
⠿ Container nautobot_firewall_models-redis-1 Created 0.0s
[+] Running 2/2
⠿ Container nautobot_firewall_models-redis-1 Started 0.4s
⠿ Container nautobot_firewall_models-db-1 Started 0.4s
usage: nautobot-server create_test_firewall_data [-h] [--version] [-v {0,1,2,3}] [--settings SETTINGS] [--pythonpath PYTHONPATH] [--traceback] [--no-color]
[--force-color] [--skip-checks]
nautobot-server create_test_firewall_data: error: unrecognized arguments: nautobot-server runserver 0.0.0.0:8080
invoke build
invoke testdata
invoke start
invoke testdata
command again. This time it should succeed.
A way to validate that there is no duplication/shadowing going on in a given policy.
Users can validate that their policy is built properly.
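One way such a validation job could flag shadowed rules, sketched with Python sets standing in for the rule's address/service relations:

```python
def is_shadowed(earlier, later):
    """True if `earlier` already matches everything `later` matches.

    Rules are modeled as dicts of sets for illustration only; the real
    models use AddressObject/ServiceObject relations, so this is a sketch
    of the comparison a validation job could perform, not actual plugin code.
    """
    return (
        earlier["src"] >= later["src"]
        and earlier["dst"] >= later["dst"]
        and earlier["svc"] >= later["svc"]
    )

def find_shadowed(rules):
    """Return indexes of rules fully covered by an earlier rule."""
    shadowed = []
    for i, later in enumerate(rules):
        if any(is_shadowed(earlier, later) for earlier in rules[:i]):
            shadowed.append(i)
    return shadowed

rules = [
    {"src": {"any"}, "dst": {"web"}, "svc": {"HTTP", "HTTPS"}},
    {"src": {"any"}, "dst": {"web"}, "svc": {"HTTPS"}},  # shadowed by rule 0
]
```

A fuller check would also compare actions (an `allow` does not shadow a `deny`) and expand address objects to prefixes before comparing.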
Able to send a list of policy-rules to a policy via the REST API so they can be attached.
The API accepts the call; however, the policy-rules aren't attached to the policy.
A POST to http://localhost:8080/api/plugins/firewall/policy/
with this payload is accepted but policy-rules aren't attached:
{
    "name": "mypolicy",
    "description": "restrictive",
    "status": "active",
    "assigned_devices": [
        {
            "device": "d0562f91-37a6-413e-8b45-5756f6d784dc",
            "weight": 100
        }
    ],
    "policy_rules": [
        "58099d8e-badc-49b4-81d6-685db4c3186a"
    ]
}
A PATCH to http://localhost:8080/api/plugins/firewall/policy/859ff981-52bc-4980-a209-1243d2212ac2/
with this payload is accepted but policy-rules aren't attached:
{
    "policy_rules": [
        "58099d8e-badc-49b4-81d6-685db4c3186a"
    ]
}
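For reference, a sketch of building and serializing that PATCH body in Python (the UUID is the one from the report; per the issue, the call is accepted but currently has no effect, so this only shows the intended usage):

```python
import json

def build_patch(policy_rule_ids):
    """Assemble the PATCH body that should replace a policy's rules."""
    return {"policy_rules": list(policy_rule_ids)}

payload = build_patch(["58099d8e-badc-49b4-81d6-685db4c3186a"])
body = json.dumps(payload)
# e.g. requests.patch(f"{base}/api/plugins/firewall/policy/{policy_id}/",
#                     headers={"Authorization": f"Token {token}"},
#                     json=payload)
```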
Add a field/bit of information on the PolicyRule (and possibly NATPolicyRule) model so we can specify IPv4 vs IPv6 for rules.
A rule for which the intent is:
All source IP addresses to all destination IP addresses on any port, but only IPv4
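A sketch of the family check such a field could drive, using only the stdlib `ipaddress` module (the field itself and the surrounding model integration are hypothetical):

```python
import ipaddress

def matches_ip_version(address, version):
    """Return True if `address` (prefix or host) belongs to the IP version.

    A rule marked IPv4-only would use this to keep IPv6 address objects
    out of its rendered match criteria, and vice versa.
    """
    network = ipaddress.ip_network(address, strict=False)
    return network.version == version

# "Any source to any destination, but only IPv4": filter mixed objects.
v4_only = [a for a in ["10.0.0.0/8", "2001:db8::/32", "192.0.2.1"]
           if matches_ip_version(a, 4)]
```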
Similar to the core models, add a clone button into the models detail page where applicable.
Save time when creating multiple, similar model instances and resemble the core models more closely to provide a unified experience across plugin and core.
As much as possible, I think we should add a status field to all models
Of course it's not applicable to all of them but at a minimum I imagine we could add that to :
Being able to track the status of a given object is a key part of the Source of Truth, and it is very helpful when we need to automate that resource.
Currently only 3 extras features have been configured on most models.
Whenever possible, I think we should also consider enabling export_templates.
Consistency, user experience
Custom Fields and similar fields should be displayed in forms.
Relationships are displayed, but config context, custom fields, and computed fields are not.
Review each dropdown and prefer DynamicModelChoiceField and DynamicModelMultipleChoiceField. These will soon enough become large dropdowns, and type-ahead will be required.
Type ahead on larger models.
Add the ability to generate configuration by integrating with Capirca. I have a working demo, and it works pretty slick.
Leveraging Capirca to generate configuration will allow for quick multi-vendor support for generating policy from the model.
Create convenience methods for Policy -> all Devices and Device -> all Policies.
Make it easier for downstream jobs to access the appropriate data.
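A sketch of the two accessors with plain classes standing in for the Django models; on the real models these would likely be properties or queryset helpers walking the policy-to-device through-table:

```python
class Device:
    """Stand-in for the Nautobot Device model."""
    def __init__(self, name):
        self.name = name

class Policy:
    """Stand-in for the firewall Policy model."""
    def __init__(self, name):
        self.name = name
        self.assignments = []  # (device, weight) pairs

    def assign(self, device, weight=100):
        self.assignments.append((device, weight))

    @property
    def devices(self):
        """Policy -> all assigned devices."""
        return [device for device, _ in self.assignments]

def policies_for_device(device, policies):
    """Device -> all policies, ordered by assignment weight."""
    hits = [(w, p) for p in policies for d, w in p.assignments if d is device]
    return [p for _, p in sorted(hits, key=lambda pair: pair[0])]

fw1 = Device("fw1")
edge = Policy("edge"); edge.assign(fw1, weight=200)
base = Policy("base"); base.assign(fw1, weight=100)
```

A downstream job (e.g. config generation) can then iterate `policies_for_device(...)` in weight order without re-implementing the join.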
Following the Nautobot naming convention, the filterset should be named `<ModelName>FilterSet`.
It's especially important for GraphQL: when we use @extras_features, if the name of the filterset is not correct, the GraphQL type generated by @extras_features will miss the filtering options.
extras_features for GraphQL leverages the function get_filterset_for_model from nautobot/utilities/utils.
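To illustrate why the name matters, here is a toy registry in the spirit of `get_filterset_for_model` (not Nautobot's actual implementation, which imports from the app's `filters` module):

```python
class Policy:
    """Stand-in model class."""

class PolicyFilterSet:
    """Filterset that follows the '<ModelName>FilterSet' convention."""

# The lookup is purely string-based: a filterset named anything else
# (e.g. PolicyFilter) would simply never be found.
REGISTRY = {cls.__name__: cls for cls in (PolicyFilterSet,)}

def get_filterset_for_model(model):
    """Return the filterset class named '<ModelName>FilterSet', if any."""
    return REGISTRY.get(f"{model.__name__}FilterSet")

found = get_filterset_for_model(Policy)
```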
no filtering option available in GraphQL