
cookiecutter-nautobot-app's Introduction

Nautobot

Nautobot is a Network Source of Truth and Network Automation Platform built as a web application atop the Django Python framework with a PostgreSQL or MySQL database.

Key Use Cases

1. Flexible Source of Truth for Networking - Nautobot core data models define the intended state of network infrastructure, enabling it to serve as a Source of Truth. While a baseline set of models is provided (such as IP networks and addresses, devices and racks, circuits and cables, etc.), Nautobot's goal is to offer maximum data-model flexibility. This is enabled through features such as user-defined relationships, custom fields on any model, and data validation that lets users codify everything from naming standards to automated tests that must pass before data can be populated into Nautobot.

2. Extensible Data Platform for Automation - Nautobot has a rich feature set to seamlessly integrate with network automation solutions. Nautobot offers GraphQL and native Git integration along with REST APIs and webhooks. Git integration dynamically loads YAML data files as Nautobot config contexts. Nautobot also has an evolving plugin system that enables users to create custom models, APIs, and UI elements. The plugin system is also used to unify and aggregate disparate data sources creating a Single Source of Truth to streamline data management for network automation.

3. Platform for Network Automation Apps - The Nautobot plugin system enables users to create Network Automation Apps. Apps can be as lightweight or robust as needed. Building custom applications on Nautobot saves up to 70% of development time by re-using features such as authentication, permissions, webhooks, GraphQL, and change logging, all while having access to the data already stored in Nautobot. Some production-ready applications are shown in the Plugin Screenshots section below.

The complete documentation for Nautobot can be found at Read the Docs.

Questions? Comments? Start by perusing our GitHub discussions for the topic you have in mind, or join the #nautobot channel on Network to Code's Slack community!

Build Status

Build-status badges are published for the main, develop, and next branches.

Screenshots

  • Gif of main page
  • Gif of config contexts
  • Gif of prefix hierarchy
  • Gif of GraphQL
  • Gif of Modes

Installation

Please see the documentation for instructions on installing Nautobot.

Application Stack

Below is a simplified overview of the Nautobot application stack for reference:

Application stack diagram

Plugins and Extensibility

Nautobot offers the ability to customize your setup to better align with your business needs. It does so through various plugins developed for network automation, designed to be used wherever they are needed.

There are many plugins available within the Nautobot Apps ecosystem. The screenshots below show some popular ones that are currently available.

Plugin Screenshots

Golden Config Plugin

Gif of golden config

ChatOps Plugin

Gif of chatops

Device Lifecycle Management Plugin

Gif of DLM

Providing Feedback

The best platform for general feedback, assistance, and other discussion is our GitHub discussions. To report a bug or request a specific feature, please open a GitHub issue using the appropriate template.

If you are interested in contributing to the development of Nautobot, please read our contributing guide prior to beginning any work.

Related projects

Please check out the GitHub nautobot topic for a list of relevant community projects.

Notices

Nautobot was initially developed as a fork of NetBox (v2.10.4). NetBox was originally developed by Jeremy Stretch at DigitalOcean and the NetBox Community.

cookiecutter-nautobot-app's People

Contributors

bryanculver, cmsirbu, glennmatthews, jdrew82, jvanderaa, snaselj, whitej6


cookiecutter-nautobot-app's Issues

invoke towncrier

Environment

  • cookiecutter-nautobot-app version: N/A

Proposed Functionality

Create an invoke towncrier command that calls the respective GitHub project, guesses the next towncrier fragment number, and creates the file for you, e.g. 1234.xxxx, or 1234.fixed if you passed the --fixed flag.

Use Case

Make it easier to determine the towncrier fragment number and ideally save on GitHub Actions runs.
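
A minimal sketch of such a task, assuming the requests library is available in the dev environment, that fragments live in a changes/ directory, and with the repository URL as a placeholder; the issue's --fixed flag is generalized here to a --kind argument:

from invoke import task
import requests  # assumed to be available in the dev environment

@task(help={"kind": "Fragment type, e.g. 'added' or 'fixed'."})
def towncrier(context, kind="added"):
    """Guess the next issue/PR number from GitHub and create a towncrier fragment."""
    # The issues endpoint also returns pull requests, so the newest item's
    # number + 1 is a reasonable guess for the next fragment number.
    response = requests.get(
        "https://api.github.com/repos/nautobot/cookiecutter-nautobot-app/issues",  # placeholder repo
        params={"state": "all", "sort": "created", "direction": "desc", "per_page": 1},
        timeout=10,
    )
    response.raise_for_status()
    next_number = response.json()[0]["number"] + 1
    fragment = f"changes/{next_number}.{kind}"  # assumed fragment directory
    with open(fragment, "w", encoding="utf-8") as handle:
        handle.write("Describe the change here.\n")
    print(f"Created {fragment}")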

`invoke import-db` not populating postgres DB

Environment

  • Python version: 3.11
  • cookiecutter-nautobot-app template version: 1.2

When trying to use invoke import-db to import a dump.sql to a postgres DB without making any specific modifications to default cookie configurations, it does not populate. This has been tested with a dump.sql that is known to work & populate in the same environment when done manually using docker exec -it.

Observed Behavior

1st Issue: Initially, no activity seems to take place in the postgres DB at all when looking at the logs. I believe this is because the command exec -- db sh -c 'psql --username=$POSTGRES_USER postgres' < 'dump.sql' targets a DB named postgres, which is not the default DB name. When modifying this to $POSTGRES_DB, the execution gets further.
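
A sketch of the corrected command string, assuming the compose service is still named db:

# tasks.py (sketch): target the configured $POSTGRES_DB instead of the hardcoded "postgres"
import_command = "exec -- db sh -c 'psql --username=$POSTGRES_USER $POSTGRES_DB' < 'dump.sql'"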

2nd Issue: As the command executes and imports the data, there are numerous errors regarding objects already existing and foreign key constraint violations. Example errors:

  • ERROR: relation "tenancy_tenant_group_id_7daef6f4" already exists
  • ERROR: constraint "dcim_cable_status_id_6a580869_fk_extras_status_id" for relation "dcim_cable" already exists
  • ERROR: insert or update on table "nautobot_device_lifecycle_mgmt_hardwarelcm" violates foreign key constraint "nautobot_device_life_device_type_id_c15e8265_fk_dcim_devi"

Note that this occurs with a fresh invoke build start, not with some existing Nautobot instance with data. This might be due to the fact that the existing database volume is not removed prior to this command running.

Expected Behavior

The data in dump.sql is successfully imported and loaded into my Nautobot DB.

Steps to Reproduce

  1. invoke build start
  2. invoke import-db --input-file="/home/steven/coding/data_nautobot/nautobot.sql"
  3. invoke start

`inv import-db` does not shut down `nautobot` service before importing

Environment

  • Python version: 3.11
  • cookiecutter-nautobot-app template version: Latest
  • database: psql

Observed Behavior

If the nautobot service is running and you run inv import-db, psql raises an error due to an active connection from the nautobot service.

Expected Behavior

All services are stopped prior to import and import is successful

Steps to Reproduce

  1. inv start
  2. inv import-db
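
A minimal sketch of the expected behavior, reusing the service names shown elsewhere in this repository (nautobot, worker, beat); the real task has more options:

from invoke import task

@task
def import_db(context, input_file="dump.sql"):
    """Stop the services that hold database connections, then import the dump."""
    context.run("docker compose stop nautobot worker beat")
    # -T disables the pseudo-TTY so stdin redirection works non-interactively.
    context.run(
        f"docker compose exec -T db sh -c 'psql --username=$POSTGRES_USER $POSTGRES_DB' < '{input_file}'"
    )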

Additional Enhancements

Environment

  • cookiecutter-nautobot-app version:

Proposed Functionality

  • Flake8 config into pyproject.toml
  • Alternative CI templates (GitLab, Jenkins)
  • Is pytest even working?
  • Better development documentation

Use Case

Docs container errors out from freshly baked cookie

Environment

  • Python version: 3.11
  • cookiecutter-nautobot-app template version: 2.0.0

Observed Behavior

docs-1      | DEBUG   -  Running 1 `page_content` events
docs-1      | DEBUG   -  Reading: dev/dev_environment.md
docs-1      | DEBUG   -  Running 2 `page_markdown` events
docs-1      | DEBUG   -  Running 1 `page_content` events
docs-1      | DEBUG   -  Reading: dev/extending.md
docs-1      | DEBUG   -  Running 2 `page_markdown` events
docs-1      | DEBUG   -  Running 1 `page_content` events
docs-1      | DEBUG   -  Reading: dev/code_reference/index.md
docs-1      | DEBUG   -  Running 2 `page_markdown` events
docs-1      | DEBUG   -  Running 1 `page_content` events
docs-1      | DEBUG   -  Reading: dev/code_reference/api.md
docs-1      | DEBUG   -  Running 2 `page_markdown` events
docs-1      | DEBUG   -  mkdocstrings: Matched '::: nautobot_ip_acls.api'
docs-1      | DEBUG   -  mkdocstrings: Using handler 'python'
docs-1      | DEBUG   -  mkdocstrings: Collecting data
docs-1      | INFO    -  DeprecationWarning: The `load_module` method was renamed `load`, and is deprecated.
docs-1      |   File "/usr/local/lib/python3.11/site-packages/mkdocstrings_handlers/python/handler.py", line 280, in collect
docs-1      |     loader.load_module(module_name)
docs-1      |   File "/usr/local/lib/python3.11/site-packages/griffe/loader.py", line 105, in load_module
docs-1      |     warnings.warn(
docs-1      | 
docs-1      | DEBUG   -  griffe: Found nautobot_ip_acls: loading
docs-1      | DEBUG   -  griffe: Loading path /source/nautobot_ip_acls/__init__.py
docs-1      | DEBUG   -  griffe: Loading path /source/nautobot_ip_acls/tests/__init__.py
docs-1      | DEBUG   -  griffe: Loading path /source/nautobot_ip_acls/tests/test_basic.py
docs-1      | DEBUG   -  griffe: Loading path /source/nautobot_ip_acls/tests/test_api.py
docs-1      | DEBUG   -  griffe: Iteration 1 finished, 0 aliases resolved, still 0 to go
docs-1      | ERROR   -  mkdocstrings: nautobot_ip_acls.api could not be found
docs-1      | ERROR   -  Error reading page 'dev/code_reference/api.md':
docs-1      | ERROR   -  Could not collect 'nautobot_ip_acls.api'
docs-1      | 
docs-1      | Aborted with a BuildError!
docs-1 exited with code 1

Expected Behavior

All containers start successfully.

Steps to Reproduce

  1. Bake cookie with no pre-defined model.
  2. poetry lock
  3. invoke build
  4. cp development/creds.example.env development/creds.env
  5. invoke debug

Add ability to run upstream testing github actions workflow manually

Environment

  • cookiecutter-nautobot-app version: 2.2.1? Why is a version required for a feature request?

Proposed Functionality

We need to add workflow_dispatch to the on: key of nautobot-app/{{ cookiecutter.project_slug }}/.github/workflows/upstream_testing.yml so that we can manually invoke this testing when preparing a new release of Nautobot core.

@glennmatthews thoughts on adding a branch argument to this workflow so we can specify which branch/tag to test against? We could have core's release workflow call this workflow with the release tag, so that before releasing a new minor version of Nautobot we would publish a release candidate to initiate the upstream testing.

Use Case

We'd like to manually invoke upstream testing before releasing a new version of Nautobot core to verify the apps won't be broken by any changes introduced in the new version.

Unable to Use Environment to Control Nautobot Version

Environment

  • Python version: 3.11
  • cookiecutter-nautobot-app template version: Beta

Observed Behavior

When I set NAUTOBOT_VER to 2.1 on the CLI and in creds.env and development.env, the build consistently picked up 2.0 from the Dockerfile instead. It is unclear where this should be set.

Expected Behavior

Expected to get Nautobot 2.1

Steps to Reproduce

  1. Update NAUTOBOT_VER environment variable on the system, in the .env files
  2. Run invoke build
  3. Run invoke debug
  4. Log into the UI
  5. Check the version on the bottom left
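
One plausible wiring, sketched here as an assumption rather than the template's actual code, is for tasks.py to read NAUTOBOT_VER from the environment and pass it to the image build, so that the environment value overrides the Dockerfile default:

import os

from invoke import task

@task
def build(context):
    """Build the dev image, letting NAUTOBOT_VER from the environment override the Dockerfile default."""
    nautobot_ver = os.getenv("NAUTOBOT_VER", "2.0")  # fallback mirrors the Dockerfile default
    context.run(f"docker compose build --build-arg NAUTOBOT_VER={nautobot_ver}")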

(tasks.py) Shell redirection from the local shell to docker-compose is not working in my dev environment

Environment

  • Python version: n/a
  • cookiecutter-nautobot-app template version: 2.2.0

Steps to reproduce

  1. Be running fedora and/or podman?
  2. Try to use one of the invoke commands that redirects a file into docker-compose exec

Observed Behavior

invoke generate_app_config_schema and invoke validate_app_config_schema are not working for me because when the shell tries to redirect that file from the local shell to the spawned shell it gets out of sync and the code isn't properly loaded.

Expected Behavior

I expected it to work. Running the shell redirection entirely in the spawned shell fixed it for me:

diff --git a/tasks.py b/tasks.py
index 6c08350..4823705 100644
--- a/tasks.py
+++ b/tasks.py
@@ -352,8 +352,9 @@ def nbshell(context, file="", env={}, plain=False):
         "nautobot-server",
         "nbshell",
         "--plain" if plain else "",
-        f"< '{file}'" if file else "",
     ]
+    if file:
+        command = ["sh", "-c", f"\"{' '.join(command)} < '{file}'\""]
     run_command(context, " ".join(command), pty=not bool(file), command_env=env)
 

`invoke stop` removes more than expected

Environment

  • Python version: 3.11
  • cookiecutter-nautobot-app template version: 1.2

Currently, invoke stop without any specified services produces the command docker compose down --remove-orphans. This removes all containers.

The invoke commands probably shouldn't deviate from what the underlying docker compose commands actually do (as defined at https://docs.docker.com/engine/reference/commandline/compose_stop/); otherwise they can produce outcomes the user didn't intend.

Observed Behavior

(nautobot-data-validation-engine-py3.10) steven@NTC-LAT7320:~/coding/nautobot-plugin-data-validation-engine$ docker container list -a
CONTAINER ID   IMAGE                                                   COMMAND                  CREATED              STATUS                          PORTS                    NAMES
220bdc828075   nautobot-data-validation-engine/nautobot:2.0.0-py3.11   "sh -c 'watchmedo au…"   About a minute ago   Up About a minute (healthy)     8080/tcp                 nautobot-data-validation-engine-worker-1
cfd7614dc1c7   nautobot-data-validation-engine/nautobot:2.0.0-py3.11   "sh -c 'nautobot-ser…"   About a minute ago   Exited (1) About a minute ago                            nautobot-data-validation-engine-beat-1
24bdac6e9868   nautobot-data-validation-engine/nautobot:2.0.0-py3.11   "/docker-entrypoint.…"   About a minute ago   Up About a minute (healthy)     0.0.0.0:8080->8080/tcp   nautobot-data-validation-engine-nautobot-1
ad45d01d95fc   nautobot-data-validation-engine/nautobot:2.0.0-py3.11   "mkdocs serve -v -a …"   About a minute ago   Up About a minute               0.0.0.0:8001->8080/tcp   nautobot-data-validation-engine-docs-1
0033dfa12953   postgres:13-alpine                                      "docker-entrypoint.s…"   About a minute ago   Up About a minute (healthy)     5432/tcp                 nautobot-data-validation-engine-db-1
65f086f1bb22   redis:6-alpine                                          "docker-entrypoint.s…"   About a minute ago   Up About a minute               6379/tcp                 nautobot-data-validation-engine-redis-1
(nautobot-data-validation-engine-py3.10) steven@NTC-LAT7320:~/coding/nautobot-plugin-data-validation-engine$ invoke stop
Stopping Nautobot...
Running docker compose command "down --remove-orphans"
 Container nautobot-data-validation-engine-beat-1  Stopping
 Container nautobot-data-validation-engine-worker-1  Stopping
 Container nautobot-data-validation-engine-docs-1  Stopping
 Container nautobot-data-validation-engine-beat-1  Stopped
 Container nautobot-data-validation-engine-beat-1  Removing
 Container nautobot-data-validation-engine-beat-1  Removed
 Container nautobot-data-validation-engine-docs-1  Stopped
 Container nautobot-data-validation-engine-docs-1  Removing
 Container nautobot-data-validation-engine-docs-1  Removed
 Container nautobot-data-validation-engine-worker-1  Stopped
 Container nautobot-data-validation-engine-worker-1  Removing
 Container nautobot-data-validation-engine-worker-1  Removed
 Container nautobot-data-validation-engine-nautobot-1  Stopping
 Container nautobot-data-validation-engine-nautobot-1  Stopped
 Container nautobot-data-validation-engine-nautobot-1  Removing
 Container nautobot-data-validation-engine-nautobot-1  Removed
 Container nautobot-data-validation-engine-redis-1  Stopping
 Container nautobot-data-validation-engine-db-1  Stopping
 Container nautobot-data-validation-engine-redis-1  Stopped
 Container nautobot-data-validation-engine-redis-1  Removing
 Container nautobot-data-validation-engine-redis-1  Removed
 Container nautobot-data-validation-engine-db-1  Stopped
 Container nautobot-data-validation-engine-db-1  Removing
 Container nautobot-data-validation-engine-db-1  Removed
 Network nautobot-data-validation-engine_default  Removing
 Network nautobot-data-validation-engine_default  Removed
(nautobot-data-validation-engine-py3.10) steven@NTC-LAT7320:~/coding/nautobot-plugin-data-validation-engine$ docker container list -all
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
(nautobot-data-validation-engine-py3.10) steven@NTC-LAT7320:~/coding/nautobot-plugin-data-validation-engine$ 

Expected Behavior

The invoke stop command only stops the running containers; it does not remove them (see the sketch after the steps below).

Steps to Reproduce

  1. invoke build start
  2. invoke stop
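
A sketch of a stop task with the expected semantics (hypothetical; the real task accepts more options):

from invoke import task

@task
def stop(context, service=""):
    """Stop running containers without removing them."""
    # "docker compose stop" halts containers but keeps them and the network,
    # unlike "docker compose down --remove-orphans", which removes everything.
    context.run(f"docker compose stop {service}".strip())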

Add `invoke backup-media` and `invoke import-media` commands

Environment

  • cookiecutter-nautobot-app version:

Proposed Functionality

Add the following commands to tasks.py:

  • invoke backup-media - backup all files from /opt/nautobot/media to media.tgz
  • invoke import-media - Restore all files from media.tgz to /opt/nautobot/media

Use Case

Backing up only the database is not enough to transfer data. The media files that will be referenced should also be available in a restored project.
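
A rough sketch of the two proposed tasks, assuming the compose service is named nautobot and tar is available inside the container:

from invoke import task

@task
def backup_media(context):
    """Archive /opt/nautobot/media from the container into media.tgz on the host."""
    # -T disables the pseudo-TTY so the binary tar stream isn't mangled.
    context.run("docker compose exec -T nautobot tar -czf - -C /opt/nautobot media > media.tgz")

@task
def import_media(context):
    """Restore media.tgz from the host into /opt/nautobot/media in the container."""
    context.run("docker compose exec -T nautobot tar -xzf - -C /opt/nautobot < media.tgz")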

healthcheck needed for nautobot container to reduce startup noise

Environment

Observed Behavior

When first launching a newly baked nautobot-app cookie with invoke debug, there is a tremendous amount of console noise resulting from the beat container trying to launch Celery Beat before the requisite database tables have been created by the initial migrations. An excerpt:

nautobot-ip-acls-beat-1      | [2023-12-14 20:46:17,715: CRITICAL/MainProcess] beat raised exception <class 'django.db.utils.ProgrammingError'>: ProgrammingError('column extras_scheduledjob.celery_kwargs does not exist\nLINE 1: ...duledjob"."args", "extras_scheduledjob"."kwargs", "extras_sc...\n                                                             ^\n')
nautobot-ip-acls-beat-1      | Traceback (most recent call last):
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
nautobot-ip-acls-beat-1      |     return self.cursor.execute(sql, params)
nautobot-ip-acls-beat-1      |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django_prometheus/db/common.py", line 69, in execute
nautobot-ip-acls-beat-1      |     return super().execute(*args, **kwargs)
nautobot-ip-acls-beat-1      |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      | psycopg2.errors.UndefinedColumn: column extras_scheduledjob.celery_kwargs does not exist
nautobot-ip-acls-beat-1      | LINE 1: ...duledjob"."args", "extras_scheduledjob"."kwargs", "extras_sc...
nautobot-ip-acls-beat-1      |                                                              ^
nautobot-ip-acls-beat-1      | 
nautobot-ip-acls-beat-1      | 
nautobot-ip-acls-beat-1      | The above exception was the direct cause of the following exception:
nautobot-ip-acls-beat-1      | 
nautobot-ip-acls-beat-1      | Traceback (most recent call last):
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/celery/apps/beat.py", line 113, in start_scheduler
nautobot-ip-acls-beat-1      |     service.start()
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/celery/beat.py", line 634, in start
nautobot-ip-acls-beat-1      |     humanize_seconds(self.scheduler.max_interval))
nautobot-ip-acls-beat-1      |                      ^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/kombu/utils/objects.py", line 40, in __get__
nautobot-ip-acls-beat-1      |     return super().__get__(instance, owner)
nautobot-ip-acls-beat-1      |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/functools.py", line 1001, in __get__
nautobot-ip-acls-beat-1      |     val = self.func(instance)
nautobot-ip-acls-beat-1      |           ^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/celery/beat.py", line 677, in scheduler
nautobot-ip-acls-beat-1      |     return self.get_scheduler()
nautobot-ip-acls-beat-1      |            ^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/celery/beat.py", line 668, in get_scheduler
nautobot-ip-acls-beat-1      |     return symbol_by_name(self.scheduler_cls, aliases=aliases)(
nautobot-ip-acls-beat-1      |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django_celery_beat/schedulers.py", line 233, in __init__
nautobot-ip-acls-beat-1      |     Scheduler.__init__(self, *args, **kwargs)
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/celery/beat.py", line 264, in __init__
nautobot-ip-acls-beat-1      |     self.setup_schedule()
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django_celery_beat/schedulers.py", line 241, in setup_schedule
nautobot-ip-acls-beat-1      |     self.install_default_entries(self.schedule)
nautobot-ip-acls-beat-1      |                                  ^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django_celery_beat/schedulers.py", line 363, in schedule
nautobot-ip-acls-beat-1      |     self._schedule = self.all_as_schedule()
nautobot-ip-acls-beat-1      |                      ^^^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django_celery_beat/schedulers.py", line 247, in all_as_schedule
nautobot-ip-acls-beat-1      |     for model in self.Model.objects.enabled():
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 280, in __iter__
nautobot-ip-acls-beat-1      |     self._fetch_all()
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 1324, in _fetch_all
nautobot-ip-acls-beat-1      |     self._result_cache = list(self._iterable_class(self))
nautobot-ip-acls-beat-1      |                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 51, in __iter__
nautobot-ip-acls-beat-1      |     results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
nautobot-ip-acls-beat-1      |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1175, in execute_sql
nautobot-ip-acls-beat-1      |     cursor.execute(sql, params)
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 98, in execute
nautobot-ip-acls-beat-1      |     return super().execute(sql, params)
nautobot-ip-acls-beat-1      |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 66, in execute
nautobot-ip-acls-beat-1      |     return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
nautobot-ip-acls-beat-1      |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
nautobot-ip-acls-beat-1      |     return executor(sql, params, many, context)
nautobot-ip-acls-beat-1      |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 79, in _execute
nautobot-ip-acls-beat-1      |     with self.db.wrap_database_errors:
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django/db/utils.py", line 90, in __exit__
nautobot-ip-acls-beat-1      |     raise dj_exc_value.with_traceback(traceback) from exc_value
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
nautobot-ip-acls-beat-1      |     return self.cursor.execute(sql, params)
nautobot-ip-acls-beat-1      |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      |   File "/usr/local/lib/python3.11/site-packages/django_prometheus/db/common.py", line 69, in execute
nautobot-ip-acls-beat-1      |     return super().execute(*args, **kwargs)
nautobot-ip-acls-beat-1      |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
nautobot-ip-acls-beat-1      | django.db.utils.ProgrammingError: column extras_scheduledjob.celery_kwargs does not exist
nautobot-ip-acls-beat-1      | LINE 1: ...duledjob"."args", "extras_scheduledjob"."kwargs", "extras_sc...
nautobot-ip-acls-beat-1      |                                                              ^
nautobot-ip-acls-beat-1      | 

I believe this is because even though docker-compose.base.yml defines beat with a depends_on: nautobot: condition: service_healthy, we don't have any healthcheck defined for the nautobot container. Enabling a health check for nautobot that relies on the server actually having completed migrations should fix this issue.

Expected Behavior

All containers start up in the appropriate order with a minimum of errors.

Steps to Reproduce

Align Jinja2 Templates

Environment

  • cookiecutter-nautobot-app version: pre-1.2

Proposed Functionality

Align the Jinja2 templates: either use spaces within the braces or not. An example:

$ cd nautobot-app
nautobot-app$ ls
 README.md   cookiecutter.json   hooks   tests  '{{ cookiecutter.project_slug }}'
nautobot-app$ cd \{\{\ cookiecutter.project_slug\ \}\}/
nautobot-app/{{ cookiecutter.project_slug }}$ ls
LICENSE  README.md  development  docs  invoke.example.yml  invoke.mysql.yml  mkdocs.yml  pyproject.toml  tasks.py  {{cookiecutter.plugin_name}}

Apps CI Improvements

Nautobot App CI Proposal

A proposal to improve GitHub based CI for Nautobot App development.

Processes

Description of processes to be implemented. Bullets under Trigger GitHub workflow ... are automated GitHub workflow jobs and steps.

Add Feature

Process of developing a new feature.

The idea is to run the simplest possible tests for each pull request commit, and to run all tests only once, on the merge commit.

  • Locally run invoke add-feature --issue <#issue> to start developing a new feature (a rough sketch of this task follows the list). This task will:
    • Fetch the repository remote branches and tags.
    • Checkout and pull the latest develop.
    • Create a new feature branch u/<user name>-<#issue>-<title>.
      • Omit #issue if not related to any issue.
    • Add towncrier fragment.
    • Commit and push the feature branch.
    • Open a new pull request to develop branch.
  • Implement the feature (assignee).
  • Trigger GitHub workflow by pushing a commit to the feature branch.
    • Build the documentation using readthedocs.org.
    • Test towncrier fragment existence.
    • Build and push Docker image to ghcr.io, and use it in the following parallelized jobs:
      • Run linters.
      • Run unit tests using latest stable Nautobot version, latest supported Python version and PostgreSQL.
  • Review and approve the pull request (code owner).
  • Squash and merge the pull requests (assignee).
  • Trigger GitHub workflow by merging to develop.
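
The sketch referenced above, covering only the local git side (the towncrier fragment, commit, push, and PR-creation steps are elided; the branch naming follows the list):

import getpass

from invoke import task

@task
def add_feature(context, issue="", title="feature"):
    """Create a feature branch u/<user name>-<#issue>-<title> from the latest develop."""
    context.run("git fetch --all --tags")
    context.run("git checkout develop && git pull")
    issue_part = f"{issue}-" if issue else ""  # omit the issue number if not applicable
    context.run(f"git checkout -b u/{getpass.getuser()}-{issue_part}{title}")
    # Adding the towncrier fragment, committing, pushing, and opening the PR would follow here.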

How is this different from the current/existing CI workflow?

Here is an example run, still WIP:

https://github.com/nautobot/nautobot-plugin-firewall-models/actions/runs/6470190422

  • Workflow uses Docker gha caching.
    • Current solution is missing push: true, so caching is not working.
  • All linters run as a single job.
  • Linters use the same docker image as unit tests, no need to install dependencies using poetry.
  • Unit tests run only one test using latest stable Nautobot version, latest supported Python version and PostgreSQL.

The speedup over the current solution is about 25%, and it uses significantly fewer workers.

Docker caching is explained in the Docker Caching section below.

Bug Fix

The process is the same as adding a new feature.

Stable Release

To safely release a new stable version.

  • Locally run invoke release --version 'X.Y.Z' --ref <git reference> --push to start the release process. This task will:
    • Fetch the repository remote branches and tags.
    • Pull the latest develop and main.
    • Fail if tag vX.Y.Z already exists.
    • Update pyproject.toml version to the version provided.
      • Implement some checking between the current vs provided versions.
    • Checkout and rebase main to the --ref argument value.
      • Default value is the latest develop.
      • Fail if provided git reference is not a descendant of main or develop.
    • Create changelog based on towncrier fragments.
    • Commit and push the main branch.
    • Open a new pull request to develop branch.
  • Trigger GitHub workflow by pushing a commit to the release pull request.
  • Review and approve the pull request (code owner).
  • Trigger GitHub workflow by approving the release pull request:
    • Check whether full tests for the commit passed.
    • Tag the commit vX.Y.Z and push the tag.
  • Trigger GitHub workflow by pushing a tag:
    • Check whether full tests for the tagged commit passed.
      • The commit should be tagged in the previous step only if the tests passed, however, it is possible to tag the commit manually. This will verify it.
    • Build a package.
    • Create a new GitHub release.
  • Trigger GitHub workflow by creating a GitHub release.
    • Release the package to PyPI.
    • Merge and close the release pull request. Do not squash to keep the release commit history.

When some step fails, it can simply be re-run.

Pre Release

To be able to quickly release a new pre-release version.

Similar to stable release, but:

  • Locally run invoke pre-release --version <version> --base <branch name> --push to start the release process. This task will:
    • Increment the version if no --version argument is provided (see the sketch after this list), e.g.:
      • 1.0.1 => 1.0.2-dev0
      • 1.0.2-dev0 => 1.0.2-dev1
      • An example implementation is here.
    • Create a new u/<username>-v<version> branch from the current git reference.
    • Open a new pull request to the --base branch.
      • Use the current branch as the base branch if no --base argument is provided.
  • Approval can be done by pull request author, if the base branch is not protected.
  • Delete the release branch after successful release.
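
The increment rule from the examples above could look like this (a sketch assuming a plain X.Y.Z or X.Y.Z-devN scheme):

import re

def next_pre_release(current):
    """1.0.1 -> 1.0.2-dev0; 1.0.2-dev0 -> 1.0.2-dev1."""
    major, minor, patch, dev = re.fullmatch(
        r"(\d+)\.(\d+)\.(\d+)(?:-dev(\d+))?", current
    ).groups()
    if dev is None:
        # Stable version: bump the patch part and start a new dev series.
        return f"{major}.{minor}.{int(patch) + 1}-dev0"
    return f"{major}.{minor}.{patch}-dev{int(dev) + 1}"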

Bug Fix LTM

Implement and merge the bug fix to develop first if the bug is present in both the stable and LTM releases.

  • Locally run invoke fix-ltm --ref <merge-commit-sha | #issue> to start fixing LTM bug. This task will:
    • Fetch the repository remote branches and tags.
    • Checkout and pull the latest ltm-1.6.
    • Create a new branch u/<user name>-<#issue>-<title>.
    • Increment the patch version in pyproject.toml.
    • Cherry-pick the commit from develop if provided by --ref.
      • Merge commit reference can be determined from #issue and vice versa.
    • Commit and push the branch.
    • Open a new pull request to ltm-1.6 branch.
  • If the bug is not present in stable release, implement and commit the bug fix (assignee).
  • Trigger GitHub workflow by pushing a commit to the feature branch.
  • Review and approve the pull request (code owner).
  • Squash and merge the pull requests (assignee).

Back-port Feature to LTM

If allowed, the process will be the same as the LTM bug fix.

LTM Release

To safely release a new LTM version.

Similar to stable release, but:

  • The invoke task is named invoke release-ltm.
    • --version argument is missing.
      • Increment version patch part only.
  • Use ltm-1.6 branch instead of develop.
  • Use protected ltm-1.6-main branch (doesn't exist yet) instead of main.

Considerations:

  • Align branch names:
    • Rename ltm-1.6 => ltm-1.6/develop.
    • Rename ltm-1.6-main => ltm-1.6/main.
    • Use ltm-1.6/u/... for feature branches.
  • Process can be automated for each LTM bug fix to speed things up.

Fix Failed Merge

It's rare but possible that something breaks after the merge to the latest develop, even when the tests on the feature branch pass, e.g. an incompatibility between concurrent features.

When the full tests fail, the following steps will be done automatically by GitHub workflow:

  • Create a new pull request with rollback commit.
  • Re-open the feature pull request.
    • An option is to open a new issue instead.

It's up to the users to decide whether to fix the failed merge or not.

GitHub Actions

Define reusable actions to be used by workflows.

Actions .yml files can be stored in the following locations:

  • .github/actions folder for each repository and managed by the Drift Manager.
  • Some public shared repository (e.g. cookiecutter-nautobot-app can be used after open-sourcing).

The following actions can be defined:

  • Build Docker image for specific Python and Nautobot version.
  • Run linters for specific Python and Nautobot version.
  • Run unit tests for specific database type, Python and Nautobot version.
  • Full tests as described in the Full Tests section below.
  • Build a package.
  • Release a tag to GitHub.
  • Release a package to PyPI.

Full Tests

An action that contains the following tests:

  • Build the documentation using readthedocs.org.
  • Python 3.11, Nautobot latest stable linters.
  • Python 3.11, Nautobot latest stable, PostgreSQL unit tests.
  • Python 3.11, Nautobot latest stable, MySQL unit tests.
  • Python 3.11, Nautobot 2.0.0, PostgreSQL unit tests.
  • Python 3.8, Nautobot latest stable, PostgreSQL unit tests.

Jobs will run in parallel and re-use cached Docker layers and database dumps.

Docker Caching

Use Docker buildx to build and push images to ghcr.io, and use layers cache.

  • An example docker image reference: ghcr.io/nautobot/nautobot-plugin-firewall-models/nautobot-dev:pr-179-stable-py3.11.
    • The tag contains the following information:
      • Pull request number: pr-179.
      • Nautobot version: latest stable.
      • Python version: py3.11.
  • Re-use existing images by pulling from ghcr.io.
  • Remove feature images from ghcr.io after the PR is merged and the full tests pass.

Define the .gitignore file more strictly to avoid unnecessary build-context changes.

  • Deny everything first.
  • Allow particular files/directories necessary for build.

Database Caching

Use the GitHub Actions cache to store and re-use empty, fully migrated database dumps, so that migrations don't have to run on every job.

The GitHub unittest action will first check whether there is a cached dump.

  • If so, apply that dump.
  • If not, run migrations, create a new dump, and cache that dump.

Unit tests will be run with the --keepdb flag to avoid re-creating the database.

Calculate the cache key as a hash of the following (a sketch follows the list):

  • migrations folder file content.
  • Nautobot version.
  • Database server Docker image reference.
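
A sketch of the key computation (a hypothetical helper, not part of the template):

import hashlib
from pathlib import Path

def migrations_cache_key(migrations_dir, nautobot_version, db_image):
    """Hash migration file contents plus the Nautobot version and DB image reference."""
    digest = hashlib.sha256()
    for path in sorted(Path(migrations_dir).rglob("*.py")):
        digest.update(path.read_bytes())
    digest.update(nautobot_version.encode())
    digest.update(db_image.encode())
    return digest.hexdigest()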

This should speed up unit tests significantly.

Future Improvements

  • Add E2E Selenium tests.
  • Add E2E external integrations tests.
  • Factory dumps caching similar to Nautobot core.

Questions

  • What is preferred in GitHub workflows: to fail fast or to finish fast?

Development Environment Improvements

Development Environment Issues

The design of the development environment seems obsolete to me and could be rewritten in a simpler, more standard way.

Following widely used standards will simplify and clean up our processes, and will make it easier for new developers to join the project.

Whether we update our standards to make it easier for new developers to join, or keep things as they are so as not to surprise existing developers, is more of a business decision.

As an open-source project, we should IMHO follow the latest standards.

Docker Related Issues

  • All credentials are now exposed to all containers.
  • It's not possible to use docker compose ... commands directly, encapsulation is not complete.
  • It's not possible to pass environment variables to containers using shell export or .env file (docker compose standards).
  • Caching is not implemented in Dockerfiles.
  • Containers are being recreated in case of code change.
    • Code is mounted to /source/ and docker compose up should not recreate containers.
  • Running some tasks in containers creates files owned by root on host.
  • Not possible to run multiple Nautobot instances at the same time (port conflict).
  • Unable to simultaneously use MySQL and PostgreSQL.

Proposed solution:

  • Use a single compose.yaml in the repository root.
  • Define environment variables with default values using an environment: section for each container. YAML anchors can be used to avoid duplication.
    • Optionally, there can be a default.env file listing all variables with their defaults.
  • Use a custom .env file, or export variables to the shell, to override default values.
    • Expected by most developers.
  • Use docker compose secrets.
  • Implement caching properly in the Dockerfiles.
  • Build and run containers with the proper UID/GID.
  • Pass exposed port numbers as environment variables.

Some of the proposed solutions are being improved in core Nautobot, but more can be done.

Invoke Related Issues

  • Encapsulating invoke's context.run brings a lot of unnecessary complexity.
    • The implementation is incomplete; multiple tasks bypass run_command because of that.
    • No one is fixing that.
  • Encapsulating the invoke @task decorator is unnecessary and breaks language-server typing support.
  • Configuration is unnecessarily complex:
    • Multiple files with different formats: development.env, creds.env, and the invoke.yml file.
    • Environment variables are too long, e.g. INVOKE_NAUTOBOT_CIRCUIT_MAINTENANCE_NAUTOBOT_VER.
  • A custom command-printing solution is used instead of invoke's built-in echo feature.

Proposed solution:

  • Use @task and context.run directly without encapsulation.
  • As described in the section above, put default values into the Dockerfile and use a custom .env file to override them.
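
For illustration, a minimal tasks.py in that style; the service and app names are placeholders:

from invoke import task

@task
def unittest(context):
    """Run the app's unit tests inside the nautobot container."""
    # echo=True prints the command before running it, replacing the custom
    # command-printing wrapper with invoke's built-in feature.
    context.run("docker compose run nautobot nautobot-server test my_app", echo=True)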

Broken link on CookieCutter docs for Nautobot-App-SSOT

Environment

Observed Behaviour

Expected Behaviour

  • Expected to be taken to a page explaining cookiecutter usage for nautobot-app-ssot.

Steps to Reproduce

  1. Navigate to https://docs.nautobot.com/projects/cookiecutter-nautobot-app/en/latest/#templates
  2. Under the Templates section, click on the nautobot-app-ssot link.
  3. The link goes to https://docs.nautobot.com/projects/cookiecutter-nautobot-app/en/latest/nautobot-app-ssot, which produces a 404 error.

Add test case checking/enforcing authentication on URL endpoints

Environment

  • cookiecutter-nautobot-app version:

Proposed Functionality

To protect against inadvertently implementing views (UI, API, or other) that can expose sensitive information to unauthenticated users, the app cookiecutter should provide a default generic test case that iterates over all URL patterns published by the app and attempts to access them as an anonymous/unauthenticated user.

A similar pattern was implemented in Nautobot itself in nautobot/nautobot#5464; if the app requires Nautobot 2.1.9 or 1.6.16 or later, the test can use the nautobot.apps.utils.get_url_patterns and nautobot.apps.utils.get_url_for_url_pattern APIs introduced in those versions, something along the lines of:

from django.test import TestCase

# Available in Nautobot 2.1.9 / 1.6.16 and later:
from nautobot.apps.utils import get_url_for_url_pattern, get_url_patterns

import my_app.api.urls as api_urls
import my_app.urls as ui_urls


class UnauthenticatedAccessTestCase(TestCase):
    """Every URL published by the app should deny anonymous users."""

    def test_unauthenticated_access(self):
        for urlconf in (api_urls, ui_urls):
            url_patterns = get_url_patterns(urlconf)
            for url_pattern in url_patterns:
                url = get_url_for_url_pattern(url_pattern)
                response = self.client.get(url, follow=True)
                if response.status_code == 405:  # Method not allowed
                    response = self.client.post(url, follow=True)
                if response.status_code == 200:
                    # UI views generally should redirect unauthenticated users to the appropriate login page
                    redirect_url = f"/login/?next={url}"
                    self.assertRedirects(response, redirect_url)
                else:
                    self.assertEqual(response.status_code, 403)

Use Case

Proactively protect against a common implementation error that has security implications.

Install Instructions - Fail to Run Black

Environment

  • Python version: 3.11
  • cookiecutter-nautobot-app template version: Develop

Observed Behavior

When I tried to run black ., it failed because the poetry environment had not been installed yet.

Expected Behavior

Black runs successfully.

Steps to Reproduce

  1. Follow the install instructions
  2. Get to the part about black . and get the failure.

Repository release checklist

  • Add README
  • Review and update PR/Issue templates
  • Update CODEOWNERS
  • Add CI and review test_bake_nautobot_plugin.py
  • Review contents of nautobot-app/README.md
  • Review plugin template code - models.py has lots of comments and (outdated?) docs links
  • Should tasks.py and invoke.*.yml be common across templates?

Add a `plugin_bin_requirements.txt` for binary packages in Dockerfile

Environment

  • cookiecutter-nautobot-app version:

Proposed Functionality

Add a plugin_bin_requirements.txt file to install additional binary packages without having to modify the Dockerfile

COPY ./plugin_bin_requirements.txt /opt/nautobot/
RUN apt-get -y update && \
    apt-get install -y $(grep -o '^[^#][[:alnum:].-]*' "/opt/nautobot/plugin_bin_requirements.txt") && \
    rm -rf /var/lib/apt/lists/*

Example plugin_bin_requirements.txt:

# Comment
libldap2-dev
tk

Use Case

As a Developer I want to specify needed binary packages without touching the standard development/Dockerfile.

Improve pyproject.toml

Environment

  • cookiecutter-nautobot-app version: pre-1.2

Proposed Functionality

Improve pyproject.toml.

  • Remove unused dependencies.
    • ipython
    • jupyter
  • Sort requirements alphabetically.
  • Align lists to use the same formatting:
    • e.g. from:
      notes = """,
          FIXME,
          XXX,
      """
    • to:
      notes = [
          "FIXME",
          "XXX",
      ]

Invoke build on new clone does not work

Environment

  • Python version: 3.8.10
  • cookiecutter-nautobot-app template version: develop

Observed Behavior

build fails due to permissions on ~/.config

Expected Behavior

Build is successful

Steps to Reproduce

  1. clone
  2. poetry install
  3. poetry run invoke build

Upgrading guide should document use of a constraints file to avoid inadvertent Nautobot upgrade

https://github.com/nautobot/cookiecutter-nautobot-app/blob/develop/nautobot-app/%7B%7B%20cookiecutter.project_slug%20%7D%7D/docs/admin/upgrade.md should probably include some boilerplate about using a constraints file with pip, to avoid inadvertently upgrading to a new Nautobot version just because the latest release of the App requires a newer one. Probably something similar to how it's done in the Dockerfile, i.e.:

pip show nautobot | grep "^Version: " | sed -e 's/Version: /nautobot==/' > constraints.txt
pip install --constraint constraints.txt --upgrade {{ cookiecutter.plugin_name }}
