
Merge remote-tracking branch 'upstream/develop' into 3619-new-400G-osfp-interface-type

Emil Palm
2019-10-30 14:31:17 -05:00
27 changed files with 317 additions and 81 deletions

.github/stale.yaml

@ -0,0 +1,23 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 14
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7
# Issues with these labels will never be considered stale
exemptLabels:
- "status: accepted"
- "status: gathering feedback"
- "status: blocked"
# Label to use when marking an issue as stale
staleLabel: wontfix
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. NetBox
is governed by a small group of core maintainers, which means not all opened
issues may receive direct feedback. Please see our [contributing guide](https://github.com/netbox-community/netbox/blob/develop/CONTRIBUTING.md).
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: >
This issue has been automatically closed due to lack of activity. In an
effort to reduce noise, please do not comment any further. Note that the
core maintainers may elect to reopen this issue at a later date if deemed
necessary.


@ -24,7 +24,7 @@ already been fixed.
to see if the bug you've found has already been reported. If you think you may
be experiencing a reported issue that hasn't already been resolved, please
click "add a reaction" in the top right corner of the issue and add a thumbs
up (+1). You mightalso want to add a comment describing how it's affecting your
up (+1). You might also want to add a comment describing how it's affecting your
installation. This will allow us to prioritize bugs based on how many users are
affected.
@ -99,6 +99,8 @@ any work that's already in progress.
* Any pull request which does _not_ relate to an accepted issue will be closed.
* All major new functionality must include relevant tests where applicable.
* When submitting a pull request, please be sure to work off of the `develop`
branch, rather than `master`. The `develop` branch is used for ongoing
development, while `master` is used for tagging new stable releases.
@ -118,6 +120,30 @@ feedback. **Do not** comment on an issue just to show your support (give the
top post a :+1: instead) or ask for an ETA. These comments will be deleted to
reduce noise in the discussion.
## Issue Lifecycle
When a correctly formatted issue is submitted, it is evaluated by a moderator
who may elect to immediately label the issue as accepted in addition to another
issue type label. In other cases, the issue may be labeled as "status: gathering feedback"
which will often be accompanied by a comment from a moderator asking for further dialog from the community.
If an issue is labeled as "status: revisions needed", a moderator has identified a problem with
the issue itself and is asking the submitter to update the original post with
the requested information. If the original post is not updated in a reasonable amount of time,
the issue will be closed as invalid.
The core maintainers group has chosen to make use of the GitHub Stale bot to aid in issue management.
* Issues will be marked as stale after 14 days of no activity.
* Then after 7 more days of inactivity, the issue will be closed.
* Any issue bearing one of the following labels will be exempt from all Stale bot actions:
* `status: accepted`
* `status: gathering feedback`
* `status: blocked`
It is natural that some new issues get more attention than others. Often this is a metric of an issue's
overall usefulness to the project. In other cases, issues merely get lost in the shuffle, and
notifications from Stale bot can bring renewed attention to potentially meaningful issues.
## Maintainer Guidance
* Maintainers are expected to contribute at least four hours per week to the


@ -3,7 +3,8 @@
NetBox is an IP address management (IPAM) and data center infrastructure
management (DCIM) tool. Initially conceived by the network engineering team at
[DigitalOcean](https://www.digitalocean.com/), NetBox was developed specifically
to address the needs of network and infrastructure engineers.
to address the needs of network and infrastructure engineers. It is intended to
function as a domain-specific source of truth for network operations.
NetBox runs as a web application atop the [Django](https://www.djangoproject.com/)
Python framework with a [PostgreSQL](http://www.postgresql.org/) database. For a
@ -42,6 +43,15 @@ and run `upgrade.sh`.
* [Ansible deployment](https://github.com/lae/ansible-role-netbox) (via [@lae](https://github.com/lae))
* [Kubernetes deployment](https://github.com/CENGN/netbox-kubernetes) (via [@CENGN](https://github.com/CENGN))
# Providing Feedback
Feature requests and bug reports must be submitted as GitHub issues. (Please be
sure to use the [appropriate template](https://github.com/netbox-community/netbox/issues/new/choose).)
For general discussion, please consider joining our [mailing list](https://groups.google.com/forum/#!forum/netbox-discuss).
If you are interested in contributing to the development of NetBox, please read
our [contributing guide](CONTRIBUTING.md) prior to beginning any work.
# Related projects
Please see [our wiki](https://github.com/netbox-community/netbox/wiki/Community-Contributions) for a list of relevant community projects.


@ -119,6 +119,23 @@ Stored a numeric integer. Options include:
A true/false flag. This field has no options beyond the defaults.
### ChoiceVar
A set of choices from which the user can select one.
* `choices` - A list of `(value, label)` tuples representing the available choices. For example:
```python
CHOICES = (
('n', 'North'),
('s', 'South'),
('e', 'East'),
('w', 'West')
)
direction = ChoiceVar(choices=CHOICES)
```
### ObjectVar
A NetBox object. The list of available objects is defined by the `queryset` parameter. Each instance of this variable is limited to a single object type.
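For example, a minimal sketch (the `Device` model and the site filter shown here are illustrative assumptions):

```python
from dcim.models import Device

# Hypothetical example: limit the selectable objects to devices at a single site
device = ObjectVar(
    queryset=Device.objects.filter(site__slug='dc-1')
)
```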


@ -4,7 +4,7 @@ NetBox allows users to define custom templates that can be used when exporting o
Each export template is associated with a certain type of object. For instance, if you create an export template for VLANs, your custom template will appear under the "Export" button on the VLANs list.
Export templates are written in [Django's template language](https://docs.djangoproject.com/en/1.9/ref/templates/language/), which is very similar to Jinja2. The list of objects returned from the database is stored in the `queryset` variable, which you'll typically want to iterate through using a `for` loop. Object properties can be accessed by name. For example:
Export templates are written in [Django's template language](https://docs.djangoproject.com/en/stable/ref/templates/language/), which is very similar to Jinja2. The list of objects returned from the database is stored in the `queryset` variable, which you'll typically want to iterate through using a `for` loop. Object properties can be accessed by name. For example:
```
{% for rack in queryset %}


@ -4,7 +4,7 @@ NetBox includes a Python shell within which objects can be directly queried, cre
./manage.py nbshell
```
This will launch a customized version of [the built-in Django shell](https://docs.djangoproject.com/en/dev/ref/django-admin/#shell) with all relevant NetBox models pre-loaded. (If desired, the stock Django shell is also available by executing `./manage.py shell`.)
This will launch a customized version of [the built-in Django shell](https://docs.djangoproject.com/en/stable/ref/django-admin/#shell) with all relevant NetBox models pre-loaded. (If desired, the stock Django shell is also available by executing `./manage.py shell`.)
```
$ ./manage.py nbshell
@ -28,7 +28,7 @@ DCIM:
## Querying Objects
Objects are retrieved by forming a [Django queryset](https://docs.djangoproject.com/en/dev/topics/db/queries/#retrieving-objects). The base queryset for an object takes the form `<model>.objects.all()`, which will return a (truncated) list of all objects of that type.
Objects are retrieved by forming a [Django queryset](https://docs.djangoproject.com/en/stable/topics/db/queries/#retrieving-objects). The base queryset for an object takes the form `<model>.objects.all()`, which will return a (truncated) list of all objects of that type.
```
>>> Device.objects.all()
@ -99,7 +99,7 @@ This approach can span multiple levels of relations. For example, the following
```
!!! note
While the above query is functional, it is very inefficient. There are ways to optimize such requests, however they are out of the scope of this document. For more information, see the [Django queryset method reference](https://docs.djangoproject.com/en/dev/ref/models/querysets/) documentation.
While the above query is functional, it is very inefficient. There are ways to optimize such requests, however they are out of the scope of this document. For more information, see the [Django queryset method reference](https://docs.djangoproject.com/en/stable/ref/models/querysets/) documentation.
Reverse relationships can be traversed as well. For example, the following will find all devices with an interface named "em0":
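A minimal sketch of that query (the `interfaces` reverse accessor comes from NetBox's Interface model; results will vary by installation):

```
>>> Device.objects.filter(interfaces__name='em0')
```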
@ -137,7 +137,7 @@ To return the inverse of a filtered queryset, use `exclude()` instead of `filter
```
!!! info
The examples above are intended only to provide a cursory introduction to queryset filtering. For an exhaustive list of the available filters, please consult the [Django queryset API docs](https://docs.djangoproject.com/en/dev/ref/models/querysets/).
The examples above are intended only to provide a cursory introduction to queryset filtering. For an exhaustive list of the available filters, please consult the [Django queryset API docs](https://docs.djangoproject.com/en/stable/ref/models/querysets/).
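As a short illustration of the `filter()`/`exclude()` pattern discussed above (the lookup value is purely illustrative):

```
>>> Site.objects.filter(name__icontains='east')
>>> Site.objects.exclude(name__icontains='east')
```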
## Creating and Updating Objects


@ -39,6 +39,11 @@ If you want to export only the database schema, and not the data itself (e.g. fo
```no-highlight
pg_dump -s netbox > netbox_schema.sql
```
If you are migrating your instance of NetBox to a different machine, please make sure you invalidate the cache by running the following command:
```no-highlight
python3 manage.py invalidate all
```
---


@ -139,7 +139,7 @@ Enforcement of unique IP space can be toggled on a per-VRF basis. To enforce uni
By default, all messages of INFO severity or higher will be logged to the console. Additionally, if `DEBUG` is False and email access has been configured, ERROR and CRITICAL messages will be emailed to the users defined in `ADMINS`.
The Django framework on which NetBox runs allows for the customization of logging, e.g. to write logs to file. Please consult the [Django logging documentation](https://docs.djangoproject.com/en/1.11/topics/logging/) for more information on configuring this setting. Below is an example which will write all INFO and higher messages to a file:
The Django framework on which NetBox runs allows for the customization of logging, e.g. to write logs to file. Please consult the [Django logging documentation](https://docs.djangoproject.com/en/stable/topics/logging/) for more information on configuring this setting. Below is an example which will write all INFO and higher messages to a file:
```
LOGGING = {
@ -311,7 +311,7 @@ Enable this option to run the webhook backend. See the docs section on the webho
## Date and Time Formatting
You may define custom formatting for dates and times. For detailed instructions on writing format strings, please see [the Django documentation](https://docs.djangoproject.com/en/dev/ref/templates/builtins/#date).
You may define custom formatting for dates and times. For detailed instructions on writing format strings, please see [the Django documentation](https://docs.djangoproject.com/en/stable/ref/templates/builtins/#date).
Defaults:


@ -2,7 +2,7 @@
## ALLOWED_HOSTS
This is a list of valid fully-qualified domain names (FQDNs) that is used to reach the NetBox service. Usually this is the same as the hostname for the NetBox server, but can also be different (e.g. when using a reverse proxy serving the NetBox website under a different FQDN than the hostname of the NetBox server). NetBox will not permit access to the server via any other hostnames (or IPs). The value of this option is also used to set `CSRF_TRUSTED_ORIGINS`, which restricts `HTTP POST` to the same set of hosts (more about this [here](https://docs.djangoproject.com/en/1.9/ref/settings/#std:setting-CSRF_TRUSTED_ORIGINS)). Keep in mind that NetBox, by default, has `USE_X_FORWARDED_HOST = True` (in `netbox/netbox/settings.py`) which means that if you're using a reverse proxy, it's the FQDN used to reach that reverse proxy which needs to be in this list (more about this [here](https://docs.djangoproject.com/en/1.9/ref/settings/#allowed-hosts)).
This is a list of valid fully-qualified domain names (FQDNs) that is used to reach the NetBox service. Usually this is the same as the hostname for the NetBox server, but can also be different (e.g. when using a reverse proxy serving the NetBox website under a different FQDN than the hostname of the NetBox server). NetBox will not permit access to the server via any other hostnames (or IPs). The value of this option is also used to set `CSRF_TRUSTED_ORIGINS`, which restricts `HTTP POST` to the same set of hosts (more about this [here](https://docs.djangoproject.com/en/stable/ref/settings/#std:setting-CSRF_TRUSTED_ORIGINS)). Keep in mind that NetBox, by default, has `USE_X_FORWARDED_HOST = True` (in `netbox/netbox/settings.py`) which means that if you're using a reverse proxy, it's the FQDN used to reach that reverse proxy which needs to be in this list (more about this [here](https://docs.djangoproject.com/en/stable/ref/settings/#allowed-hosts)).
Example:
@ -21,6 +21,7 @@ NetBox requires access to a PostgreSQL database service to store data. This serv
* `PASSWORD` - PostgreSQL password
* `HOST` - Name or IP address of the database server (use `localhost` if running locally)
* `PORT` - TCP port of the PostgreSQL service; leave blank for default port (5432)
* `CONN_MAX_AGE` - Number of seconds for NetBox to keep database connections open. 150-300 seconds is typically a good starting point ([more info](https://docs.djangoproject.com/en/stable/ref/databases/#persistent-connections)).
Example:
@ -31,6 +32,7 @@ DATABASE = {
'PASSWORD': 'J5brHrAXFLQSif0K', # PostgreSQL password
'HOST': 'localhost', # Database server
'PORT': '', # Database port (leave blank for default)
'CONN_MAX_AGE': 300, # Max database connection age
}
```
@ -69,7 +71,7 @@ REDIS = {
!!! note:
If you were using these settings in a prior release with webhooks, the `DATABASE` setting remains the same but
an additional `CACHE_DATABASE` setting has been added with a default value of 1 to support the caching backend. The
`DATABASE` setting will be renamed in a future release of NetBox to better relay the meaning of the setting.
!!! warning:
It is highly recommended to keep the webhook and cache databases separate. Using the same database number for both may result in webhook


@ -1,6 +1,6 @@
# Style Guide
NetBox generally follows the [Django style guide](https://docs.djangoproject.com/en/dev/internals/contributing/writing-code/coding-style/), which is itself based on [PEP 8](https://www.python.org/dev/peps/pep-0008/). [Pycodestyle](https://github.com/pycqa/pycodestyle) is used to validate code formatting, ignoring certain violations. See `scripts/cibuild.sh`.
NetBox generally follows the [Django style guide](https://docs.djangoproject.com/en/stable/internals/contributing/writing-code/coding-style/), which is itself based on [PEP 8](https://www.python.org/dev/peps/pep-0008/). [Pycodestyle](https://github.com/pycqa/pycodestyle) is used to validate code formatting, ignoring certain violations. See `scripts/cibuild.sh`.
## PEP 8 Exceptions


@ -129,6 +129,7 @@ DATABASE = {
'PASSWORD': 'J5brHrAXFLQSif0K', # PostgreSQL password
'HOST': 'localhost', # Database server
'PORT': '', # Database port (leave blank for default)
'CONN_MAX_AGE': 300, # Max database connection age
}
```


@ -4,9 +4,19 @@
* [#3445](https://github.com/netbox-community/netbox/issues/3445) - Add support for additional user defined headers to be added to webhook requests
* [#3499](https://github.com/netbox-community/netbox/issues/3499) - Add `ca_file_path` to Webhook model to support user supplied CA certificate verification of webhook requests
* [#3594](https://github.com/netbox-community/netbox/issues/3594) - Add ChoiceVar for custom scripts
## Bug Fixes
* [#3309](https://github.com/netbox-community/netbox/issues/3309) - Rewrite change logging middleware to resolve sporadic testing failures
* [#3340](https://github.com/netbox-community/netbox/issues/3340) - Add missing options to connect front ports to console ports
* [#3460](https://github.com/netbox-community/netbox/issues/3460) - Extend upgrade script to validate Python dependencies
* [#3596](https://github.com/netbox-community/netbox/issues/3596) - Prevent server error when reassigning a device to a new device bay
* [#3629](https://github.com/netbox-community/netbox/issues/3629) - Use `get_lldp_neighbors_detail` to validate LLDP neighbors
* [#3635](https://github.com/netbox-community/netbox/issues/3635) - Add missing cache support for the circuits app
* [#3636](https://github.com/netbox-community/netbox/issues/3636) - Add missing `rack_group` field to PowerFeed CSV export
* [#3652](https://github.com/netbox-community/netbox/issues/3652) - Limit next/previous rack by assigned rack group
---
# v2.6.6 (2019-10-10)


@ -2588,6 +2588,16 @@ class DeviceBay(ComponentModel):
if self.device == self.installed_device:
raise ValidationError("Cannot install a device into itself.")
# Check that the installed device is not already installed elsewhere
if self.installed_device:
current_bay = DeviceBay.objects.filter(installed_device=self.installed_device).first()
if current_bay:
raise ValidationError({
'installed_device': "Cannot install the specified device; device is already installed in {}".format(
current_bay
)
})
#
# Inventory items
@ -3112,6 +3122,7 @@ class PowerFeed(ChangeLoggedModel, CableTermination, CustomFieldModel):
return (
self.power_panel.site.name,
self.power_panel.name,
self.rack.group.name if self.rack and self.rack.group else None,
self.rack.name if self.rack else None,
self.name,
self.get_status_display(),


@ -404,8 +404,12 @@ class RackView(PermissionRequiredMixin, View):
position__isnull=True,
parent_bay__isnull=True
).prefetch_related('device_type__manufacturer')
next_rack = Rack.objects.filter(site=rack.site, name__gt=rack.name).order_by('name').first()
prev_rack = Rack.objects.filter(site=rack.site, name__lt=rack.name).order_by('-name').first()
if rack.group:
peer_racks = Rack.objects.filter(site=rack.site, group=rack.group)
else:
peer_racks = Rack.objects.filter(site=rack.site, group__isnull=True)
next_rack = peer_racks.filter(name__gt=rack.name).order_by('name').first()
prev_rack = peer_racks.filter(name__lt=rack.name).order_by('-name').first()
reservations = RackReservation.objects.filter(rack=rack)
power_feeds = PowerFeed.objects.filter(rack=rack).prefetch_related('power_panel')


@ -97,13 +97,13 @@ class CustomFieldModelSerializer(ValidatedModelSerializer):
def __init__(self, *args, **kwargs):
def _populate_custom_fields(instance, fields):
custom_fields = {f.name: None for f in fields}
for cfv in instance.custom_field_values.all():
if cfv.field.type == CF_TYPE_SELECT:
custom_fields[cfv.field.name] = CustomFieldChoiceSerializer(cfv.value).data
instance.custom_fields = {}
for field in fields:
value = instance.cf.get(field.name)
if field.type == CF_TYPE_SELECT and value is not None:
instance.custom_fields[field.name] = CustomFieldChoiceSerializer(value).data
else:
custom_fields[cfv.field.name] = cfv.value
instance.custom_fields = custom_fields
instance.custom_fields[field.name] = value
super().__init__(*args, **kwargs)


@ -1,14 +1,15 @@
import random
import threading
import uuid
from copy import deepcopy
from datetime import timedelta
from django.conf import settings
from django.db.models.signals import post_delete, post_save
from django.db.models.signals import pre_delete, post_save
from django.utils import timezone
from django.utils.functional import curry
from django_prometheus.models import model_deletes, model_inserts, model_updates
from utilities.querysets import DummyQuerySet
from .constants import *
from .models import ObjectChange
from .signals import purge_changelog
@ -19,33 +20,34 @@ _thread_locals = threading.local()
def handle_changed_object(sender, instance, **kwargs):
"""
Fires when an object is created or updated
Fires when an object is created or updated.
"""
# Queue the object and a new ObjectChange for processing once the request completes
if hasattr(instance, 'to_objectchange'):
action = OBJECTCHANGE_ACTION_CREATE if kwargs['created'] else OBJECTCHANGE_ACTION_UPDATE
objectchange = instance.to_objectchange(action)
_thread_locals.changed_objects.append(
(instance, objectchange)
)
# Queue the object for processing once the request completes
action = OBJECTCHANGE_ACTION_CREATE if kwargs['created'] else OBJECTCHANGE_ACTION_UPDATE
_thread_locals.changed_objects.append(
(instance, action)
)
def _handle_deleted_object(request, sender, instance, **kwargs):
def handle_deleted_object(sender, instance, **kwargs):
"""
Fires when an object is deleted
Fires when an object is deleted.
"""
# Record an Object Change
if hasattr(instance, 'to_objectchange'):
objectchange = instance.to_objectchange(OBJECTCHANGE_ACTION_DELETE)
objectchange.user = request.user
objectchange.request_id = request.id
objectchange.save()
# Cache custom fields prior to copying the instance
if hasattr(instance, 'cache_custom_fields'):
instance.cache_custom_fields()
# Enqueue webhooks
enqueue_webhooks(instance, request.user, request.id, OBJECTCHANGE_ACTION_DELETE)
# Create a copy of the object being deleted
copy = deepcopy(instance)
# Increment metric counters
model_deletes.labels(instance._meta.model_name).inc()
# Preserve tags
if hasattr(instance, 'tags'):
copy.tags = DummyQuerySet(instance.tags.all())
# Queue the copy of the object for processing once the request completes
_thread_locals.changed_objects.append(
(copy, OBJECTCHANGE_ACTION_DELETE)
)
def purge_objectchange_cache(sender, **kwargs):
@ -81,12 +83,9 @@ class ObjectChangeMiddleware(object):
# the same request.
request.id = uuid.uuid4()
# Signals don't include the request context, so we're currying it into the post_delete function ahead of time.
handle_deleted_object = curry(_handle_deleted_object, request)
# Connect our receivers to the post_save and post_delete signals.
post_save.connect(handle_changed_object, dispatch_uid='cache_changed_object')
post_delete.connect(handle_deleted_object, dispatch_uid='cache_deleted_object')
post_save.connect(handle_changed_object, dispatch_uid='handle_changed_object')
pre_delete.connect(handle_deleted_object, dispatch_uid='handle_deleted_object')
# Provide a hook for purging the change cache
purge_changelog.connect(purge_objectchange_cache)
@ -98,22 +97,31 @@ class ObjectChangeMiddleware(object):
if not _thread_locals.changed_objects:
return response
# Create records for any cached objects that were created/updated.
for obj, objectchange in _thread_locals.changed_objects:
# Create records for any cached objects that were changed.
for instance, action in _thread_locals.changed_objects:
# Record the change
objectchange.user = request.user
objectchange.request_id = request.id
objectchange.save()
# Refresh cached custom field values
if action in [OBJECTCHANGE_ACTION_CREATE, OBJECTCHANGE_ACTION_UPDATE]:
if hasattr(instance, 'cache_custom_fields'):
instance.cache_custom_fields()
# Record an ObjectChange if applicable
if hasattr(instance, 'to_objectchange'):
objectchange = instance.to_objectchange(action)
objectchange.user = request.user
objectchange.request_id = request.id
objectchange.save()
# Enqueue webhooks
enqueue_webhooks(obj, request.user, request.id, objectchange.action)
enqueue_webhooks(instance, request.user, request.id, action)
# Increment metric counters
if objectchange.action == OBJECTCHANGE_ACTION_CREATE:
model_inserts.labels(obj._meta.model_name).inc()
elif objectchange.action == OBJECTCHANGE_ACTION_UPDATE:
model_updates.labels(obj._meta.model_name).inc()
if action == OBJECTCHANGE_ACTION_CREATE:
model_inserts.labels(instance._meta.model_name).inc()
elif action == OBJECTCHANGE_ACTION_UPDATE:
model_updates.labels(instance._meta.model_name).inc()
elif action == OBJECTCHANGE_ACTION_DELETE:
model_deletes.labels(instance._meta.model_name).inc()
# Housekeeping: 1% chance of clearing out expired ObjectChanges. This applies only to requests which result in
# one or more changes being logged.


@ -138,16 +138,21 @@ class CustomFieldModel(models.Model):
class Meta:
abstract = True
def cache_custom_fields(self):
"""
Cache all custom field values for this instance
"""
self._cf = {
field.name: value for field, value in self.get_custom_fields().items()
}
@property
def cf(self):
"""
Name-based CustomFieldValue accessor for use in templates
"""
if self._cf is None:
# Cache all custom field values for this instance
self._cf = {
field.name: value for field, value in self.get_custom_fields().items()
}
self.cache_custom_fields()
return self._cf
def get_custom_fields(self):


@ -24,6 +24,7 @@ from .signals import purge_changelog
__all__ = [
'BaseScript',
'BooleanVar',
'ChoiceVar',
'FileVar',
'IntegerVar',
'IPNetworkVar',
@ -133,6 +134,27 @@ class BooleanVar(ScriptVariable):
self.field_attrs['required'] = False
class ChoiceVar(ScriptVariable):
"""
Select one of several predefined static choices, passed as a list of two-tuples. Example:
color = ChoiceVar(
choices=(
('#ff0000', 'Red'),
('#00ff00', 'Green'),
('#0000ff', 'Blue')
)
)
"""
form_field = forms.ChoiceField
def __init__(self, choices, *args, **kwargs):
super().__init__(*args, **kwargs)
# Set field choices
self.field_attrs['choices'] = choices
class ObjectVar(ScriptVariable):
"""
NetBox object representation. The provided QuerySet will determine the choices available.


@ -1,33 +1,57 @@
from django.contrib.contenttypes.models import ContentType
from django.urls import reverse
from rest_framework import status
from dcim.models import Site
from extras.constants import OBJECTCHANGE_ACTION_CREATE, OBJECTCHANGE_ACTION_UPDATE, OBJECTCHANGE_ACTION_DELETE
from extras.models import ObjectChange
from extras.constants import *
from extras.models import CustomField, CustomFieldValue, ObjectChange
from utilities.testing import APITestCase
class ChangeLogTest(APITestCase):
def setUp(self):
super().setUp()
# Create a custom field on the Site model
ct = ContentType.objects.get_for_model(Site)
cf = CustomField(
type=CF_TYPE_TEXT,
name='my_field',
required=False
)
cf.save()
cf.obj_type.set([ct])
def test_create_object(self):
data = {
'name': 'Test Site 1',
'slug': 'test-site-1',
'custom_fields': {
'my_field': 'ABC'
},
'tags': [
'bar', 'foo'
],
}
self.assertEqual(ObjectChange.objects.count(), 0)
url = reverse('dcim-api:site-list')
response = self.client.post(url, data, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_201_CREATED)
self.assertEqual(ObjectChange.objects.count(), 1)
oc = ObjectChange.objects.first()
site = Site.objects.get(pk=response.data['id'])
oc = ObjectChange.objects.get(
changed_object_type=ContentType.objects.get_for_model(Site),
changed_object_id=site.pk
)
self.assertEqual(oc.changed_object, site)
self.assertEqual(oc.action, OBJECTCHANGE_ACTION_CREATE)
self.assertEqual(oc.object_data['custom_fields'], data['custom_fields'])
self.assertListEqual(sorted(oc.object_data['tags']), data['tags'])
def test_update_object(self):
@ -37,26 +61,43 @@ class ChangeLogTest(APITestCase):
data = {
'name': 'Test Site X',
'slug': 'test-site-x',
'custom_fields': {
'my_field': 'DEF'
},
'tags': [
'abc', 'xyz'
],
}
self.assertEqual(ObjectChange.objects.count(), 0)
url = reverse('dcim-api:site-detail', kwargs={'pk': site.pk})
response = self.client.put(url, data, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_200_OK)
self.assertEqual(ObjectChange.objects.count(), 1)
site = Site.objects.get(pk=response.data['id'])
self.assertEqual(site.name, data['name'])
oc = ObjectChange.objects.first()
site = Site.objects.get(pk=response.data['id'])
oc = ObjectChange.objects.get(
changed_object_type=ContentType.objects.get_for_model(Site),
changed_object_id=site.pk
)
self.assertEqual(oc.changed_object, site)
self.assertEqual(oc.action, OBJECTCHANGE_ACTION_UPDATE)
self.assertEqual(oc.object_data['custom_fields'], data['custom_fields'])
self.assertListEqual(sorted(oc.object_data['tags']), data['tags'])
def test_delete_object(self):
site = Site(name='Test Site 1', slug='test-site-1')
site = Site(
name='Test Site 1',
slug='test-site-1'
)
site.save()
site.tags.add('foo', 'bar')
CustomFieldValue.objects.create(
field=CustomField.objects.get(name='my_field'),
obj=site,
value='ABC'
)
self.assertEqual(ObjectChange.objects.count(), 0)
@ -70,3 +111,5 @@ class ChangeLogTest(APITestCase):
self.assertEqual(oc.changed_object, None)
self.assertEqual(oc.object_repr, site.name)
self.assertEqual(oc.action, OBJECTCHANGE_ACTION_DELETE)
self.assertEqual(oc.object_data['custom_fields'], {'my_field': 'ABC'})
self.assertListEqual(sorted(oc.object_data['tags']), ['bar', 'foo'])


@ -99,6 +99,31 @@ class ScriptVariablesTest(TestCase):
self.assertTrue(form.is_valid())
self.assertEqual(form.cleaned_data['var1'], False)
def test_choicevar(self):
CHOICES = (
('ff0000', 'Red'),
('00ff00', 'Green'),
('0000ff', 'Blue')
)
class TestScript(Script):
var1 = ChoiceVar(
choices=CHOICES
)
# Validate valid choice
data = {'var1': CHOICES[0][0]}
form = TestScript().as_form(data)
self.assertTrue(form.is_valid())
self.assertEqual(form.cleaned_data['var1'], CHOICES[0][0])
# Validate invalid choices
data = {'var1': 'taupe'}
form = TestScript().as_form(data)
self.assertFalse(form.is_valid())
def test_objectvar(self):
class TestScript(Script):


@ -17,12 +17,13 @@ DATABASE = {
'PASSWORD': '', # PostgreSQL password
'HOST': 'localhost', # Database server
'PORT': '', # Database port (leave blank for default)
'CONN_MAX_AGE': 300, # Max database connection age
}
# This key is used for secure generation of random numbers and strings. It must never be exposed outside of this file.
# For optimal security, SECRET_KEY should be at least 50 characters in length and contain a mix of letters, numbers, and
# symbols. NetBox will not run without this defined. For more information, see
# https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-SECRET_KEY
# https://docs.djangoproject.com/en/stable/ref/settings/#std:setting-SECRET_KEY
SECRET_KEY = ''
# Redis database settings. The Redis database is used for caching and background processing such as webhooks
@ -106,7 +107,7 @@ EXEMPT_VIEW_PERMISSIONS = [
]
# Enable custom logging. Please see the Django documentation for detailed guidance on configuring custom logs:
# https://docs.djangoproject.com/en/1.11/topics/logging/
# https://docs.djangoproject.com/en/stable/topics/logging/
LOGGING = {}
# Setting this to True will permit only authenticated users to access any part of NetBox. By default, anonymous users
@ -171,7 +172,7 @@ TIME_ZONE = 'UTC'
WEBHOOKS_ENABLED = False
# Date/time formatting. See the following link for supported formats:
# https://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
# https://docs.djangoproject.com/en/stable/ref/templates/builtins/#date
DATE_FORMAT = 'N j, Y'
SHORT_DATE_FORMAT = 'Y-m-d'
TIME_FORMAT = 'g:i a'


@ -364,6 +364,7 @@ CACHEOPS = {
'auth.user': {'ops': 'get', 'timeout': 60 * 15},
'auth.*': {'ops': ('fetch', 'get')},
'auth.permission': {'ops': 'all'},
'circuits.*': {'ops': 'all'},
'dcim.*': {'ops': 'all'},
'ipam.*': {'ops': 'all'},
'extras.*': {'ops': 'all'},


@ -52,10 +52,10 @@
<script type="text/javascript">
$(document).ready(function() {
$.ajax({
url: "{% url 'dcim-api:device-napalm' pk=device.pk %}?method=get_lldp_neighbors",
url: "{% url 'dcim-api:device-napalm' pk=device.pk %}?method=get_lldp_neighbors_detail",
dataType: 'json',
success: function(json) {
$.each(json['get_lldp_neighbors'], function(iface, neighbors) {
$.each(json['get_lldp_neighbors_detail'], function(iface, neighbors) {
var neighbor = neighbors[0];
var row = $('#' + iface.split(".")[0].replace(/([\/:])/g, "\\$1"));
@ -69,8 +69,8 @@ $(document).ready(function() {
}
// Clean up hostnames/interfaces learned via LLDP
var neighbor_host = neighbor['hostname'] || ""; // sanitize hostname if it's null to avoid breaking the split func
var neighbor_port = neighbor['port'] || ""; // sanitize port if it's null to avoid breaking the split func
var neighbor_host = neighbor['remote_system_name'] || ""; // sanitize hostname if it's null to avoid breaking the split func
var neighbor_port = neighbor['remote_port'] || ""; // sanitize port if it's null to avoid breaking the split func
var lldp_device = neighbor_host.split(".")[0]; // Strip off any trailing domain name
var lldp_interface = neighbor_port.split(".")[0]; // Strip off any trailing subinterface ID


@ -64,6 +64,8 @@
</button>
<ul class="dropdown-menu dropdown-menu-right">
<li><a href="{% url 'dcim:frontport_connect' termination_a_id=frontport.pk termination_b_type='interface' %}?return_url={{ device.get_absolute_url }}">Interface</a></li>
<li><a href="{% url 'dcim:frontport_connect' termination_a_id=frontport.pk termination_b_type='console-server-port' %}?return_url={{ device.get_absolute_url }}">Console Server Port</a></li>
<li><a href="{% url 'dcim:frontport_connect' termination_a_id=frontport.pk termination_b_type='console-port' %}?return_url={{ device.get_absolute_url }}">Console Port</a></li>
<li><a href="{% url 'dcim:frontport_connect' termination_a_id=frontport.pk termination_b_type='front-port' %}?return_url={{ device.get_absolute_url }}">Front Port</a></li>
<li><a href="{% url 'dcim:frontport_connect' termination_a_id=frontport.pk termination_b_type='rear-port' %}?return_url={{ device.get_absolute_url }}">Rear Port</a></li>
<li><a href="{% url 'dcim:frontport_connect' termination_a_id=frontport.pk termination_b_type='circuit-termination' %}?return_url={{ device.get_absolute_url }}">Circuit Termination</a></li>


@ -0,0 +1,9 @@
class DummyQuerySet:
"""
A fake QuerySet that can be used to cache relationships to objects that have been deleted.
"""
def __init__(self, queryset):
self._cache = [obj for obj in queryset.all()]
def all(self):
return self._cache
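A brief usage sketch mirroring how the change-logging middleware above preserves tags on deleted objects (assumes a NetBox shell context with at least one Site; purely illustrative):

```python
from copy import deepcopy

from dcim.models import Site
from utilities.querysets import DummyQuerySet

site = Site.objects.first()

# Detach a copy and freeze its tags so they remain readable even after the
# original object (and its tag relations) has been deleted.
snapshot = deepcopy(site)
snapshot.tags = DummyQuerySet(site.tags.all())

print(snapshot.tags.all())  # cached list of tag objects
```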


@ -99,7 +99,7 @@ def serialize_object(obj, extra=None):
# Include any custom fields
if hasattr(obj, 'get_custom_fields'):
data['custom_fields'] = {
field.name: str(value) for field, value in obj.get_custom_fields().items()
field: str(value) for field, value in obj.cf.items()
}
# Include any tags


@ -20,6 +20,17 @@ COMMAND="${PIP} install -r requirements.txt --upgrade"
echo "Updating required Python packages ($COMMAND)..."
eval $COMMAND
# Validate Python dependencies
COMMAND="${PIP} check"
echo "Validating Python dependencies ($COMMAND)..."
eval $COMMAND || (
echo "******** PLEASE FIX THE DEPENDENCIES BEFORE CONTINUING ********"
echo "* Manually install newer version(s) of the highlighted packages"
echo "* so that 'pip3 check' passes. For more information see:"
echo "* https://github.com/pypa/pip/issues/988"
exit 1
)
# Apply any database migrations
COMMAND="${PYTHON} netbox/manage.py migrate"
echo "Applying database migrations ($COMMAND)..."