Mirror of https://github.com/netbox-community/netbox.git (synced 2024-05-10 07:54:54 +00:00)
Merge branch 'develop' into feature
.github/ISSUE_TEMPLATE/documentation_change.yaml
@@ -14,17 +14,19 @@ body:
         - Cleanup (formatting, typos, etc.)
     validations:
       required: true
-  - type: checkboxes
+  - type: dropdown
     attributes:
       label: Area
-      description: To what section(s) of the documentation does this change pertain?
+      description: To what section of the documentation does this change primarily pertain?
       options:
-        - label: Installation instructions
-        - label: Configuration parameters
-        - label: Functionality/features
-        - label: REST API
-        - label: Administration/development
-        - label: Other
+        - Installation instructions
+        - Configuration parameters
+        - Functionality/features
+        - REST API
+        - Administration/development
+        - Other
+    validations:
+      required: true
   - type: textarea
     attributes:
       label: Proposed Changes
@@ -25,7 +25,7 @@ discussions.
 
 ### Slack
 
-For real-time chat, you can join the **#netbox** Slack channel on [NetDev Community](https://join.slack.com/t/netdev-community/shared_invite/zt-mtts8g0n-Sm6Wutn62q_M4OdsaIycrQ).
+For real-time chat, you can join the **#netbox** Slack channel on [NetDev Community](https://slack.netbox.dev/).
 Unfortunately, the Slack channel does not provide long-term retention of chat
 history, so try to avoid it for any discussions would benefit from being
 preserved for future reference.
@@ -15,7 +15,7 @@ The complete documentation for NetBox can be found at [Read the Docs](https://ne
 ### Discussion
 
 * [GitHub Discussions](https://github.com/netbox-community/netbox/discussions) - Discussion forum hosted by GitHub; ideal for Q&A and other structured discussions
-* [Slack](https://join.slack.com/t/netdev-community/shared_invite/zt-mtts8g0n-Sm6Wutn62q_M4OdsaIycrQ) - Real-time chat hosted by the NetDev Community; best for unstructured discussion or just hanging out
+* [Slack](https://slack.netbox.dev/) - Real-time chat hosted by the NetDev Community; best for unstructured discussion or just hanging out
 * [Google Group](https://groups.google.com/g/netbox-discuss) - Legacy mailing list; slowly being replaced by GitHub discussions
 
 ### Build Status
@@ -1,6 +1,6 @@
 # Caching
 
-NetBox supports database query caching using [django-cacheops](https://github.com/Suor/django-cacheops) and Redis. When a query is made, the results are cached in Redis for a short period of time, as defined by the [CACHE_TIMEOUT](../../configuration/optional-settings/#cache_timeout) parameter (15 minutes by default). Within that time, all recurrences of that specific query will return the pre-fetched results from the cache.
+NetBox supports database query caching using [django-cacheops](https://github.com/Suor/django-cacheops) and Redis. When a query is made, the results are cached in Redis for a short period of time, as defined by the [CACHE_TIMEOUT](../configuration/optional-settings.md#cache_timeout) parameter (15 minutes by default). Within that time, all recurrences of that specific query will return the pre-fetched results from the cache.
 
 If a change is made to any of the objects returned by the query within that time, or if the timeout expires, the results are automatically invalidated and the next request for those results will be sent to the database.
 
@@ -22,7 +22,7 @@ GET /api/dcim/devices/1/napalm/?method=get_environment
 
 ## Authentication
 
-By default, the [`NAPALM_USERNAME`](../../configuration/optional-settings/#napalm_username) and [`NAPALM_PASSWORD`](../../configuration/optional-settings/#napalm_password) configuration parameters are used for NAPALM authentication. They can be overridden for an individual API call by specifying the `X-NAPALM-Username` and `X-NAPALM-Password` headers.
+By default, the [`NAPALM_USERNAME`](../configuration/optional-settings.md#napalm_username) and [`NAPALM_PASSWORD`](../configuration/optional-settings.md#napalm_password) configuration parameters are used for NAPALM authentication. They can be overridden for an individual API call by specifying the `X-NAPALM-Username` and `X-NAPALM-Password` headers.
 
 ```
 $ curl "http://localhost/api/dcim/devices/1/napalm/?method=get_environment" \
@@ -12,7 +12,7 @@ A NetBox report is a mechanism for validating the integrity of data within NetBo
 
 ## Writing Reports
 
-Reports must be saved as files in the [`REPORTS_ROOT`](../../configuration/optional-settings/#reports_root) path (which defaults to `netbox/reports/`). Each file created within this path is considered a separate module. Each module holds one or more reports (Python classes), each of which performs a certain function. The logic of each report is broken into discrete test methods, each of which applies a small portion of the logic comprising the overall test.
+Reports must be saved as files in the [`REPORTS_ROOT`](../configuration/optional-settings.md#reports_root) path (which defaults to `netbox/reports/`). Each file created within this path is considered a separate module. Each module holds one or more reports (Python classes), each of which performs a certain function. The logic of each report is broken into discrete test methods, each of which applies a small portion of the logic comprising the overall test.
 
 !!! warning
     The reports path includes a file named `__init__.py`, which registers the path as a Python module. Do not delete this file.
@@ -12,13 +12,16 @@ NetBox employs a [PostgreSQL](https://www.postgresql.org/) database, so general
 Use the `pg_dump` utility to export the entire database to a file:
 
 ```no-highlight
-pg_dump netbox > netbox.sql
+pg_dump --username netbox --password --host localhost netbox > netbox.sql
 ```
 
+!!! note
+    You may need to change the username, host, and/or database in the command above to match your installation.
+
 When replicating a production database for development purposes, you may find it convenient to exclude changelog data, which can easily account for the bulk of a database's size. To do this, exclude the `extras_objectchange` table data from the export. The table will still be included in the output file, but will not be populated with any data.
 
 ```no-highlight
-pg_dump --exclude-table-data=extras_objectchange netbox > netbox.sql
+pg_dump ... --exclude-table-data=extras_objectchange netbox > netbox.sql
 ```
 
 ### Load an Exported Database
@@ -41,7 +44,7 @@ Keep in mind that PostgreSQL user accounts and permissions are not included with
 If you want to export only the database schema, and not the data itself (e.g. for development reference), do the following:
 
 ```no-highlight
-pg_dump -s netbox > netbox_schema.sql
+pg_dump --username netbox --password --host localhost -s netbox > netbox_schema.sql
 ```
 
 ---
@@ -281,6 +281,14 @@ Setting this to True will display a "maintenance mode" banner at the top of ever
 
 ---
 
+## MAPS_URL
+
+Default: `https://maps.google.com/?q=` (Google Maps)
+
+This specifies the URL to use when presenting a map of a physical location by street address or GPS coordinates. The URL must accept either a free-form street address or a comma-separated pair of numeric coordinates appended to it.
+
+---
+
 ## MAX_PAGE_SIZE
 
 Default: 1000
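To make the new `MAPS_URL` behaviour concrete: the parameter is only a URL prefix, and either a street address or a `latitude,longitude` pair is appended to it. A rough sketch of the idea follows; the helper function is hypothetical and not NetBox code.

```python
from urllib.parse import quote_plus

MAPS_URL = 'https://maps.google.com/?q='  # default; any compatible mapping service works

def build_map_link(query):
    """Append a free-form address or a 'lat,long' pair to the configured base URL."""
    return MAPS_URL + quote_plus(query)

print(build_map_link("52.52,13.405"))          # coordinate pair
print(build_map_link("100 Main St, Anytown"))  # free-form street address
```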
@@ -301,7 +309,7 @@ The file path to the location where media files (such as image attachments) are
 
 Default: False
 
-Toggle the availability Prometheus-compatible metrics at `/metrics`. See the [Prometheus Metrics](../../additional-features/prometheus-metrics/) documentation for more details.
+Toggle the availability Prometheus-compatible metrics at `/metrics`. See the [Prometheus Metrics](../additional-features/prometheus-metrics.md) documentation for more details.
 
 ---
 
@@ -5,8 +5,8 @@
 Getting started with NetBox development is pretty straightforward, and should feel very familiar to anyone with Django development experience. There are a few things you'll need:
 
 * A Linux system or environment
-* A PostgreSQL server, which can be installed locally [per the documentation](/installation/1-postgresql/)
-* A Redis server, which can also be [installed locally](/installation/2-redis/)
+* A PostgreSQL server, which can be installed locally [per the documentation](../installation/1-postgresql.md)
+* A Redis server, which can also be [installed locally](../installation/2-redis.md)
 * A supported version of Python
 
 ### Fork the Repo
@@ -8,7 +8,7 @@ There are several official forums for communication among the developers and com
 
 * [GitHub issues](https://github.com/netbox-community/netbox/issues) - All feature requests, bug reports, and other substantial changes to the code base **must** be documented in an issue.
 * [GitHub Discussions](https://github.com/netbox-community/netbox/discussions) - The preferred forum for general discussion and support issues. Ideal for shaping a feature request prior to submitting an issue.
-* [#netbox on NetDev Community Slack](https://join.slack.com/t/netdev-community/shared_invite/zt-mtts8g0n-Sm6Wutn62q_M4OdsaIycrQ) - Good for quick chats. Avoid any discussion that might need to be referenced later on, as the chat history is not retained long.
+* [#netbox on NetDev Community Slack](https://slack.netbox.dev/) - Good for quick chats. Avoid any discussion that might need to be referenced later on, as the chat history is not retained long.
 * [Google Group](https://groups.google.com/g/netbox-discuss) - Legacy mailing list; slowly being phased out in favor of GitHub discussions.
 
 ## Governance
@@ -113,7 +113,7 @@ cd /opt/netbox/netbox/netbox/
 sudo cp configuration.example.py configuration.py
 ```
 
-Open `configuration.py` with your preferred editor to begin configuring NetBox. NetBox offers [many configuration parameters](/configuration/), but only the following four are required for new installations:
+Open `configuration.py` with your preferred editor to begin configuring NetBox. NetBox offers [many configuration parameters](../configuration/index.md), but only the following four are required for new installations:
 
 * `ALLOWED_HOSTS`
 * `DATABASE`
@@ -136,7 +136,7 @@ ALLOWED_HOSTS = ['*']
 
 ### DATABASE
 
-This parameter holds the database configuration details. You must define the username and password used when you configured PostgreSQL. If the service is running on a remote host, update the `HOST` and `PORT` parameters accordingly. See the [configuration documentation](/configuration/required-settings/#database) for more detail on individual parameters.
+This parameter holds the database configuration details. You must define the username and password used when you configured PostgreSQL. If the service is running on a remote host, update the `HOST` and `PORT` parameters accordingly. See the [configuration documentation](../configuration/required-settings.md#database) for more detail on individual parameters.
 
 ```python
 DATABASE = {
@@ -151,7 +151,7 @@ DATABASE = {
 
 ### REDIS
 
-Redis is a in-memory key-value store used by NetBox for caching and background task queuing. Redis typically requires minimal configuration; the values below should suffice for most installations. See the [configuration documentation](/configuration/required-settings/#redis) for more detail on individual parameters.
+Redis is a in-memory key-value store used by NetBox for caching and background task queuing. Redis typically requires minimal configuration; the values below should suffice for most installations. See the [configuration documentation](../configuration/required-settings.md#redis) for more detail on individual parameters.
 
 Note that NetBox requires the specification of two separate Redis databases: `tasks` and `caching`. These may both be provided by the same Redis service, however each should have a unique numeric database ID.
 
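As a point of reference for the paragraph above (unchanged by this commit apart from the link), the two Redis databases are declared separately in `configuration.py`. The sketch below assumes a single local Redis service; host, port, and password are placeholders, and only the distinct `DATABASE` IDs matter.

```python
# Sketch of the REDIS setting: one Redis service, two logical databases.
REDIS = {
    'tasks': {
        'HOST': 'localhost',
        'PORT': 6379,
        'PASSWORD': '',
        'DATABASE': 0,
        'SSL': False,
    },
    'caching': {
        'HOST': 'localhost',
        'PORT': 6379,
        'PASSWORD': '',
        'DATABASE': 1,  # must differ from the 'tasks' database ID
        'SSL': False,
    },
}
```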
@@ -203,7 +203,7 @@ sudo echo napalm >> /opt/netbox/local_requirements.txt
 
 ### Remote File Storage
 
-By default, NetBox will use the local filesystem to store uploaded files. To use a remote filesystem, install the [`django-storages`](https://django-storages.readthedocs.io/en/stable/) library and configure your [desired storage backend](/configuration/optional-settings/#storage_backend) in `configuration.py`.
+By default, NetBox will use the local filesystem to store uploaded files. To use a remote filesystem, install the [`django-storages`](https://django-storages.readthedocs.io/en/stable/) library and configure your [desired storage backend](../configuration/optional-settings.md#storage_backend) in `configuration.py`.
 
 ```no-highlight
 sudo echo django-storages >> /opt/netbox/local_requirements.txt
@@ -142,7 +142,7 @@ AUTH_LDAP_CACHE_TIMEOUT = 3600
 
 `systemctl restart netbox` restarts the NetBox service, and initiates any changes made to `ldap_config.py`. If there are syntax errors present, the NetBox process will not spawn an instance, and errors should be logged to `/var/log/messages`.
 
-For troubleshooting LDAP user/group queries, add or merge the following [logging](/configuration/optional-settings.md#logging) configuration to `configuration.py`:
+For troubleshooting LDAP user/group queries, add or merge the following [logging](../configuration/optional-settings.md#logging) configuration to `configuration.py`:
 
 ```python
 LOGGING = {
@@ -2,7 +2,7 @@
 
 ## Review the Release Notes
 
-Prior to upgrading your NetBox instance, be sure to carefully review all [release notes](../../release-notes/) that have been published since your current version was released. Although the upgrade process typically does not involve additional work, certain releases may introduce breaking or backward-incompatible changes. These are called out in the release notes under the release in which the change went into effect.
+Prior to upgrading your NetBox instance, be sure to carefully review all [release notes](../release-notes/index.md) that have been published since your current version was released. Although the upgrade process typically does not involve additional work, certain releases may introduce breaking or backward-incompatible changes. These are called out in the release notes under the release in which the change went into effect.
 
 ## Update Dependencies to Required Versions
 
@@ -1,5 +1,29 @@
 # NetBox v2.10
 
+## v2.10.7 (2021-03-25)
+
+### Enhancements
+
+* [#5641](https://github.com/netbox-community/netbox/issues/5641) - Allow filtering device components by label
+* [#5723](https://github.com/netbox-community/netbox/issues/5723) - Allow customization of the geographic mapping service via `MAPS_URL` config parameter
+* [#5736](https://github.com/netbox-community/netbox/issues/5736) - Allow changing site assignment when bulk editing devices
+* [#5953](https://github.com/netbox-community/netbox/issues/5953) - Support Markdown rendering for custom script descriptions
+* [#6040](https://github.com/netbox-community/netbox/issues/6040) - Add UI search fields for asset tag for devices and racks
+
+### Bug Fixes
+
+* [#5595](https://github.com/netbox-community/netbox/issues/5595) - Restore ability to delete an uploaded device type image
+* [#5650](https://github.com/netbox-community/netbox/issues/5650) - Denote when the total length of a cable trace may exceed the indicated value
+* [#5962](https://github.com/netbox-community/netbox/issues/5962) - Ensure consistent display of change log action labels
+* [#5966](https://github.com/netbox-community/netbox/issues/5966) - Skip Markdown reference link when tabbing through form fields
+* [#5977](https://github.com/netbox-community/netbox/issues/5977) - Correct validation of `RELEASE_CHECK_URL` config parameter
+* [#6006](https://github.com/netbox-community/netbox/issues/6006) - Fix VLAN group/site association for bulk prefix import
+* [#6010](https://github.com/netbox-community/netbox/issues/6010) - Eliminate duplicate virtual chassis search results
+* [#6012](https://github.com/netbox-community/netbox/issues/6012) - Pre-populate attributes when creating an available child prefix via the UI
+* [#6023](https://github.com/netbox-community/netbox/issues/6023) - Fix display of bottom banner with uBlock Origin enabled
+
+---
+
 ## v2.10.6 (2021-03-09)
 
 ### Enhancements
@@ -19,6 +43,8 @@
 * [#5935](https://github.com/netbox-community/netbox/issues/5935) - Fix filtering prefixes list by multiple prefix values
 * [#5948](https://github.com/netbox-community/netbox/issues/5948) - Invalidate cached queries when running `renaturalize`
 
+---
+
 ## v2.10.5 (2021-02-24)
 
 ### Bug Fixes
@@ -20,7 +20,7 @@ http://netbox/api/dcim/sites/
 }
 ```
 
-A token is not required for read-only operations which have been exempted from permissions enforcement (using the [`EXEMPT_VIEW_PERMISSIONS`](../../configuration/optional-settings/#exempt_view_permissions) configuration parameter). However, if a token _is_ required but not present in a request, the API will return a 403 (Forbidden) response:
+A token is not required for read-only operations which have been exempted from permissions enforcement (using the [`EXEMPT_VIEW_PERMISSIONS`](../configuration/optional-settings.md#exempt_view_permissions) configuration parameter). However, if a token _is_ required but not present in a request, the API will return a 403 (Forbidden) response:
 
 ```
 $ curl http://netbox/api/dcim/sites/
@@ -269,7 +269,7 @@ The brief format is supported for both lists and individual objects.
 
 ### Excluding Config Contexts
 
-When retrieving devices and virtual machines via the REST API, each will included its rendered [configuration context data](../models/extras/configcontext/) by default. Users with large amounts of context data will likely observe suboptimal performance when returning multiple objects, particularly with very high page sizes. To combat this, context data may be excluded from the response data by attaching the query parameter `?exclude=config_context` to the request. This parameter works for both list and detail views.
+When retrieving devices and virtual machines via the REST API, each will included its rendered [configuration context data](../models/extras/configcontext.md) by default. Users with large amounts of context data will likely observe suboptimal performance when returning multiple objects, particularly with very high page sizes. To combat this, context data may be excluded from the response data by attaching the query parameter `?exclude=config_context` to the request. This parameter works for both list and detail views.
 
 ## Pagination
 
@@ -308,7 +308,7 @@ Vary: Accept
 }
 ```
 
-The default page is determined by the [`PAGINATE_COUNT`](../../configuration/optional-settings/#paginate_count) configuration parameter, which defaults to 50. However, this can be overridden per request by specifying the desired `offset` and `limit` query parameters. For example, if you wish to retrieve a hundred devices at a time, you would make a request for:
+The default page is determined by the [`PAGINATE_COUNT`](../configuration/optional-settings.md#paginate_count) configuration parameter, which defaults to 50. However, this can be overridden per request by specifying the desired `offset` and `limit` query parameters. For example, if you wish to retrieve a hundred devices at a time, you would make a request for:
 
 ```
 http://netbox/api/dcim/devices/?limit=100
@@ -325,7 +325,7 @@ The response will return devices 1 through 100. The URL provided in the `next` a
 }
 ```
 
-The maximum number of objects that can be returned is limited by the [`MAX_PAGE_SIZE`](../../configuration/optional-settings/#max_page_size) configuration parameter, which is 1000 by default. Setting this to `0` or `None` will remove the maximum limit. An API consumer can then pass `?limit=0` to retrieve _all_ matching objects with a single request.
+The maximum number of objects that can be returned is limited by the [`MAX_PAGE_SIZE`](../configuration/optional-settings.md#max_page_size) configuration parameter, which is 1000 by default. Setting this to `0` or `None` will remove the maximum limit. An API consumer can then pass `?limit=0` to retrieve _all_ matching objects with a single request.
 
 !!! warning
     Disabling the page size limit introduces a potential for very resource-intensive requests, since one API request can effectively retrieve an entire table from the database.
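Putting the `limit`/`offset` and `MAX_PAGE_SIZE` pieces together, a client typically follows the `next` URL until it is null. The sketch below is illustrative only and not part of this commit; the host and token are placeholders.

```python
import requests

url = 'http://netbox/api/dcim/devices/?limit=100'
headers = {'Authorization': 'Token 0123456789abcdef0123456789abcdef01234567'}

devices = []
while url:
    page = requests.get(url, headers=headers).json()
    devices.extend(page['results'])
    url = page['next']  # None once the final page has been retrieved

print(f"Retrieved {len(devices)} devices in pages of up to 100")
```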
@@ -387,7 +387,7 @@ curl -s -X GET http://netbox/api/ipam/ip-addresses/5618/ | jq '.'
 
 ### Creating a New Object
 
-To create a new object, make a `POST` request to the model's _list_ endpoint with JSON data pertaining to the object being created. Note that a REST API token is required for all write operations; see the [authentication documentation](../authentication/) for more information. Also be sure to set the `Content-Type` HTTP header to `application/json`.
+To create a new object, make a `POST` request to the model's _list_ endpoint with JSON data pertaining to the object being created. Note that a REST API token is required for all write operations; see the [authentication documentation](../authentication/index.md) for more information. Also be sure to set the `Content-Type` HTTP header to `application/json`.
 
 ```no-highlight
 curl -s -X POST \
@@ -4,7 +4,7 @@ As with most other objects, the REST API can be used to view, create, modify, an
 
 ## Generating a Session Key
 
-In order to encrypt or decrypt secret data, a session key must be attached to the API request. To generate a session key, send an authenticated request to the `/api/secrets/get-session-key/` endpoint with the private RSA key which matches your [UserKey](../../core-functionality/secrets/#user-keys). The private key must be POSTed with the name `private_key`.
+In order to encrypt or decrypt secret data, a session key must be attached to the API request. To generate a session key, send an authenticated request to the `/api/secrets/get-session-key/` endpoint with the private RSA key which matches your [UserKey](../core-functionality/secrets.md#user-keys). The private key must be POSTed with the name `private_key`.
 
 ```no-highlight
 $ curl -X POST http://netbox/api/secrets/get-session-key/ \
@@ -3,13 +3,14 @@ from django.contrib.contenttypes.models import ContentType
 from drf_yasg.utils import swagger_serializer_method
 from rest_framework import serializers
 from rest_framework.validators import UniqueTogetherValidator
+from timezone_field.rest_framework import TimeZoneSerializerField
 
 from dcim.choices import *
 from dcim.constants import *
 from dcim.models import *
 from ipam.api.nested_serializers import NestedIPAddressSerializer, NestedVLANSerializer
 from ipam.models import VLAN
-from netbox.api import ChoiceField, ContentTypeField, SerializedPKRelatedField, TimeZoneField
+from netbox.api import ChoiceField, ContentTypeField, SerializedPKRelatedField
 from netbox.api.serializers import (
     NestedGroupModelSerializer, OrganizationalModelSerializer, PrimaryModelSerializer, ValidatedModelSerializer,
     WritableNestedSerializer,
@@ -106,7 +107,7 @@ class SiteSerializer(PrimaryModelSerializer):
     region = NestedRegionSerializer(required=False, allow_null=True)
     group = NestedSiteGroupSerializer(required=False, allow_null=True)
     tenant = NestedTenantSerializer(required=False, allow_null=True)
-    time_zone = TimeZoneField(required=False)
+    time_zone = TimeZoneSerializerField(required=False)
     circuit_count = serializers.IntegerField(read_only=True)
     device_count = serializers.IntegerField(read_only=True)
     prefix_count = serializers.IntegerField(read_only=True)
@@ -857,7 +857,7 @@ class ConsolePortFilterSet(BaseFilterSet, DeviceComponentFilterSet, CableTermina
 
     class Meta:
         model = ConsolePort
-        fields = ['id', 'name', 'description']
+        fields = ['id', 'name', 'label', 'description']
 
 
 class ConsoleServerPortFilterSet(
@@ -873,7 +873,7 @@ class ConsoleServerPortFilterSet(
 
     class Meta:
         model = ConsoleServerPort
-        fields = ['id', 'name', 'description']
+        fields = ['id', 'name', 'label', 'description']
 
 
 class PowerPortFilterSet(BaseFilterSet, DeviceComponentFilterSet, CableTerminationFilterSet, PathEndpointFilterSet):
@@ -884,7 +884,7 @@ class PowerPortFilterSet(BaseFilterSet, DeviceComponentFilterSet, CableTerminati
 
     class Meta:
         model = PowerPort
-        fields = ['id', 'name', 'maximum_draw', 'allocated_draw', 'description']
+        fields = ['id', 'name', 'label', 'maximum_draw', 'allocated_draw', 'description']
 
 
 class PowerOutletFilterSet(BaseFilterSet, DeviceComponentFilterSet, CableTerminationFilterSet, PathEndpointFilterSet):
@@ -895,7 +895,7 @@ class PowerOutletFilterSet(BaseFilterSet, DeviceComponentFilterSet, CableTermina
 
     class Meta:
         model = PowerOutlet
-        fields = ['id', 'name', 'feed_leg', 'description']
+        fields = ['id', 'name', 'label', 'feed_leg', 'description']
 
 
 class InterfaceFilterSet(BaseFilterSet, DeviceComponentFilterSet, CableTerminationFilterSet, PathEndpointFilterSet):
@@ -946,7 +946,7 @@ class InterfaceFilterSet(BaseFilterSet, DeviceComponentFilterSet, CableTerminati
 
     class Meta:
         model = Interface
-        fields = ['id', 'name', 'type', 'enabled', 'mtu', 'mgmt_only', 'mode', 'description']
+        fields = ['id', 'name', 'label', 'type', 'enabled', 'mtu', 'mgmt_only', 'mode', 'description']
 
     def filter_device(self, queryset, name, value):
         try:
@@ -1000,21 +1000,21 @@ class FrontPortFilterSet(BaseFilterSet, DeviceComponentFilterSet, CableTerminati
 
     class Meta:
         model = FrontPort
-        fields = ['id', 'name', 'type', 'description']
+        fields = ['id', 'name', 'label', 'type', 'description']
 
 
 class RearPortFilterSet(BaseFilterSet, DeviceComponentFilterSet, CableTerminationFilterSet):
 
     class Meta:
         model = RearPort
-        fields = ['id', 'name', 'type', 'positions', 'description']
+        fields = ['id', 'name', 'label', 'type', 'positions', 'description']
 
 
 class DeviceBayFilterSet(BaseFilterSet, DeviceComponentFilterSet):
 
     class Meta:
         model = DeviceBay
-        fields = ['id', 'name', 'description']
+        fields = ['id', 'name', 'label', 'description']
 
 
 class InventoryItemFilterSet(BaseFilterSet, DeviceComponentFilterSet):
@@ -1075,7 +1075,7 @@ class InventoryItemFilterSet(BaseFilterSet, DeviceComponentFilterSet):
 
     class Meta:
         model = InventoryItem
-        fields = ['id', 'name', 'part_id', 'asset_tag', 'discovered']
+        fields = ['id', 'name', 'label', 'part_id', 'asset_tag', 'discovered']
 
     def search(self, queryset, name, value):
         if not value.strip():
@@ -1167,7 +1167,7 @@ class VirtualChassisFilterSet(BaseFilterSet):
             Q(members__name__icontains=value) |
             Q(domain__icontains=value)
         )
-        return queryset.filter(qs_filter)
+        return queryset.filter(qs_filter).distinct()
 
 
 class CableFilterSet(BaseFilterSet):
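The `label` field added to each device-component FilterSet above is exercised by the new unit tests later in this diff; from a shell or script it could be used roughly as sketched below. The module path and values are illustrative, assuming the filtersets live in `dcim.filters` as on this branch.

```python
from dcim.filters import InterfaceFilterSet
from dcim.models import Interface

# Match interfaces whose label is either 'A' or 'B', mirroring the new tests
filterset = InterfaceFilterSet({'label': ['A', 'B']}, queryset=Interface.objects.all())
print(filterset.qs.count())
```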
@@ -56,12 +56,18 @@ def get_device_by_name_or_pk(name):
 
 class DeviceComponentFilterForm(BootstrapMixin, CustomFieldFilterForm):
     field_order = [
-        'q', 'region_id', 'site_group_id', 'site_id'
+        'q', 'name', 'label', 'region_id', 'site_group_id', 'site_id',
     ]
     q = forms.CharField(
         required=False,
         label=_('Search')
     )
+    name = forms.CharField(
+        required=False
+    )
+    label = forms.CharField(
+        required=False
+    )
     region_id = DynamicModelMultipleChoiceField(
         queryset=Region.objects.all(),
         required=False,
@@ -880,6 +886,9 @@ class RackFilterForm(BootstrapMixin, TenancyFilterForm, CustomFieldFilterForm):
         null_option='None',
         label=_('Role')
     )
+    asset_tag = forms.CharField(
+        required=False
+    )
     tag = TagFilterField(model)
 
 
@@ -1149,10 +1158,10 @@ class DeviceTypeForm(BootstrapMixin, CustomFieldModelForm):
         widgets = {
             'subdevice_role': StaticSelect2(),
             # Exclude SVG images (unsupported by PIL)
-            'front_image': forms.FileInput(attrs={
+            'front_image': forms.ClearableFileInput(attrs={
                 'accept': 'image/bmp,image/gif,image/jpeg,image/png,image/tiff'
             }),
-            'rear_image': forms.FileInput(attrs={
+            'rear_image': forms.ClearableFileInput(attrs={
                 'accept': 'image/bmp,image/gif,image/jpeg,image/png,image/tiff'
             })
         }
@@ -2344,6 +2353,10 @@ class DeviceBulkEditForm(BootstrapMixin, AddRemoveTagsForm, CustomFieldBulkEditF
         queryset=DeviceRole.objects.all(),
         required=False
     )
+    site = DynamicModelChoiceField(
+        queryset=Site.objects.all(),
+        required=False
+    )
     tenant = DynamicModelChoiceField(
         queryset=Tenant.objects.all(),
         required=False
@@ -2373,7 +2386,7 @@ class DeviceFilterForm(BootstrapMixin, LocalConfigContextFilterForm, TenancyFilt
     model = Device
     field_order = [
         'q', 'region_id', 'site_id', 'location_id', 'rack_id', 'status', 'role_id', 'tenant_group_id', 'tenant_id',
-        'manufacturer_id', 'device_type_id', 'mac_address', 'has_primary_ip',
+        'manufacturer_id', 'device_type_id', 'asset_tag', 'mac_address', 'has_primary_ip',
     ]
     q = forms.CharField(
         required=False,
@@ -2437,6 +2450,9 @@ class DeviceFilterForm(BootstrapMixin, LocalConfigContextFilterForm, TenancyFilt
         required=False,
         widget=StaticSelect2Multiple()
     )
+    asset_tag = forms.CharField(
+        required=False
+    )
     mac_address = forms.CharField(
         required=False,
         label='MAC address'
@@ -488,17 +488,23 @@ class CablePath(BigIDModel):
 
     def get_total_length(self):
         """
-        Return the sum of the length of each cable in the path.
+        Return a tuple containing the sum of the length of each cable in the path
+        and a flag indicating whether the length is definitive.
         """
         cable_ids = [
             # Starting from the first element, every third element in the path should be a Cable
             decompile_path_node(self.path[i])[1] for i in range(0, len(self.path), 3)
         ]
-        return Cable.objects.filter(id__in=cable_ids).aggregate(total=Sum('_abs_length'))['total']
+        cables = Cable.objects.filter(id__in=cable_ids, _abs_length__isnull=False)
+        total_length = cables.aggregate(total=Sum('_abs_length'))['total']
+        is_definitive = len(cables) == len(cable_ids)
+
+        return total_length, is_definitive
 
     def get_split_nodes(self):
         """
         Return all available next segments in a split cable path.
         """
         rearport = path_node_to_object(self.path[-1])
 
         return FrontPort.objects.filter(rear_port=rearport)
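Because `get_total_length()` now returns a pair rather than a bare sum, callers must unpack it; the flag lets the UI denote traces whose true total may exceed the displayed value (#5650). A hedged usage sketch, where `path` stands for any `CablePath` instance:

```python
total_length, is_definitive = path.get_total_length()

if total_length is not None and not is_definitive:
    # Some cables in the path have no length recorded, so the real total
    # is at least this value.
    print(f"Total length >= {total_length} (not definitive)")
```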
@ -1601,9 +1601,9 @@ class ConsolePortTestCase(TestCase):
|
|||||||
ConsoleServerPort.objects.bulk_create(console_server_ports)
|
ConsoleServerPort.objects.bulk_create(console_server_ports)
|
||||||
|
|
||||||
console_ports = (
|
console_ports = (
|
||||||
ConsolePort(device=devices[0], name='Console Port 1', description='First'),
|
ConsolePort(device=devices[0], name='Console Port 1', label='A', description='First'),
|
||||||
ConsolePort(device=devices[1], name='Console Port 2', description='Second'),
|
ConsolePort(device=devices[1], name='Console Port 2', label='B', description='Second'),
|
||||||
ConsolePort(device=devices[2], name='Console Port 3', description='Third'),
|
ConsolePort(device=devices[2], name='Console Port 3', label='C', description='Third'),
|
||||||
)
|
)
|
||||||
ConsolePort.objects.bulk_create(console_ports)
|
ConsolePort.objects.bulk_create(console_ports)
|
||||||
|
|
||||||
@ -1620,6 +1620,10 @@ class ConsolePortTestCase(TestCase):
|
|||||||
params = {'name': ['Console Port 1', 'Console Port 2']}
|
params = {'name': ['Console Port 1', 'Console Port 2']}
|
||||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
|
|
||||||
|
def test_label(self):
|
||||||
|
params = {'label': ['A', 'B']}
|
||||||
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
|
|
||||||
def test_description(self):
|
def test_description(self):
|
||||||
params = {'description': ['First', 'Second']}
|
params = {'description': ['First', 'Second']}
|
||||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
@ -1713,9 +1717,9 @@ class ConsoleServerPortTestCase(TestCase):
|
|||||||
ConsolePort.objects.bulk_create(console_ports)
|
ConsolePort.objects.bulk_create(console_ports)
|
||||||
|
|
||||||
console_server_ports = (
|
console_server_ports = (
|
||||||
ConsoleServerPort(device=devices[0], name='Console Server Port 1', description='First'),
|
ConsoleServerPort(device=devices[0], name='Console Server Port 1', label='A', description='First'),
|
||||||
ConsoleServerPort(device=devices[1], name='Console Server Port 2', description='Second'),
|
ConsoleServerPort(device=devices[1], name='Console Server Port 2', label='B', description='Second'),
|
||||||
ConsoleServerPort(device=devices[2], name='Console Server Port 3', description='Third'),
|
ConsoleServerPort(device=devices[2], name='Console Server Port 3', label='C', description='Third'),
|
||||||
)
|
)
|
||||||
ConsoleServerPort.objects.bulk_create(console_server_ports)
|
ConsoleServerPort.objects.bulk_create(console_server_ports)
|
||||||
|
|
||||||
@ -1732,6 +1736,10 @@ class ConsoleServerPortTestCase(TestCase):
|
|||||||
params = {'name': ['Console Server Port 1', 'Console Server Port 2']}
|
params = {'name': ['Console Server Port 1', 'Console Server Port 2']}
|
||||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
|
|
||||||
|
def test_label(self):
|
||||||
|
params = {'label': ['A', 'B']}
|
||||||
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
|
|
||||||
def test_description(self):
|
def test_description(self):
|
||||||
params = {'description': ['First', 'Second']}
|
params = {'description': ['First', 'Second']}
|
||||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
@ -1825,9 +1833,9 @@ class PowerPortTestCase(TestCase):
|
|||||||
PowerOutlet.objects.bulk_create(power_outlets)
|
PowerOutlet.objects.bulk_create(power_outlets)
|
||||||
|
|
||||||
power_ports = (
|
power_ports = (
|
||||||
PowerPort(device=devices[0], name='Power Port 1', maximum_draw=100, allocated_draw=50, description='First'),
|
PowerPort(device=devices[0], name='Power Port 1', label='A', maximum_draw=100, allocated_draw=50, description='First'),
|
||||||
PowerPort(device=devices[1], name='Power Port 2', maximum_draw=200, allocated_draw=100, description='Second'),
|
PowerPort(device=devices[1], name='Power Port 2', label='B', maximum_draw=200, allocated_draw=100, description='Second'),
|
||||||
PowerPort(device=devices[2], name='Power Port 3', maximum_draw=300, allocated_draw=150, description='Third'),
|
PowerPort(device=devices[2], name='Power Port 3', label='C', maximum_draw=300, allocated_draw=150, description='Third'),
|
||||||
)
|
)
|
||||||
PowerPort.objects.bulk_create(power_ports)
|
PowerPort.objects.bulk_create(power_ports)
|
||||||
|
|
||||||
@ -1844,6 +1852,10 @@ class PowerPortTestCase(TestCase):
|
|||||||
params = {'name': ['Power Port 1', 'Power Port 2']}
|
params = {'name': ['Power Port 1', 'Power Port 2']}
|
||||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
|
|
||||||
|
def test_label(self):
|
||||||
|
params = {'label': ['A', 'B']}
|
||||||
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
|
|
||||||
def test_description(self):
|
def test_description(self):
|
||||||
params = {'description': ['First', 'Second']}
|
params = {'description': ['First', 'Second']}
|
||||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
@ -1945,9 +1957,9 @@ class PowerOutletTestCase(TestCase):
|
|||||||
PowerPort.objects.bulk_create(power_ports)
|
PowerPort.objects.bulk_create(power_ports)
|
||||||
|
|
||||||
power_outlets = (
|
power_outlets = (
|
||||||
PowerOutlet(device=devices[0], name='Power Outlet 1', feed_leg=PowerOutletFeedLegChoices.FEED_LEG_A, description='First'),
|
PowerOutlet(device=devices[0], name='Power Outlet 1', label='A', feed_leg=PowerOutletFeedLegChoices.FEED_LEG_A, description='First'),
|
||||||
PowerOutlet(device=devices[1], name='Power Outlet 2', feed_leg=PowerOutletFeedLegChoices.FEED_LEG_B, description='Second'),
|
PowerOutlet(device=devices[1], name='Power Outlet 2', label='B', feed_leg=PowerOutletFeedLegChoices.FEED_LEG_B, description='Second'),
|
||||||
PowerOutlet(device=devices[2], name='Power Outlet 3', feed_leg=PowerOutletFeedLegChoices.FEED_LEG_C, description='Third'),
|
PowerOutlet(device=devices[2], name='Power Outlet 3', label='C', feed_leg=PowerOutletFeedLegChoices.FEED_LEG_C, description='Third'),
|
||||||
)
|
)
|
||||||
PowerOutlet.objects.bulk_create(power_outlets)
|
PowerOutlet.objects.bulk_create(power_outlets)
|
||||||
|
|
||||||
@ -1964,6 +1976,10 @@ class PowerOutletTestCase(TestCase):
|
|||||||
params = {'name': ['Power Outlet 1', 'Power Outlet 2']}
|
params = {'name': ['Power Outlet 1', 'Power Outlet 2']}
|
||||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
|
|
||||||
|
def test_label(self):
|
||||||
|
params = {'label': ['A', 'B']}
|
||||||
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
|
|
||||||
def test_description(self):
|
def test_description(self):
|
||||||
params = {'description': ['First', 'Second']}
|
params = {'description': ['First', 'Second']}
|
||||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
@ -2056,12 +2072,12 @@ class InterfaceTestCase(TestCase):
|
|||||||
Device.objects.bulk_create(devices)
|
Device.objects.bulk_create(devices)
|
||||||
|
|
||||||
interfaces = (
|
interfaces = (
|
||||||
Interface(device=devices[0], name='Interface 1', type=InterfaceTypeChoices.TYPE_1GE_SFP, enabled=True, mgmt_only=True, mtu=100, mode=InterfaceModeChoices.MODE_ACCESS, mac_address='00-00-00-00-00-01', description='First'),
|
Interface(device=devices[0], name='Interface 1', label='A', type=InterfaceTypeChoices.TYPE_1GE_SFP, enabled=True, mgmt_only=True, mtu=100, mode=InterfaceModeChoices.MODE_ACCESS, mac_address='00-00-00-00-00-01', description='First'),
|
||||||
Interface(device=devices[1], name='Interface 2', type=InterfaceTypeChoices.TYPE_1GE_GBIC, enabled=True, mgmt_only=True, mtu=200, mode=InterfaceModeChoices.MODE_TAGGED, mac_address='00-00-00-00-00-02', description='Second'),
|
Interface(device=devices[1], name='Interface 2', label='B', type=InterfaceTypeChoices.TYPE_1GE_GBIC, enabled=True, mgmt_only=True, mtu=200, mode=InterfaceModeChoices.MODE_TAGGED, mac_address='00-00-00-00-00-02', description='Second'),
|
||||||
Interface(device=devices[2], name='Interface 3', type=InterfaceTypeChoices.TYPE_1GE_FIXED, enabled=False, mgmt_only=False, mtu=300, mode=InterfaceModeChoices.MODE_TAGGED_ALL, mac_address='00-00-00-00-00-03', description='Third'),
|
Interface(device=devices[2], name='Interface 3', label='C', type=InterfaceTypeChoices.TYPE_1GE_FIXED, enabled=False, mgmt_only=False, mtu=300, mode=InterfaceModeChoices.MODE_TAGGED_ALL, mac_address='00-00-00-00-00-03', description='Third'),
|
||||||
Interface(device=devices[3], name='Interface 4', type=InterfaceTypeChoices.TYPE_OTHER, enabled=True, mgmt_only=True),
|
Interface(device=devices[3], name='Interface 4', label='D', type=InterfaceTypeChoices.TYPE_OTHER, enabled=True, mgmt_only=True),
|
||||||
Interface(device=devices[3], name='Interface 5', type=InterfaceTypeChoices.TYPE_OTHER, enabled=True, mgmt_only=True),
|
Interface(device=devices[3], name='Interface 5', label='E', type=InterfaceTypeChoices.TYPE_OTHER, enabled=True, mgmt_only=True),
|
||||||
Interface(device=devices[3], name='Interface 6', type=InterfaceTypeChoices.TYPE_OTHER, enabled=False, mgmt_only=False),
|
Interface(device=devices[3], name='Interface 6', label='F', type=InterfaceTypeChoices.TYPE_OTHER, enabled=False, mgmt_only=False),
|
||||||
)
|
)
|
||||||
Interface.objects.bulk_create(interfaces)
|
Interface.objects.bulk_create(interfaces)
|
||||||
|
|
||||||
@ -2078,6 +2094,10 @@ class InterfaceTestCase(TestCase):
|
|||||||
params = {'name': ['Interface 1', 'Interface 2']}
|
params = {'name': ['Interface 1', 'Interface 2']}
|
||||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
|
|
||||||
|
def test_label(self):
|
||||||
|
params = {'label': ['A', 'B']}
|
||||||
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
|
|
||||||
def test_connected(self):
|
def test_connected(self):
|
||||||
params = {'connected': True}
|
params = {'connected': True}
|
||||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
|
||||||
@ -2237,12 +2257,12 @@ class FrontPortTestCase(TestCase):
|
|||||||
RearPort.objects.bulk_create(rear_ports)
|
RearPort.objects.bulk_create(rear_ports)
|
||||||
|
|
||||||
front_ports = (
|
front_ports = (
|
||||||
FrontPort(device=devices[0], name='Front Port 1', type=PortTypeChoices.TYPE_8P8C, rear_port=rear_ports[0], rear_port_position=1, description='First'),
|
FrontPort(device=devices[0], name='Front Port 1', label='A', type=PortTypeChoices.TYPE_8P8C, rear_port=rear_ports[0], rear_port_position=1, description='First'),
|
||||||
FrontPort(device=devices[1], name='Front Port 2', type=PortTypeChoices.TYPE_110_PUNCH, rear_port=rear_ports[1], rear_port_position=2, description='Second'),
|
FrontPort(device=devices[1], name='Front Port 2', label='B', type=PortTypeChoices.TYPE_110_PUNCH, rear_port=rear_ports[1], rear_port_position=2, description='Second'),
|
||||||
FrontPort(device=devices[2], name='Front Port 3', type=PortTypeChoices.TYPE_BNC, rear_port=rear_ports[2], rear_port_position=3, description='Third'),
|
FrontPort(device=devices[2], name='Front Port 3', label='C', type=PortTypeChoices.TYPE_BNC, rear_port=rear_ports[2], rear_port_position=3, description='Third'),
|
||||||
FrontPort(device=devices[3], name='Front Port 4', type=PortTypeChoices.TYPE_FC, rear_port=rear_ports[3], rear_port_position=1),
|
FrontPort(device=devices[3], name='Front Port 4', label='D', type=PortTypeChoices.TYPE_FC, rear_port=rear_ports[3], rear_port_position=1),
|
||||||
FrontPort(device=devices[3], name='Front Port 5', type=PortTypeChoices.TYPE_FC, rear_port=rear_ports[4], rear_port_position=1),
|
FrontPort(device=devices[3], name='Front Port 5', label='E', type=PortTypeChoices.TYPE_FC, rear_port=rear_ports[4], rear_port_position=1),
|
||||||
FrontPort(device=devices[3], name='Front Port 6', type=PortTypeChoices.TYPE_FC, rear_port=rear_ports[5], rear_port_position=1),
|
FrontPort(device=devices[3], name='Front Port 6', label='F', type=PortTypeChoices.TYPE_FC, rear_port=rear_ports[5], rear_port_position=1),
|
||||||
)
|
)
|
||||||
FrontPort.objects.bulk_create(front_ports)
|
FrontPort.objects.bulk_create(front_ports)
|
||||||
|
|
||||||
@ -2259,6 +2279,10 @@ class FrontPortTestCase(TestCase):
|
|||||||
params = {'name': ['Front Port 1', 'Front Port 2']}
|
params = {'name': ['Front Port 1', 'Front Port 2']}
|
||||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
|
|
||||||
|
def test_label(self):
|
||||||
|
params = {'label': ['A', 'B']}
|
||||||
|
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||||
|
|
||||||
def test_type(self):
|
def test_type(self):
|
||||||
# TODO: Test for multiple values
|
# TODO: Test for multiple values
|
||||||
params = {'type': PortTypeChoices.TYPE_8P8C}
|
params = {'type': PortTypeChoices.TYPE_8P8C}
|
||||||
@@ -2345,12 +2369,12 @@ class RearPortTestCase(TestCase):
         Device.objects.bulk_create(devices)

         rear_ports = (
-            RearPort(device=devices[0], name='Rear Port 1', type=PortTypeChoices.TYPE_8P8C, positions=1, description='First'),
+            RearPort(device=devices[0], name='Rear Port 1', label='A', type=PortTypeChoices.TYPE_8P8C, positions=1, description='First'),
-            RearPort(device=devices[1], name='Rear Port 2', type=PortTypeChoices.TYPE_110_PUNCH, positions=2, description='Second'),
+            RearPort(device=devices[1], name='Rear Port 2', label='B', type=PortTypeChoices.TYPE_110_PUNCH, positions=2, description='Second'),
-            RearPort(device=devices[2], name='Rear Port 3', type=PortTypeChoices.TYPE_BNC, positions=3, description='Third'),
+            RearPort(device=devices[2], name='Rear Port 3', label='C', type=PortTypeChoices.TYPE_BNC, positions=3, description='Third'),
-            RearPort(device=devices[3], name='Rear Port 4', type=PortTypeChoices.TYPE_FC, positions=4),
+            RearPort(device=devices[3], name='Rear Port 4', label='D', type=PortTypeChoices.TYPE_FC, positions=4),
-            RearPort(device=devices[3], name='Rear Port 5', type=PortTypeChoices.TYPE_FC, positions=5),
+            RearPort(device=devices[3], name='Rear Port 5', label='E', type=PortTypeChoices.TYPE_FC, positions=5),
-            RearPort(device=devices[3], name='Rear Port 6', type=PortTypeChoices.TYPE_FC, positions=6),
+            RearPort(device=devices[3], name='Rear Port 6', label='F', type=PortTypeChoices.TYPE_FC, positions=6),
         )
         RearPort.objects.bulk_create(rear_ports)
@@ -2367,6 +2391,10 @@ class RearPortTestCase(TestCase):
         params = {'name': ['Rear Port 1', 'Rear Port 2']}
         self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)

+    def test_label(self):
+        params = {'label': ['A', 'B']}
+        self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
+
     def test_type(self):
         # TODO: Test for multiple values
         params = {'type': PortTypeChoices.TYPE_8P8C}
@@ -2456,9 +2484,9 @@ class DeviceBayTestCase(TestCase):
         Device.objects.bulk_create(devices)

         device_bays = (
-            DeviceBay(device=devices[0], name='Device Bay 1', description='First'),
+            DeviceBay(device=devices[0], name='Device Bay 1', label='A', description='First'),
-            DeviceBay(device=devices[1], name='Device Bay 2', description='Second'),
+            DeviceBay(device=devices[1], name='Device Bay 2', label='B', description='Second'),
-            DeviceBay(device=devices[2], name='Device Bay 3', description='Third'),
+            DeviceBay(device=devices[2], name='Device Bay 3', label='C', description='Third'),
         )
         DeviceBay.objects.bulk_create(device_bays)
@@ -2470,6 +2498,10 @@ class DeviceBayTestCase(TestCase):
         params = {'name': ['Device Bay 1', 'Device Bay 2']}
         self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)

+    def test_label(self):
+        params = {'label': ['A', 'B']}
+        self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
+
     def test_description(self):
         params = {'description': ['First', 'Second']}
         self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
@@ -2551,9 +2583,9 @@ class InventoryItemTestCase(TestCase):
         Device.objects.bulk_create(devices)

         inventory_items = (
-            InventoryItem(device=devices[0], manufacturer=manufacturers[0], name='Inventory Item 1', part_id='1001', serial='ABC', asset_tag='1001', discovered=True, description='First'),
+            InventoryItem(device=devices[0], manufacturer=manufacturers[0], name='Inventory Item 1', label='A', part_id='1001', serial='ABC', asset_tag='1001', discovered=True, description='First'),
-            InventoryItem(device=devices[1], manufacturer=manufacturers[1], name='Inventory Item 2', part_id='1002', serial='DEF', asset_tag='1002', discovered=True, description='Second'),
+            InventoryItem(device=devices[1], manufacturer=manufacturers[1], name='Inventory Item 2', label='B', part_id='1002', serial='DEF', asset_tag='1002', discovered=True, description='Second'),
-            InventoryItem(device=devices[2], manufacturer=manufacturers[2], name='Inventory Item 3', part_id='1003', serial='GHI', asset_tag='1003', discovered=False, description='Third'),
+            InventoryItem(device=devices[2], manufacturer=manufacturers[2], name='Inventory Item 3', label='C', part_id='1003', serial='GHI', asset_tag='1003', discovered=False, description='Third'),
         )
         for i in inventory_items:
             i.save()
@@ -2574,6 +2606,10 @@ class InventoryItemTestCase(TestCase):
         params = {'name': ['Inventory Item 1', 'Inventory Item 2']}
         self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)

+    def test_label(self):
+        params = {'label': ['A', 'B']}
+        self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
+
     def test_part_id(self):
         params = {'part_id': ['1001', '1002']}
         self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
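A brief usage note, not part of the patch: the new cases run with the standard Django test runner, for example (assuming the usual NetBox layout with manage.py under netbox/ and the tests living in the dcim app's suite):

    python netbox/manage.py test dcim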
@@ -2250,10 +2250,14 @@ class PathTraceView(generic.ObjectView):
         else:
             path = related_paths.first()

+        # Get the total length of the cable and whether the length is definitive (fully defined)
+        total_length, is_definitive = path.get_total_length() if path else (None, False)
+
         return {
             'path': path,
             'related_paths': related_paths,
-            'total_length': path.get_total_length() if path else None,
+            'total_length': total_length,
+            'is_definitive': is_definitive
         }
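For context (not part of the patch): the view now unpacks a two-tuple from get_total_length(). A standalone sketch of the assumed return shape, inferred only from how the view and the template below use the values:

    # Assumed behavior: the second element is False whenever any cable in the path
    # has no length recorded, so the total is a lower bound rather than exact.
    def total_length_of(cable_lengths):
        """cable_lengths: one entry per cable in the path; None if a cable has no length set."""
        known = [length for length in cable_lengths if length is not None]
        total = sum(known) if known else None
        return total, len(known) == len(cable_lengths)

    total_length_of([10.0, 15.0, None])  # -> (25.0, False); rendered as "25+ Meters" further below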
@@ -521,12 +521,14 @@ class PrefixCSVForm(CustomFieldModelCSVForm):

         if data:

-            # Limit vlan queryset by assigned site and group
-            params = {
-                f"site__{self.fields['site'].to_field_name}": data.get('site'),
-                f"group__{self.fields['vlan_group'].to_field_name}": data.get('vlan_group'),
-            }
-            self.fields['vlan'].queryset = self.fields['vlan'].queryset.filter(**params)
+            # Limit VLAN queryset by assigned site and/or group (if specified)
+            params = {}
+            if data.get('site'):
+                params[f"site__{self.fields['site'].to_field_name}"] = data.get('site')
+            if data.get('vlan_group'):
+                params[f"group__{self.fields['vlan_group'].to_field_name}"] = data.get('vlan_group')
+            if params:
+                self.fields['vlan'].queryset = self.fields['vlan'].queryset.filter(**params)


 class PrefixBulkEditForm(BootstrapMixin, AddRemoveTagsForm, CustomFieldBulkEditForm):
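The rewritten block only applies lookups for CSV columns that were actually supplied, so an import row that omits site or vlan_group no longer filters the VLAN queryset against None. The pattern in isolation, as a generic sketch with hypothetical field names:

    # Absent or empty columns add no constraint; only supplied values become filters.
    data = {'site': '', 'vlan_group': 'Access'}

    params = {}
    if data.get('site'):
        params['site__name'] = data['site']
    if data.get('vlan_group'):
        params['group__name'] = data['vlan_group']

    print(params)  # {'group__name': 'Access'}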
@@ -18,7 +18,7 @@ PREFIX_LINK = """
 {% for i in record.parents|as_range %}
     <i class="mdi mdi-circle-small"></i>
 {% endfor %}
-<a href="{% if record.pk %}{% url 'ipam:prefix' pk=record.pk %}{% else %}{% url 'ipam:prefix_add' %}?prefix={{ record }}{% if parent.vrf %}&vrf={{ parent.vrf.pk }}{% endif %}{% if parent.site %}&site={{ parent.site.pk }}{% endif %}{% if parent.tenant %}&tenant_group={{ parent.tenant.group.pk }}&tenant={{ parent.tenant.pk }}{% endif %}{% endif %}">{{ record.prefix }}</a>
+<a href="{% if record.pk %}{% url 'ipam:prefix' pk=record.pk %}{% else %}{% url 'ipam:prefix_add' %}?prefix={{ record }}{% if object.vrf %}&vrf={{ object.vrf.pk }}{% endif %}{% if object.site %}&site={{ object.site.pk }}{% endif %}{% if object.tenant %}&tenant_group={{ object.tenant.group.pk }}&tenant={{ object.tenant.pk }}{% endif %}{% endif %}">{{ record.prefix }}</a>
 """

 PREFIX_ROLE_LINK = """
@@ -1,4 +1,4 @@
-from .fields import ChoiceField, ContentTypeField, SerializedPKRelatedField, TimeZoneField
+from .fields import ChoiceField, ContentTypeField, SerializedPKRelatedField
 from .routers import OrderedDefaultRouter
 from .serializers import BulkOperationSerializer, ValidatedModelSerializer, WritableNestedSerializer

|
|||||||
'ContentTypeField',
|
'ContentTypeField',
|
||||||
'OrderedDefaultRouter',
|
'OrderedDefaultRouter',
|
||||||
'SerializedPKRelatedField',
|
'SerializedPKRelatedField',
|
||||||
'TimeZoneField',
|
|
||||||
'ValidatedModelSerializer',
|
'ValidatedModelSerializer',
|
||||||
'WritableNestedSerializer',
|
'WritableNestedSerializer',
|
||||||
)
|
)
|
||||||
|
@ -104,21 +104,6 @@ class ContentTypeField(RelatedField):
|
|||||||
return f"{obj.app_label}.{obj.model}"
|
return f"{obj.app_label}.{obj.model}"
|
||||||
|
|
||||||
|
|
||||||
class TimeZoneField(serializers.Field):
|
|
||||||
"""
|
|
||||||
Represent a pytz time zone.
|
|
||||||
"""
|
|
||||||
def to_representation(self, obj):
|
|
||||||
return obj.zone if obj else None
|
|
||||||
|
|
||||||
def to_internal_value(self, data):
|
|
||||||
if not data:
|
|
||||||
return ""
|
|
||||||
if data not in pytz.common_timezones:
|
|
||||||
raise ValidationError('Unknown time zone "{}" (see pytz.common_timezones for all options)'.format(data))
|
|
||||||
return pytz.timezone(data)
|
|
||||||
|
|
||||||
|
|
||||||
class SerializedPKRelatedField(PrimaryKeyRelatedField):
|
class SerializedPKRelatedField(PrimaryKeyRelatedField):
|
||||||
"""
|
"""
|
||||||
Extends PrimaryKeyRelatedField to return a serialized object on read. This is useful for representing related
|
Extends PrimaryKeyRelatedField to return a serialized object on read. This is useful for representing related
|
||||||
|
@@ -154,6 +154,9 @@ LOGIN_TIMEOUT = None
 # Setting this to True will display a "maintenance mode" banner at the top of every page.
 MAINTENANCE_MODE = False

+# The URL to use when mapping physical addresses or GPS coordinates
+MAPS_URL = 'https://maps.google.com/?q='
+
 # An API consumer can request an arbitrary number of objects by appending the "limit" parameter to the URL (e.g.
 # "?limit=1000"). This setting defines the maximum limit. Setting it to 0 or None will allow an API consumer to request
 # all objects by specifying "?limit=0".
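Since the mapping links simply append the address or coordinates to MAPS_URL, the map provider can be swapped by overriding the setting in configuration.py; an illustrative (hypothetical) override:

    # Any provider that accepts the query string appended to the URL should work.
    MAPS_URL = 'https://www.openstreetmap.org/search?query='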
@@ -94,10 +94,9 @@ LOGGING = getattr(configuration, 'LOGGING', {})
 LOGIN_REQUIRED = getattr(configuration, 'LOGIN_REQUIRED', False)
 LOGIN_TIMEOUT = getattr(configuration, 'LOGIN_TIMEOUT', None)
 MAINTENANCE_MODE = getattr(configuration, 'MAINTENANCE_MODE', False)
+MAPS_URL = getattr(configuration, 'MAPS_URL', 'https://maps.google.com/?q=')
 MAX_PAGE_SIZE = getattr(configuration, 'MAX_PAGE_SIZE', 1000)
 MEDIA_ROOT = getattr(configuration, 'MEDIA_ROOT', os.path.join(BASE_DIR, 'media')).rstrip('/')
-STORAGE_BACKEND = getattr(configuration, 'STORAGE_BACKEND', None)
-STORAGE_CONFIG = getattr(configuration, 'STORAGE_CONFIG', {})
 METRICS_ENABLED = getattr(configuration, 'METRICS_ENABLED', False)
 NAPALM_ARGS = getattr(configuration, 'NAPALM_ARGS', {})
 NAPALM_PASSWORD = getattr(configuration, 'NAPALM_PASSWORD', '')
@@ -124,18 +123,23 @@ SESSION_FILE_PATH = getattr(configuration, 'SESSION_FILE_PATH', None)
 SHORT_DATE_FORMAT = getattr(configuration, 'SHORT_DATE_FORMAT', 'Y-m-d')
 SHORT_DATETIME_FORMAT = getattr(configuration, 'SHORT_DATETIME_FORMAT', 'Y-m-d H:i')
 SHORT_TIME_FORMAT = getattr(configuration, 'SHORT_TIME_FORMAT', 'H:i:s')
+STORAGE_BACKEND = getattr(configuration, 'STORAGE_BACKEND', None)
+STORAGE_CONFIG = getattr(configuration, 'STORAGE_CONFIG', {})
 TIME_FORMAT = getattr(configuration, 'TIME_FORMAT', 'g:i a')
 TIME_ZONE = getattr(configuration, 'TIME_ZONE', 'UTC')

 # Validate update repo URL and timeout
 if RELEASE_CHECK_URL:
-    try:
+    validator = URLValidator(
-        URLValidator(RELEASE_CHECK_URL)
+        message=(
-    except ValidationError:
-        raise ImproperlyConfigured(
             "RELEASE_CHECK_URL must be a valid API URL. Example: "
             "https://api.github.com/repos/netbox-community/netbox"
         )
+    )
+    try:
+        validator(RELEASE_CHECK_URL)
+    except ValidationError as err:
+        raise ImproperlyConfigured(str(err))

 # Enforce a minimum cache timeout for update checks
 if RELEASE_CHECK_TIMEOUT < 3600:
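For reference (not part of the patch): Django's URLValidator accepts a custom message, which it inherits from RegexValidator, and calling the validator instance raises ValidationError on malformed input. That is the behavior the refactored settings code relies on:

    from django.core.exceptions import ValidationError
    from django.core.validators import URLValidator

    validator = URLValidator(message="RELEASE_CHECK_URL must be a valid API URL.")
    try:
        validator('not-a-url')
    except ValidationError as err:
        print(err.messages)  # ['RELEASE_CHECK_URL must be a valid API URL.']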
@@ -362,9 +362,6 @@ table.report th a {
 .text-nowrap {
     white-space: nowrap;
 }
-.banner-bottom {
-    margin-bottom: 50px;
-}
 .panel table {
     margin-bottom: 0;
 }
@@ -55,7 +55,7 @@
         {% block content %}{% endblock %}
         <div class="push"></div>
     {% if settings.BANNER_BOTTOM %}
-        <div class="alert alert-info text-center banner-bottom" role="alert">
+        <div class="alert alert-info text-center" style="margin-bottom: 50px" role="alert">
             {{ settings.BANNER_BOTTOM|safe }}
         </div>
     {% endif %}
@@ -69,7 +69,7 @@
 <h5>Total segments: {{ traced_path|length }}</h5>
 <h5>Total length:
     {% if total_length %}
-        {{ total_length|floatformat:"-2" }} Meters /
+        {{ total_length|floatformat:"-2" }}{% if not is_definitive %}+{% endif %} Meters /
         {{ total_length|meters_to_feet|floatformat:"-2" }} Feet
     {% else %}
         <span class="text-muted">N/A</span>
@@ -102,7 +102,7 @@
 <td>
     {% if object.physical_address %}
         <div class="pull-right noprint">
-            <a href="https://maps.google.com/?q={{ object.physical_address|urlencode }}" target="_blank" class="btn btn-primary btn-xs">
+            <a href="{{ settings.MAPS_URL }}{{ object.physical_address|urlencode }}" target="_blank" class="btn btn-primary btn-xs">
                 <i class="mdi mdi-map-marker"></i> Map it
             </a>
         </div>
@@ -121,7 +121,7 @@
 <td>
     {% if object.latitude and object.longitude %}
         <div class="pull-right noprint">
-            <a href="https://maps.google.com/?q={{ object.latitude }},{{ object.longitude }}" target="_blank" class="btn btn-primary btn-xs">
+            <a href="{{ settings.MAPS_URL }}{{ object.latitude }},{{ object.longitude }}" target="_blank" class="btn btn-primary btn-xs">
                 <i class="mdi mdi-map-marker"></i> Map it
             </a>
         </div>
@@ -16,7 +16,7 @@
     </div>
 </div>
 <h1>{{ script }}</h1>
-<p>{{ script.Meta.description }}</p>
+<p>{{ script.Meta.description|render_markdown }}</p>
 <ul class="nav nav-tabs" role="tablist">
     <li role="presentation" class="active">
         <a href="#run" role="tab" data-toggle="tab" class="active">Run</a>
@@ -26,7 +26,7 @@
 <td>
     {% include 'extras/inc/job_label.html' with result=script.result %}
 </td>
-<td>{{ script.Meta.description }}</td>
+<td>{{ script.Meta.description|render_markdown }}</td>
 {% if script.result %}
     <td class="text-right">
         <a href="{% url 'extras:script_result' job_result_pk=script.result.pk %}">{{ script.result.created }}</a>
@@ -18,7 +18,7 @@
     </div>
 </div>
 <h1>{{ script }}</h1>
-<p>{{ script.Meta.description }}</p>
+<p>{{ script.Meta.description|render_markdown }}</p>
 <ul class="nav nav-tabs" role="tablist">
     <li role="presentation" class="active">
         <a href="#log" role="tab" data-toggle="tab" class="active">Log</a>
@@ -110,4 +110,4 @@ function jobTerminatedAction(){
 </script>
 <script src="{% static 'js/job_result.js' %}?v{{ settings.VERSION }}"
         onerror="window.location='{% url 'media_failure' %}?filename=js/job_result.js'"></script>
 {% endblock %}
@@ -304,13 +304,7 @@
 {% for change in changelog %}
     {% with action=change.get_action_display|lower %}
         <div class="list-group-item">
-            {% if action == 'created' %}
-                <span class="label label-success">Created</span>
-            {% elif action == 'updated' %}
-                <span class="label label-warning">Modified</span>
-            {% elif action == 'deleted' %}
-                <span class="label label-danger">Deleted</span>
-            {% endif %}
+            <span class="label label-{{ change.get_action_class }}">{{ change.get_action_display }}</span>
             {{ change.changed_object_type.name|bettertitle }}
             {% if change.changed_object.get_absolute_url %}
                 <a href="{{ change.changed_object.get_absolute_url }}">{{ change.changed_object }}</a>
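One assumption worth flagging: the template now calls change.get_action_class, which this diff does not define. Presumably it maps each change action to a Bootstrap label class, roughly along these lines (sketch only, the actual model helper may differ):

    def get_action_class(self):
        # Fall back to a neutral label class for any unrecognized action value.
        return {
            'create': 'success',
            'update': 'warning',
            'delete': 'danger',
        }.get(self.action, 'default')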
@@ -220,7 +220,7 @@ class CommentField(forms.CharField):
     default_label = ''
     # TODO: Port Markdown cheat sheet to internal documentation
     default_helptext = '<i class="mdi mdi-information-outline"></i> '\
-                       '<a href="https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet" target="_blank">'\
+                       '<a href="https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet" target="_blank" tabindex="-1">'\
                        'Markdown</a> syntax is supported'

     def __init__(self, *args, **kwargs):