mirror of https://github.com/peeringdb/peeringdb.git synced 2024-05-11 05:55:09 +00:00
peeringdb-peeringdb/peeringdb_server/api_cache.py
Matt Griswold ea55c4dc38 July updates (#762)
* Change label from primary ASN to ASN

* Raise validation error when trying to update ASN

* first steps for dotf importer protocol (#697)

* migrations (#697)

* Add translation to error message

* Make ASN readonly in table

* Add test now that ASN should not be able to update

* Set fac.rencode to '' for all entries and make it readonly in serializer

* Add unique constraints to network ixlan ip addresses

* Add migration to null out duplicate ipaddresses for deleted netixlans

* Add unique constraints to network ixlan ip addresses

* Add migration to null out duplicate ipaddresses for deleted netixlans

* remove old migrations (#697)

* fix netixlan ipaddr dedupe migration (#268)
add netixlan ipaddr unique constraint migration (#268)

* ixf_member_data migrations (#697)

* fix table name (#697)

* importer protocol (#697)

* fix netixlan ipaddr dedupe migration (#268)
add netixlan ipaddr unique constraint migration (#268)

* ixf proposed changes notifications (#697)

* Delete repeated query

* Add a test to show rencode is readonly

* Blank out rencode when mocking data

* Remove validator now that constraint exists

* Add back unique field validator w Check Deleted true

* conflict resolving (#697)

* UniqueFieldValidator raise error with code "unique" (#268)

* conflict resolution (#697)

* Add fixme comment to tests

* conflict resolution (#697)

* Remove now invalid undelete tests

* UniqueFieldValidator raise error with code "unique" (#268)

* delete admin tools for duplicate ip addresses

* Make migration to delete duplicateipnetworkixlan

* Add ixlan-ixpfx status matching validation, add corresponding test

* delete redundant checking in test

* resolve conflict ui (#697)

* fix migrations hierarchy

* squash migrations for ixf member data

* clean up preview and post-mortem tools

* remove nonsensical permission check when undeleting soft-deleted objects through unique integrity error handling

* only include the ix-f data url in notifications to admincom (#697)

* resolve on --skip-import (#697)

* ac conflict resolution (#697)

* Define more accurately the incompatible statuses for ixlan and ixpfx

* Add another status test

* Preventing disruptive changes (#697)

* fix tests (#697)

* Stop allow_ixp_update from being write only and add a global stat for automated networks

* Add tests for global stats that appear in footer

* Change how timezone is called with datetime, to get test_stats.py/test_generate_for_current_date to pass

* test for protected entities (#697)

* admincom conflict resolution refine readonly fields (#697)
network notifications only if the problem is actually actionable by the network (#697)

* ixp / ac notification when ix-f source cannot be parsed (#697)
fix issue with ixlan prefix protection (#697)

* migrations (#697)

* code documentation (#697)

* ux tweaks (#697)

* UX tweaks (#697)

* Fix typo

* fix netixlan returned in IXFMemberData.apply when adding a new one (#697)

* fix import log inconsistencies (#697)

* Add IXFMemberData to test

* Update test data

* Add protocol tests

* Add tests for views

* always persist changes to remote data on set_conflict (#697)

* More tests

* always persist changes to remote data on set_conflict (#697)

* suggest-add test

* net_present_at_ix should check status (#697)

* Add more protocol tests

* Edit language of some tests

* django-peeringdb to 2.1.1
relock pipfile, pin django-ratelimit to <3 as it breaks stuff

* Add net_count_ixf field to ix object (#683)

* Add the IX-F Member Export URL to the ixlan API endpoint (#249)

* Lock some objects from being deleted by the owner (#696)

* regenerate api docs (#249)

* always persist changes to remote data on set_add and set_update (#697)

* IXFMemberData: always persist remote data changes during set_add and set_update, also allow for saving without touching the updated field

* always persist changes to remote data on set_add and set_update (#697)

* Fix suggest-add tests

* IXFMemberData: always persist remote data changes during set_add and set_update, also allow for saving without touching the updated field

* IXFMemberData: always persist remote data changes during set_add and set_update, also allow for saving without touching the updated field

* fix issue with deletion when ixfmemberdata for entry existed previously (#697)

* fix test_suggest_delete_local_ixf_no_flag (#697 tests)

* fix issue with deletion when ixfmemberdata for entry existed previously (#697)

* invalid ips get logged and notified to the ix via notify_error (#697)

* Fix more tests

* issue with previous_data when running without save (#697)
properly track speed errors (#697)

* reset errors on ixfmemberdata that go into pending_save (#697)

* add remote_data to admin view (#697)

* fix error reset inconsistency (#697)

* Refine invalid data tests

* remove debug output

* for notifications to ac include contact points for net and ix in the message (#697)

* settings to toggle ix-f tickets / emails (#697)

* allow turning off ix-f notifications for net and ix separately (#697)

* add jsonschema test

* Add idempotent tests to updater

* remove old ixf member tests

* Invalid data tests when ixp_updates are enabled

* fix speed error validation (#697)

* fix issue with rollback (#697)

* fix migration hierarchy

* fix ixfmemberdata _email

* django-peeringdb to 2.2 and relock

* add ixf rollback tests

* ixf email notifications off by default

* black formatted

* pyupgrade

Co-authored-by: egfrank <egfrank@20c.com>
Co-authored-by: Stefan Pratter <stefan@20c.com>
2020-07-15 07:07:01 +00:00


import collections
import json
import os

from django.conf import settings

import django_namespace_perms.util as nsp

from peeringdb_server.models import InternetExchange, IXLan, Network


class CacheRedirect(Exception):
    """
    Raise this error to redirect to the cache response during
    viewset.get_queryset or viewset.list()

    Argument should be an APICacheLoader instance
    """

    def __init__(self, loader):
        # pass only the message up the exception chain; the loader is
        # kept so the caller can serve loader.load()
        super().__init__("Result to be loaded from cache")
        self.loader = loader
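
# Illustrative only (not part of the original module): the intended
# control flow is for the API view to catch CacheRedirect and serve
# loader.load() instead of a database-backed response, e.g.:
#
#     try:
#         qset = self.get_queryset()  # may raise CacheRedirect
#     except CacheRedirect as exc:
#         return Response(exc.loader.load())
#
# "Response" here stands in for whatever response class the view uses.
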
###############################################################################
# API CACHE LOADER


class APICacheLoader:
    """
    Checks if an API GET request qualifies for a cache load
    and, if it does, lets you serve the cached result
    """

    def __init__(self, viewset, qset, filters):
        request = viewset.request
        self.request = request
        self.qset = qset
        self.filters = filters
        self.model = viewset.model
        self.viewset = viewset
        self.depth = min(int(request.query_params.get("depth", 0)), 3)
        self.limit = int(request.query_params.get("limit", 0))
        self.skip = int(request.query_params.get("skip", 0))
        self.since = int(request.query_params.get("since", 0))
        self.fields = request.query_params.get("fields")
        if self.fields:
            self.fields = self.fields.split(",")
        self.path = os.path.join(
            settings.API_CACHE_ROOT,
            f"{viewset.model.handleref.tag}-{self.depth}.json",
        )
    def qualifies(self):
        """
        Check if the request qualifies for a cache load
        """

        # api cache use is disabled, no
        if not getattr(settings, "API_CACHE_ENABLED", False):
            return False

        # no depth and a limit lower than 251 seems to be the tipping
        # point where non-cache retrieval is still faster
        if (
            not self.depth
            and self.limit
            and self.limit <= 250
            and getattr(settings, "API_CACHE_ALL_LIMITS", False) is False
        ):
            return False

        # filters have been specified, no
        if self.filters or self.since:
            return False

        # cache file non-existent, no
        if not os.path.exists(self.path):
            return False

        # request method is anything but GET, no
        if self.request.method != "GET":
            return False

        # primary key set in request, no
        if self.viewset.kwargs:
            return False

        return True
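
    # Illustrative sketch, assuming a DRF-style viewset; names are
    # hypothetical. qualifies() is meant to gate the redirect:
    #
    #     loader = APICacheLoader(viewset, qset, filters)
    #     if loader.qualifies():
    #         raise CacheRedirect(loader)
    #     return qset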
    def load(self):
        """
        Load the cached response according to tag and depth
        """

        # read cache file
        with open(self.path) as f:
            data = json.load(f)
        data = data.get("data")

        # apply permissions to data
        fnc = getattr(self, "apply_permissions_%s" % self.model.handleref.tag, None)
        if fnc:
            data = fnc(data)

        # apply pagination
        if self.skip and self.limit:
            data = data[self.skip : self.skip + self.limit]
        elif self.skip:
            data = data[self.skip :]
        elif self.limit:
            data = data[: self.limit]

        return {"results": data, "__meta": {"generated": os.path.getmtime(self.path)}}
    def apply_permissions(self, ns, data, ruleset={}):
        """
        Wrapper function to apply permissions to a data row and
        return the sanitized result
        """

        if not isinstance(ns, list):
            ns = ns.split(".")

        # prepare ruleset: prefix each rule with the row's namespace
        if ruleset:
            _ruleset = {}
            namespace_str = ".".join(ns)
            for section, rules in list(ruleset.items()):
                _ruleset[section] = {}
                for rule, perms in list(rules.items()):
                    _ruleset[section][f"{namespace_str}.{rule}"] = perms
            ruleset = _ruleset

        return nsp.dict_get_path(
            nsp.permissions_apply(
                nsp.dict_from_namespace(ns, data), self.request.user, ruleset=ruleset
            ),
            ns,
        )
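
    # Worked example with hypothetical section/rule names: for
    # ns=["net", "7"] and ruleset={"require": {"poc_set.users": 0x01}},
    # the loop above hands {"require": {"net.7.poc_set.users": 0x01}}
    # to django_namespace_perms, scoping the rules to this row's namespace.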
    def apply_permissions_generic(self, data, explicit=False, join_ids=[], **kwargs):
        """
        Apply permissions to all rows according to the rules
        specified in the parameters

        explicit <function>

        if explicit is passed as a function it will be called per row and
        the result will determine whether or not explicit read perms are
        required for the row

        join_ids [(target_id<str>, proxy_id<str>, model<handleref>), ..]

        Since we are checking permissioning namespaces, and those namespaces
        may consist of object ids that are not necessarily in the dataset,
        you can join those ids in via the join_ids parameter
        """

        rv = []
        joined_ids = collections.OrderedDict()
        e = {}
        inst = self.model()

        # perform id joining
        if join_ids:
            for t, p, model in join_ids:
                joined_ids[t] = {
                    "p": p,
                    "ids": self.join_ids(
                        data,
                        t,
                        p,
                        model,
                        list(joined_ids.get(p, e).get("ids", e).values()),
                    ),
                }

        for row in data:
            # create dict containing the ids needed to build the
            # permissioning namespace
            init = {k: row.get(v) for k, v in list(kwargs.items())}

            # joined ids
            for t, j in list(joined_ids.items()):
                if j["p"] in row:
                    init[t] = j["ids"].get(row.get(j["p"]))
                elif t in joined_ids:
                    init[t] = joined_ids.get(t).get("ids").get(init[j["p"]])

            # build permissioning namespace
            ns = self.model.nsp_namespace_from_id(**init).lower()

            # apply fields filter
            if self.fields:
                for k in list(row.keys()):
                    if k not in self.fields:
                        del row[k]

            # determine whether or not read perms for this object need
            # to be explicitly set
            if explicit and callable(explicit):
                expl = explicit(row)
            else:
                expl = False

            # initial read perms check
            if nsp.has_perms(self.request.user, ns, 0x01, explicit=expl):
                ruleset = getattr(inst, "nsp_ruleset", {})

                # apply permissions to tree
                row = self.apply_permissions(ns, row, ruleset=ruleset)

                applicator = getattr(
                    self.model, "api_cache_permissions_applicator", None
                )

                if applicator:
                    applicator(row, ns, self.request.user)

                # if the row still has data afterwards, append it to the results
                if row:
                    rv.append(row)

        return rv
    def join_ids(self, data, target_id, proxy_id, model, stash=[]):
        """
        Returns a dict mapping proxy id values to target id values

        target ids are obtained by fetching instances of the specified
        model that match the supplied proxy ids

        proxy ids are read from data or stash

        data [<dict>, ..] list of data rows from the cache load; the field
        name provided in "proxy_id" is used to obtain the id from
        each row

        stash [<int>, ..] list of ids

        if stash is set, data and proxy_id will be ignored
        """

        if stash:
            ids = stash
        else:
            ids = [r[proxy_id] for r in data]

        return {
            r["id"]: r[target_id]
            for r in model.objects.filter(id__in=ids).values("id", target_id)
        }
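
    # Worked example with made-up ids, matching the netixlan case below:
    # join_ids(data, "org_id", "net_id", Network) with
    # data=[{"id": 100, "net_id": 7}, {"id": 101, "net_id": 9}] fetches
    # Network objects 7 and 9 and returns {7: 20, 9: 31} if those
    # networks belong to orgs 20 and 31.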
    # permissioning functions for each handleref type

    def apply_permissions_org(self, data):
        return self.apply_permissions_generic(data, id="id")

    def apply_permissions_fac(self, data):
        return self.apply_permissions_generic(data, fac_id="id", org_id="org_id")

    def apply_permissions_ix(self, data):
        return self.apply_permissions_generic(data, ix_id="id", org_id="org_id")

    def apply_permissions_net(self, data):
        return self.apply_permissions_generic(data, net_id="id", org_id="org_id")

    def apply_permissions_ixpfx(self, data):
        return self.apply_permissions_generic(
            data,
            join_ids=[
                ("ix_id", "ixlan_id", IXLan),
                ("org_id", "ix_id", InternetExchange),
            ],
            ixlan_id="ixlan_id",
            id="id",
        )

    def apply_permissions_ixlan(self, data):
        return self.apply_permissions_generic(
            data,
            join_ids=[("org_id", "ix_id", InternetExchange)],
            ix_id="ix_id",
            id="id",
        )

    def apply_permissions_ixfac(self, data):
        return self.apply_permissions_generic(
            data,
            join_ids=[("org_id", "ix_id", InternetExchange)],
            ix_id="ix_id",
            id="id",
        )

    def apply_permissions_netfac(self, data):
        return self.apply_permissions_generic(
            data,
            join_ids=[("org_id", "net_id", Network)],
            net_id="net_id",
            fac_id="fac_id",
        )

    def apply_permissions_netixlan(self, data):
        return self.apply_permissions_generic(
            data,
            join_ids=[("org_id", "net_id", Network)],
            net_id="net_id",
            ixlan_id="ixlan_id",
        )

    def apply_permissions_poc(self, data):
        return self.apply_permissions_generic(
            data,
            explicit=lambda x: (x.get("visible") != "Public"),
            join_ids=[("org_id", "net_id", Network)],
            vis="visible",
            net_id="net_id",
        )
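
    # Note: for poc rows, explicit is a callable, so contacts whose
    # "visible" field is anything other than "Public" require explicit
    # read permission on the namespace before they are returned.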