mentat.datatype.sqldb module

Datatype model library for PostgreSQL backend storages.

Overview

The implementation is based on the great sqlalchemy library. This module provides models for the following datatypes/objects:

mentat.datatype.sqldb.UserModel

Database representation of user account objects.

mentat.datatype.sqldb.GroupModel

Database representation of group objects.

mentat.datatype.sqldb.FilterModel

Database representation of group reporting filter objects.

mentat.datatype.sqldb.NetworkModel

Database representation of network record objects for internal whois.

mentat.datatype.sqldb.SettingsReportingModel

Database representation of group settings objects.

mentat.datatype.sqldb.EventStatisticsModel

Database representation of event statistics objects.

mentat.datatype.sqldb.EventReportModel

Database representation of report objects.

mentat.datatype.sqldb.ItemChangeLogModel

Database representation of object changelog.

mentat.datatype.sqldb.DetectorModel

Database representation of detector objects.

Warning

For optimization purposes the current implementation uses some advanced features provided by the PostgreSQL database, so no other database engines are currently supported.
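The common id and createtime columns mentioned throughout the classes below are shared between models via SQLAlchemy's declared_attr pattern. A minimal sketch of that pattern (the mixin and model names here are illustrative, not Mentat's actual code):

```python
import datetime

from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.orm import declarative_base, declared_attr

Base = declarative_base()


class CommonColumnsMixin:
    """Mixin providing columns shared by all models (illustrative sketch)."""

    @declared_attr
    def id(cls):
        # Common unique numeric identifier, generated per table.
        return Column(Integer, primary_key=True)

    @declared_attr
    def createtime(cls):
        # Common object creation timestamp.
        return Column(DateTime, default=datetime.datetime.utcnow)


class DemoUser(CommonColumnsMixin, Base):
    __tablename__ = "demo_users"
    login = Column(String, unique=True)


class DemoGroup(CommonColumnsMixin, Base):
    __tablename__ = "demo_groups"
    name = Column(String, unique=True)
```

Because declared_attr builds the Column per subclass, each model gets its own independent id and createtime columns in its own table.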

class mentat.datatype.sqldb.DetectorModel(**kwargs)[source]

Bases: Base

Class representing detector objects within the SQL database mapped to detectors table.

createtime

Common table column for object creation timestamps, implementation is based on declared_attr pattern.

credibility
description
hits
id

Common table column for unique numeric identifier, implementation is based on declared_attr pattern.

name
registered
source
to_dict()[source]

Interface implementation: Implementation of mentat.datatype.sqldb.MODEL.to_dict() method.

class mentat.datatype.sqldb.EventReportModel(**kwargs)[source]

Bases: Base

Class representing event report objects within the SQL database mapped to reports_events table.

calculate_delta()[source]

Calculate delta between internal time interval boundaries.

children
createtime

Common table column for object creation timestamps, implementation is based on declared_attr pattern.

delta
dt_from
dt_to
evcount_all
evcount_det
evcount_det_blk
evcount_flt
evcount_flt_blk
evcount_new
evcount_rep
evcount_rlp
evcount_thr
evcount_thr_blk
filtering
flag_mailed
flag_testdata
generate_label()[source]

Generate and set label from internal attributes.

groups
id

Common table column for unique numeric identifier, implementation is based on declared_attr pattern.

label
mail_dt
mail_res
mail_to
message
parent_id
severity
statistics
structured_data
to_dict()[source]

Interface implementation: Implementation of mentat.datatype.sqldb.MODEL.to_dict() method.

type
class mentat.datatype.sqldb.EventStatisticsModel(**kwargs)[source]

Bases: Base

Class representing event statistics objects within the SQL database mapped to statistics_events table.

calculate_delta()[source]

Calculate and set delta between internal time interval boundaries.

calculate_interval()[source]

Calculate and set internal interval label.

count
createtime

Common table column for object creation timestamps, implementation is based on declared_attr pattern.

delta
dt_from
dt_to
static format_interval(dtl, dth)[source]

Format two given timestamps into a single string describing the interval between them. This string can then be used as a form of a label.

Parameters
  • dtl (datetime.datetime) – Lower interval boundary.

  • dth (datetime.datetime) – Upper interval boundary.

Returns

Interval between timestamps.

Return type

str

id

Common table column for unique numeric identifier, implementation is based on declared_attr pattern.

interval
stats_external
stats_internal
stats_overall
to_dict()[source]

Interface implementation: Implementation of mentat.datatype.sqldb.MODEL.to_dict() method.
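The interval label produced by format_interval() can be sketched as follows. The exact format string Mentat uses is an assumption here; only the general shape (lower boundary, separator, upper boundary) is illustrated:

```python
import datetime


def format_interval(dtl, dth):
    """Format two timestamps into one interval label (illustrative sketch;
    the real Mentat format may differ)."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    return "{}_{}".format(dtl.strftime(fmt), dth.strftime(fmt))


label = format_interval(
    datetime.datetime(2024, 1, 1, 12, 0, 0),
    datetime.datetime(2024, 1, 1, 12, 5, 0),
)
```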

class mentat.datatype.sqldb.FilterModel(**kwargs)[source]

Bases: Base

Class representing reporting filter objects within the SQL database mapped to filters table.

categories
createtime

Common table column for object creation timestamps, implementation is based on declared_attr pattern.

description
detectors
enabled
filter
group
group_id
hits
id

Common table column for unique numeric identifier, implementation is based on declared_attr pattern.

is_state_disabled()[source]

Check if current filter state is disabled.

is_state_enabled()[source]

Check if current filter state is enabled.

last_hit
name
set_state_disabled()[source]

Set current filter state to disabled.

set_state_enabled()[source]

Set current filter state to enabled.

sources
to_dict()[source]

Interface implementation: Implementation of mentat.datatype.sqldb.MODEL.to_dict() method.

type
valid_from
valid_to
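The is_state_enabled()/is_state_disabled()/set_state_enabled()/set_state_disabled() convention, shared by FilterModel, GroupModel and UserModel below, is a thin wrapper around the boolean enabled column. A toy stand-in (not Mentat's actual code):

```python
class DemoFilter:
    """Toy stand-in for the enabled-state interface (illustrative only)."""

    def __init__(self, enabled=False):
        # In the real models this is the 'enabled' database column.
        self.enabled = enabled

    def is_state_enabled(self):
        return self.enabled

    def is_state_disabled(self):
        return not self.enabled

    def set_state_enabled(self):
        self.enabled = True

    def set_state_disabled(self):
        self.enabled = False
```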
class mentat.datatype.sqldb.GroupModel(**kwargs)[source]

Bases: Base

Class representing group objects within the SQL database mapped to groups table.

children
createtime

Common table column for object creation timestamps, implementation is based on declared_attr pattern.

description
enabled
filters
id

Common table column for unique numeric identifier, implementation is based on declared_attr pattern.

is_state_disabled()[source]

Check if current group state is disabled.

is_state_enabled()[source]

Check if current group state is enabled.

local_id
managers
members
members_wanted
name
networks
parent_id
reports
set_state_disabled()[source]

Set current group state to disabled.

set_state_enabled()[source]

Set current group state to enabled.

settings_rep
source
to_dict()[source]

Interface implementation: Implementation of mentat.datatype.sqldb.MODEL.to_dict() method.

class mentat.datatype.sqldb.ItemChangeLogModel(**kwargs)[source]

Bases: Base

Class representing item changelog records within the SQL database mapped to changelogs_items table.

after
author
author_id
before
calculate_diff()[source]

Calculate the difference between the internal before and after attributes and store it internally in the diff attribute.

createtime

Common table column for object creation timestamps, implementation is based on declared_attr pattern.

diff
endpoint
id

Common table column for unique numeric identifier, implementation is based on declared_attr pattern.

model
model_id
module
operation
class mentat.datatype.sqldb.NetworkModel(**kwargs)[source]

Bases: Base

Class representing network record objects within the SQL database mapped to networks table.

createtime

Common table column for object creation timestamps, implementation is based on declared_attr pattern.

description
group
group_id
id

Common table column for unique numeric identifier, implementation is based on declared_attr pattern.

is_base
local_id
netname
network
rank
source
to_dict()[source]

Interface implementation: Implementation of mentat.datatype.sqldb.MODEL.to_dict() method.

class mentat.datatype.sqldb.ReportStatisticsJSONB(*args: Any, **kwargs: Any)[source]

Bases: TypeDecorator

Class representing a JSONB type used for report statistics in order to ensure compatibility with legacy reports.

cache_ok: Optional[bool] = True

Indicate if statements using this ExternalType are “safe to cache”.

The default value None will emit a warning and then not allow caching of a statement which includes this type. Set to False to disable statements using this type from being cached at all without a warning. When set to True, the object’s class and selected elements from its state will be used as part of the cache key. For example, using a TypeDecorator:

class MyType(TypeDecorator):
    impl = String

    cache_ok = True

    def __init__(self, choices):
        self.choices = tuple(choices)
        self.internal_only = True

The cache key for the above type would be equivalent to:

>>> MyType(["a", "b", "c"])._static_cache_key
(<class '__main__.MyType'>, ('choices', ('a', 'b', 'c')))

The caching scheme will extract attributes from the type that correspond to the names of parameters in the __init__() method. Above, the “choices” attribute becomes part of the cache key but “internal_only” does not, because there is no parameter named “internal_only”.

The requirements for cacheable elements are that they are hashable and also that they indicate the same SQL rendered for expressions using this type every time for a given cache value.

To accommodate for datatypes that refer to unhashable structures such as dictionaries, sets and lists, these objects can be made “cacheable” by assigning hashable structures to the attributes whose names correspond with the names of the arguments. For example, a datatype which accepts a dictionary of lookup values may publish this as a sorted series of tuples. Given a previously un-cacheable type as:

class LookupType(UserDefinedType):
    '''a custom type that accepts a dictionary as a parameter.

    this is the non-cacheable version, as "self.lookup" is not
    hashable.

    '''

    def __init__(self, lookup):
        self.lookup = lookup

    def get_col_spec(self, **kw):
        return "VARCHAR(255)"

    def bind_processor(self, dialect):
        # ...  works with "self.lookup" ...

Where “lookup” is a dictionary. The type will not be able to generate a cache key:

>>> type_ = LookupType({"a": 10, "b": 20})
>>> type_._static_cache_key
<stdin>:1: SAWarning: UserDefinedType LookupType({'a': 10, 'b': 20}) will not
produce a cache key because the ``cache_ok`` flag is not set to True.
Set this flag to True if this type object's state is safe to use
in a cache key, or False to disable this warning.
symbol('no_cache')

If we did set up such a cache key, it wouldn’t be usable. We would get a tuple structure that contains a dictionary inside of it, which cannot itself be used as a key in a “cache dictionary” such as SQLAlchemy’s statement cache, since Python dictionaries aren’t hashable:

>>> # set cache_ok = True
>>> type_.cache_ok = True

>>> # this is the cache key it would generate
>>> key = type_._static_cache_key
>>> key
(<class '__main__.LookupType'>, ('lookup', {'a': 10, 'b': 20}))

>>> # however this key is not hashable, will fail when used with
>>> # SQLAlchemy statement cache
>>> some_cache = {key: "some sql value"}
Traceback (most recent call last): File "<stdin>", line 1,
in <module> TypeError: unhashable type: 'dict'

The type may be made cacheable by assigning a sorted tuple of tuples to the “.lookup” attribute:

class LookupType(UserDefinedType):
    '''a custom type that accepts a dictionary as a parameter.

    The dictionary is stored both as itself in a private variable,
    and published in a public variable as a sorted tuple of tuples,
    which is hashable and will also return the same value for any
    two equivalent dictionaries.  Note it assumes the keys and
    values of the dictionary are themselves hashable.

    '''

    cache_ok = True

    def __init__(self, lookup):
        self._lookup = lookup

        # assume keys/values of "lookup" are hashable; otherwise
        # they would also need to be converted in some way here
        self.lookup = tuple(
            (key, lookup[key]) for key in sorted(lookup)
        )

    def get_col_spec(self, **kw):
        return "VARCHAR(255)"

    def bind_processor(self, dialect):
        # ...  works with "self._lookup" ...

Where above, the cache key for LookupType({"a": 10, "b": 20}) will be:

>>> LookupType({"a": 10, "b": 20})._static_cache_key
(<class '__main__.LookupType'>, ('lookup', (('a', 10), ('b', 20))))

New in version 1.4.14: - added the cache_ok flag to allow some configurability of caching for TypeDecorator classes.

New in version 1.4.28: - added the ExternalType mixin which generalizes the cache_ok flag to both the TypeDecorator and UserDefinedType classes.

See also

sql_caching

coerce_compared_value(op, value)[source]

Ensure proper coercion.

impl

alias of JSONB

process_result_value(value, dialect)[source]

Rename ‘ips’ to ‘sources’
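The legacy-compatibility renaming performed by process_result_value() can be sketched with a TypeDecorator. The key names come from the description above; the class name and surrounding details are illustrative assumptions, not Mentat's actual implementation:

```python
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.types import TypeDecorator


class DemoStatisticsJSONB(TypeDecorator):
    """Sketch of a JSONB wrapper renaming the legacy 'ips' key to 'sources'."""

    impl = JSONB
    cache_ok = True

    def process_result_value(self, value, dialect):
        # Rename the legacy 'ips' key to 'sources' when loading from the DB.
        if value is not None and "ips" in value:
            value["sources"] = value.pop("ips")
        return value
```

Because process_result_value() is plain Python, it can be exercised directly without a database connection.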

class mentat.datatype.sqldb.SettingsReportingModel(**kwargs)[source]

Bases: Base

Class representing reporting settings objects within the SQL database mapped to settings_reporting table.

createtime

Common table column for object creation timestamps, implementation is based on declared_attr pattern.

emails_critical
emails_high
emails_low
emails_medium
group
group_id
id

Common table column for unique numeric identifier, implementation is based on declared_attr pattern.

locale
mode
redirect
timezone
to_dict()[source]

Interface implementation: Implementation of mentat.datatype.sqldb.MODEL.to_dict() method.

class mentat.datatype.sqldb.UserModel(**kwargs)[source]

Bases: Base

Class representing user objects within the SQL database mapped to users table.

apikey
changelogs
check_password(password_plain)[source]

Check given plaintext password against internal password hash.

convert_lower(key, value)[source]

Convert login and email to lowercase.

createtime

Common table column for object creation timestamps, implementation is based on declared_attr pattern.

email
enabled
fullname
get_id()[source]

Mandatory interface required by the flask_login extension.

has_no_role()[source]

Returns True if the user has no role.

has_role(role)[source]

Returns True if the user identifies with the specified role.

Parameters

role (str) – A role name.

id

Common table column for unique numeric identifier, implementation is based on declared_attr pattern.

property is_active

Mandatory interface required by the flask_login extension.

property is_anonymous

Mandatory interface required by the flask_login extension.

property is_authenticated

Mandatory interface required by the flask_login extension.

is_state_disabled()[source]

Check if current user account state is disabled.

is_state_enabled()[source]

Check if current user account state is enabled.

locale
login
logintime
managements
memberships
memberships_wanted
organization
password
roles
set_password(password_plain)[source]

Generate and set password hash from given plain text password.

set_state_disabled()[source]

Set current user account state to disabled.

set_state_enabled()[source]

Set current user account state to enabled.

timezone
to_dict()[source]

Interface implementation: Implementation of mentat.datatype.sqldb.MODEL.to_dict() method.
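The set_password()/check_password() pair follows a common salted-hash pattern. A self-contained sketch of that pattern (Mentat's actual hashing scheme and storage format are not shown here; this is an illustrative stand-in):

```python
import hashlib
import secrets


class DemoUser:
    """Toy stand-in for UserModel's password interface (illustrative only;
    Mentat's actual hashing scheme may differ)."""

    def __init__(self):
        # In the real model this is the 'password' database column.
        self.password = None

    def set_password(self, password_plain):
        # Generate and store a salted hash of the plaintext password.
        salt = secrets.token_hex(16)
        digest = hashlib.sha256((salt + password_plain).encode("utf-8")).hexdigest()
        self.password = "{}${}".format(salt, digest)

    def check_password(self, password_plain):
        # Recompute the hash with the stored salt and compare in constant time.
        salt, digest = self.password.split("$", 1)
        candidate = hashlib.sha256((salt + password_plain).encode("utf-8")).hexdigest()
        return secrets.compare_digest(candidate, digest)
```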

mentat.datatype.sqldb.detectormodel_from_typeddict(structure, defaults=None)[source]

Convenience method for creating mentat.datatype.sqldb.DetectorModel object from mentat.datatype.internal.Detector objects.

mentat.datatype.sqldb.dictdiff(dict_obj_a, dict_obj_b)[source]

Calculate the difference between two model objects given as dicts.
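A minimal sketch of such a dictionary diff (the actual algorithm and output format used by Mentat are not shown here; this illustrates only the general idea of comparing two model dicts key by key):

```python
def dictdiff(dict_obj_a, dict_obj_b):
    """Return {key: (old, new)} for keys whose values differ (sketch only)."""
    keys = set(dict_obj_a) | set(dict_obj_b)
    return {
        key: (dict_obj_a.get(key), dict_obj_b.get(key))
        for key in keys
        if dict_obj_a.get(key) != dict_obj_b.get(key)
    }


changes = dictdiff(
    {"name": "abuse", "enabled": True},
    {"name": "abuse", "enabled": False, "source": "manual"},
)
```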

mentat.datatype.sqldb.diff(obj_a, obj_b)[source]

Calculate the difference between two model objects.

mentat.datatype.sqldb.enforce_wanted_memberships_consistency(group, user, initiator)[source]

This event method is triggered when a user is added to the members of a group; it enforces consistency by removing that user from members_wanted (if present).
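The consistency rule itself can be sketched without SQLAlchemy's event machinery (which the real implementation hooks into); here groups are modeled as plain dicts for illustration:

```python
def enforce_wanted_memberships_consistency(group, user, initiator=None):
    """If a user becomes a member, drop any pending membership request
    for that user (illustrative sketch of the rule only)."""
    if user in group["members_wanted"]:
        group["members_wanted"].remove(user)


group = {"members": [], "members_wanted": ["alice"]}
group["members"].append("alice")
enforce_wanted_memberships_consistency(group, "alice")
```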

mentat.datatype.sqldb.eventstatsmodel_from_typeddict(structure, defaults=None)[source]

Convenience method for creating mentat.datatype.sqldb.EventStatisticsModel object from mentat.datatype.internal.EventStat objects.

mentat.datatype.sqldb.filtermodel_from_typeddict(structure, defaults=None)[source]

Convenience method for creating mentat.datatype.sqldb.FilterModel object from the corresponding mentat.datatype.internal filter objects.

mentat.datatype.sqldb.groupmodel_from_typeddict(structure, defaults=None)[source]

Convenience method for creating mentat.datatype.sqldb.GroupModel object from mentat.datatype.internal.AbuseGroup objects.

class mentat.datatype.sqldb.iprange(*args, **kwds)[source]

Bases: UserDefinedType

bind_processor(dialect)[source]

Return a conversion function for processing bind values.

Returns a callable which will receive a bind parameter value as the sole positional argument and will return a value to send to the DB-API.

If processing is not necessary, the method should return None.

Note

This method is only called relative to a dialect specific type object, which is often private to a dialect in use and is not the same type object as the public facing one, which means it’s not feasible to subclass a types.TypeEngine class in order to provide an alternate _types.TypeEngine.bind_processor() method, unless subclassing the _types.UserDefinedType class explicitly.

To provide alternate behavior for _types.TypeEngine.bind_processor(), implement a _types.TypeDecorator class and provide an implementation of _types.TypeDecorator.process_bind_param().

See also

types_typedecorator

Parameters

dialect – Dialect instance in use.

cache_ok: Optional[bool] = True

Indicate if statements using this ExternalType are “safe to cache”.

This attribute carries the same inherited documentation as ReportStatisticsJSONB.cache_ok above; see that entry for the full discussion, the MyType/LookupType examples, and the sql_caching reference.

get_col_spec(**kw)[source]
result_processor(dialect, coltype)[source]

Return a conversion function for processing result row values.

Returns a callable which will receive a result row column value as the sole positional argument and will return a value to return to the user.

If processing is not necessary, the method should return None.

Note

This method is only called relative to a dialect specific type object, which is often private to a dialect in use and is not the same type object as the public facing one, which means it’s not feasible to subclass a types.TypeEngine class in order to provide an alternate _types.TypeEngine.result_processor() method, unless subclassing the _types.UserDefinedType class explicitly.

To provide alternate behavior for _types.TypeEngine.result_processor(), implement a _types.TypeDecorator class and provide an implementation of _types.TypeDecorator.process_result_value().

See also

types_typedecorator

Parameters
  • dialect – Dialect instance in use.

  • coltype – DBAPI coltype argument received in cursor.description.
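An illustrative sketch of what such a custom range type's processors might do. The real iprange implementation targets a PostgreSQL-side type and may differ substantially; the column type name, value representation and class name here are assumptions:

```python
from sqlalchemy.types import UserDefinedType


class DemoIPRange(UserDefinedType):
    """Sketch of a custom range type stored as 'low-high' strings."""

    cache_ok = True

    def get_col_spec(self, **kw):
        # Name of the database-side type (assumed here).
        return "iprange"

    def bind_processor(self, dialect):
        # Convert a (low, high) tuple into the string sent to the DB-API.
        def process(value):
            if value is None:
                return None
            low, high = value
            return "{}-{}".format(low, high)
        return process

    def result_processor(self, dialect, coltype):
        # Convert the database string back into a (low, high) tuple.
        def process(value):
            if value is None:
                return None
            low, high = value.split("-", 1)
            return (low, high)
        return process
```

Both processors are plain callables, so they can be exercised directly without connecting to a database.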

mentat.datatype.sqldb.jsondiff(json_obj_a, json_obj_b)[source]

Calculate the difference between two model objects given as JSON strings.

mentat.datatype.sqldb.networkmodel_from_typeddict(structure, defaults=None)[source]

Convenience method for creating mentat.datatype.sqldb.NetworkModel object from mentat.datatype.internal.NetworkRecord objects.

mentat.datatype.sqldb.setrepmodel_from_typeddict(structure, defaults=None)[source]

Convenience method for creating mentat.datatype.sqldb.SettingsReportingModel object from mentat.datatype.internal.AbuseGroup objects.

mentat.datatype.sqldb.usermodel_from_typeddict(structure, defaults=None)[source]

Convenience method for creating mentat.datatype.sqldb.UserModel object from mentat.datatype.internal.User objects.
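These *_from_typeddict helpers share a common shape: copy known keys from the internal typed dictionary into a fresh model object, with explicit defaults filling any gaps. A generic sketch of that shape (the attribute map, key names and demo model are illustrative, not Mentat's actual code):

```python
def model_from_typeddict(model_cls, structure, attribute_map, defaults=None):
    """Instantiate model_cls from a typed dict (illustrative sketch only).

    attribute_map maps model attribute names to keys in the source dict;
    defaults supplies values for attributes missing from the source.
    """
    defaults = defaults or {}
    kwargs = {}
    for attr, key in attribute_map.items():
        if key in structure:
            kwargs[attr] = structure[key]
        elif attr in defaults:
            kwargs[attr] = defaults[attr]
    return model_cls(**kwargs)


class DemoUser:
    def __init__(self, login=None, fullname=None, organization=None):
        self.login = login
        self.fullname = fullname
        self.organization = organization


user = model_from_typeddict(
    DemoUser,
    {"_id": "john", "name": "John Doe"},
    {"login": "_id", "fullname": "name", "organization": "org"},
    defaults={"organization": "Example Org"},
)
```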