Module sverchok.data_structure
Functions
def Edg_pol_generate(prop)
def Matrix_generate(prop)
-
Generate Matrix() data from Sverchok data
def Matrix_listing(prop)
-
Convert Matrix() into Sverchok data
def Matrix_location(prop, to_list=False)
-
return a list of locations representing the translation of the matrices
def Matrix_rotation(prop, to_list=False)
-
return (Vector, rotation) pairs; utility function for the Matrix Destructor. If to_list is True, the Vector() is decomposed into tuple format.
def Matrix_scale(prop, to_list=False)
-
return a Vector()/list representing the scale factor of the matrices
def Vector_degenerate(prop)
-
return a simple list of values instead of Vector() objects
def Vector_generate(prop)
-
return a list of Vector() objects from standard Sverchok data
def apply_mask(mask, lst)
def calc_mask(subset_data, set_data, level=0, negate=False, ignore_order=True)
-
Calculate mask: for each item in set_data, return True if it is present in subset_data. The function can work at any specified level.
subset_data: subset, for example [1]
set_data: set, for example [1, 2, 3]
level: 0 to check immediate members of set and subset; 1 to work with lists of lists and so on
negate: if True, then the result will be negated (True if item of set is not present in subset)
ignore_order: when comparing lists, ignore items order
Raises an exception if nesting level of input sets is less than specified level parameter.
calc_mask([1], [1,2,3]) == [True, False, False]
calc_mask([1], [1,2,3], negate=True) == [False, True, True]
def changable_sockets(node, inputsocketname, outputsocketname)
-
Changes the types of output sockets according to the type of the socket connected to the given input socket. Note: it does not work if the node has outputs with the same names. If the input socket is not connected, or its type equals the type of the first output socket, it does nothing.
Arguments: node, name of the input socket to follow, list of output sockets to change.
def cross_indices_np(n)
-
create a list with all index pairs; for n=3 it outputs a numpy array with: [0, 1], [0, 2], [1, 2]
def cycle_for_length(lst, count)
def dataCorrect(data, nominal_dept=2)
-
convert data from arbitrary nesting to the standard form: container( objects( lists( floats, ), ), )
def dataCorrect_np(data, nominal_dept=2)
-
convert data from arbitrary nesting to the standard form: container( objects( lists( floats, ), ), )
def dataSpoil(data, dept)
-
convert standard data back to its initial levels, i.e. to nested lists: container( objects( lists( nested_lists( floats, ), ), ), ) (this is impossible!)
def data_standard(data, dept, nominal_dept)
-
convert data from arbitrary nesting to the standard form: container( objects( lists( floats, ), ), )
def describe_data_shape(data)
-
Describe shape of data in human-readable form. Returns string. Can be used for debugging or for displaying information to user. Note: this method inspects only first element of each list/tuple, expecting they are all homogeneous (that is usually true in Sverchok).
describe_data_shape(None) == 'Level 0: NoneType'
describe_data_shape(1) == 'Level 0: int'
describe_data_shape([]) == 'Level 1: list [0]'
describe_data_shape([1]) == 'Level 1: list [1] of int'
describe_data_shape([[(1,2,3)]]) == 'Level 3: list [1] of list [1] of tuple [3] of int'
def describe_data_shape_by_level(data, include_numpy_nesting=True)
-
Describe shape of data in human-readable form. Returns a tuple:
* data nesting level
* list of descriptions of data shapes at each nesting level
def describe_data_structure(data, data_types=(<class 'float'>, <class 'int'>, <class 'numpy.float64'>, <class 'numpy.int32'>, <class 'numpy.int64'>, <class 'str'>, <class 'Matrix'>))
def ensure_min_nesting(data, target_level, data_types=(<class 'float'>, <class 'int'>, <class 'numpy.float64'>, <class 'numpy.int32'>, <class 'numpy.int64'>, <class 'str'>, <class 'Matrix'>), input_name=None)
-
data: number, or list of numbers, or list of lists, etc.
target_level: minimum data nesting level required for further processing.
data_types: list or tuple of types.
input_name: name of input socket data was taken from. Optional. If specified, used for error reporting.
Wraps data in as many [] as required to achieve the target nesting level. If data already has a higher nesting level, the same data is returned.
ensure_min_nesting(17, 0) == 17
ensure_min_nesting(17, 1) == [17]
ensure_min_nesting([17], 1) == [17]
ensure_min_nesting([17], 2) == [[17]]
ensure_min_nesting([(1,2,3)], 3) == [[(1,2,3)]]
ensure_min_nesting([[[17]]], 1) => [[[17]]]
def ensure_nesting_level(data, target_level, data_types=(<class 'float'>, <class 'int'>, <class 'numpy.float64'>, <class 'numpy.int32'>, <class 'numpy.int64'>, <class 'str'>, <class 'Matrix'>), input_name=None)
-
data: number, or list of numbers, or list of lists, etc.
target_level: data nesting level required for further processing.
data_types: list or tuple of types.
input_name: name of input socket data was taken from. Optional. If specified, used for error reporting.
Wraps data in as many [] as required to achieve the target nesting level. Raises an exception if data already has a higher nesting level.
ensure_nesting_level(17, 0) == 17
ensure_nesting_level(17, 1) == [17]
ensure_nesting_level([17], 1) == [17]
ensure_nesting_level([17], 2) == [[17]]
ensure_nesting_level([(1,2,3)], 3) == [[(1,2,3)]]
ensure_nesting_level([[[17]]], 1) => exception
def enum_item(s)
-
return a list usable in enum property from a list with one value
def enum_item_4(s)
-
return a 4*n list usable in enum property from a list with one value
def enum_item_5(s, icons)
-
return a 5*n list (with icons) usable in enum property from a list with one value
def extend_blender_class(cls)
-
A class decorator for adding extra logic to base Blender classes. The decorated class should have the same name as the Blender class. Take into account that this decorator does not delete anything on reload events.
def fixed_iter(data, iter_number, fill_value=0)
-
Creates an iterator over the given data which yields iter_number items. If data is shorter than iter_number, the last element is cycled. If data is empty, a [fill_value] list is used instead.
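For illustration, a sketch of the expected behaviour, based only on the description above (not taken from the module's tests):
list(fixed_iter([1, 2], 4)) == [1, 2, 2, 2]
list(fixed_iter([], 3, fill_value=0)) == [0, 0, 0]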
def flat_iter(data)
-
[1, [2, 3, [4]], 5] -> 1, 2, 3, 4, 5
def flatten_data(data, target_level=1, data_types=(<class 'float'>, <class 'int'>, <class 'numpy.float64'>, <class 'numpy.int32'>, <class 'numpy.int64'>, <class 'str'>, <class 'Matrix'>))
-
Reduce nesting level of data to target_level, by concatenating nested sub-lists. Raises an exception if nesting level is already less than target_level. Refer to data_structure_tests.py for examples.
def fullList(l, count)
-
extends list l, if needed, with its last element so that its length is at least count
def fullList_deep_copy(l, count)
-
the same as the fullList function, but it works correctly with objects such as lists
def fullList_np(l, count)
-
extends list l, if needed, with its last element so that its length is at least count
def get_data_nesting_level(data, data_types=(<class 'float'>, <class 'int'>, <class 'numpy.float64'>, <class 'numpy.int32'>, <class 'numpy.int64'>, <class 'str'>, <class 'Matrix'>), search_first_data=False)
-
data: number, or list of numbers, or list of lists, etc. data_types: list or tuple of types.
Detect nesting level of actual data. "Actual" data is detected by belonging to one of data_types. This method searches only for the first instance of "actual data", so it does not support cases where different elements of the source list have different nesting. Returns an integer. Raises an exception if at some point it encounters an element which is not a tuple, list, or one of data_types.
get_data_nesting_level(1) == 0
get_data_nesting_level([]) == 1
get_data_nesting_level([1]) == 1
get_data_nesting_level([[(1,2,3)]]) == 3
def get_edge_list(n)
-
Get the list of n edges connecting n+1 vertices.
e.g. [[0, 1], [1, 2], … , [n-1, n]]
NOTE: This uses an "edge cache" to accelerate the edge list generation. The cache is extended automatically as needed to satisfy the largest number of edges within the node tree and it is shared by all nodes using this method.
def get_edge_loop(n)
-
Get the loop list of n edges connecting n vertices.
e.g. [[0, 1], [1, 2], … , [n-2, n-1], [n-1, 0]]
NOTE: This uses an "edge cache" to accelerate the edge list generation. The cache is extended automatically as needed to satisfy the largest number of edges within the node tree and it is shared by all nodes using this method.
def get_other_socket(socket)
-
Get the next real upstream socket. This should be expanded to also support wifi nodes. Will return None if there isn't another socket connected, so there is no need to check socket.links.
def graft_data(data, item_level=1, wrap_level=1, data_types=(<class 'float'>, <class 'int'>, <class 'numpy.float64'>, <class 'numpy.int32'>, <class 'numpy.int64'>, <class 'str'>, <class 'Matrix'>))
-
For each nested item of the list which has its own nesting level of item_level, wrap that item into a pair of []. For example, with item_level==0, this means wrapping each number in the nested list (however deep that number is nested) into a pair of []. Refer to data_structure_tests.py for examples.
def handle_check(handle, prop)
def handle_delete(handle)
def handle_read(handle)
def handle_write(handle, prop)
def has_element(pol_edge)
def invert_index_list(indexes, length)
-
Inverts an index list.
indexes: List[Int] or flat numpy ndarray
length: Int. Length of the base list
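A hedged illustration of the intended behaviour, assuming it returns the indices of the base list that are not in indexes (the output container type depends on the implementation):
invert_index_list([0, 2, 4], 6) -> [1, 3, 5]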
def is_ultimately(data, data_types)
-
Check if data is a nested list / tuple / array which ultimately consists of items of data_types.
def levelsOflist(lst)
-
calculate list nesting; returns the containment level as an integer
def levels_of_list_or_np(lst)
-
calculate list nesting; returns the containment level as an integer
def list_levels_adjust(data, instructions, data_types=(<class 'float'>, <class 'int'>, <class 'numpy.float64'>, <class 'numpy.int32'>, <class 'numpy.int64'>, <class 'str'>, <class 'Matrix'>))
def make_cyclers(lists)
def make_repeaters(lists)
def map_at_level(function, data, item_level=0, data_types=(<class 'float'>, <class 'int'>, <class 'numpy.float64'>, <class 'numpy.int32'>, <class 'numpy.int64'>, <class 'str'>, <class 'Matrix'>))
-
Given a nested list of objects, apply function to each sub-list of items. Nesting structure of the result will be simpler than that of the input: the most nested levels (item_level of them) will be eliminated. Refer to data_structure_tests.py for examples.
def map_recursive(fn, data, data_types=(<class 'float'>, <class 'int'>, <class 'numpy.float64'>, <class 'numpy.int32'>, <class 'numpy.int64'>, <class 'str'>, <class 'Matrix'>))
-
Given a nested list of items, apply fn to each of these items. Nesting structure of the result will be the same as in the input.
def map_unzip_recursirve(fn, data, data_types=(<class 'float'>, <class 'int'>, <class 'numpy.float64'>, <class 'numpy.int32'>, <class 'numpy.int64'>, <class 'str'>, <class 'Matrix'>))
-
Given a nested list of items, apply fn to each of these items. This method expects that fn will return a tuple (or list) of results. After applying fn to each item of data, "unzip" the result, so that each item of the result of fn ends up in a separate nested list. Nesting structure of each item of the result of this method will be the same as the nesting structure of the input data. Refer to data_structure_tests.py for examples.
def match_cross(lsts)
-
return cross matched lists [[1,2], [5,6,7]] -> [[1,1,1,2,2,2], [5,6,7,5,6,7]]
def match_cross2(lsts)
-
return cross matched lists [[1,2], [5,6,7]] -> [[1, 2, 1, 2, 1, 2], [5, 5, 6, 6, 7, 7]]
def match_long_cycle(lsts)
-
return matched lists, cycling the shorter lists to match the longest one:
[[1,2,3,4,5], [10,11]] -> [[1,2,3,4,5], [10,11,10,11,10]]
def match_long_repeat(lsts)
-
return matched lists, repeating the last value of the shorter lists as needed to match the longest one:
[[1,2,3,4,5], [10,11]] -> [[1,2,3,4,5], [10,11,11,11,11]]
Lists passed into this function are not modified; it produces shallow copies and extends those.
def match_short(lsts)
-
return lists of equal length; the shortest list decides the output length:
[[1,2,3,4,5], [10,11]] -> [[1,2], [10,11]]
def matrixdef(orig, loc, scale, rot, angle, vec_angle=[[]])
def multi_socket(node, min=1, start=0, breck=False, out_count=None)
-
min - integer, minimal number of sockets; at least 1 is needed
start - integer, starting socket
breck - boolean, add brackets to socket names: x[0], x[1], x[2], etc.
out_count - integer, deals with outputs; if > 0, counts the number of outputs
The multi-socket base name is set separately in the node as self.base_name = 'some_name', e.g. 'x', 'data'
node.multi_socket_type - type of socket, as .bl_idname
def no_space(s)
def node_id(node)
-
return a stable hash for the lifetime of the node; needs a StringProperty called n_id in the node
def numpy_full_list(array, desired_length)
-
returns the array with the desired length by repeating the last item
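For illustration, a sketch of the expected result, assuming the repeating behaviour described above:
import numpy as np
numpy_full_list(np.array([1, 2, 3]), 5) -> array([1, 2, 3, 3, 3])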
def numpy_full_list_cycle(array, desired_length)
-
returns the array with the desired length by cycling over it
def numpy_match_long_cycle(list_of_arrays)
-
match numpy array lengths by cycling over the shorter arrays
def numpy_match_long_repeat(list_of_arrays)
-
match numpy array lengths by repeating the last item of the shorter arrays
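A hedged sketch of the expected behaviour, based only on the description above:
import numpy as np
numpy_match_long_repeat([np.array([1, 2, 3]), np.array([10, 20])]) -> [array([1, 2, 3]), array([10, 20, 20])]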
def numpy_match_short(list_of_arrays)
-
match numpy array lengths by cutting the longer arrays
def partition(p, lst)
def post_load_call(function)
-
Usage: if you need a function which should be called each time Blender is launched or a new file is opened, use this decorator.
Limitation: the function should not take any arguments because it will be called by a handler.
def repeat_last(lst)
-
creates an infinite iterator: first each element in lst, then the last element repeated forever; use with a terminating input
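For illustration, a hedged sketch of the expected behaviour (islice is used here only to truncate the infinite iterator):
from itertools import islice
list(islice(repeat_last([1, 2, 3]), 5)) == [1, 2, 3, 3, 3]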
def repeat_last_for_length(lst, count, deepcopy=False)
-
Repeat the last item of the list enough times for the result's length to be equal to count.
repeat_last_for_length(None, n) = None
repeat_last_for_length([], n) = []
repeat_last_for_length([1,2], 4) = [1, 2, 2, 2]
def replace_socket(socket, new_type, new_name=None, new_pos=None)
-
Replace a socket with a socket of new_type and keep links
is_linked attribute of replaced socket will be False whether it is connected or not - https://developer.blender.org/T82318
def rotate_list(l, y=1)
-
"Rotate" list by shifting it's items towards the end and putting last items to the beginning. For example,
rotate_list([1, 2, 3]) = [2, 3, 1] rotate_list([1, 2, 3], y=2) = [3, 1, 2]
def second_as_first_cycle(F, S)
def socket_id(socket)
-
return a usable and semi-stable hash
def split_by_count(iterable, n, fillvalue=None)
-
Collect data into fixed-length chunks or blocks
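A hedged illustration of the expected grouping behaviour (the exact container type of each chunk depends on the implementation):
split_by_count([1, 2, 3, 4, 5], 2, fillvalue=0) -> [[1, 2], [3, 4], [5, 0]]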
def sv_lambda(**kwargs)
-
usage (like a named tuple):
structure = sv_lambda(keys=20, color=(1,0,0,0))
print(structure.keys)
print(structure.color)
useful for passing a parameter to a function that expects to be able to do a dot lookup on the parameter, for instance a function that normally accepts "self" or "node" but only really looks up one or two of its attributes.
def sv_zip(*iterables)
-
zip('ABCD', 'xy') -> Ax By; like standard zip, but yields lists instead of tuples
def transpose_list(lst)
-
Transpose a list of lists.
transpose_list([[1,2], [3,4]]) == [[1,3], [2, 4]]
def unwrap_data(data, unwrap_level=1, socket=None)
def unzip_dict_recursive(data, item_type=builtins.dict, to_dict=None)
-
Given a nested list of dictionaries, return a dictionary of nested lists. Nesting structure of each value of the resulting dictionary will be similar to the nesting structure of the input data, except that at the deepest level, instead of dictionaries, you will have their values.
inputs:
* data: nested list of dictionaries.
* item_type: allows using an arbitrary class instead of the standard Python dict.
* to_dict: a function which translates a data item into a Python dict (or another class with the same interface). Identity by default.
output: dictionary of nested lists.
Refer to data_structure_tests.py for examples.
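As an illustration only, a sketch of the expected result, assuming the unzipping behaviour described above:
unzip_dict_recursive([{'a': 1, 'b': 10}, {'a': 2, 'b': 20}]) == {'a': [1, 2], 'b': [10, 20]}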
def updateNode(self, context)
-
Called when a node has changed state and needs to trigger a partial update, for example from a user-exposed bpy.prop.
def update_edge_cache(n)
-
Extend the edge list cache to contain at least n edges.
NOTE: This is called by get_edge_list to make sure the edge cache is large enough, but it can also be called preemptively by nodes prior to making multiple calls to get_edge_list, in order to pre-grow the cache to a known size and thus accelerate the subsequent calls to get_edge_list, as they will not have to augment the cache on every call.
def update_with_kwargs(update_function, **kwargs)
-
You can wrap property update function for adding extra key arguments to it, like this:
def update_prop(self, context, extra_arg=None):
    print(extra_arg)
node_prop_name: bpy.props.BoolProperty(update=update_with_kwargs(update_prop, extra_arg='node_prop_name'))
def wrap_data(data, wrap_level=1)
def zip_long_repeat(*lists)
Classes
class SvListLevelAdjustment (flatten=False, wrap=False)
-
class SvListLevelAdjustment(object):
    def __init__(self, flatten=False, wrap=False):
        self.flatten = flatten
        self.wrap = wrap

    def __repr__(self):
        return f"<Flatten={self.flatten}, Wrap={self.wrap}>"
class classproperty (fget)
-
def __get__(self, owner_self, owner_cls):
    return self.fget(owner_cls)