Transformations

Data transformations.
Composable transformations from one representation of data to another.
Used as the lubricant and glue between hardware objects. Some hardware objects
disagree about the way information should be represented – e.g. cameras are
very partial to letting position information remain latent in a frame of a
video, but some other object might want the actual [x, y] coordinates.
Transformations help negotiate between them (but don't resolve their
irreparably different worldviews :( ).
Transformations are organized by modality, but this API is quite immature.
Transformations have a process method that accepts and returns a single
object. They must also define the format of their inputs and outputs
(format_in and format_out). That API is also a sketch.
The __add__() method allows transforms to be combined, e.g.:
from autopilot import transform as t
transform_me = t.Image.DLC('model_directory')
transform_me += t.selection.DLCSlice('point')
transform_me.process(frame)
# ... etcetera
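Chaining with += presumably just links each transform to the next so that output flows down the chain. A minimal standalone sketch of that pattern follows; the class names and mechanics here are assumptions for illustration, not autopilot's actual implementation:

```python
class Chainable:
    """Toy sketch of __add__-based chaining: each transform's output
    feeds the next transform in the chain."""

    def __init__(self):
        self._child = None

    def _apply(self, input):
        # single-step transformation, defined by subclasses
        raise NotImplementedError

    def process(self, input):
        # run this step, then hand the result to the next link (if any)
        out = self._apply(input)
        return self._child.process(out) if self._child else out

    def __add__(self, other):
        # append `other` at the end of the chain; return the head so that
        # `a += b` keeps `a` as the pipeline's entry point
        node = self
        while node._child is not None:
            node = node._child
        node._child = other
        return self


class Offset(Chainable):
    """Toy transform: add a constant to its input."""

    def __init__(self, n):
        super().__init__()
        self.n = n

    def _apply(self, input):
        return input + self.n


pipeline = Offset(1)
pipeline += Offset(10)  # same pattern as `transform_me += ...` above
```

After chaining, a single process() call on the head runs the whole pipeline, which is what lets the caller treat the composed object like one transform.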
Todo
This is a first draft of this module and it is purely synchronous at the
moment. It will be expanded to...

* support multiple asynchronous processing rhythms
* support automatic value coercion
* make recursion checks – make sure a child hasn't already been added to a
  processing chain
* idk participate at home! list your own shortcomings of this module, don't
  be shy it likes it
Functions:

make_transform – Make a transform from a list of iterator specifications.

make_transform(transforms: List[dict])
Make a transform from a list of iterator specifications.
Parameters
transforms (list) – A list of Transforms and parameterizations in the form:

[
    {'transform': Transform,
     'args': (arg1, arg2,),              # optional
     'kwargs': {'key1': 'val1', ...}},   # optional
    {'transform': ...},
]

Returns
Transform
Data transformations.
Experimental module.
Reusable transformations from one representation of data to another, e.g.
converting frames of a video to locations of objects, or locations of objects
to area labels.

Todo
This is a preliminary module and it is purely synchronous at the moment. It
will be expanded to...

* support multiple asynchronous processing rhythms
* support automatic value coercion

The following design features need to be added:

* recursion checks – make sure a child hasn't already been added to a
  processing chain
Classes:

Transform – Metaclass for data transformations

class Transform(rhythm: autopilot.transform.transforms.TransformRhythm = <TransformRhythm.FILO: 2>, *args, **kwargs)
Bases: object

Metaclass for data transformations
Each subclass should define the following:

process() – a method that takes the input of the transformation as its single
argument and returns the transformed output
format_in – a dict that specifies the input format
format_out – a dict that specifies the output format
Parameters
rhythm (TransformRhythm) – A rhythm by which the transformation object
processes its inputs

Variables
child (Transform) – Another Transform object chained after this one
Attributes:

parent – If this Transform is in a chain of transforms, the transform that
precedes it

Methods:

process(input)
reset() – If a transformation is stateful, reset state.
check_compatible(child) – Check that this Transformation's format_out is
compatible with another's format_in
__add__(other) – Add another Transformation in the chain to make a
processing pipeline
property rhythm

property format_in

property format_out

property parent
If this Transform is in a chain of transforms, the transform that precedes it.

Returns
Transform, None if no parent.
check_compatible(child: autopilot.transform.transforms.Transform)
Check that this Transformation's format_out is compatible with another's
format_in

Todo
Check for types that can be automatically coerced into one another and set
_coercion to the appropriate function

Parameters
child (Transform) – Transformation to check compatibility

Returns
bool
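Since the format API is explicitly a sketch, the exact check is unspecified. One plausible, minimal version compares the two format dicts; the 'type' key used here is an assumption of this sketch, not a documented schema:

```python
def check_compatible_sketch(format_out: dict, format_in: dict) -> bool:
    """Toy compatibility check: the producer's output type must match
    (or be a subclass of) the consumer's expected input type.
    The {'type': ...} format-dict shape is an assumption."""
    try:
        return issubclass(format_out['type'], format_in['type'])
    except (KeyError, TypeError):
        # missing or malformed format specs: assume incompatible
        return False
```

Swapping this boolean check for a lookup table of coercion functions would give the automatic coercion the Todo above asks for.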