kwimage.structs.detections

Structure for efficient access and modification of bounding boxes with associated scores and class labels. Builds on top of the kwimage.Boxes structure.

It can also optionally incorporate kwimage.PolygonList for segmentation masks and kwimage.PointsList for keypoints.

If you want to visualize boxes and scores, you can do this:
>>> # Given data
>>> import numpy as np
>>> data = np.random.rand(10, 4) * 224
>>> scores = np.random.rand(10,)
>>> class_idxs = np.random.randint(0, 3, size=10)
>>> classes = ['class1', 'class2', 'class3']
>>> #
>>> # Wrap your data with a Detections object
>>> import kwimage
>>> dets = kwimage.Detections(
>>>     boxes=kwimage.Boxes(data, format='xywh'),
>>>     scores=scores,
>>>     class_idxs=class_idxs,
>>>     classes=classes,
>>> )
>>> dets.draw()
>>> import matplotlib.pyplot as plt
>>> plt.gca().set_xlim(0, 224)
>>> plt.gca().set_ylim(0, 224)

Module Contents

Classes

_DetDrawMixin

Non-critical methods for visualizing detections

_DetAlgoMixin

Non-critical methods for algorithmic manipulation of detections

Detections

Container for holding and manipulating multiple detections.

_UnitDoctTests

Hacking in unit tests as doctests in the file itself so they are easy to move to kwannot later

Functions

_dets_to_fcmaps(dets, bg_size, input_dims, bg_idx=0, pmin=0.6, pmax=1.0, soft=True, exclude=[])

Construct semantic segmentation detection targets from annotations in dictionary format

Attributes

torch

kwimage.structs.detections.torch[source]
class kwimage.structs.detections._DetDrawMixin[source]

Non-critical methods for visualizing detections

draw(self, color='blue', alpha=None, labels=True, centers=False, lw=2, fill=False, ax=None, radius=5, kpts=True, sseg=True, setlim=False, boxes=True)[source]

Draws boxes using matplotlib

Example

>>> # xdoc: +REQUIRES(module:kwplot)
>>> self = Detections.random(num=10, scale=512.0, rng=0, classes=['a', 'b', 'c'])
>>> self.boxes.translate((-128, -128), inplace=True)
>>> image = (np.random.rand(256, 256) * 255).astype(np.uint8)
>>> # xdoc: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> fig = kwplot.figure(fnum=1, doclf=True)
>>> kwplot.imshow(image)
>>> # xdoc: +REQUIRES(--show)
>>> self.draw(color='blue', alpha=None)
>>> # xdoc: +REQUIRES(--show)
>>> for o in fig.findobj():  # http://matplotlib.1069221.n5.nabble.com/How-to-turn-off-all-clipping-td1813.html
>>>     o.set_clip_on(False)
>>> kwplot.show_if_requested()
draw_on(self, image, color='blue', alpha=None, labels=True, radius=5, kpts=True, sseg=True, boxes=True, ssegkw=None, label_loc='top_left', thickness=2)[source]

Draws boxes directly on the image using OpenCV

Parameters
  • image (ndarray[uint8]) – must be in uint8 format

  • color (str | ColorLike | List[ColorLike]) – one color for all boxes or a list of colors for each box

  • alpha (float) – Transparency of the overlay. Can be a scalar or a list with one value per box.

  • labels (bool | str | List[str]) – if True, use category names as the labels. See _make_labels for details. Otherwise, a manually specified text label for each box.

  • boxes (bool) – if True draw the boxes

  • kpts (bool) – if True draw the keypoints

  • sseg (bool) – if True draw the segmentations

  • ssegkw (dict) – extra arguments passed to segmentations.draw_on

  • radius (float) – passed to keypoints.draw_on

  • label_loc (str) – indicates where labels (if specified) should be drawn. Passed to boxes.draw_on.

  • thickness (int, default=2) – rectangle thickness; negative values draw a filled rectangle. Passed to boxes.draw_on.

Returns

image with labeled boxes drawn on it

Return type

ndarray[uint8]

CommandLine:

xdoctest -m kwimage.structs.detections _DetDrawMixin.draw_on:1 --profile --show

Example

>>> # xdoc: +REQUIRES(module:kwplot)
>>> import kwplot
>>> self = Detections.random(num=10, scale=512, rng=0)
>>> image = (np.random.rand(512, 512) * 255).astype(np.uint8)
>>> image2 = self.draw_on(image, color='blue')
>>> # xdoc: +REQUIRES(--show)
>>> kwplot.figure(fnum=2000, doclf=True)
>>> kwplot.autompl()
>>> kwplot.imshow(image2)
>>> kwplot.show_if_requested()

Example

>>> # xdoc: +REQUIRES(module:kwplot)
>>> from kwimage.structs.detections import *  # NOQA
>>> import kwplot
>>> self = Detections.random(num=10, scale=512, rng=0)
>>> image = (np.random.rand(512, 512) * 255).astype(np.uint8)
>>> image2 = self.draw_on(image, color='classes')
>>> # xdoc: +REQUIRES(--show)
>>> kwplot.figure(fnum=2000, doclf=True)
>>> kwplot.autompl()
>>> kwplot.imshow(image2)
>>> kwplot.show_if_requested()

Example

>>> # xdoc: +REQUIRES(module:kwplot)
>>> # xdoc: +REQUIRES(--profile)
>>> import kwplot
>>> self = Detections.random(num=100, scale=512, rng=0, keypoints=True, segmentations=True)
>>> image = (np.random.rand(512, 512) * 255).astype(np.uint8)
>>> image2 = self.draw_on(image, color='blue')
>>> # xdoc: +REQUIRES(--show)
>>> kwplot.figure(fnum=2000, doclf=True)
>>> kwplot.autompl()
>>> kwplot.imshow(image2)
>>> kwplot.show_if_requested()
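
Example (a small sketch of the per-box list forms of the color and labels parameters; the label strings are placeholders)

>>> # xdoc: +REQUIRES(module:kwplot)
>>> self = Detections.random(num=3, scale=256, rng=0)
>>> image = (np.random.rand(256, 256) * 255).astype(np.uint8)
>>> image2 = self.draw_on(image, color=['red', 'green', 'blue'],
>>>                       labels=['box0', 'box1', 'box2'])
>>> # xdoc: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.imshow(image2)
>>> kwplot.show_if_requested()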
Ignore:

import xdev
globals().update(xdev.get_func_kwargs(kwimage.Detections.draw_on))

_make_colors(self, color)[source]

Handles special settings of color.

If color == 'classes', then choose a distinct color for each category

_make_alpha(self, alpha)[source]

Either passes through user specified alpha or chooses a sensible default

_make_labels(self, labels)[source]

Either passes through user specified labels or chooses a sensible default

class kwimage.structs.detections._DetAlgoMixin[source]

Non-critical methods for algorithmic manipulation of detections

non_max_supression(self, thresh=0.0, perclass=False, impl='auto', daq=False, device_id=None)[source]

Find high scoring minimally overlapping detections

Parameters
  • thresh (float) – iou threshold between 0 and 1. A box is removed if it overlaps with a previously chosen box by more than this threshold. Higher values are more permissive (more boxes are returned). A value of 0 means that returned boxes will have no overlap.

  • perclass (bool) – if True, works on a per-class basis

  • impl (str) – nms implementation to use

  • daq (Bool | Dict) – if False, uses regular nms, otherwise uses a divide-and-conquer algorithm. If daq is a Dict, then it is used as the kwargs to kwimage.daq_spatial_nms

  • device_id – try not to use; only used if impl is 'gpu'

Returns

indices of boxes to keep

Return type

ndarray[int]
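
Example (a minimal usage sketch, assuming the default numpy backend)

>>> import kwimage
>>> dets = kwimage.Detections.random(num=30, scale=100.0, rng=0)
>>> keep = dets.non_max_supression(thresh=0.5)
>>> subset = dets.take(keep)
>>> assert len(subset) <= len(dets)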

non_max_supress(self, thresh=0.0, perclass=False, impl='auto', daq=False)[source]

Convenience method. Like non_max_supression, but returns the detections that remain after suppression instead of the indices to keep.
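
Example (a minimal usage sketch)

>>> import kwimage
>>> dets = kwimage.Detections.random(num=30, scale=100.0, rng=0)
>>> remaining = dets.non_max_supress(thresh=0.5)
>>> assert len(remaining) <= len(dets)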

rasterize(self, bg_size, input_dims, soften=1, tf_data_to_img=None, img_dims=None, exclude=[])[source]

Ambiguous conversion from a Detections object to a Heatmap.

SeeAlso:

Heatmap.detect

Returns

raster-space detections.

Return type

kwimage.Heatmap

Example

>>> # xdoctest: +REQUIRES(module:ndsampler)
>>> from kwimage.structs.detections import *  # NOQA
>>> self, iminfo, sampler = Detections.demo()
>>> image = iminfo['imdata'][:]
>>> input_dims = iminfo['imdata'].shape[0:2]
>>> bg_size = [100, 100]
>>> heatmap = self.rasterize(bg_size, input_dims)
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.figure(fnum=1, pnum=(2, 2, 1))
>>> heatmap.draw(invert=True)
>>> kwplot.figure(fnum=1, pnum=(2, 2, 2))
>>> kwplot.imshow(heatmap.draw_on(image))
>>> kwplot.figure(fnum=1, pnum=(2, 1, 2))
>>> kwplot.imshow(heatmap.draw_stacked())
class kwimage.structs.detections.Detections(data=None, meta=None, datakeys=None, metakeys=None, checks=True, **kwargs)[source]

Bases: ubelt.NiceRepr, _DetAlgoMixin, _DetDrawMixin

Container for holding and manipulating multiple detections.

Variables
  • data (Dict) –

    dictionary containing corresponding lists. The length of each list is the number of detections. This contains the bounding boxes, confidence scores, and class indices. Details of the most common keys and types are as follows:

    boxes (kwimage.Boxes[ArrayLike]): multiple bounding boxes

    scores (ArrayLike): associated scores

    class_idxs (ArrayLike): associated class indices

    segmentations (ArrayLike): segmentation masks for each box; members can be Mask or MultiPolygon

    keypoints (ArrayLike): keypoints for each box; members should be Points

    Additional custom keys may be specified as long as (a) the values are array-like with the first axis corresponding to the standard data values and (b) the custom keys are listed in the datakeys kwarg when constructing the Detections.

  • meta (Dict) – This contains contextual information about the detections. This includes the class names, which can be indexed into via the class indexes.

Example

>>> import kwimage
>>> dets = kwimage.Detections(
>>>     # there are expected keys that do not need registration
>>>     boxes=kwimage.Boxes.random(3),
>>>     class_idxs=[0, 1, 1],
>>>     classes=['a', 'b'],
>>>     # custom data attrs must align with boxes
>>>     myattr1=np.random.rand(3),
>>>     myattr2=np.random.rand(3, 2, 8),
>>>     # there are no restrictions on metadata
>>>     mymeta='a custom metadata string',
>>>     # Note that any key not in kwimage.Detections.__datakeys__ or
>>>     # kwimage.Detections.__metakeys__ must be registered at the
>>>     # time of construction.
>>>     datakeys=['myattr1', 'myattr2'],
>>>     metakeys=['mymeta'],
>>>     checks=True,
>>> )
>>> print('dets = {}'.format(dets))
dets = <Detections(3)>
__datakeys__ = ['boxes', 'scores', 'class_idxs', 'probs', 'weights', 'keypoints', 'segmentations'][source]
__metakeys__ = ['classes'][source]
__nice__(self)[source]
__len__(self)[source]
copy(self)[source]

Returns a deep copy of this Detections object
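
Example (a minimal sketch showing that the copy is a separate object)

>>> import kwimage
>>> dets = kwimage.Detections.random(2)
>>> other = dets.copy()
>>> assert other is not dets
>>> assert len(other) == len(dets)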

classmethod coerce(cls, data=None, **kwargs)[source]

The "try-anything to get what I want" constructor

Parameters
  • data

  • **kwargs – currently boxes and cnames

Example

>>> from kwimage.structs.detections import *  # NOQA
>>> import kwimage
>>> kwargs = dict(
>>>     boxes=kwimage.Boxes.random(4),
>>>     cnames=['a', 'b', 'c', 'c'],
>>> )
>>> data = {}
>>> self = kwimage.Detections.coerce(data, **kwargs)
classmethod from_coco_annots(cls, anns, cats=None, classes=None, kp_classes=None, shape=None, dset=None)[source]

Create a Detections object from a list of coco-like annotations.

Parameters
  • anns (List[Dict]) – list of coco-like annotation objects

  • dset (CocoDataset) – if specified, cats, classes, and kp_classes are ignored.

  • cats (List[Dict]) – coco-format category information. Used only if dset is not specified.

  • classes (ndsampler.CategoryTree) – category tree with coco class info. Used only if dset is not specified.

  • kp_classes (ndsampler.CategoryTree) – keypoint category tree with coco keypoint class info. Used only if dset is not specified.

  • shape (tuple) – shape of parent image

Returns

a detections object

Return type

Detections

Example

>>> from kwimage.structs.detections import *  # NOQA
>>> # xdoctest: +REQUIRES(module:ndsampler)
>>> anns = [{
>>>     'id': 0,
>>>     'image_id': 1,
>>>     'category_id': 2,
>>>     'bbox': [2, 3, 10, 10],
>>>     'keypoints': [4.5, 4.5, 2],
>>>     'segmentation': {
>>>         'counts': '_11a04M2O0O20N101N3L_5',
>>>         'size': [20, 20],
>>>     },
>>> }]
>>> dataset = {
>>>     'images': [],
>>>     'annotations': [],
>>>     'categories': [
>>>         {'id': 0, 'name': 'background'},
>>>         {'id': 2, 'name': 'class1', 'keypoints': ['spot']}
>>>     ]
>>> }
>>> #import ndsampler
>>> #dset = ndsampler.CocoDataset(dataset)
>>> cats = dataset['categories']
>>> dets = Detections.from_coco_annots(anns, cats)

Example

>>> # xdoctest: +REQUIRES(module:ndsampler)
>>> # Test case with no category information
>>> from kwimage.structs.detections import *  # NOQA
>>> anns = [{
>>>     'id': 0,
>>>     'image_id': 1,
>>>     'category_id': None,
>>>     'bbox': [2, 3, 10, 10],
>>>     'prob': [.1, .9],
>>> }]
>>> cats = [
>>>     {'id': 0, 'name': 'background'},
>>>     {'id': 2, 'name': 'class1'}
>>> ]
>>> dets = Detections.from_coco_annots(anns, cats)

Example

>>> import kwimage
>>> # xdoctest: +REQUIRES(module:ndsampler)
>>> import ndsampler
>>> sampler = ndsampler.CocoSampler.demo('photos')
>>> iminfo, anns = sampler.load_image_with_annots(1)
>>> shape = iminfo['imdata'].shape[0:2]
>>> kp_classes = sampler.dset.keypoint_categories()
>>> dets = kwimage.Detections.from_coco_annots(
>>>     anns, sampler.dset.dataset['categories'], sampler.catgraph,
>>>     kp_classes, shape=shape)
to_coco(self, cname_to_cat=None, style='orig', image_id=None, dset=None)[source]

Converts this set of detections into coco-like annotation dictionaries.

Notes

Not all aspects of the MS-COCO format can be accurately represented, so some liberties are taken. The MS-COCO standard defines that annotations should specify a category_id field, but in some cases this information is not available, so we will populate a 'category_name' field if possible and in the worst case fall back to 'category_index'.

Additionally, detections may contain information beyond the MS-COCO standard, and this information (e.g. weight, prob, score) is added as foreign fields.

Parameters
  • cname_to_cat – currently ignored.

  • style (str, default='orig') – either 'orig' (for the original coco format) or 'new' (for the more general kwcoco-style coco format).

  • image_id (int, default=None) – if specified, populates the image_id field of each yielded annotation

  • dset (CocoDataset, default=None) – if specified, attempts to populate the category_id field to be compatible with this coco dataset.

Yields

dict – coco-like annotation structures

Example

>>> # xdoctest: +REQUIRES(module:ndsampler)
>>> from kwimage.structs.detections import *
>>> self = Detections.demo()[0]
>>> cname_to_cat = None
>>> list(self.to_coco())
property boxes(self)[source]
property class_idxs(self)[source]
property scores(self)[source]

typically only populated for predicted detections

property probs(self)[source]

typically only populated for predicted detections

property weights(self)[source]

typically only populated for groundtruth detections

property classes(self)[source]
num_boxes(self)[source]
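
Example (a small sketch of the accessors above, using the random constructor documented below)

>>> import kwimage
>>> dets = kwimage.Detections.random(num=3, classes=['a', 'b'])
>>> assert dets.num_boxes() == len(dets) == 3
>>> assert len(dets.scores) == len(dets.class_idxs) == 3
>>> classes = dets.classes  # the category names stored in dets.meta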
warp(self, transform, input_dims=None, output_dims=None, inplace=False)[source]

Spatially warp the detections.

Example

>>> import skimage
>>> transform = skimage.transform.AffineTransform(scale=(2, 3), translation=(4, 5))
>>> self = Detections.random(2)
>>> new = self.warp(transform)
>>> assert new.boxes == self.boxes.warp(transform)
>>> assert new != self
scale(self, factor, output_dims=None, inplace=False)[source]

Spatially scale the detections.

Example

>>> self = Detections.random(2)
>>> new = self.scale(2.0)
>>> assert new.boxes == self.boxes.scale(2.0)
>>> assert new != self
translate(self, offset, output_dims=None, inplace=False)[source]

Spatially translate the detections.

Example

>>> import skimage
>>> self = Detections.random(2)
>>> new = self.translate(10)
classmethod concatenate(cls, dets)[source]
Parameters

dets (Sequence[Detections]) – list of detections to concatenate

Returns

stacked detections

Return type

Detections

Example

>>> self = Detections.random(2)
>>> other = Detections.random(3)
>>> dets = [self, other]
>>> new = Detections.concatenate(dets)
>>> assert new.num_boxes() == 5
>>> self = Detections.random(2, segmentations=True)
>>> other = Detections.random(3, segmentations=True)
>>> dets = [self, other]
>>> new = Detections.concatenate(dets)
>>> assert new.num_boxes() == 5
argsort(self, reverse=True)[source]

Sorts detection indices by descending (or ascending) scores

Returns

sorted indices

Return type

ndarray[int]
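
Example (a small sketch: order detections by score with argsort and take)

>>> import kwimage
>>> dets = kwimage.Detections.random(num=5, rng=0)
>>> sortx = dets.argsort(reverse=True)
>>> ordered = dets.take(sortx)
>>> assert ordered.scores[0] == dets.scores.max()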

sort(self, reverse=True)[source]

Sorts detections by descending (or ascending) scores

Returns

sorted copy of self

Return type

kwimage.structs.Detections
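
Example (a small sketch)

>>> import kwimage
>>> dets = kwimage.Detections.random(num=5, rng=0)
>>> ordered = dets.sort(reverse=True)
>>> assert ordered.scores[0] == ordered.scores.max()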

compress(self, flags, axis=0)[source]

Returns a subset where corresponding locations are True.

Parameters

flags (ndarray[bool]) – mask marking selected items

Returns

subset of self

Return type

kwimage.structs.Detections

CommandLine:

xdoctest -m kwimage.structs.detections Detections.compress

Example

>>> # xdoctest: +REQUIRES(module:torch)
>>> import kwimage
>>> dets = kwimage.Detections.random(keypoints='dense')
>>> flags = np.random.rand(len(dets)) > 0.5
>>> subset = dets.compress(flags)
>>> assert len(subset) == flags.sum()
>>> subset = dets.tensor().compress(flags)
>>> assert len(subset) == flags.sum()
take(self, indices, axis=0)[source]

Returns a subset specified by indices

Parameters

indices (ndarray[int]) – indices to select

Returns

subset of self

Return type

kwimage.structs.Detections

Example

>>> import kwimage
>>> dets = kwimage.Detections(boxes=kwimage.Boxes.random(10))
>>> subset = dets.take([2, 3, 5, 7])
>>> assert len(subset) == 4
>>> # xdoctest: +REQUIRES(module:torch)
>>> subset = dets.tensor().take([2, 3, 5, 7])
>>> assert len(subset) == 4
__getitem__(self, index)[source]

Fancy slicing / subset / indexing.

Note: scalar indices are always coerced into index lists of length 1.

Example

>>> import kwimage
>>> import kwarray
>>> dets = kwimage.Detections(boxes=kwimage.Boxes.random(10))
>>> indices = [2, 3, 5, 7]
>>> flags = kwarray.boolmask(indices, len(dets))
>>> assert dets[flags].data == dets[indices].data
property device(self)[source]

If the backend is torch returns the data device, otherwise None

is_tensor(self)[source]

is the backend fueled by torch?

is_numpy(self)[source]

is the backend fueled by numpy?
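
Example (a small sketch of checking the backend; the last line assumes torch is installed)

>>> import kwimage
>>> dets = kwimage.Detections.random(3)
>>> assert dets.is_numpy() and not dets.is_tensor()
>>> assert dets.device is None
>>> # xdoctest: +REQUIRES(module:torch)
>>> assert dets.tensor().is_tensor()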

numpy(self)[source]

Converts tensors to numpy arrays. Avoids copying memory when possible.

Example

>>> # xdoctest: +REQUIRES(module:torch)
>>> self = Detections.random(3).tensor()
>>> newself = self.numpy()
>>> self.scores[0] = 0
>>> assert newself.scores[0] == 0
>>> self.scores[0] = 1
>>> assert self.scores[0] == 1
>>> self.numpy().numpy()
property dtype(self)[source]
tensor(self, device=ub.NoParam)[source]

Converts numpy arrays to tensors. Avoids copying memory when possible.

Example

>>> # xdoctest: +REQUIRES(module:torch)
>>> from kwimage.structs.detections import *
>>> self = Detections.random(3)
>>> newself = self.tensor()
>>> self.scores[0] = 0
>>> assert newself.scores[0] == 0
>>> self.scores[0] = 1
>>> assert self.scores[0] == 1
>>> self.tensor().tensor()
classmethod demo(Detections)[source]
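
Example (a usage sketch mirroring the rasterize example above; assumes ndsampler is installed)

>>> # xdoctest: +REQUIRES(module:ndsampler)
>>> import kwimage
>>> dets, iminfo, sampler = kwimage.Detections.demo()
>>> assert len(dets) > 0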
classmethod random(cls, num=10, scale=1.0, classes=3, keypoints=False, segmentations=False, tensor=False, rng=None)[source]

Creates dummy data, suitable for use in tests and benchmarks

Parameters
  • num (int) – number of boxes

  • scale (float | tuple, default=1.0) – bounding image size

  • classes (int | Sequence) – list of class labels or number of classes

  • keypoints (bool, default=False) – if True include random keypoints for each box.

  • segmentations (bool, default=False) – if True include random segmentations for each box.

  • tensor (bool, default=False) – determines backend. DEPRECATED. Call tensor on resulting object instead.

  • rng (np.random.RandomState) – random state

Example

>>> import kwimage
>>> dets = kwimage.Detections.random(keypoints='jagged')
>>> dets.data['keypoints'].data[0].data
>>> dets.data['keypoints'].meta
>>> dets = kwimage.Detections.random(keypoints='dense')
>>> dets = kwimage.Detections.random(keypoints='dense', segmentations=True).scale(1000)
>>> # xdoctest:+REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> dets.draw(setlim=True)

Example

>>> import kwimage
>>> dets = kwimage.Detections.random(
>>>     keypoints='jagged', segmentations=True, rng=0).scale(1000)
>>> print('dets = {}'.format(dets))
dets = <Detections(10)>
>>> dets.data['boxes'].quantize(inplace=True)
>>> print('dets.data = {}'.format(ub.repr2(
>>>     dets.data, nl=1, with_dtype=False, strvals=True)))
dets.data = {
    'boxes': <Boxes(xywh,
                 array([[548, 544,  55, 172],
                        [423, 645,  15, 247],
                        [791, 383, 173, 146],
                        [ 71,  87, 498, 839],
                        [ 20, 832, 759,  39],
                        [461, 780, 518,  20],
                        [118, 639,  26, 306],
                        [264, 414, 258, 361],
                        [ 18, 568, 439,  50],
                        [612, 616, 332,  66]], dtype=int32))>,
    'class_idxs': [1, 2, 0, 0, 2, 0, 0, 0, 0, 0],
    'keypoints': <PointsList(n=10)>,
    'scores': [0.3595079 , 0.43703195, 0.6976312 , 0.06022547, 0.66676672, 0.67063787,0.21038256, 0.1289263 , 0.31542835, 0.36371077],
    'segmentations': <SegmentationList(n=10)>,
}
>>> # xdoctest:+REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> dets.draw(setlim=True)

Example

>>> # Boxes position/shape within 0-1 space should be uniform.
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> fig = kwplot.figure(fnum=1, doclf=True)
>>> fig.gca().set_xlim(0, 128)
>>> fig.gca().set_ylim(0, 128)
>>> import kwimage
>>> kwimage.Detections.random(num=10, segmentations=True).scale(128).draw()
kwimage.structs.detections._dets_to_fcmaps(dets, bg_size, input_dims, bg_idx=0, pmin=0.6, pmax=1.0, soft=True, exclude=[])[source]

Construct semantic segmentation detection targets from annotations in dictionary format.

Rasterize detections.

Parameters
  • dets (kwimage.Detections)

  • bg_size (tuple) – size (W, H) to predict for backgrounds

  • input_dims (tuple) – window H, W

Returns

with keys:

    size : 2D ndarray containing the W,H of the object
    dxdy : 2D ndarray containing the x,y offset of the object
    cidx : 2D ndarray containing the class index of the object

Return type

dict

Ignore:

import xdev
globals().update(xdev.get_func_kwargs(_dets_to_fcmaps))

Example

>>> # xdoctest: +REQUIRES(module:ndsampler)
>>> from kwimage.structs.detections import *  # NOQA
>>> from kwimage.structs.detections import _dets_to_fcmaps
>>> import kwimage
>>> import ndsampler
>>> sampler = ndsampler.CocoSampler.demo('photos')
>>> iminfo, anns = sampler.load_image_with_annots(1)
>>> image = iminfo['imdata']
>>> input_dims = image.shape[0:2]
>>> kp_classes = sampler.dset.keypoint_categories()
>>> dets = kwimage.Detections.from_coco_annots(
>>>     anns, sampler.dset.dataset['categories'],
>>>     sampler.catgraph, kp_classes, shape=input_dims)
>>> bg_size = [100, 100]
>>> bg_idxs = sampler.catgraph.index('background')
>>> fcn_target = _dets_to_fcmaps(dets, bg_size, input_dims, bg_idxs)
>>> fcn_target.keys()
>>> print('fcn_target: ' + ub.repr2(ub.map_vals(lambda x: x.shape, fcn_target), nl=1))
fcn_target: {
    'cidx': (512, 512),
    'class_probs': (10, 512, 512),
    'dxdy': (2, 512, 512),
    'kpts': (2, 7, 512, 512),
    'kpts_ignore': (7, 512, 512),
    'size': (2, 512, 512),
}
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> size_mask = fcn_target['size']
>>> dxdy_mask = fcn_target['dxdy']
>>> cidx_mask = fcn_target['cidx']
>>> kpts_mask = fcn_target['kpts']
>>> def _vizmask(dxdy_mask):
>>>     dx, dy = dxdy_mask
>>>     mag = np.sqrt(dx ** 2 + dy ** 2)
>>>     mag /= (mag.max() + 1e-9)
>>>     mask = (cidx_mask != 0).astype(np.float32)
>>>     angle = np.arctan2(dy, dx)
>>>     orimask = kwplot.make_orimask(angle, mask, alpha=mag)
>>>     vecmask = kwplot.make_vector_field(
>>>         dx, dy, stride=4, scale=0.1, thickness=1, tipLength=.2,
>>>         line_type=16)
>>>     return [vecmask, orimask]
>>> vecmask, orimask = _vizmask(dxdy_mask)
>>> raster = kwimage.overlay_alpha_layers(
>>>     [vecmask, orimask, image], keepalpha=False)
>>> raster = dets.draw_on((raster * 255).astype(np.uint8),
>>>                       labels=True, alpha=None)
>>> kwplot.imshow(raster)
>>> kwplot.show_if_requested()

raster = (kwimage.overlay_alpha_layers(_vizmask(kpts_mask[:, 5]) + [image], keepalpha=False) * 255).astype(np.uint8)
kwplot.imshow(raster, pnum=(1, 3, 2), fnum=1)
raster = (kwimage.overlay_alpha_layers(_vizmask(kpts_mask[:, 6]) + [image], keepalpha=False) * 255).astype(np.uint8)
kwplot.imshow(raster, pnum=(1, 3, 3), fnum=1)
raster = (kwimage.overlay_alpha_layers(_vizmask(dxdy_mask) + [image], keepalpha=False) * 255).astype(np.uint8)
raster = dets.draw_on(raster, labels=True, alpha=None)
kwplot.imshow(raster, pnum=(1, 3, 1), fnum=1)
raster = kwimage.overlay_alpha_layers(
    [vecmask, orimask, image], keepalpha=False)
raster = dets.draw_on((raster * 255).astype(np.uint8),
                      labels=True, alpha=None)
kwplot.imshow(raster)
kwplot.show_if_requested()

class kwimage.structs.detections._UnitDoctTests[source]

Hacking in unit tests as doctests in the file itself so it is easy to move to kwannot when I finally get around to that.

_test_foreign_keys_compress()[source]

A detections object should be able to maintain foreign keys through compress operations.

Example

>>> from kwimage.structs.detections import _UnitDoctTests
>>> from kwimage.structs.detections import _generic
>>> _UnitDoctTests._test_foreign_keys_compress()