# Change Log

## [v1.2.1] - 2021-11-24

### Added
- Support for TFLite-trained SSDLite MobileNet v2 models
- TFLite compiled binary wheel files for Raspberry Pi devices are included in `python_wheels/tflite_runtime`; at present installing through pip is experimental on RPi 4 and not possible on RPi Zero W, where it must be built from source
- `animal.py` modified to run inference using TFLite networks or YOLOv4-tiny
- `animal.py` methods `run_raw` and `run` modified to produce a `human_confidence` as well as an `animal_confidence` (see the sketch after this list)
- Configuration for human detection, fast animal detection (TFLite model), and thresholds for each added to settings
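A minimal sketch of the dual-confidence detector API described above, assuming a detector whose classes include a person class; `Detection`, `_infer`, and the exact return shape are illustrative, not the project's actual implementation:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    label: str    # e.g. "person" or an animal class
    score: float  # model confidence in [0, 1]

def _infer(frame) -> List[Detection]:
    # Placeholder for TFLite (SSDLite MobileNet v2) or YOLOv4-tiny inference
    return []

def run(frame) -> Tuple[float, float]:
    """Return (animal_confidence, human_confidence) for one frame."""
    detections = _infer(frame)
    animal_confidence = max(
        (d.score for d in detections if d.label != "person"), default=0.0)
    human_confidence = max(
        (d.score for d in detections if d.label == "person"), default=0.0)
    return animal_confidence, human_confidence
```

Separate thresholds for each confidence, as the settings entry above describes, then let the trap keep animal detections while handling frames containing humans differently.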
### Fixed
- Fixed output queue length in `event_rememberer`: reduced from 10 to 1, which stops the system loading far too many motion events at once and exhausting memory (triggering kswapd); see the sketch after this list
- Fixed indexing error in reading from the motion buffer within `camera_to_disk.py`
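A sketch of why the queue-length fix works, assuming a `multiprocessing`-based pipeline; the real `event_rememberer` internals are not shown here. With `maxsize=1` the producer blocks until the consumer drains the previous event, so at most one decoded motion event is held in memory at a time rather than ten:

```python
from multiprocessing import Queue

# Bounded output queue: previously maxsize=10, now maxsize=1
output_queue: Queue = Queue(maxsize=1)

def emit(event):
    # Blocks while the downstream stage is still busy, providing
    # back-pressure instead of buffering many large decoded events
    output_queue.put(event)
```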
### Changed
- Methods which use `animal.run` have been modified to accept human and animal confidence values
- Changes in `setup.sh` ensure the TFLite install is attempted
## [v1.2.0] - 2021-10-31

### Added
- Camera recording-to-disk module, `DynCamera`, a modified `PiCamera`
- `EventRememberer`, which loads events from disk into the processing pipeline
- Filter modes `BY_FRAME` and `BY_EVENT` to add functionality for processing events with the `Filter` module
- Event output added to `comms.py`
- Modified `main.py` to support old and new pipelines depending on settings
- Modified settings to include parameters for: pipeline, bitrate, framerate divisor, buffer length, raw image format, and detector fraction (a sketch follows this list)
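A sketch of the new settings parameters as a dataclass; field names follow the list above, but the default values are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class PipelineSettings:
    pipeline: str = "legacy"        # which pipeline to run (old vs. new)
    bitrate: int = 17_000_000       # encoder bitrate in bits/s (example value)
    framerate_divisor: int = 1      # process every Nth frame
    buffer_length: int = 5          # length of the motion buffer (example value)
    raw_image_format: str = "rgba"  # pixel format for raw frames (example value)
    detector_fraction: float = 1.0  # fraction of frames sent to the detector
```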
### Changed
- Motion queue settings semantics changed to processing settings, to better fit the other pipeline, which does not use a motion queue
## [v1.1.0] - 2021-08-15

### Added
- Check for `settings.json` version vs. DynAIkonTrap version, in case settings are copied from one trap to another (a sketch follows this list)
- Added support for multiple output video codecs and settings to choose between them
- Pillow added to requirements.txt as the easiest way to load raw images as far as I can tell; if this can be done with OpenCV instead, it would be nicer
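A minimal sketch of the version check, assuming `settings.json` carries a `version` field; the key name and the handling shown here are assumptions for illustration:

```python
import json
import logging

DYNAIKONTRAP_VERSION = "1.1.0"  # example value

def load_settings(path: str = "settings.json") -> dict:
    with open(path) as f:
        settings = json.load(f)
    if settings.get("version") != DYNAIKONTRAP_VERSION:
        # Settings copied from another trap may predate this version
        logging.warning(
            "settings.json version %s does not match DynAIkonTrap version %s",
            settings.get("version"),
            DYNAIKONTRAP_VERSION,
        )
    return settings
```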
### Fixed
- Implementation of UrSense interface following updated documentation
- Catches `picamera` `ModuleNotFoundError` when running the camera trap with emulated input on desktop
### Changed
- Video sensor logs to JSON for easier machine reading; parsing this back to the previous VTT output is trivial
- Interface to initialise `Output`; output mode is now handled internally
- Documentation, including the wiki, migrated to Sphinx
### Added
- Context buffer so that clips of animals include “run in” and “trail off” frames
- `LabelledFrames` now include a `motion_status` label of enumerated type `MotionStatus`
- `filtering.py` adds all frames to a `motion_sequence` regardless of motion score, but labels frames passing through it. Frames without motion are assigned a priority of -1.0 and a `MotionStatus` of `STILL`; this ensures they are never returned by `get_highest_priority()` and thus never assessed for containing an animal
- `MotionQueue` does not add motion sequences to its queue which do not contain motion, i.e. `end_motion_sequence()` now searches the sequence to make sure at least one frame is labelled with motion before appending to the queue (see the sketch after this list)
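A sketch of the labelling behaviour described in this list; the class and method names follow the entries above, but the bodies are illustrative rather than the project's actual implementation:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any, List, Optional

class MotionStatus(Enum):
    MOTION = auto()
    STILL = auto()

@dataclass
class LabelledFrame:
    frame: Any
    priority: float
    motion_status: MotionStatus

class Sequence:  # renamed from MotionSequence, see Changed below
    def __init__(self):
        self.frames: List[LabelledFrame] = []

    def add(self, frame: Any, motion_score: float, threshold: float):
        # Every frame joins the sequence; only its label depends on the score
        if motion_score >= threshold:
            self.frames.append(
                LabelledFrame(frame, motion_score, MotionStatus.MOTION))
        else:
            # STILL frames get priority -1.0 so they are never selected below
            self.frames.append(
                LabelledFrame(frame, -1.0, MotionStatus.STILL))

    def get_highest_priority(self) -> Optional[LabelledFrame]:
        candidates = [f for f in self.frames if f.priority >= 0]
        return max(candidates, key=lambda f: f.priority) if candidates else None

class MotionLabelledQueue:  # renamed from MotionQueue, see Changed below
    def __init__(self):
        self._queue: List[Sequence] = []

    def end_motion_sequence(self, sequence: Sequence):
        # Only enqueue sequences containing at least one motion-labelled frame
        if any(f.motion_status is MotionStatus.MOTION for f in sequence.frames):
            self._queue.append(sequence)
```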
### Changed
- `MotionSequence` class is now called `Sequence`
- `MotionQueue` class is now called `MotionLabelledQueue`
## [v1.0.0] - 2021-06-12

### Added
- First release of DynAIkonTrap