By Phoebe Parker, Senior Consultant, Altis Consulting
The Tasmanian Devil Unzoo, run by the Tasmanian Nature Company (TNC), is a global leader in shaping the way zoos evolve in the 21st century. The Unzoo invites visitors into natural habitats in which cages and barriers are removed or concealed, so that animals (wild and resident) are encouraged to interact naturally with the environment.
The Tasman Peninsula in south-east Tasmania is the last place whose wild Tasmanian Devil population is free of the deadly Devil Facial Tumour Disease (DFTD). This is due to a canal that leaves only a narrow strip of land connecting the peninsula to the rest of Tasmania. DFTD emerged in 1996 and still has no cure; devils transmit the disease to one another when fighting and playing as part of their normal behaviour. The TNC is playing a critical role in preserving Tassie Devils on the Tasman Peninsula – the last safe refuge for wild devils.
The TNC have been capturing photos and videos from their camera traps since 2015 as part of their ‘Devil Tracker’ program, but they have not had the time to make use of the valuable data they have collected. Currently, someone must manually sort through every photo, select the ones with Tassie Devils in them, and analyse them for injuries or signs of DFTD. This is extremely time-consuming and prone to human error. Our goal is an automated process that can identify Tassie Devils, Tassie Devil injuries, and DFTD.
The Data4Good project team narrowed the problem down to three key questions that the solution should answer:
- Is there a Tassie Devil in the photo?
- Are there any signs of DFTD or injuries?
- How often are individuals passing through the same spots?
Our first step was to create a model that can accurately assess whether there is a Tassie Devil in a photo, as this would greatly reduce the time the TNC team need to spend on this step.
We explored three image labelling methods: manual, Amazon Rekognition, and dataset enhancement using open-source machine learning tools.
For the manual method we used LabelImg, a Python-based graphical annotation tool. This method is the most straightforward, but also the most time-consuming and prone to human error. Manual labelling allowed us to make a start on training the model and to trial automated methods such as Amazon Rekognition.
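LabelImg saves each image's annotations as a Pascal VOC XML file, so a small script can later filter the labelled set down to the photos that contain a devil. A minimal sketch, assuming Pascal VOC output; the class name `tassie_devil` and the filename are hypothetical, as the project's actual label names are not given here:

```python
import xml.etree.ElementTree as ET

def contains_devil(annotation_xml: str, target: str = "tassie_devil") -> bool:
    """Return True if any labelled object in a LabelImg (Pascal VOC)
    annotation matches the target class name."""
    root = ET.fromstring(annotation_xml)
    return any(obj.findtext("name") == target for obj in root.findall("object"))

# A minimal Pascal VOC snippet of the kind LabelImg writes out
# (hypothetical filename and class name):
sample = """<annotation>
  <filename>trap_cam_001.jpg</filename>
  <object>
    <name>tassie_devil</name>
    <bndbox><xmin>12</xmin><ymin>30</ymin><xmax>200</xmax><ymax>180</ymax></bndbox>
  </object>
</annotation>"""

print(contains_devil(sample))  # → True
```

In practice the same check would run over every `.xml` file in the annotation folder, producing the list of devil photos to feed into model training.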
Our next steps are to identify injured devils and devils with facial tumours. This will be more difficult, as many of the images do not show the devil's face.
If you’re interested in this topic, join the Data4Good Webinar on May 18th 2022 @ 12pm AEST, where Phoebe will talk about the technology used in this project and what the team hopes to achieve as a long-term goal.
Register at the link below: