Technical report

Lawson C. L., Mumby P., Roelfsema C., Chartrand K., Kolluru A., Shi H., Tan A., Yuan Toh Z., Ganesan C., Navindra Kothalawal V., Iyengar A., Senn N., Vozzo B., Uusitalo J. and A. Ridley, 2023. Great Reef Census - a case study to integrate citizen science data into research output for marine habitat management. Report to the Reef and Rainforest Research Centre, Cairns, Queensland.

Overview

To maximise our understanding of our marine and coastal environment, we need to take advantage of emerging technologies and approaches. This includes citizen science and community monitoring. Technology has greatly reduced the gap between mainstream science and citizen science, to the point that the two may become almost identical in some integrated programs, especially those involving the collection of in-field information. The challenge for science is to harness the opportunities this convergence affords.

This project had two primary objectives. First, we compared and combined expert and citizen scientist analyses of geo-referenced images from a large-scale citizen science program, the Great Reef Census (GRC). Second, we examined the effectiveness of an updated online analysis platform (beta version) that incorporated both machine learning (artificial intelligence; AI) and citizen scientist validation steps for analysing the GRC Year 2 image collection.

To accomplish these goals, we:

1) established a validation framework for GRC Year 1 & 2 analyses, using University of Queensland expert analysis as a reference for reliable data, to filter the citizen dataset, which may contain both accurate and erroneous results;

2) extracted insights to streamline image processing and training within the online platform;

3) evaluated the AI system’s performance in identifying key coral groups in images from GRC Years 1 & 2; and

4) assessed a refined online platform, including AI integration, with a subset of citizen science users against a calibration expert pool’s analyses from the GRC Year 2 image library to enhance data quality.
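The validation framework in step 1 can be sketched as follows. The GRC pipeline is not public, so the function name, the data layout, and the 10-percentage-point tolerance below are illustrative assumptions only: citizen cover estimates are kept when they fall close enough to the expert reference for the same image.

```python
def filter_citizen_estimates(citizen, expert, tolerance=10.0):
    """Keep citizen cover estimates (%) that fall within `tolerance`
    percentage points of the expert reference for the same image."""
    kept = {}
    for image_id, value in citizen.items():
        reference = expert.get(image_id)
        if reference is not None and abs(value - reference) <= tolerance:
            kept[image_id] = value
    return kept

# Example: two of three citizen estimates agree with the expert reference.
citizen = {"img_001": 42.0, "img_002": 75.0, "img_003": 18.0}
expert = {"img_001": 40.0, "img_002": 50.0, "img_003": 20.0}
print(filter_citizen_estimates(citizen, expert))
# {'img_001': 42.0, 'img_003': 18.0}
```

In practice the tolerance would be tuned per category, since categories such as table coral are rarer and harder to estimate than total coral cover.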

Here, we analyse the performance of a machine learning platform and a citizen science program deployed in schools across Australia in analysing the coral cover and coral type contained in images collected over two years by the Great Reef Census. The categories analysed were reef structure coverage (% of image), total coral cover (% of reef structure), branching coral cover (% of reef structure), table coral cover (% of reef structure), massive coral cover (% of reef structure), and other coral cover (% of reef structure).
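Note that the coral categories above are expressed as a percentage of reef structure, while reef structure is itself a percentage of the image, so whole-image coral cover is the product of the two. A minimal sketch of that conversion (the function name is an illustrative assumption, not part of the GRC data schema):

```python
def image_level_cover(reef_structure_pct, coral_pct_of_structure):
    """Convert a cover value expressed as % of reef structure
    into % of the whole image."""
    return reef_structure_pct * coral_pct_of_structure / 100.0

# An image that is 60% reef structure, of which 50% is total coral
# cover, has 30% coral cover at the whole-image level.
print(image_level_cover(60.0, 50.0))
# 30.0
```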

The AI model was developed over three key versions. First, a basic Dell Technologies server hosted a rudimentary platform where AI-generated polygons required user labelling into categories. Despite being slow, buggy, and lacking design and training elements, it served as a starting point, tested by 200 corporate volunteers. Second, the software evolved into a more refined version, integrated into the Great Reef Census website and tested with around 100 corporate volunteers. Feedback revealed lingering issues, including overlapping polygons and small, hard-to-identify ones. Finally, the third platform iteration addressed feedback by eliminating small polygons for a smoother user experience.

The Great Reef Census School Program allowed us to compare the third AI platform iteration with the original non-AI analysis software used in 2021. The non-AI method involved 6,000 people analysing 30,000 images over 12 weeks, while the new AI method processed almost as many images (24,000) in just six weeks, with only 5% of the participants (300).

Image analysis was conducted by “experts”: individuals compensated for their time who had experience in marine science and coral identification (e.g., holding a Bachelor’s degree in Marine Biology). These experts, affiliated with the University of Queensland, received training from the GRC science team to identify major coral groups of interest and calculate percent cover of each. By comparing the results of citizen science and AI analyses with those of the experts, we validated these methods as dependable tools for future reef health surveys.

The AI estimates were accurate and precise across almost all categories, in most cases reaching ±5% accuracy from as few as 20 images. However, the AI was generally less accurate for images/reefs with high (>70%) or low (<20%) coral cover; here, it may be beneficial to collect more images per location and develop improved analysis methodology, such as further training of the AI.

In general, the school program produced results reliable enough to inform research and management (i.e., within 10% of trained experts); however, it was less accurate and more variable than the AI analysis. Further investigation may enable the school program system to fill in gaps where the AI performs poorly, e.g. images with very high coral cover. Combining the school program analysis with AI as a filtering mechanism enhanced AI accuracy when disparities between the two were observed, with the filter working consistently across diverse reef conditions. The mean accuracy of the AI was improved by up to ~3% when images with discrepancies of 10% or more between the school program and AI were removed. Consequently, citizen science may complement and improve AI analysis. However, the citizen science analysis may offer greater value for specific tasks, such as identifying coral health indicators that the AI is not trained to detect.
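The discrepancy filter described above can be sketched as follows: drop images where the school-program and AI estimates disagree by 10 percentage points or more, then compare the AI's error against the expert reference before and after filtering. All names and data here are illustrative assumptions, not the report's actual dataset.

```python
def mean_absolute_error(predicted, reference):
    """Mean absolute difference (percentage points) between two
    sets of per-image cover estimates."""
    errors = [abs(predicted[i] - reference[i]) for i in predicted]
    return sum(errors) / len(errors)

def filter_by_agreement(ai, school, threshold=10.0):
    """Keep AI estimates only for images where the school-program
    estimate agrees to within `threshold` percentage points."""
    return {i: v for i, v in ai.items() if abs(v - school[i]) < threshold}

ai = {"img_A": 35.0, "img_B": 80.0, "img_C": 12.0}
school = {"img_A": 33.0, "img_B": 55.0, "img_C": 15.0}
expert = {"img_A": 34.0, "img_B": 60.0, "img_C": 13.0}

unfiltered_error = mean_absolute_error(ai, expert)
filtered_error = mean_absolute_error(filter_by_agreement(ai, school), expert)
print(unfiltered_error, filtered_error)
```

Here the one image the two methods disagree on (img_B) is also the one where the AI errs most, so discarding it lowers the mean error, which mirrors the mechanism reported above.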

The comparative approach of different techniques used in this project allowed us to assess their respective strengths, weaknesses, and suitability for diverse scenarios. The analysis tool and spatially explicit data validated from this project will be available to Commonwealth and regional management agencies as well as on-ground researchers, Traditional Owners and rangers to guide environmental decision-making and on-ground action.
