Research Article
Corresponding author: Leif Howard (leif.howard@flbs.umt.edu). Academic editor: Ingolf Kühn
© 2022 Leif Howard, Charles B. van Rees, Zoe Dahlquist, Gordon Luikart, Brian K. Hand.
This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Citation:
Howard L, van Rees CB, Dahlquist Z, Luikart G, Hand BK (2022) A review of invasive species reporting apps for citizen science and opportunities for innovation. NeoBiota 71: 165-188. https://doi.org/10.3897/neobiota.71.79597
Smartphone apps have enhanced the potential for monitoring of invasive alien species (IAS) through citizen science. They now have the capacity to massively increase the volume and spatiotemporal coverage of IAS occurrence data accrued in centralised databases. While more reporting apps are developed each year, innovation across diverse functionalities and data management in this field are occurring separately and simultaneously amongst numerous research groups with little attention to trends, priorities and opportunities for improvement. This creates the risk of duplication of effort and missed opportunities for implementing new and existing functionalities that would directly benefit IAS research and management. Using a literature search of Early Detection and Rapid Response implementation, smartphone app development and invasive species reporting apps, we developed a rubric for quantitatively assessing the functionality of IAS reporting apps and applied this rubric to 41 free, English-language IAS reporting apps, available via major mobile app stores in North America. The five highest performing apps achieved scores of 61.90% to 66.35% relative to a hypothetical maximum score, indicating that many app features and functionalities, acknowledged to be useful for IAS reporting in literature, are not present in sampled apps. This suggests that current IAS reporting apps do not make use of all available and known functionalities that could maximise their efficacy. Major implementation gaps, highlighted by this rubric analysis, included limited implementation in user engagement (particularly gamification elements and social media compatibility), ancillary information on search effort, detection method, the ability to report absences and local habitat characteristics. The greatest advancement in IAS early detection would likely result from app gamification. This would make IAS reporting more engaging for a growing community of non-professional contributors and encourage frequent and prolonged participation. We discuss these implementation gaps in relation to the increasingly urgent need for Early Detection and Rapid Response frameworks. We also recommend future innovations in IAS reporting app development to help slow the spread of IAS and curb the global economic and biodiversity extinction crises. We also suggest that further funding and investment in this and other implementation gaps could greatly increase the efficacy of current IAS reporting apps and increase their contributions to addressing the contemporary biological invasion threat.
biosurveillance, citizen science, early detection and rapid response, invasive species, mobile device, species occurrence, wildlife technology
Invasive alien species (IAS) are a leading contributor to biodiversity loss (Bellard et al. 2013; Simberloff et al. 2013; IPBES 2019) and cause annual economic damage in the order of hundreds of billions of US dollars in each of many countries around the world (
Reports from volunteers (commonly called community or citizen scientists) make growing contributions to meeting these monitoring data needs, from providing first detections of new invasions (
However, there are concerns that the current use of IAS mobile reporting apps is not maximising the potential of this powerful new technology for upscaling EDRR implementation needed to combat the worsening invasive species crisis (
There are a growing number of published articles describing IAS reporting apps (e.g. LaForest et al. 2011; Goëau et al. 2013;
We synthesised existing literature across the disciplines of invasion biology, citizen science and mobile app development to design a comprehensive rubric for assessing IAS app functionalities that could greatly improve the contribution of reporting apps to ongoing EDRR efforts (
We modelled our rubric format after
App review workflow. Each box header displays the workflow stage, examples of search terms used, and the number of papers used for that stage. The top-left box shows the search terms used for identifying rubric domains. These papers were reviewed to identify broad categories into which smartphone features could be organised for the rubric. The bottom-left panel depicts the search string used to identify app dimensions within these domains (N = 498 papers, see also Table
The Data Collection domain includes app functionalities pertaining to the type, method, geographic scale and taxonomic scope of data that an app can collect, while the Reporting domain focuses on how user-submitted data are input, collected and managed. The Identification domain pertains to features that aid in taxonomic identification, such as built-in field guides or machine learning for image recognition. Finally, the User Engagement domain entails all participant-focused features, including options for guidance, help and feedback, ease of use and features to promote participation and sustained use, such as games and social networking elements.
We then conducted targeted searches on the Web of Science (WOS) and Google Scholar to identify the dimensions for our rubric (Fig.
App dimensions organised by the four rubric domains, with source information (relevant literature) and the rubric scoring scale used to rank mobile apps. Domains are indicated by the prefix before each dimension name as follows: DC = Data Collection; ID = Identification; Rep = Reporting; Eng = User Engagement. Letters within the parentheses following each dimension name correspond to that dimension in Figure
Dimension | Definition | Rubric Scoring Scale | Relevant Literature |
---|---|---|---|
DC Absence Data (A) | Users can submit negative reports, i.e. the absence of a specific IAS. | 0 = not present; 3 = present. | |
DC Abundance/Area (B) | Users can enter the number of individuals, abundance or area covered by the observed IAS. | 0 = not present; 3 = present. | Schade et al. 2019; |
DC Catch Per Unit Effort/Time Spent for Observation (C) | User can include information on time spent looking for IAS. This can be used to calculate catch per unit effort and potentially estimate abundance. | 0 = not present; 3 = present. | Bannerot and Austin 1983 |
DC Climate/Habitat Data (D) | Reporting interface includes climate- and habitat/site-context metadata fields (i.e. temperature, water flow rate, substrate, etc.). | 0 = not present; 3 = present. | |
DC External Sensors (E) | Users can sync external devices that collect data or detect IAS and/or the app allows upload of additional data types (sound recordings, rapid genetic identification results from biofouling or propagule analysis, eDNA/PCR/ddPCR results). | 0 = not present; 3 = present. | |
DC Internal Sensors (F) | App can use the smartphone's thermometer, gyroscope, air humidity sensor, internal clock, barometer and GPS to gather background data for a sighting. | 0 = not present; 1 = one internal sensor used; 2 = two internal sensors used; 3 = three or more internal sensors used | |
DC Large Geographical Range (G) | Data collection is not limited by the spatial focus of the app. | 0 = local; 1 = state/province wide; 2 = regional; 3 = national or international | |
DC Manual Notes/Data Entry (H) | Allows users to input manual notes to capture observational/situational data that are not part of the formatted reporting form. | 0 = not present; 3 = present. | Scott et al. 2020 |
DC Sighting Type/Status Documentation (Alive/Dead and/or Life Stage) (I) | User can document the life stage, infestation stage or condition (alive vs. dead) of the species observed. | 0 = not present; 3 = present. | Pochon et al. 2017 |
DC Sampling Method Documentation (J) | User can indicate the type of sampling method (i.e. visual observation, hook and line, snorkelling, trail camera, etc.). | 0 = not present; 3 = present. | Shuster et al. 2005 |
DC Taxonomic Range (K) | Data collection is not limited by the taxonomic focus of the app. Data can be recorded for all types of IAS. | 0 = single species; 1 = single taxonomic group (e.g. genus, family); 2 = multiple, non-nested taxonomic groups; 3 = any species or taxon | |
DC Tracks Target Over Time (L) | Allows monitoring of a specific target or location over time to track spread and changes in abundance or area covered by an IAS. Prompts follow-up searches or reporting over time. App allows the user to report follow-up visits or allows a second user/visit to validate sightings through comments on an existing record. | 0 = not present; 3 = present. | |
ID AI/Photo ID (M) | App identifies taxa or returns results from photos using machine learning, or uses machine learning to train algorithms to gather data. | 0 = not present; 3 = present. | Hosseinpour et al. 2019; Veenhof et al. 2019 |
ID IAS List/Field Guide (N) | App includes a list of known and common invasives with pictures and information, or includes an interactive key that lets users choose from IAS morphological attributes and suggests identifications. | 0 = not present; 3 = present. | |
ID Map w/ Observations (O) | App has a map screen with points for verified IAS sightings. Ideally, this map is interactive, allowing the user to access observational data by tapping a point. | 0 = not present; 3 = present. | |
ID Photo Upload (P) | App has access to the onboard camera and the user can take and upload an image of the encountered IAS with timestamp and GPS data. | 0 = not present; 3 = present. | |
ID Report Verification (Q) | Reports submitted via the app are verified by a trained authority before being added to the database or posted on the user interface. | 0 = none, or relies on user selection of species from a list; 1 = expert-only or AI-only verification; 2 = multiple levels of verification; 3 = multiple levels of verification that are indicated on the observation/record within the app | |
ID Search/List Filter (R) | User interface allows searching for specific IAS taxa, IAS type or geographic region. | 0 = not present; 3 = present. | Zamberg et al. 2020 |
ID Unknown Reporting (S) | Previously undocumented or unidentified IAS can be reported. Allows reports of unknown species that are not listed in the app. | 0 = not present; 3 = present. | |
Rep Automated Outlier Rejection (T) | App uses algorithms combined with internal or external sensors to exclude non-targeted data/reports (i.e. uses GPS to exclude reports of desert species in a tidal marsh). | 0 = not present; 3 = present. | Kvapilova et al. 2019; Pastick et al. 2020; Wu et al. 2019 |
Rep Integrates Previous Reports (U) | Data from established IAS reports/sightings and historical presence/absence data, which can be visualised by users. | 0 = not present; 1 = data available as a static distribution map; 2 = data integrate previously submitted user observations; 3 = data integrate previously submitted user observations plus data from other sources (e.g. government surveys) | |
Rep Offline Reporting (V) | App stores report data when offline, to be uploaded when the user returns to service. | 0 = not present; 3 = present. | |
Rep Reporting Form (W) | App has a formatted reporting structure that includes all data required for EDRR; the report has required fields to standardise the data reported. | 0 = not present; 3 = present. | |
Rep Reports to Central Database (X) | Reports are submitted to a national IAS database for verification and use by appropriate IAS decision-making entities. | 0 = no database; 1 = stores data that could be accessed and filtered for IAS data; 2 = app/project has a dedicated IAS database; 3 = app sends data directly to a central/national or management/agency IAS-centric database | |
Rep Website/Dashboard (Y) | Website reporting component and online front-end user dashboard to access IAS information and support the IAS app. | 0 = not present; 1 = link to parent site with program or developer information only; 2 = link to parent program site with reporting ability; 3 = link to program site with reporting and user interface | |
Eng Device Compatibility (Z) | Available on both major mobile operating systems (Android and iOS); users are not limited by the type of smartphone owned. | 0 = not available; 1 = one platform only; 3 = both major platforms | |
Eng Feedback Feature (AA) | Users can contact admins or developers with comments or suggestions and this feature is readily accessible within the user interface. | 0 = not present; 2 = buried in secondary screens; 3 = accessible from main menu | |
Eng Gamification (AB) | App includes features to promote user engagement through competition (i.e. leader boards, rankings, quizzes or contests to promote use; badges, trophies, unlockable content, progress tracking). | 0 = not present; 3 = present. | Aebli 2019; |
Eng Help Content (AC) | App includes guidance materials on how to use the app or a link to Frequently Asked Questions/troubleshooting solutions for common questions and user-related concerns. | 0 = not present; 2 = a help link to a separate support page is available; 3 = in-app help functionalities and information are available | |
Eng IAS Related Title (AD) | App title implies that its purpose is IAS reporting. | 0 = title has no mention or indication of relation to IAS; 1 = mentions an invasive species or taxon; 2 = uses the acronym IAS in the title; 3 = includes the term “invasive” or “invasion” | |
Eng News Feed/Notifications (AE) | In-app feature to build a sense of community: an interface where recent sightings are highlighted and app- or IAS-related news can be viewed by the end user; in-app notifications from admin to user or via social media notifications. | 0 = not present; 3 = present. | Joseph et al. 2020; Szinay et al. 2020 |
Eng Social Media Outlet (AF) | Users can upload/post verified reports to social media platforms directly from the IAS app, and can share status, trophies or number of verified reports. App allows login using social media credentials to connect directly to the user's social media outlet of choice. | 0 = none; 1 = function to share observations or keep them private within the app; 2 = share icon that allows the user to send messages or share via their personal social media accounts; 3 = app has dedicated social media platform accounts for posting shared observations | |
Eng Updated Regularly (AG) | Developers and admins regularly update the app to fix bugs and add new dimensions as they become available and relevant. | 0 = last updated four or more years ago; 1 = last updated three years ago; 2 = last updated two years ago; 3 = updated in the last year | Castaneda et al. 2019 |
Eng User Account/Login (AH) | Users can create a private, unique user ID and password to protect information stored in the app. Can be set to stay logged in or prefill login information to increase ease of reporting via preferences. | 0 = no user account system; 1 = users log in for every use; 2 = user IDs saved for automatic login; 3 = user IDs saved and linked to an e-mail address or other contact information | |
Eng User-Centered Design (AI) | User-friendly interface with an easily navigable design. Users can send reports without navigating many screens or submitting unnecessary information. | 0 = text only; 1 = simple user interface with report form; 2 = basic and intuitive user interface; 3 = multiple-page user interface with buttons, images, visual guides and dropdowns | |
Our final rubric consisted of 35 dimensions which are listed by domain along with definitions and source information in Table
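To make the scoring procedure concrete, the rubric can be thought of as a mapping from each domain to its dimensions and their maximum points. The sketch below is a minimal illustration of that scoring logic in Python; the dimension lists are abbreviated (the full rubric has 12, 7, 6 and 10 dimensions per domain) and the example scores are hypothetical, so it is not the analysis code used in this study.

```python
# Minimal sketch of the rubric scoring logic (illustrative only; dimension
# lists are abbreviated and the example scores are hypothetical).

# Each dimension is scored 0-3, so a domain's maximum is 3 * (number of dimensions).
RUBRIC = {
    "Data Collection": ["Absence Data", "Abundance/Area", "Internal Sensors"],  # 12 dimensions in full rubric
    "Identification": ["AI/Photo ID", "Report Verification"],                   # 7 dimensions in full rubric
    "Reporting": ["Offline Reporting", "Reports to Central Database"],          # 6 dimensions in full rubric
    "User Engagement": ["Gamification", "Social Media Outlet"],                 # 10 dimensions in full rubric
}
MAX_PER_DIMENSION = 3

def score_app(app_scores: dict) -> dict:
    """Return raw and percentage-of-maximum subtotals per domain plus an overall total."""
    results = {}
    for domain, dimensions in RUBRIC.items():
        raw = sum(app_scores.get(d, 0) for d in dimensions)
        maximum = MAX_PER_DIMENSION * len(dimensions)
        results[domain] = {"raw": raw, "percent": 100 * raw / maximum}
    raw_total = sum(r["raw"] for r in results.values())
    max_total = MAX_PER_DIMENSION * sum(len(dims) for dims in RUBRIC.values())
    results["total_percent"] = 100 * raw_total / max_total
    return results

# Hypothetical scores for one app
example = {"Absence Data": 0, "Abundance/Area": 3, "Internal Sensors": 2,
           "AI/Photo ID": 3, "Report Verification": 1,
           "Offline Reporting": 3, "Reports to Central Database": 2,
           "Gamification": 0, "Social Media Outlet": 1}
print(score_app(example))
```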
Next, we compiled a list of all free, English-language IAS reporting apps on the Google Play and Apple iTunes online app stores using a methodised search (Fig.
We then collected additional information from online store descriptions and metadata for all apps to gain insight into regional trends, the types of agencies using app data, app publishers, download trends and temporal trends in app release and availability. Download statistics were based on Google downloads and were not available for four apps. Download statistics are reported by Google in numerical bins (i.e. ≥ 5, ≥ 100, ≥ 1,000), so we calculated summary statistics as approximations.
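Because Google Play reports downloads only as order-of-magnitude bins, any summary statistic derived from them is necessarily approximate. A minimal sketch of one possible convention, treating the lower bound of each bin as a conservative point estimate, is shown below; the bin labels listed are invented examples, not the actual dataset.

```python
# Sketch: approximating download summary statistics from Google Play's binned
# counts ("5+", "100+", "1,000+", ...). Values here are illustrative only.
import statistics

binned_downloads = ["1,000+", "100+", "5+", "10,000+", "100,000+"]  # hypothetical apps

def bin_to_estimate(label: str) -> int:
    # Use the lower bound of the bin as a conservative point estimate.
    return int(label.rstrip("+").replace(",", ""))

estimates = [bin_to_estimate(b) for b in binned_downloads]
print("approx. mean:", statistics.mean(estimates))
print("approx. median:", statistics.median(estimates))
```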
Apps were then downloaded and three reviewers independently applied our rubric to each app. Scores for each rubric dimension were determined, based upon the presence and functionality of each feature within the app and feature descriptions from mobile app stores. Each reviewer received training in how to interpret dimension scores and categories to increase consistency. Reviewers then completed the rubric for all apps independently. We assessed the concordance amongst reviewer scores to check for any major inconsistencies using Spearman’s non-parametric correlation with the package Hmisc (Harrell Jr 2021) implemented in R (version 4.0.3; R Core Team 2021). We assessed reviewer concordance for all total, subtotal and dimension-specific scores. We calculated mean scores for all total, subtotal and dimension-specific scores and used these as the primary method of comparison and ranking among apps.
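The concordance check described above was run in R with the Hmisc package; the sketch below shows an equivalent pairwise Spearman workflow in Python (pandas + SciPy) with hypothetical reviewer scores, purely to illustrate the approach.

```python
# Sketch of the reviewer-concordance check: pairwise Spearman correlations
# between three reviewers' total rubric scores (data are hypothetical; the
# published analysis used Hmisc in R).
from itertools import combinations
import pandas as pd
from scipy.stats import spearmanr

scores = pd.DataFrame({
    "reviewer_1": [42, 55, 37, 61, 48],
    "reviewer_2": [40, 58, 35, 60, 50],
    "reviewer_3": [45, 52, 39, 63, 47],
})  # one row per app; values are illustrative total rubric scores

for a, b in combinations(scores.columns, 2):
    rho, p = spearmanr(scores[a], scores[b])
    print(f"{a} vs {b}: rho = {rho:.2f}, p = {p:.4f}")

# Mean score per app across reviewers, used as the basis for ranking
scores["mean_score"] = scores.mean(axis=1)
```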
We then examined the distribution of rubric scores for individual apps, as well as within domains and individual dimensions. For domain- and dimension-specific scores, we report scores for the top apps after reporting scores for the entire sample. This allows the performance of the overall app corpus to be compared both with the idealised suite of mobile app functionalities specified in our rubric and with the top-performing apps currently in use. Here, we present total rubric scores and domain subtotal scores as percentages and provide raw scores in parentheses.
We found strong concordance in app total scores amongst all three reviewers, with pairwise Spearman's correlation values ranging from 0.72 to 0.82 (all p < 0.0001; Suppl. material
Distribution of total rubric scores across all apps. Rubric scores were near-normally distributed around the mean (dashed vertical line).
The Data Collection domain had 36 available points from 12 dimensions (Table
Distributions of domain subtotal scores across all reviewed apps, with the distribution of scores for each dimension within the domain: (a) Data Collection, (b) User Engagement, (c) Reporting, (d) Identification. The mean is indicated by the dashed vertical line for domain subtotals and by the points for dimensions.
Apps could score a maximum of 30 points from 10 dimensions in the User Engagement domain. Observed scores ranged from 16.67% to 65.57% of available points (5.00 – 19.67 points) with a mean of 44.20% ± 12.67% (13.26 ± 3.80; Fig.
Eighteen possible points from six dimensions were available within the Reporting domain. Observed scores ranged from 22.22% to 77.78% (4.00 – 14.00 points) with a mean of 54.65% ± 16.13% (9.84 ± 2.90; Fig.
The Identification domain had a maximum of 21 points from seven dimensions and observed scores ranged from 9.52% to 88.90% of maximum (2.00 – 18.67 points) with a mean of 54.70% ± 18.23% (11.49 ± 3.83 points; Fig.
We found that 28 of 41 (68.29%) sampled apps were from North America, followed by five apps from the European Union, two apps from the United Kingdom, three from Australia and one app focused on Eastern and Southern Africa (Suppl. material
The number of downloads for each app was highly right-skewed and ranged from 5 to 1,000,000 (mean = 27,600 ± 162,000). Only two apps (iNaturalist and Asian Hornet Watch) exceeded 100,000 downloads and two other apps had more than 10,000 downloads. Twenty-seven of the reviewed apps had 1,000 or fewer downloads (Suppl. material
We synthesised existing literature in invasion biology, citizen science and mobile app development to generate a rubric describing the functionality of an idealised IAS reporting app and applied this rubric to the available, free, English language IAS reporting apps on two major app-indexing software platforms (Google Play and Apple App Store). We measured the breadth of implementation of various technologies and functionalities amongst the current corpus of IAS reporting apps to identify opportunities for improvement and innovation in mobile apps for IAS detection and monitoring.
Our review highlights the major implementation gaps and provides a formalised rubric for holistically and quantitatively assessing app design, relative to best practices and recommendations from literature and the scientific community. The repeatability and transparency of this rubric for future assessments is especially helpful given the proliferation of IAS reporting apps and their variable use lifetimes. Five of the 24 European IAS apps, reviewed by
A worthwhile caveat is that, although our rubric summarises current suggested features and best practices for IAS mobile reporting apps, an app need not receive a perfect score to be functional and effective. A hypothetical app achieving a perfect score in our rubric would be easy to use, include value-added and gaming functionalities to encourage user uptake and sustained participation, enlist multiple onboard smartphone sensors to collect ancillary information, use machine-learning functionalities for automated taxonomic identification, provide visualisations of past reports and sightings for target taxa, facilitate researcher-user interaction to reduce data collection bias and would collect data in standard formats that enabled data sharing and interoperability with other monitoring systems. This is no doubt much to ask of any developer or project, but patterns and trends in our study nonetheless point in the direction of helpful innovations for invasive species apps going forward.
Many important functionalities were found in only a few sampled apps, reinforcing the notion that better use of available technology could make major contributions to IAS research and management, particularly the implementation of EDRR approaches (Lahoz-Monfort et al. 2019;
The substantial variation observed amongst rubric scores for sampled apps further suggests that there is little consistency in app functionality and design between developers, a finding that echoes the observations of other researchers that IAS mobile app development is not well coordinated amongst projects (
The five top-scoring apps were set apart by including functionality for reporting absences or non-detections, unknown or unidentified taxa and detection metadata (i.e. survey method, time and effort). Rubric dimensions and corresponding mobile app functionalities that were absent from this higher-scoring subgroup are indicative of major gaps in IAS reporting app implementation and development. These also included automated quality control features like outlier flagging (to highlight potential first detections of an unreported species in an area for expert review) or rejection (for species that cannot occur in the indicated area; for example, a marine species on top of a mountain), the use of integrated mobile device sensors (e.g. thermometer, altimeter and barometer) and user-focused elements, such as social media compatibility and game features.
We observed the lowest proportional rubric performance in the Data Collection domain, which includes app features pertaining to how and what data are included in a user report. These low scores were driven by the small number of apps that allow absence (non-detection) or abundance reporting or that collect ancillary information on habitat variables, and by the limited use of onboard sensor technology (even amongst top apps, as noted above). Absence (or non-detection) data are important in their own right, because periodically confirming that a species has not reached or established in an area constitutes the biosurveillance needed for EDRR implementation.
Beyond monitoring (biosurveillance), absence data are also valuable as a complement to presence data, enabling much more robust statistical modelling of species distributions (Elith et al. 2017). Such models lie at the core of a proactive approach to IAS research and management because they enable spatially-explicit risk assessment and forecasting (
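As a toy illustration of the statistical value of absence records: when both presences and reported absences are available, occurrence can be modelled directly as a binomial response against environmental covariates, whereas presence-only data require pseudo-absences or background points. The sketch below uses synthetic data and scikit-learn and is purely illustrative, not a method used in this study.

```python
# Toy illustration: with presences (1) and reported absences (0), species
# occurrence can be modelled directly against environmental covariates.
# All data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
temperature = rng.uniform(5, 30, size=200)            # hypothetical covariate
p_occurrence = 1 / (1 + np.exp(-(temperature - 18)))  # warmer sites more likely invaded
detected = rng.binomial(1, p_occurrence)               # 1 = presence, 0 = absence report

model = LogisticRegression().fit(temperature.reshape(-1, 1), detected)
print("Predicted occurrence probability at 25 °C:",
      model.predict_proba([[25.0]])[0, 1])
```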
Abundance and other quantitative data can, in turn, enable more explicit modelling of population behaviour, facilitating a mechanistic understanding of invasion dynamics (
The onboard sensors and instrumentation available in contemporary mobile devices are increasing in diversity and quality and now include a barometer, gyroscope, accelerometer, microphone and ambient light sensor, along with gigabytes of data storage capacity (
Reviewed apps also had generally low scores in the User Engagement domain, indicating that there is substantial room for innovation and growth in the degree and manner in which the volunteer community is engaged in IAS data collection. At the time of review, Invasive Plants of Arizona, ERWP Invasives Reporter, PlantNet, iNaturalist, Squishr, CSMON-LIFE Observation, FeralScan Pest Mapping and NJ Invasives allowed users to share observations via social media feeds. Other apps have begun to include this feature in more recent updates (e.g. Flora Incognita). Only iNaturalist and Squishr integrate leaderboards, which introduce a competitive element to promote user engagement and retention. iNaturalist also allows users to access and comment on reports and to confirm or dispute taxonomic identifications (Pimm et al. 2015).
The success and efficacy of highly popular reporting apps like eBird (
In addition to increasing user engagement and data submissions, gamification elements could also allow better coordination between researchers and community scientists, increasing the value of individual reports for management and policy objectives (
Certain key functionalities for reporting data and automating quality control were also largely absent from our sampled apps: few apps allowed users to submit reports offline or save them for later submission, and only two included automated quality control mechanisms, such as outlier rejection or flagging. Inclusion of these features could increase the quantity and quality of data from IAS reporting apps. For example, using machine learning to flag or eliminate false reports could reduce the time spent on verification of submitted reports, especially where data volume exceeds the capabilities of the experts or trained volunteers who are typically responsible for verification (e.g.
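A simple form of such automated quality control need not involve a full machine-learning pipeline: even a rule that flags reports lying far from a species' verified occurrences could triage records for expert review. The sketch below illustrates that idea only; the coordinates and distance threshold are hypothetical, and this is not a mechanism taken from any of the reviewed apps.

```python
# Sketch of a rule-based outlier flag: a new report is flagged for expert
# review if it lies farther than a chosen threshold from every verified
# occurrence of that species. Coordinates and threshold are hypothetical.
import math

verified_occurrences = [(46.87, -113.99), (47.05, -114.10), (46.60, -113.80)]  # (lat, lon)
new_report = (48.90, -110.20)
FLAG_THRESHOLD_KM = 150  # hypothetical distance beyond which a report is flagged

def haversine_km(p1, p2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

nearest_km = min(haversine_km(new_report, occ) for occ in verified_occurrences)
if nearest_km > FLAG_THRESHOLD_KM:
    print(f"Flag for review: nearest verified record is {nearest_km:.0f} km away")
```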
Taxonomic identification is a priority for EDRR risk assessment and for eliciting the proper level of response to a detection. Photo ID and machine-learning algorithms could streamline the reporting process by removing the need for users to identify an IAS before they can submit a report, while also improving report accuracy (
Additional data collected from app descriptions indicated that non-standardised data from many mobile apps are being sent to a plethora of non-interconnected regional or local databases (
This review was limited to English language IAS reporting apps available in North America through the Apple App Store and Google Play, introducing a geographical and linguistic bias to our study sample. Further work should expand this review to apps in other languages and available in other parts of the world, although the number of existing IAS mobile apps and their users are also strongly biased towards Western Europe and North America (
Another caveat is the need for more publicly available information (e.g. use metrics), which could greatly facilitate further analysis of app performance and sustained use. In the absence of such data on actual use for each sampled app, this analysis was limited to their range of functionalities and basic information on number of downloads. Download statistics are, however, a flawed metric of the success or performance of an app, as effective data collection could take place on a small-scale, regional basis with relatively few downloads but an enthusiastic user base. Our inability to access user statistics or submitted data for the surveyed apps made such metrics unfeasible, but finding ways to share such information while protecting the privacy of users would help scientists investigate correlates of success across biodiversity apps. Despite these limitations, our results provide a useful framework for investigating the functionality of existing IAS apps and the degree to which they manifest best practices from the EDRR and app development literature.
Future efforts in IAS reporting app development should emphasise better use of existing technologies; data sharing, management and interoperability; and game features that can increase both user participation and coordination between researchers and app users. The development and implementation of gamification functionalities could greatly increase app uptake and sustained use and is compatible with potential mechanisms to improve the quality of data recorded by non-professionals through spatial prioritisation and reward systems. Further research on the prevalence of different motivating factors in IAS reporting app participation would support efforts to increase uptake and provide valuable guidance for marketing and gamification. Given the bellicose terminology and adversarial popular thinking around invasive species (
The cost of designing apps, especially ones providing the multitude of functionalities described above, is a potential obstacle to further innovation. App design and creation often cost in the order of tens to hundreds of thousands of US dollars (Odenwald 2019). The development of a generalised, customisable app template with multiple options for functionalities (including gamification and user rewards), but with consistent metadata, back-end data management and storage infrastructure, could simultaneously reduce the data fragmentation amongst IAS mobile apps (
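One way such a shared template could keep back-end data consistent is to require every customised front end to populate a minimal, standardised report record. The sketch below illustrates what such a record might look like, with field names loosely following Darwin Core terms; the exact schema and the example values are assumptions for illustration, not a standard drawn from the reviewed apps.

```python
# Sketch: a minimal standardised report record that a shared app template
# could require so that data from regional front ends remain interoperable.
# Field names loosely follow Darwin Core terms; the schema and values here
# are assumptions for illustration only.
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class IASReport:
    scientificName: str
    decimalLatitude: float
    decimalLongitude: float
    eventDate: str                  # ISO 8601 date
    occurrenceStatus: str           # "present" or "absent" (non-detection)
    samplingProtocol: str           # e.g. "visual observation", "trail camera"
    samplingEffort: str             # e.g. "45 minutes of searching"
    individualCount: Optional[int] = None
    recordedBy: Optional[str] = None

report = IASReport(
    scientificName="Dreissena polymorpha",
    decimalLatitude=47.87,
    decimalLongitude=-114.03,
    eventDate="2022-06-01",
    occurrenceStatus="absent",
    samplingProtocol="snorkel survey",
    samplingEffort="45 minutes",
)
print(json.dumps(asdict(report), indent=2))
```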
Although our framework gave greater credit to apps with larger taxonomic ranges, regionally-focused apps may have an advantage in connecting and identifying with the interests and attitudes of local users, increasing the volume and quality of participation. For example, Aquahunter, an app produced by a county-level invasive species department in Minnesota (USA), integrates features of larger focal scale apps, such as a photo recognition tool, the ability to share an observation on Twitter/Facebook and an interactive map with observations. Such implementations of social media may be more effective at smaller scales, where users are more likely to be socially connected prior to using the app. A template model allowing customisation for regional applications would maintain these advantages, while overcoming ongoing problems of data fragmentation and lack of interoperability amongst existing apps.
Smartphone apps, if widely used, are amongst the most promising approaches to monitor, predict and reduce the spread of invasive alien species. Wide-spread use of mobile apps could massively increase the spatiotemporal coverage of IAS data collection, yielding new modelling insights into invasion dynamics. Future apps would attract a greater and more consistent user base with the addition of gaming functions (e.g. leaderboards, reward systems), social media connections (e.g. sharing functionalities), the ability to report absences and valuable ancillary data on surrounding habitats, survey methods and survey effort. With broader participation, more informative reporting forms and more consistent and structured data management, IAS reporting apps could make much larger contributions to Early Detection and Rapid Response efforts worldwide. This, in turn, could save local, regional and national economies millions to billions of dollars annually, while protecting valuable ecological and agricultural systems for future generations.
This research was funded in part by NASA award #80NSSC19K0185 to GL, J. Kimball and BKH and a USGS Northwest Climate Adaptation Science Center award G17AC000218 to CBvR. We thank Rebekah Wallace (UGA Center for Invasive Species & Ecosystem Health), Helen Roy (UK Center for Ecology and Hydrology), Tim Adriaens (Instituut Natuur-En Bosonderzoek/INBO), Steve Amish (University of Montana), Katie Graybeal (U. Montana) and Phil Matson (Flathead Lake Biological Station) for helpful conversations and suggestions that enhanced the manuscript. We would also like to extend a special thanks to Anna Sansalone (U. of Wake Forest) for assisting in calibrating and testing the app rubric.
Table S1. Search parameters
Data type: metadata
Explanation note: Literature review search terms and filtering.
Table S2. Reviewer correlation
Data type: statistics
Explanation note: Reviewer correlation results.
Table S3. App Metadata
Data type: metadata
Explanation note: Metadata, total rubric scores and scores by domain for each reviewed app.
Table S4. Mean dimension scores by app
Data type: metadata
Explanation note: Mean dimension scores by app.