Soil solarization based on natural soil moisture: a practical approach for reducing the seed bank of invasive plants in wetlands

Soil solarization is a well-established method to disinfect soil for efficient weed control. However, the feasibility of applying this method in the restoration of invaded natural habitats is unclear. This is because soil moisture is necessary for the success of solarization, but pre-irrigation in natural ecosystems is often not applicable, or demands high labor investment, making it unsuitable for use in restoration. The present study was based on the idea that the relatively high soil moisture in wetlands might obviate the need for pre-irrigation, rendering this method much more applicable in natural habitats. We examined the efficacy of soil solarization using natural soil moisture to control the seed bank of the invasive plant Acacia saligna in a wetland, using large-scale experimental plots (0.38 ha each). An old, dense A. saligna grove was cut down and the roots were removed by a bulldozer. The plot was mulched with a transparent polyethylene sheet in early July and left on the soil for 14 weeks. Soil solarization significantly reduced the viability of seeds of A. saligna that had been experimentally buried. Additionally, viability of seeds in the natural seed bank was reduced, and seedling emergence was close to zero. Exposing seeds under controlled conditions to soil temperature and soil moisture levels equivalent to those obtained during field soil solarization significantly increased the release of the seeds from dormancy, suggesting that release from dormancy during the early stage of solarization is a critical stage leading to seed weakening or mortality in the soil. Soil solarization also decreased the cover and abundance of the natural vegetation; therefore, active revegetation is required to restore the natural vegetation and to conserve endangered and endemic species.


Introduction
The world has lost 87% of its wetlands since 1700 AD (Davidson 2014). In recent decades, the loss and degradation of wetlands have accelerated due to anthropogenic factors, including the proliferation of invasive species. It is well known that wetlands are especially vulnerable to plant invasion (Zedler and Kercher 2004). The re-establishment of invasive plants from long-term persistent soil seed banks is one of the most important factors leading to the failure of restoration efforts (Zavaleta et al. 2001; Erskine Ogden and Rejmánek 2005; Reid et al. 2009; Le Maitre et al. 2011).
Attempts to control the seed banks of invasive plants having physically dormant seeds in natural habitats have been based mainly on the use of prescribed burning (Richardson and Kluge 2008). However, unsatisfactory results and the limitations of using prescribed burning in many natural habitats have led to efforts to develop new, efficient control methods. Recently, for example, microwave soil heating has been suggested as a potential method to control the seed bank of invasive plants in natural habitats (De Wilde et al. 2017; Hess et al. 2018), but an applicable device has not yet been developed. Our group, using Acacia saligna as a model plant, demonstrated that soil solarization has high potential as a control method for this purpose in natural habitats undergoing restoration.
Soil solarization is a well-established soil disinfestation practice in agriculture. It is used as a pre-planting treatment and was originally designed to reduce populations of pests, pathogens, and weeds. Soil solarization consists of mulching a moist soil with transparent polyethylene sheets during the hot season. The trapped solar radiation warms the soil and transfers the generated heat to the deep soil layers. Hence, soil temperatures are raised to lethal or sub-lethal levels for a wide spectrum of soil organisms (Horowitz et al. 1983; Gamliel 2012; Bainbridge 2016). Soil moisture, usually supplied by irrigation, is necessary for heat penetration into the deep soil layers and to increase the sensitivity of organisms to the thermal effect. With respect to weed seeds, soil solarization can induce seed bank deterioration through three processes: 1. breaking of dormancy, which results in seed germination; 2. seed mortality; 3. a weakening effect, i.e. reduced seed vigor, which results in abnormal germination and vulnerability to biotic stresses (Katan 2003; Cohen and Rubin 2007).
In recent years, our group has been working on adapting the soil solarization methods commonly used in agriculture for the restoration of invaded natural habitats. The model plant used in our studies is A. saligna (Labill.) H.L.Wendl. (Port Jackson willow), a small legume tree belonging to the invasive Australian Acacia group (Le Maitre et al. 2011). This species has been planted on a wide scale (about 600,000 ha) outside of Australia and has become a serious invader in various countries worldwide characterized by a Mediterranean climate (Griffin et al. 2011). Acacia saligna produces thousands of physically dormant seeds per square meter yearly, which accumulate into an exceptionally long-lived, persistent seed bank (Milton and Hall 1981; Holmes and Newton 2004). Cohen et al. (2008) first applied the soil solarization method to control woody Acacia species in an experimental farm and showed that the solarization treatment resulted in almost complete eradication of buried seeds of A. saligna and two other Australian Acacia species, A. murrayana F.Muell. ex Benth. and A. sclerosperma F.Muell. Recently, Cohen et al. (2018) reported that soil solarization was more effective than prescribed burning in reducing the viability of buried seeds of A. saligna in a Mediterranean coastal plain and almost completely prevented its seedling emergence from the natural seed bank during two successive years after treatment. Pre-irrigation in natural ecosystems is often not applicable or demands high labor investment, rendering soil solarization unsuitable for use in restoration programs. Therefore, attempts have been made to develop methods which eliminate the need for irrigation. Cohen et al. (2019) recently reported the successful application of rain-based solarization (RBS) to control the seed bank of A. saligna. This method is based on trapping the soil moisture left by the last rainfall of the early spring. The data obtained demonstrated a significant reduction of the A. saligna seed bank in both Mediterranean and semi-arid climates and in different soil types. This treatment also completely eliminated the seed viability of three other invasive legumes that were buried in the experimental site, A. victoriae Benth., Parkinsonia aculeata L., and Leucaena leucocephala (Lam.) de Wit. Our previous studies demonstrated that moderate soil moisture can effectively reduce the seed bank of A. saligna at the moderate soil temperatures created by soil solarization (Cohen et al. 2019).
The objective of the present study was to assess the effectiveness of dry-soil solarization (hereafter, called simply solarization) on reducing the seed bank of A. saligna. This solarization entailed covering the soil with transparent polyethylene sheets during the hot summer season without pre-irrigation. It is well accepted that in wetlands, the high water table contributes to increased soil moisture in the overlying soil layers (Miguez-Macho et al. 2008 and literature cited therein). Under these conditions, the evaporated soil moisture condenses on the polyethylene sheet, drips back onto the soil surface, and rewets it (Al-Karaghouli and Al-Kayssi 2001). Thus, in wetlands, solarization might raise both temperature and moisture in the upper soil layer above the threshold levels required for release from dormancy, thereby accelerating seed bank deterioration.

Experimental site
The experimental site is located in the Samach Estuary on the eastern bank of the Sea of Galilee (Lake Kinneret; 32°50'03"N, 35°35'57"E). The climate is semi-arid (264 mm average annual precipitation). The experiment was conducted about 500 m from the shoreline. The soil is alluvial, comprising 46% sand, 34% silt, and 20% clay. The experiment was initiated in early July and lasted for 14 weeks. This period is regarded as the most effective solarization period in the Mediterranean region. According to the Beit Saida meteorological station, the average maximum daily air temperature during this period was 35.4 °C. The local natural vegetation outside the A. saligna stand in this habitat is dominated by species such as Prosopis farcta (Banks & Sol.) J.F.Macbr. and Glycyrrhiza glabra L., which extend along the margins of irrigated fields. These species naturally occupy the wetland edges in Israel. The experiment was set up in an established A. saligna grove, which was planted in 1972. From the original planted grove, A. saligna trees have spread to the surrounding habitats (~19.3 ha), both in disturbed and undisturbed areas, and previous attempts to control the invasive trees have failed. The canopy cover of the trees in the experimental site was very high (80-100% shade).

Preliminary measurements
Prior to the initiation of the experiment, the seed bank density of A. saligna and the vegetation cover were evaluated along two transects running along the length and width of the area under the A. saligna canopies. Eight soil cores, 7.2 cm in diameter and 20 cm long, were sampled along each transect by means of a metal pipe. The average seed density was 132.1 ± 54.6 seeds/L, with no significant differences among sites along the two transects. Vegetation charts were made in eight plots (10 m × 10 m, about 50 m apart) along the two transects mentioned above. Vegetation cover and composition under the A. saligna canopy were homogeneous. Excluding A. saligna seedlings, the vegetation cover in the plots constituted 1-20% of the area under the A. saligna tree canopies. This vegetation included a total of 15 plant species, dominated by the nitrophilic species Notobasis syriaca (L.) Cass., Mercurialis annua L., and Torilis arvensis (Huds.) Link.

Experimental design
As no significant changes in A. saligna seed bank density and vegetation cover and composition were evident along the two transects under the A. saligna canopy, the experimental area was divided along its length into two treatments, control (non-solarized bare soil) and solarization. Each treatment was conducted in a large plot of 0.38 ha. Large plot size has the benefit of simulating the practical application of the treatment.
All A. saligna trees in the grove were cut down in November 2014, piled, and burned. Glyphosate (Rodeo, 53.8% active ingredient, Dow Chemical Company, MI, USA) at a concentration of 50% was applied to the surface of the remaining stumps. In June 2015, the tree stumps were uprooted with a D9 bulldozer root rake (50-cm teeth), and the soil was leveled. On July 1, 2015, the solarization plot was mulched with a transparent polyethylene sheet (anti-fog 100 µm, Politiv, Kibbutz Einat, Israel). The parameters measured during the experiment included soil moisture, soil temperature, dynamics of buried seeds, i.e. transition from dormant seed fraction to non-dormant or non-viable seed fractions, density of the A. saligna seed bank and seedling emergence from the seed bank, and the density, growth, and composition of the regenerated vegetation.

Soil moisture and soil temperature
Soil moisture was monitored during the experiment in soil samples taken at 6, 28, 43, 51, and 66 days after mulching. At each sampling date, four soil cores, 7.2 cm in diameter and 20 cm long, were sampled from random locations in each treatment, at 20-m spacing, by means of a core auger. Each soil sample was divided into two subsamples representing two different depths: 0-5 cm (shallow layer) and 15-20 cm (deep layer). Samples of 250 ml soil were taken from each subsample, and weighed before and after drying at 105 °C for 24 h. Soil moisture was calculated as a percentage of the sample dry weight.
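As a worked illustration of the gravimetric calculation described above, the following minimal Python sketch converts wet and dry sample weights into percent moisture; the sample weights in the example are hypothetical, not measured values from the study.

```python
def gravimetric_moisture(wet_g: float, dry_g: float) -> float:
    """Soil moisture as a percentage of the sample dry weight."""
    if dry_g <= 0:
        raise ValueError("dry mass must be positive")
    return 100.0 * (wet_g - dry_g) / dry_g

# Hypothetical 250 ml subsample: 312.4 g before and 296.2 g after
# drying at 105 degrees C for 24 h -> roughly 5.5% moisture.
print(round(gravimetric_moisture(312.4, 296.2), 1))
```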
Soil temperature in the control and solarization plots was continuously recorded at depths of 5, 10, 15, and 20 cm, using type T thermocouples connected to a micrologger (10X, Campbell Scientific Inc., Logan, UT). Air temperature was recorded with a portable meteorological station positioned in the shade at the edge of the solarization plot, 0.5 m above ground level.

Evaluation of buried seed dynamics
Acacia saligna seeds were collected in mid-June from 10 trees within a radius of 2 km from the study site. The viability of the collected seeds was 100%, and 96.7% of them were dormant. The seeds were placed in nylon net bags, 30 seeds in each bag (Cohen et al. 2008). Four seed bags were tied separately to a nylon string and buried in the soil so that each bag lay at a different soil depth: 1-4, 6-9, 11-14, and 16-19 cm. Sixteen such strings were buried in each treatment plot. Four strings with seed bags were removed from the soil at 31, 43, 52, and 72 days after mulching. Seed dynamics were determined in the laboratory by two successive germination tests. The first test was conducted on intact seeds and the second was conducted after scarification of the seed coat. The seeds were placed between moist filter papers for 20 days in the first germination test and for 10 days in the second germination test. Seeds were considered germinated when the primary root was longer than 2 mm. Seed fates were classified into the following categories: 1. seeds that germinated in the first test were defined as non-dormant; 2. seeds that germinated only in the second test were defined as dormant; 3. seeds that did not germinate in either germination test were defined as non-viable.
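This two-test classification is easy to express programmatically. Below is a minimal sketch of the decision logic described above; the function name and the example tallies for a bag of 30 seeds are hypothetical illustrations, not the authors' code or data.

```python
from collections import Counter

def classify_seed(germinated_intact: bool, germinated_scarified: bool) -> str:
    """Fate of one seed from the two successive germination tests:
    test 1 on the intact seed (20 days), test 2 after scarification of the
    seed coat (10 days); germination = primary root longer than 2 mm."""
    if germinated_intact:
        return "non-dormant"  # germinated without scarification
    if germinated_scarified:
        return "dormant"      # germinated only after physical dormancy was broken
    return "non-viable"       # failed both tests

# Tally the three fractions for one bag of 30 seeds (hypothetical outcomes).
bag = [(False, True)] * 19 + [(True, False)] * 8 + [(False, False)] * 3
print(Counter(classify_seed(t1, t2) for t1, t2 in bag))
```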

Acacia saligna natural seed bank density and seedling emergence
The effect of solarization on the density of the A. saligna seed bank and on seedling emergence from the seed bank was examined in the first spring (March) after treatment. We had observed that A. saligna seeds are concentrated in the upper 5 cm soil layer, even after deep tillage (data not shown). Therefore, 16 soil cores in each treatment were sampled to a depth of 5 cm from random locations at about 20 m spacing, using a core auger as described above. The soil cores were sampled in March (spring), when the soil was moist. Emerging seedlings were counted in the soil samples, then the soil was sieved, and the seeds obtained were tested for viability as described above. The density of viable seeds in the soil was calculated as the number per liter of soil, and the density of emerged seedlings as the number per square meter.
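Converting the raw core counts into the reported densities is straightforward geometry. The sketch below assumes only the sampling dimensions given above (7.2 cm core diameter, 5 cm sampling depth); the example counts are hypothetical.

```python
import math

CORE_DIAMETER_CM = 7.2  # core auger diameter
CORE_DEPTH_CM = 5.0     # sampling depth of the upper soil layer

# Cross-sectional area in m^2 and sampled volume in litres.
core_area_m2 = math.pi * (CORE_DIAMETER_CM / 2 / 100) ** 2                    # ~0.0041 m^2
core_volume_l = math.pi * (CORE_DIAMETER_CM / 2) ** 2 * CORE_DEPTH_CM / 1000  # ~0.20 L

def seeds_per_litre(viable_seeds_in_core: int) -> float:
    """Viable seed density, seeds per litre of soil."""
    return viable_seeds_in_core / core_volume_l

def seedlings_per_m2(seedlings_in_core: int) -> float:
    """Seedling emergence density, seedlings per square metre."""
    return seedlings_in_core / core_area_m2

# Hypothetical core containing 27 viable seeds and 4 emerged seedlings.
print(round(seeds_per_litre(27), 1), round(seedlings_per_m2(4)))
```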

Natural vegetation cover
The effect of solarization on the regeneration of the natural vegetation was evaluated by constructing vegetation charts in the first spring after treatment in four 100 m² sites in each treatment. The charts included the relative cover of each plant species per area, as well as the following revegetation parameters: vegetation cover per area (%), number of species, vegetation height, and Shannon diversity index (H').
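The Shannon index is computed from the relative cover values recorded in the charts. Here is a minimal sketch, assuming cover values are normalized to proportions; the species covers in the example are hypothetical.

```python
import math

def shannon_index(cover_by_species: dict) -> float:
    """Shannon diversity H' = -sum(p_i * ln(p_i)), where p_i is the
    relative cover of species i, normalized to proportions."""
    total = sum(cover_by_species.values())
    props = (c / total for c in cover_by_species.values() if c > 0)
    return -sum(p * math.log(p) for p in props)

# Hypothetical chart: percent cover of three species in one 100 m^2 site.
print(round(shannon_index({"Notobasis syriaca": 12.0,
                           "Mercurialis annua": 5.0,
                           "Torilis arvensis": 3.0}), 3))
```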

Effect of soil temperature and soil moisture on A. saligna seed dynamics under controlled conditions
Thirty A. saligna seeds were placed in glass tubes containing dry sand that had been pre-heated to 105 °C for 24 hours and adjusted to 5 or 11% water content. The tubes were sealed and incubated in a water bath at 24, 48, and 57 °C for 72 hours. The selected exposure temperatures correspond to those recorded in the solarization treatment of the field experiment (Fig. 2). The effect of the treatments on the seed dynamics (dormant, non-dormant, and non-viable fractions) was examined using the germination tests described above. The experimental design was fully factorial and included five replicates of 30 seeds each.

Statistics
The JMP 13 statistical package was used for data analysis. Levene's test (P = 0.05) for equality of variances was used for the soil moisture data. Percentage values of soil moisture were log-transformed. A three-way ANOVA (P = 0.05) was used to examine the effects of treatment, depth, experimental duration, and their interactions on soil moisture, followed by a post-hoc t-test (P = 0.05) for means comparisons between treatments. Buried seed data were analyzed by three-way ANOVA (P = 0.05) to examine the effects of treatment, soil depth, experimental duration, and their interactions. The ANOVA was followed by a post-hoc t-test (P = 0.05) for means comparisons between treatments or Tukey's test (P = 0.05) for means comparisons between all main effects and their interactions. In situations of interaction between treatment, soil depth, and duration, the data from the solarization treatment were compared to the control under each set of conditions (soil depth and duration) using a preplanned contrast t-test (P = 0.05). All percentage values of the various seed fractions were arcsine-transformed. Seed bank density was analyzed using t-tests (P = 0.05) for means comparisons between treatments or Tukey's test (P = 0.05) for means comparisons between soil depths within a treatment. The depth of seedling emergence was analyzed by contrast t-tests (P = 0.05). All the revegetation parameters - relative cover, vegetation height, species richness, and Shannon diversity index (H') - were also analyzed by t-tests (P = 0.05). The laboratory data on the effect of temperature and soil moisture on the buried seed fractions were analyzed by a two-way ANOVA (P = 0.05), followed by a post-hoc Tukey's test (P = 0.05) for means comparisons between treatment combinations.
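For readers reproducing these analyses outside JMP, the two variance-stabilizing transforms named above are standard and easy to apply. The sketch below uses NumPy; the data values are hypothetical.

```python
import numpy as np

def arcsine_sqrt(percentages):
    """arcsin(sqrt(p)) transform for percentage data (0-100),
    as applied to the seed-fraction percentages before ANOVA."""
    return np.arcsin(np.sqrt(np.asarray(percentages) / 100.0))

moisture = np.array([3.1, 5.4, 4.2])     # hypothetical soil moisture (%)
log_moisture = np.log(moisture)           # log transform used for moisture data
fractions = np.array([8.3, 22.0, 63.3])   # hypothetical seed fractions (%)
print(log_moisture.round(3), arcsine_sqrt(fractions).round(3))
```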

Effect of solarization on soil moisture and soil temperature
Mean soil moisture in the solarization treatment was 5.4%, significantly higher (t = 5.6941, P < 0.0001) than in the control (3.1%; Fig. 1). It should be noted that the soil moisture varied greatly between measurements in the solarization treatment. Based on the three-way ANOVA (Suppl. material 1: Table S1), solarization was the only significant factor affecting soil moisture (F(1,6) = 10.63, P = 0.017).
The maximum daily temperature in the solarization treatment at soil depths of 1, 5, 10, and 20 cm was 64.3, 57.6, 49.8, and 42.6 °C, respectively, compared with 58.2, 48.9, 41.8, and 36.4 °C, respectively, in the control (Fig. 2). The minimum daily temperature in the solarization treatment at depths of 1, 5, 10, and 20 cm was 28.7, 32.1, 35.5, and 37.9 °C, respectively, and in the control, 23.8, 27.5, 30.4, and 32.7 °C, respectively. While at 1 cm depth the soil temperature in the control exceeded 55 °C for 2 to 3 h during the day, in the solarization treatment these conditions persisted for 5-6 h. At this depth, soil temperature exceeded 60 °C only in the solarization treatment, for 3-4 h a day.

Effect of solarization on buried seed dynamics
The effect of solarization on buried seed dynamics, i.e. the dormant, non-dormant, and non-viable fractions, at four soil depths was studied on four dates during the 14-week experimental duration (Fig. 3). Solarization significantly reduced the dormant seed fraction across all measurements to 22.0%, from 63.3% observed in the control (t-test, P < 0.0001). However, while the dormant fraction in the control did not change significantly with soil depth (Tukey's test, P = 0.05), the effect of solarization on the release from seed dormancy decreased significantly with increasing soil depth, the dormant fraction being 8.3% at 1-4 cm compared to 42.9% at 16-19 cm soil depth. According to the three-way ANOVA (Suppl. material 2: Tables S2, S3a), the dormant seed fraction was affected by treatment (F(1,25) = 121, P < 0.0001), depth (F(3,73) = 8.46, P < 0.0001), and their interaction (F(3,73) = 9.82, P < 0.0001). The experimental duration did not affect the dormant seed fraction, either as a main factor or in interaction with the other factors.

Figure 1. Comparison of the soil moisture between the control and the solarization treatment. Soil measurements were performed at 0-5 and 15-20 cm soil depths on four dates (4, 6, 11, and 14 weeks) after initiation of the treatment. The box-plot for each treatment includes the median, quartiles, minimum, and maximum values. Points represent observations; n = four replicates × two soil depths × four experimental durations; *** = P < 0.0001 according to a post-hoc t-test (P = 0.05) following three-way ANOVA (see Suppl. material 1: Table S1).

Figure 2. Soil temperature in the control and the solarization treatment. The temperature was measured at four soil depths (1, 5, 10, and 20 cm). The data represent the soil temperature during two successive days in mid-July. Air temperature is represented by the gray polygon in each figure.

Figure 3. Buried seed dynamics (dormant, non-dormant, and non-viable fractions) in the control and the solarization treatment. The data were collected in 16 combinations of soil depth and experimental duration. Significance levels (* = P < 0.05; ** = P < 0.01; *** = P < 0.001) of a priori means comparison t-tests (P = 0.05) are presented for the non-viable and the non-dormant fractions. The dormant fraction was not significantly affected by the interaction of treatment, soil depth, and experimental duration. n = four seed bags for each combination, 30 seeds in each.
Solarization significantly increased the non-dormant seed fraction during the experimental period at all soil depths compared to the control (Fig. 3). In both the control and the solarization treatment, the non-dormant seed fraction decreased significantly with the increase of the experimental duration, from 40.0 to 22.1% in the control and from 28.1 to 0.8% in the solarization treatment after 4 and 14 weeks, respectively. According to the three-way ANOVA (Suppl. material 2: Tables S2, S3b), the non-dormant seed fraction was significantly affected by treatment (F(1,25) = 92.25, P < 0.0001), soil depth (F(3,73) = 7.95, P < 0.0001), experimental duration (F(3,25) = 22.6, P < 0.0001), the interaction between treatment and duration (F(3,25) = 3.79, P < 0.0001), and the interaction between treatment, depth, and duration (F(9,73) = 2.59, P = 0.012).
The solarization treatment significantly increased the non-viable fraction above that measured in the control, excluding the fraction at 16-19 cm after 14 weeks (Fig. 3). In addition, while the results of Tukey's test (P = 0.05) show that the non-viable seed fraction in the control did not vary with soil depth or experimental duration, these factors had a significant effect on this fraction in the solarization treatment: the non-viable fraction decreased significantly with soil depth and increased with experimental duration only in the upper 9 cm of the soil profile. According to the three-way ANOVA (Suppl. material 2: Tables S2, S3c), the non-viable seed fraction (i.e. seed mortality) was affected by treatment (F(1,25) = 220.2, P < 0.0001), the interaction between treatment and soil depth (F(3,72) = 14.05, P < 0.0001), experimental duration (F(3,25) = 3.80, P = 0.0228), and the interaction between treatment, soil depth, and experimental duration (F(9,72) = 4.63, P < 0.001).

Effect of solarization on A. saligna seed bank and seedling emergence
Solarization significantly reduced the density of the A. saligna seed bank (t = 5.4, P < 0.0001, Fig. 4a) and completely eliminated seedling emergence from the seed bank (Figs 4b, 5). It should be noted that seedling emergence in the control was close to 1,000 seedlings per square meter.

Effect of solarization on the regeneration of the natural vegetation
Solarization significantly decreased the regenerated vegetation cover per area compared to the control (Fig. 6a, t = 39.432, P < 0.0001). Similar results were obtained for the vegetation height (Fig. 6b, t = 5.125, P = 0.0143). The species number (Fig. 6c) and the Shannon diversity index (Fig. 6d) did not differ significantly between treatments.

Effect of soil temperature and soil moisture on A. saligna seed dynamics under controlled conditions
The viable seed fraction exceeded 96% in all six treatment combinations of soil temperature and moisture (Fig. 7). There was a strong expression of release from dormancy, i.e. a decrease in the dormant seed fraction and a concurrent increase in the non-dormant fraction, with the increase in soil temperature and soil moisture. The dormant seed fraction decreased significantly with the increase in soil temperature and soil moisture, from 89.8% at 24 °C and 5% moisture to 35.4% at 56 °C and 11% moisture. The results of the two-way ANOVA show that the dormant fraction was significantly affected by soil temperature (F(2,24) = 84.61, P < 0.0001), soil moisture (F(1,24) = 16.08, P = 0.0005), and the interaction between them (F(2,24) = 7.02, P = 0.0042).

Figure 4. Comparison of Acacia saligna seed bank density (a) and seedling emergence from the seed bank (b) in the control and the solarization treatment. The samples were taken in the first spring (March) after treatment. Means of either seed number per liter of soil or seedling emergence per m² represent 16 samples (replicates) of 0-5 cm soil depth. Significance levels *** = P < 0.001 according to a contrast t-test.

Figure 5. Demonstration of the efficacy of solarization in controlling Acacia saligna seedling emergence and vegetative sprouting. The picture was taken in the first winter after treatment (mid-February). The solarization plot is almost completely void of vegetation. Vegetation cover in the control plot includes almost one thousand A. saligna seedlings per m², A. saligna vegetative sprouts, and other nitrophilic plants that are common in revegetated areas following A. saligna removal, including Brassicaceae, Compositae, and Malvaceae species.

Effect of solarization on A. saligna seed viability and seedling emergence from the seed bank
Solarization was found to be highly effective in reducing the seed viability of both buried seeds and the seed bank of A. saligna. Solarization significantly increased the non-viable fraction of buried seeds compared to the control at all soil depths during the entire experimental duration, except for the 16-19 cm depth after 14 weeks. Exposing buried seeds to 14 weeks of solarization almost completely eliminated their viability throughout the upper 9 cm of the soil profile. It is interesting to note that although a large number of seeds in this treatment remained viable in the deeper soil layers, their root and shoot growth was very limited during the germination test. It is reasonable to assume that these "weakened seeds" would fail to emerge from the soil under natural conditions. Although there was an interaction between treatment, soil depth, and experimental duration, the viability rates at all soil depths and durations in the treated soil were significantly lower than those at the respective depths and durations in untreated soil. Solarization almost completely eliminated the viable seed fraction of the A. saligna seed bank. Consequently, seedling emergence was negligible in this treatment, in contrast to about 1,000 emerging seedlings per square meter in the control. A small-scale solarization experiment (36 m² plot) was conducted during the following year (2016) in undisturbed soil (trees were cut and removed, but without bulldozer involvement) at a distance of 150 m from the site of the first main experiment (2015). A similar trend of reduction in both seed bank density and seedling emergence was observed.
Although solarization almost completely eliminated seedling emergence, a few small patches of densely germinating seeds were observed outside the sampled plots. We assume that these patches appeared in areas where the polyethylene sheet was punctured by sparks produced during prescribed burning conducted adjacent to the experimental site. Sampling the soil in these patches showed that no seeds remained dormant at these sites (data not presented), indicating that the accumulated heat exceeded the threshold for dormancy release but was not high enough to cause loss of viability.

The underlying mechanism of seed bank reduction
The differences in maximum daily soil temperature and soil moisture recorded in the control and the solarization treatment were not large (48 and 56 °C and 5 and 11% moisture at 5 cm depth in the control and the solarization plots, respectively). However, our laboratory experiment demonstrated that these small differences in soil temperature and soil moisture profoundly affected the rate of release from seed dormancy. The interaction between these two main factors significantly affected the release from dormancy. At 56 °C and 11% moisture, the rate of release from dormancy was 60%, six-fold higher than at 48 °C and 5% moisture. Indeed, in the early stage of the field experiment, i.e. 4 weeks after the onset of the solarization treatment, the non-dormant fraction was higher in the solarization treatment than in the control. As suggested in our previous studies (Cohen et al. 2008, 2018, 2019), we assume that the release from dormancy is probably the critical stage leading to the deterioration of the seed bank through a weakening effect occurring during soil solarization.

Is solarization a habitat-specific method?
As noted in the description of the study site, the habitat of the present study is characterized by relatively moist soil. Although our results show that the soil moisture in the bare soil was very low, the dominant presence of P. farcta and G. glabra at this site implies a high water table. There are data indicating that a high water table increases the soil moisture above it, at a rate depending on the soil type (Miguez-Macho et al. 2008). In the present study, soil moisture increased significantly in the solarization treatment during the experimental period, probably due to condensation of water vapor under the polyethylene sheet, which rewetted the soil. This phenomenon could positively affect the efficacy of solarization. We assume that under non-optimal conditions for solarization, such as those prevailing in regions with a shorter or cooler summer or in a very dry soil, the solarization process might result in a lower seed mortality rate. In observations made in a dry soil with a deep water table, there was no change in seed viability, but an increase in the release from dormancy, which usually leads to high seedling emergence, was observed (unpublished data). Therefore, in such habitats, release from dormancy alone might still be beneficial; integrated management that includes chemical control of the emerging seedlings is recommended to complete the restoration process in the first winter following soil solarization.

Solarization is not a species-specific method
Solarization is not a species-specific method and might be applied to control the seed banks of a large spectrum of invasive plants. Our results show that solarization almost completely prevented the emergence of various species with physically dormant seeds, such as Medicago polymorpha L., Geranium rotundifolium L., and Malva parviflora L., which proliferated in the control plot. Moreover, solarization reduced the emergence not only of physically dormant seeds, but also of seeds with other types of dormancy mechanisms, including seeds with physiological dormancy (Amaranthus albus L., Galium aparine L., and Glebionis coronaria (L.) Tzvelev), seeds with combinational dormancy (physical and physiological; Geranium molle L.), and seeds with morphophysiological dormancy (Parietaria lusitanica L.).
From a restoration perspective, soil solarization is a nonspecific disinfestation technique. If vegetation cover is planned to regenerate naturally, i.e. using passive management, the adverse effect of seed bank reduction by soil solarization must be considered.
However, the experience gained in restoration programs indicates that the reduction in density of the invasive plants caused by the control operation generally results in proliferation of other invasive plants (secondary invasion) or of local environmental weeds (D'Antonio and Meyerson 2002;Buckley et al. 2007;Le Maitre et al. 2011). In the current study, most of the plants that regenerated naturally in the control plots were local environmental weeds. In such cases, active revegetation using planting and/or seed sowing is essential for rehabilitation of the natural vegetation (Le Maitre et al. 2011). When active revegetation is planned, using soil solarization provides a significant advantage in preparing the area for targeted native species by reducing competition with undesirable plants.

Application and implications
Soil moisture is an essential component for the success of solarization (Shlevin et al. 2004; Cohen et al. 2008). Therefore, in dry habitats, the regular soil solarization method, which includes pre-irrigation, is recommended. Alternatively, satisfactory results can also be achieved by covering the soil with transparent polyethylene following the last rains (RBS method; Cohen et al. 2019). In wetlands, covering the soil during the summer without pre-irrigation (i.e., dry solarization, as used in this study) has also been found to be effective. The advantage of dry solarization over RBS is its shorter soil mulching duration, which helps keep the polyethylene sheet intact during the effective period of solarization. The current study demonstrates the versatility and efficacy of using this solarization approach in restoration programs in natural habitats.

Increasing impacts by Antarctica's most widespread invasive plant species as a result of direct competition with native vascular plants

Introduction
Biological invasions represent significant conservation challenges. A focus on their early stages, such as the pathways of, and barriers to, invasion is valuable given that a cost-efficacy continuum exists from prevention, through early detection and rapid response, to eradication (Simberloff et al. 2013). The latter is usually the most costly management option and may have unanticipated consequences (Zavaleta et al. 2001). Such efforts are predicated on the assumption that many invasive non-native species either have or will have substantial impacts (Catford et al. 2012; Richardson and Ricciardi 2013). However, recent reviews have argued that our understanding of impacts often remains poor, and that: (1) a more substantive evidence base is required to improve management of non-native species, given criticisms that impacts are unproven or cannot be predicted (Hulme et al. 2013); (2) despite considerable advances in understanding the early stages of the invasion pathway, forecasts of the conditions under which substantial impacts will be realized remain weak (Ricciardi et al. 2013); (3) generalizations regarding the groups most likely to cause impact, the suites of traits associated with impact, and the environments most sensitive to impacts remain uncommon (Pyšek et al. 2012). A consistent theme across these reviews is that predictions of impact are needed because impact is often used to assess the need for early intervention, and specifically which species or groups of species, and under what conditions, should be the subject of such intervention. Much uncertainty remains, however, about the species that will have most impact and the conditions under which such impact will be realized (McGeoch et al. 2010; Ricciardi et al. 2013). Although data mining approaches and meta-analyses are beginning to reduce this uncertainty (Vilà et al. 2011; Greenslade and Convey 2012; Pyšek et al. 2012), further quantification and forecasts of impacts are essential to improve management efficacy and reduce the impacts of biological invasions (Simberloff et al. 2013; Richardson and Ricciardi 2013). Although these priorities apply to invasions generally, they are particularly significant in the context of the conservation challenges faced by protected areas. Understanding the correlates or determinants of variation in non-native species richness can assist with managing risk in the early stages of the invasion process (Wilson et al. 2009; Foxcroft et al. 2011). However, for management decision-making about eradication or control, either after initial detection or later in the invasion process, understanding the potential for negative effects on the ecophysiological performance of native species and on community functioning is essential.
Antarctica (including the sub-Antarctic islands) is considered to include many examples of the world's last remaining wilderness areas (Convey and Lebouvier 2009; Shaw et al. 2014). The continent itself is protected under the Protocol on Environmental Protection to the Antarctic Treaty (Tin et al. 2009), and parts of most of the surrounding sub-Antarctic islands are either formally protected under national legislation, have World Heritage status, or both (Chown et al. 2001; de Villiers et al. 2005). Biological invasions (along with climate change and the interactions between these two change drivers) are the most significant terrestrial conservation challenges facing the region (Frenot et al. 2005; Chown et al. 2015a; Hughes et al. 2015). In consequence, an increasing amount is known about the correlates of invasion, the pathways for and vectors of non-native species, and the management strategies required to limit inadvertent introductions, especially given that deliberate introductions are, for the most part, not permitted to the continent and its surrounding islands (Hughes and Convey 2012; Molina-Montenegro et al. 2014; McGeoch et al. 2015; Hughes and Pertierra 2016). Nonetheless, as is more broadly the case, the extent of information on impacts is surprisingly limited, particularly for plants (Gremmen et al. 1998; Frenot et al. 2001; Le Roux et al. 2013; Molina-Montenegro et al. 2012a), which is remarkable given that plants, together with invertebrates, are the most speciose groups of non-native species across the region (Frenot et al. 2005; Hughes et al. 2015) and are showing a propensity for establishment on the continent itself (Molina-Montenegro et al. 2014). In consequence, the evidence base for decisions about management actions to be taken for either established species or new arrivals is currently small (Hughes and Convey 2012). Nevertheless, some efforts have been made to eradicate non-native plants from Antarctica (Galera et al. 2017; Pertierra et al. 2017a). For example, Poa pratensis was successfully eradicated in January 2015, providing pivotal information about eradication actions and allowing for the generation of a management protocol with high cost-efficacy, likely applicable to other non-native plant species in Antarctica (Pertierra et al. 2017a).
Here we begin to address some aspects of the impacts and management of the most widespread non-native vascular plant species in the Antarctic, Poa annua, which currently is the only non-native flowering plant that has successfully established a reproducing population on the Antarctic Peninsula (Frenot et al. 2005; Chwedorzewska 2009). This species is commonly associated with anthropogenically modified habitats worldwide, but currently can also be found as an introduced species in natural habitats on most sub-Antarctic and some maritime Antarctic islands, as well as at a number of locations on the north-west coast of the Antarctic Peninsula (Molina-Montenegro et al. 2012a; Chwedorzewska et al. 2015; Hughes et al. 2015; Atala et al. 2019), and has been forecast to become more widespread (Chown et al. 2012; Pertierra et al. 2017b). Experimental laboratory studies have shown this species can potentially outcompete the only two flowering plant species indigenous to the Antarctic continent, the grass Deschampsia antarctica and the pearlwort Colobanthus quitensis (Molina-Montenegro et al. 2012a, 2016). Here, using field 'common garden' experiments on King George Island (South Shetland Islands), we examined interactions between these three plant species with regard to variation in the relative density of each, as density has been identified as an important factor influencing the invasion process, since higher densities enhance the competitive ability of a given species in a community (Lockwood et al. 2005; Arii and Parrott 2006). We also focused on water availability, as it is the primary limiting component in most Antarctic terrestrial ecosystems (Convey et al. 2014), with D. antarctica occupying a wider range of habitats than C. quitensis in the context of the water regime (Torres-Mellado et al. 2011). Furthermore, coastal parts of the Antarctic Peninsula region have experienced increased precipitation over the last century, a pattern which is forecast to continue (Bromwich et al. 2011; Turner et al. 2014; Thomas et al. 2015; Lee et al. 2017). In addition to the indicated temperature increases for this area, an increase in precipitation is expected, generating higher soil water availability and hence enhancing the physiological performance of vascular plants (Torres-Diaz et al. 2016). Combining these two elements (relative competitive ability and increased water availability), this study tested the following hypotheses: i) the most widespread non-native plant in Antarctica, P. annua, will exert a stronger competitive effect than the native vascular plants C. quitensis and D. antarctica, as assessed by survival and growth, and ii) these competitive effects are greater at higher relative densities of individuals of P. annua and under higher resource availability, as predicted under a simulated future climate change scenario. The ultimate goal was to provide initial predictions of how P. annua will affect native plants in both the short and long term, to aid evidence-based conservation decision-making within the region.

Study site and target species
The common garden component of the study was conducted on the western shore of Admiralty Bay (King George Island, South Shetland Islands) in the vicinity of the Henryk Arctowski Polish Antarctic Station (62°09'S, 58°27'W). Individuals of P. annua used to perform this experiment were collected from a single population. Mean annual temperature at this location is -2.8 °C, and mean annual precipitation is 700 mm, falling mainly as snow, but increasingly as rain in summer (Kejna et al. 2013). Soils in this area typically have a high content of coarse mineral particles, high total organic carbon, low C/N ratio, acidic pH, and local enrichment of nutrients due to input by seabirds (Beyer et al. 2000;Androsiuk et al. 2015).
The well-developed vegetation of this area includes communities dominated by Colobanthus quitensis, Deschampsia antarctica, and many cryptogams (Smith 2003). D. antarctica demonstrates wide ecological amplitude and environmental tolerance here, colonizing habitats ranging from mineral to organic soils and from dry to waterlogged areas (Bravo et al. 2009). C. quitensis, although often reported as co-occurring with D. antarctica (Convey 1996), is less tolerant to extreme conditions, preferring moist soils (Smith 2003). P. annua is conspicuous in the area, occurring in plant communities with the two native flowering plants (Fig. 1) and as a pioneer in glacial forelands (Olech and Chwedorzewska 2011). P. annua was recorded for the first time in the Antarctic in 1953 on Deception Island, South Shetland Islands. In the austral summer of 1985/1986, some individuals were observed for the first time adjacent to the Polish Antarctic Station Henryk Arctowski (Olech 1996). Subsequently, increases in density within the original area, and spread into new areas dominated by native vegetation, were documented. More recently, the development of flowers and production of fertile seeds by the majority of individuals of P. annua in this area has been recorded.

Field experiments
Manipulative field transplant experiments were established to assess the effects of P. annua on growth and survival of C. quitensis and D. antarctica, as well as the competitive interactions among the native species, using plant density and soil water as independent variables. Thus the elements within these common garden trials were three species and four density challenges under two climate states (current and predicted future).
Individual adult healthy plants/tussocks (6-7 cm height) of all three species were collected randomly in the vicinity of Arctowski station in January 2011. Each plant/tussock was carefully uprooted with the soil around its roots (ca. 100 g) and maintained well-watered in a plastic box under natural light and temperature conditions (1420 ± 120 µmol m⁻² s⁻¹ and 3.7 ± 0.8 °C) for a maximum of 2 h until transplanted. Plant status was visually assessed just before the next step in the transplant procedure to ensure that undamaged individuals were used (plants showing foliar and/or root damage were excluded). These common garden trials were established above the shoreline, where they were exposed to seawater aerosols and fertilized by nutrient-rich water flowing down from a nearby penguin rookery. The natural vegetation of this site includes dense continuous patches of D. antarctica as well as C. quitensis, mosses, and lichens (Chwedorzewska et al. 2008). Nevertheless, this trial was carried out in a patch without vegetation cover in order to avoid modifying the intensity of competition, as well as the availability of resources in the soil.
The two-way density challenge consisted of the 'focal' species vs. the 'competitor' species at four relative plant densities (i.e. four density treatments) in an experimental unit (0.25 m²), with each of the three species serving as either the focal or the competitor species in the experimental design (see Fig. 2). High relative density consisted of 10 individuals of the focal species vs. five individuals of the competitor species. Medium relative density consisted of seven individuals of the focal species vs. eight individuals of the competitor species. Low relative density consisted of five individuals of the focal species vs. 10 individuals of the competitor species. Finally, 15 individuals of each species (P. annua, C. quitensis and D. antarctica) were planted in monoculture, without the presence of another species, as controls. Individuals were planted at least 5 cm apart. Each density treatment was replicated five times across the total transplant plot of 2,500 m² (50 × 50 m), and two plots were constructed, one for the current and the other for the future climate scenario. The future climatic scenario focused on our calculation of future water availability.
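The density treatments are easiest to see laid out explicitly. Below is a minimal sketch of the design as described above; the data structure and names are illustrative, not taken from the authors' materials.

```python
# A sketch of the relative-density design: counts are
# (focal, competitor) individuals per 0.25 m^2 experimental unit.
DENSITY_TREATMENTS = {
    "high":        (10, 5),   # focal outnumbers competitor
    "medium":      (7, 8),    # near-even mixture
    "low":         (5, 10),   # competitor outnumbers focal
    "monoculture": (15, 0),   # control: focal species only
}

SPECIES = ("Poa annua", "Colobanthus quitensis", "Deschampsia antarctica")
REPLICATES = 5  # per treatment, in each of the two climate-scenario plots

for focal in SPECIES:
    for competitor in (sp for sp in SPECIES if sp != focal):
        for name in ("high", "medium", "low"):
            n_focal, n_comp = DENSITY_TREATMENTS[name]
            print(f"{focal} ({n_focal}) vs {competitor} ({n_comp}): {name}")
```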
Water regime was examined for both current conditions and a simulation of projected conditions for the region within the next 100 years, which involves an increase in soil water availability of ca. 20-25% (IPCC 2013; Turner et al. 2014). Current conditions were assessed by sampling soil moisture in the study site early in the growing season (January 2010). Using a tensiometer (2725 Series Jet Fill, CO, USA), the matric water potential of the soil at 5-7 cm depth was measured at 10 points in the study site. Points were randomly selected and separated by 2-3 m. Soil moisture recorded in the field was -29 ± 0.51 kPa (mean ± SE). Based on these data, a matric water potential of -20 kPa was estimated for the future scenario. A pilot trial in the field was then undertaken to determine the volume of water to be added to individuals transplanted into the field situation to attain this soil moisture level. This required an extra 120 ml of water per week. Thus, two treatments were applied to each experimental unit: current conditions (no manipulation) and future climate conditions (weekly water addition) for the entire period of the experiment (2 months). Matric water potential was measured five times over the duration of the experiment to verify that differences between treatments were maintained (mean values recorded for the current and future climate conditions: -29.4 ± 3.7 and -19.2 ± 2.1 kPa, respectively).
Every plant collected in the study area was randomly assigned to one of the experimental plots and measured prior to the start of experimental treatments. Plant height was measured using a digital caliper (Mitutoyo; resolution: 0.01 mm) and initial wet weight was measured using a digital balance (Boeco BPS 52 plus; resolution: 0.01 g). Before recording the biomass, the soil was carefully removed, avoiding damage to the roots, in order to weigh only the plant tissue. A two-way ANOVA showed no differences in initial height among individuals of each species assigned to the different treatments and no differences in wet weights for those individuals of C. quitensis assigned to different treatments (F(3,16) = 0.34, p = 0.79 and F(1,16) = 1.47, p = 0.24, respectively), and likewise for D. antarctica (F(3,16) = 0.27, p = 0.84 and F(1,16) = 1.69, p = 0.21, respectively) and for P. annua (F(3,16) = 0.54, p = 0.66 and F(1,16) = 0.41, p = 0.53, respectively).

Figure 2. A schematic of the design of the common garden experiment, illustrating all combinations of competitive interactions performed between the three study species (Deschampsia antarctica, Colobanthus quitensis and Poa annua) at high, medium and low relative density, as well as the controls (monocultures). This design was replicated five times in the field for both current and future climate scenarios.
Transplants were carried out during the 2010-2011 growing season, and fresh biomass and survival were evaluated over 8 weeks. Survival percentage of both native and non-native species was evaluated in situ every two weeks and estimated by means of the Kaplan-Meier method, and statistical differences were assessed with the Cox-Mantel test (Fox 1993). At the end of the experiment, all individuals that survived were removed from the site, dried for 48 h at 60 °C, weighed to obtain final dry mass, and then incinerated. Each plot site was rehabilitated by smoothing the disturbed soil to match the surrounding surface pavement as closely as could be achieved without causing further disturbance.
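For illustration, the sketch below implements the Kaplan-Meier product-limit estimator for this kind of bi-weekly census; the census data are hypothetical, and the Cox-Mantel test used for the formal comparisons (a log-rank-family test) is not implemented here.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate. times: observation time for each
    plant (weeks); events: 1 if the plant died at that time, 0 if it was
    still alive when the trial ended (right-censored)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    survival, curve = 1.0, []
    n_at_risk = len(times)
    for t in np.unique(times):
        at_t = times == t
        deaths = int(events[at_t].sum())
        if deaths:
            survival *= 1.0 - deaths / n_at_risk  # step down at each death time
            curve.append((float(t), round(survival, 3)))
        n_at_risk -= int(at_t.sum())
    return curve

# Hypothetical census of 15 transplants: death times in weeks; plants
# surviving to the end of the 8-week trial are censored (event = 0).
times  = [2, 2, 4, 6, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
events = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(kaplan_meier(times, events))
```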
The final biomass and survival values were compared using analyses of variance (ANOVA). Initially, all data were compared to investigate differences in the main factors of species, relative density, and climatic scenario (current conditions and simulated future wetter conditions). Then, a two-way ANOVA was used to assess total biomass and survival at the end of the experiment. All analyses were conducted separately for the current conditions and the future scenario, considering the species (P. annua, C. quitensis or D. antarctica), relative densities (low, medium, high or control), and treatment (growing in monoculture, with a native, or with an invasive species) as main factors. For all the ANOVAs, the assumptions of normality and homogeneity of variances were evaluated using Shapiro-Wilk and Bartlett tests, respectively (Sokal and Rohlf 1997). All analyses were performed with Statistica 6.0.

Field experiments
Overall, mean plant biomass at the end of the experiment did not differ for any of the three species, C. quitensis, D. antarctica or P. annua, under current climate conditions compared with the wetting scenario (F(1,72) = 3.96, p = 0.23; F(1,72) = 2.12, p = 0.43; and F(1,72) = 1.98, p = 0.46, respectively). Similarly, mean survival did not differ between climate scenarios in any of the species (F(1,72) = 2.06, p = 0.44; F(1,72) = 2.01, p = 0.51; and F(1,72) = 3.18, p = 0.11, respectively). Nevertheless, several interactions were significant, indicating that under wetting conditions the invasive P. annua could exert a stronger competitive effect on both native species.

Evaluation of survival patterns over time
Under current water conditions, the survival percentage of C. quitensis at high relative densities (i.e. 15 plants in monoculture, or 10 individuals of C. quitensis and 5 individuals of the other species) was significantly lower when growing with the invasive P. annua than in monoculture (Cox-Mantel test = 10.21; p = 0.031), but did not differ when growing with the native D. antarctica (Cox-Mantel test = 0.23, p = 0.97). The survival percentage of C. quitensis at low relative density declined significantly when growing with D. antarctica or with P. annua (Cox-Mantel test = 12.74, p = 0.004 and 17.86, p < 0.001, respectively). Although the survival percentage of C. quitensis decreased significantly when grown with P. annua, this trend was more evident at higher relative density, with ca. 50% mortality in the first two weeks. High mortality was not evident in other transplants in such a short time frame. On the other hand, survival in D. antarctica at high relative density decreased significantly only when grown with the invasive P. annua (Cox-Mantel test = 8.60, p = 0.033). At a low relative density of D. antarctica, survival percentage decreased significantly when grown with C. quitensis or with P. annua (Cox-Mantel test = 12.48, p = 0.021 and 16.46, p < 0.001, respectively) compared with the monoculture treatment. At low relative density of D. antarctica, 50% mortality was reached at six weeks when grown with P. annua. Finally, P. annua showed no differences in survival when growing at high relative density with either C. quitensis or D. antarctica (Cox-Mantel test = 3.30, p = 0.12 and 2.82, p = 0.33, respectively). However, when P. annua was grown at a low relative density, its survival also declined significantly (ca. 50%) in the presence of D. antarctica (Cox-Mantel test = 5.24, p = 0.039), but non-significantly in the presence of C. quitensis (Cox-Mantel test = 2.21, p = 0.069).
Under the future, less water-limited scenario, C. quitensis at high relative density showed significant mortality when growing with P. annua (Cox-Mantel test = 6.80, p = 0.034), but not when growing with D. antarctica (Cox-Mantel test = 0.12, p = 0.93). Similarly, C. quitensis at low relative density showed a sharp decrease in survival over time when growing with D. antarctica or with P. annua (Cox-Mantel test = 6.54, p = 0.038 and 8.76, p < 0.001, respectively). C. quitensis showed an abrupt decrease during the first week (ca. 60% mortality) when growing with P. annua. On the other hand, D. antarctica at high relative density showed a smooth but non-significant decrease in survival over time when grown in association with either C. quitensis or P. annua (Cox-Mantel test = 3.72, p = 0.072 and 4.68, p = 0.055, respectively). At low relative density, the survival of D. antarctica was significantly lowered when growing with C. quitensis or P. annua (Cox-Mantel test = 6.31, p = 0.038 and 12.92, p < 0.001, respectively). Finally, P. annua at high relative density showed similar survival curves over time both in monoculture and when growing with D. antarctica or C. quitensis (Cox-Mantel test = 2.11, p = 0.089 and 1.99, p = 0.11, respectively). However, at low relative density, P. annua survival declined significantly when growing with C. quitensis or with D. antarctica (Cox-Mantel test = 15.71, p < 0.001 and 11.18, p = 0.034, respectively), but only when growing with C. quitensis was a sharp decrease in survival (over 50% at four weeks) found.

Evaluation of biomass and survival at end of field experiments
Under current water conditions, the final survival percentage of both native plant species significantly decreased with increasing relative density of competitors, this being more evident when grown in the presence of the invasive P. annua (Fig. 3; Table 1). In addition, the survival percentage of P. annua was significantly decreased only at higher relative density of competitors, in this case being more evident when grown in the presence of D. antarctica (Fig. 3; Table 1). Similarly, under the simulated wetting scenario, survival of D. antarctica and C. quitensis decreased significantly when grown together with P. annua compared to the monoculture treatment (Table 1), with a greater effect apparent at higher relative density of competitors (Fig. 3). In addition, the survival of P. annua was significantly higher when grown in the presence of either of the native species, D. antarctica and C. quitensis, compared with those individuals growing in monoculture (Fig. 3; Table 1). Final biomass of both native species was significantly lower when grown in the presence of P. annua, particularly at higher relative density of the invasive species (Fig. 4; Table 1). Conversely, the final biomass of P. annua was not affected by an increase in the relative density of competitors (Table 1), and significantly increased when grown with D. antarctica or C. quitensis compared with the monoculture condition (Fig. 4). Under the wetting scenario, the negative effect of P. annua on biomass was greater for D. antarctica than for C. quitensis, and more evident with increasing relative density of competitors (Fig. 4; Table 1). However, the biomass of P. annua significantly decreased with the increase in the relative density of competitors (Table 1), this being more evident when grown with C. quitensis than with D. antarctica (Fig. 4).

Discussion
The combined outcomes of this field study explicitly demonstrate the potential negative impacts of an invasive plant on the native Antarctic vascular flora, and can inform models of how invasion scenarios are likely to play out under current and predicted future climatic conditions. Previous investigations have identified a range of Antarctic areas most susceptible to colonization (Chown et al. 2012; Pertierra et al. 2017b), and several of these areas are already being colonized (Olech and Chwedorzewska 2011; Molina-Montenegro et al. 2012a; Hughes et al. 2015). This study advances current understanding by seeking to identify impacts in the field, providing evidence of the relative densities and climatic conditions required for a non-native species to invade and then to displace its native competitors.
The marked asymmetry of competitive effects identified, based on the field experiment with P. annua and the two native species, suggests that the future spread of P. annua may result in the local displacement of both native species. In addition, the data and analyses indicate that knowledge of the relative local frequency dependence of performance between species in competition is important when evaluating the potential for invasion of non-native species in Antarctica. P. annua performed better than C. quitensis or D. antarctica at all densities of competitors tested; in general, even low densities of P. annua individuals would be sufficient to outcompete the native species and invade local vegetated areas, both under current climatic conditions and under the future, wetter scenario examined. In addition, other key aspects of invasion potential, such as propagule pressure, should be assessed (Colautti et al. 2006; Simberloff 2009) under the specific conditions found at King George Island, in order to understand the potential impacts of P. annua on community structure and functioning. Habitats on this island are representative of much of the maritime Antarctic. In the context of the large numbers of propagules estimated to be entering the Antarctic annually (> 70 000; Chown et al. 2012), including those of P. annua and other species that are pre-adapted to the environmental conditions of the region, this finding is of particular concern.
Based on the observation that P. annua currently grows both in association with other plant species and on bare ground on King George Island, we also demonstrate that the probability of invasion depends on an interaction between the native plant species present and the specific wetter climate scenario. Thus, invasion of P. annua into any new area will depend on whether the area is currently dominated by C. quitensis or D. antarctica. Under current climate conditions, the competitive effect of P. annua on C. quitensis is greater than on D. antarctica. This may be because D. antarctica has a set of functional traits that enables higher performance than C. quitensis (see Smith 2003), or because invaders that are functionally dissimilar from native species are often favored (see Richardson and Pyšek 2006; Mayfield and Levine 2010; Gallien et al. 2015). However, under a future scenario of higher soil moisture availability, P. annua exerted a weaker competitive effect on C. quitensis than on D. antarctica. This switch in the competitive effect exerted by the invasive P. annua on the native species appears to result from an increase in the competitive ability of C. quitensis under moister conditions. Alternatively, there may be an increase in niche overlap between P. annua and D. antarctica (sensu Hutchinson 1957).
Numerous studies have shown relationships between competitive effects and phylogenetic or functional structure in plant communities (Kraft and Ackerly 2010; Kunstler et al. 2012). These studies are based on the assumption that ecological similarity tends to lead to more intense resource competition than ecological dissimilarity (Kunstler et al. 2012). Ecological similarity can be quantified using functional traits, on the basis that these traits are linked to components of competitive ability such as rapid resource acquisition or biotic tolerance (see Chave et al. 2009). Nevertheless, it has been suggested that processes other than phylogenetic or trait similarity could drive competition between plants, generating a hierarchy in the competitive ability of species (Chesson 2000; Mayfield and Levine 2010). Our results suggest that the hypothesis of phylogenetic or trait similarity must be qualified as a generalized driver of competitive outcomes among Antarctic plants, because the competitive effect induced by P. annua on native plants was altered under different abiotic conditions. Previous studies have shown that P. annua exerts higher competitive effects on D. antarctica under two simulated climate change scenarios (well-watered conditions or higher nutrient availability) compared with current climate conditions, due to higher resource use efficiency (Molina-Montenegro et al. 2016). In addition, it has been shown that C. quitensis possesses high phenotypic plasticity, improving its resource acquisition and ecophysiological performance under well-watered soil conditions (Molina-Montenegro et al. 2012b). On the other hand, Casanova-Katny and Cavieres (2012) showed that D. antarctica performs better when grown in moister microhabitats, such as those provided by mosses, than on bare ground, suggesting that the physiological performance and growth of this vascular species can be negatively affected by low water availability. Thus, we suggest that competitive ability in these Antarctic plant communities could be governed by hierarchical differences driven primarily by climate conditions and secondarily by phylogenetic similarity. Such outcomes would also be in keeping with studies illustrating the importance of abiotic conditions in altering the outcome of competitive interactions (see Keddy 1989). Although these hypotheses cannot be differentiated in the current study, further field experiments can be designed to unravel the mechanisms underlying interactions between P. annua and native plants under current and future climate scenarios.
There are indications that the well-documented trend of rapid regional warming in the Antarctic Peninsula region over the second half of the twentieth century has temporarily ceased (Turner et al. 2016). However, it is clear that over the last several decades the patterns of precipitation and temperature have changed in this region of Antarctica, along with nutrient input to the soil (Vaughan et al. 2003; IPCC 2013; Turner et al. 2014), with significant impacts on plant populations and communities (Parnikoza et al. 2009; Cannone et al. 2016, 2017). In a complex global change scenario, with simultaneous variation in factors such as nutrients, temperature and water availability (see also Convey et al. 2014), formerly excluded areas may become available for colonization by species with a higher capacity to acquire resources or improve performance, such as many invasive species (Dawson et al. 2012). Scenarios that also include the complexities of community interactions (Grime 2006), with native and alien species of varying functional similarity, suggest that the trajectory of influence will differ over time, as the hierarchy of competitive ability is altered and as more complex communities potentially facilitate colonization (Bruno et al. 2003). Thus, studies such as that described here provide the basis for further investigation of how invasive plant species respond to multiple changing abiotic factors in a natural setting in Antarctica. In so doing, the work will also extend understanding of how impacts are realized more generally (see discussion in Catford et al. 2012; Richardson and Ricciardi 2013), and contribute to understanding of the role of ecological similarity in determining competitive outcomes in the context of invasion success, especially under changing climates (Chown et al. 2015b; Gallien et al. 2015; Hulme 2016).

Conservation implications
Overall, this study indicates that the substantial concerns already expressed about invasive plant species on the Antarctic continent (Shaw et al. 2014; Hughes et al. 2015) are warranted, particularly so for P. annua, which is already spreading in the region. These findings underpin the growing number of biosecurity actions in the region and the importance of adherence to the mitigation recommendations in the Antarctic Treaty's Non-Native Species Manual (CEP 2011). Clearly, interventions at an early stage in the invasion pathway will be most efficient and cost-effective (Simberloff et al. 2013), but substantial investment in their implementation is only likely if it can be demonstrated that negative effects will ensue from colonization by non-indigenous species (Hulme et al. 2013; Ricciardi et al. 2013). This study shows that, without such interventions, impacts will not only take place, but are also likely to change as water availability changes in the future along the Antarctic Peninsula. Indeed, P. annua is clearly a competitor with moderate (MO) to major (MR) impact, or at least potentially so for the continent, as defined under the recently developed unified classification of alien species based on the magnitude of their environmental impacts. Such species cause local population decline (MO) or extinction (MR) of at least one native species, and in the case of MR species lead to changes in the structure of communities and the abiotic or biotic composition of ecosystems. In consequence, much impetus exists to improve biosecurity for the region, especially given that implementation is currently inconsistent among different operators in Antarctica, with improvements required from many of them (Braun et al. 2001; Hughes and Pertierra 2016).
Nonetheless, the impact of P. annua is being realized on a continent that is considered a natural reserve, one of the planet's last wilderness areas, and one with expanding ice-free areas (Shaw et al. 2014; Lee et al. 2017). Recent reports confirming that the species has colonized areas away from stations and is expanding along the Antarctic Peninsula are concerning (Olech and Chwedorzewska 2011; Molina-Montenegro et al. 2012a; Atala et al. 2019). Given that spread can be relatively fast (see Wilson et al. 2007), and that we have demonstrated here that P. annua is capable of negatively impacting Antarctica's two native vascular species, we encourage the development of an eradication program that will also enable an effective evidence-based conservation decision-making protocol to be developed and applied in the region.

Introduction
Globally, the number of introduced, non-native species (NNS) continues to increase (Seebens et al. 2017), while biological invasions are already one of the major causes of global biodiversity loss and inflict massive economic and societal costs (Bradshaw et al. 2016; Paini et al. 2016). Yet predicting the magnitude of NNS impacts remains particularly difficult (Courchamp et al. 2017; Dick et al. 2017). To identify those NNS that are most likely to cause substantial ecological and/or socio-economic damage, a wide range of prioritization tools have been proposed. These tools are generally based on previous records of impact (Kulhanek et al. 2011). Some protocols are geared towards specific types of impacts, taxa or geographical areas (e.g. Copp et al. 2009; Sandvik et al. 2013), while others aim to be more generically applicable (e.g. Nentwig et al. 2016). These impact assessments take into account existing evidence regarding NNS impacts to aid conservation managers and policy-makers in deciding how conservation resources can best be allocated. However, impact prioritization protocols differ in how they treat available evidence on NNS impacts (e.g. relying on peer-reviewed literature only versus accepting grey literature as well), while several protocols invoke the precautionary principle (see Box 1) to guide scoring of NNS impacts. Here, we propose an integrative strategy for building and organizing the evidence base underlying NNS impact assessments. In addition, given the challenges in predicting which NNS are likely to have the most severe impacts, we support the use of the precautionary principle in the risk management stage of the risk analysis of NNS (Ahteensuu and Sandin 2012). Explicitly and publicly acknowledging the evidence included, and the choices made, in NNS impact assessments is vital for the legitimacy of any NNS management policy (Bartz and Kowarik 2019). Therefore, we suggest how one can build a comprehensive, transparent and reproducible database, and argue that applying this public database to available impact prioritization protocols will allow anyone to track and (re-)evaluate NNS rankings. In this essay, we first describe risk analysis (Fig. 1) for non-native species to provide an organized and comprehensive view of this process. We then focus on the impact assessment step, highlight some key issues, and propose a novel framework that allows us to simultaneously address many of those challenges. Finally, we specifically discuss issues regarding the application of the precautionary principle in the impact assessment stage, and discuss how this important principle may be used in NNS risk analysis.

Box 1. The Precautionary Principle.
The Precautionary Principle according to the Rio Declaration: "Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation." (UNCED 1992, Principle 15).
The Precautionary Principle according to the European Commission: "The precautionary principle may be invoked when a phenomenon, product or process may have a dangerous effect, identified by a scientific and objective evaluation, if this evaluation does not allow the risk to be determined with sufficient certainty. Recourse to the principle belongs in the general framework of risk analysis (which, besides risk evaluation, includes risk management and risk communication), and more particularly in the context of risk management which corresponds to the decision-making phase. The Commission stresses that the precautionary principle may only be invoked in the event of a potential risk and that it can never justify arbitrary decisions. The precautionary principle may only be invoked when the three preliminary conditions are met: (1) identification of potentially adverse effects, (2) evaluation of the scientific data available, and (3) the extent of scientific uncertainty." (European Commission 2000).

A primer on (NNS) risk analysis
To ensure unequivocal use of words pertaining to NNS risk assessment, this paragraph focuses on clarifying and standardizing the terminology used in this essay. In its most general form, 'risk' equals the likelihood that harm will occur multiplied by the consequences of that harm. The main standard-setting organizations (e.g. the CAC, regulating food safety; the FAO/WHO, regulating animal health; the IPPC, regulating plant health) consider a 'risk analysis' to include several discrete components, categorized as 'hazard identification', 'risk assessment', 'risk management' and 'risk communication' (Fig. 1, Fig. 2). Accordingly, the first component of a full NNS risk analysis, the 'hazard identification' step, is where it is decided for which species a risk analysis is to be conducted. This can be done proactively, for example when a horizon-scanning exercise has identified a set of potential NNS, or reactively, when an early-detection network has uncovered emerging populations of a NNS. The second component, the 'risk assessment', collates scientific evidence pertaining to the species under consideration. Risk assessments take many forms, from experimental manipulations to soliciting expert opinion (Speirs-Bridge et al. 2010; Aven 2017), but are fundamentally tools for obtaining, organizing, presenting and summarizing information for further use in management processes (Fairbrother and Bennett 1999). The risk assessment component forms the cornerstone of risk analysis and, in the context of NNS, is typically separated into four subcomponents, corresponding to distinct components of invasion, namely evaluation of the potential/likelihood of introduction, establishment, spread and impact (Fig. 1). Commonly used assessment protocols may focus on specific components of risk assessment (e.g. the impact component only). The third component is known as 'risk management'. Here, decision-makers consider the information and evidence collected in the preceding risk assessment, and weigh it against any other economic, political or societal factors. Risk management thus concerns the selection and application of specific management policies, procedures and practices to reduce or mitigate the proliferation of damaging NNS (Abt et al. 2010; Tollington et al. 2017). Fourth and finally, 'risk communication' is closely related to the principle of transparency and to the right of societies to participate in the process of decision making. Its major function should be to ensure that all information and opinions essential for effective NNS management are exchanged among stakeholders and incorporated into the decision-making process (Goldschmidt 2017).
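To make the relationship between these components concrete, the sketch below models a risk assessment as a small data structure whose subcomponent scores combine into the likelihood-times-consequence definition of risk given above. It is a minimal illustration under our own assumptions; the field names, scales and numbers are not part of any protocol cited here.

```python
# Minimal sketch of the risk-analysis terminology above; illustrative only.
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    # The four subcomponents of an NNS risk assessment (cf. Fig. 1).
    p_introduction: float   # likelihood of introduction
    p_establishment: float  # likelihood of establishment
    p_spread: float         # likelihood of spread
    impact: float           # magnitude of consequences if invasion proceeds

    def risk(self) -> float:
        """Risk = likelihood that harm occurs x consequences of that harm."""
        likelihood = self.p_introduction * self.p_establishment * self.p_spread
        return likelihood * self.impact

# Hazard identification selects the species; risk assessment fills the fields;
# risk management and risk communication then act on the resulting evidence.
assessment = RiskAssessment(0.8, 0.5, 0.6, 4.0)
print(assessment.risk())  # 0.96 on this illustrative scale
```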

Current issues in NNS impact assessment
A main challenge in NNS impact assessment is the ability to evaluate, compare or even predict the magnitude of impacts attributable to a wide range of non-native taxa, often based on limited evidence (Hulme et al. 2013). Progress has been made in devising generic impact scoring protocols that are applicable across a wide range of taxa and habitats. Recent studies have further proposed methods for ensuring quality control and reducing disagreement between expert assessors (Vanderhoeven et al. 2017). For example, González-Moreno et al. (2019) propose that, in order to reduce inconsistencies in research findings, assessment protocols should use a five-level scoring rule, maximum aggregation of impacts and the moderation of expertise requirements. However, more fundamentally, protocols differ strongly in the kind of information they accept as an evidence base. Firstly, some protocols consider only impacts originating from the invaded area for which the assessment is done (e.g. EICAT), whereas others recommend incorporating impact information from other invaded ranges, or even from the native range (e.g. NNRA; Table 1). Secondly, some protocols focus on impacts reported in the peer-reviewed literature only, while others allow any kind of grey literature or expert opinion to be used (Table 2). Thirdly, protocols also may not clearly discriminate between study designs, risking that largely anecdotal observations are considered as informative as experimental studies (Table 2). While protocols often include a general level of confidence for the overall (impact) assessment, this typically does not allow one to identify the type or source of uncertainty. Consequently, reasons for self-reported low levels of confidence in the output of (impact) assessments remain mostly unexplained.

Figure 2. Schematic representation visualizing the main differences between current practices in NNS risk assessment and the framework for evidence mapping proposed here. Building a transparent and searchable database whereby the evidence base is grouped according to the relevance of the geographical area from where the information is reported, the type of publication, study design and impact direction facilitates evaluating the robustness of NNS impact assessments, and makes them more legitimate for policy decisions.
Finally, multiple protocols invoke the precautionary principle as a guideline for building and interpreting the impact evidence base (Table 3). Impact assessment protocols typically consist of a list of questions regarding invader impacts for distinct impact categories. Using the precautionary principle as justification, several protocols instruct expert assessors to give a single answer (i.e. a single impact score) for each category, using only the most severe impact case study encountered, thereby effectively ignoring studies showing less severe impacts. Additionally, when aggregating the answers across impact categories into an overall score, multiple protocols also refer to the precautionary principle when recommending that NNS be ranked based on the most severe impact scores only. This is controversial, as the consensus view is that the precautionary principle is relevant only to the risk 'management' component, and not to risk 'assessment' (Aven 2011; Ahteensuu and Sandin 2012; Fig. 1, Fig. 2, Box 1). Indeed, the precautionary principle is a normative principle that allows policy-makers to opt for certain cost-effective measures when there are threats of serious or irreversible damage, even if there is a lack of full scientific certainty (UNCED 1992; European Commission 2000; Ahteensuu and Sandin 2012; Garnett and Parsons 2017).
These issues underlying current NNS impact assessments are problematic for several reasons. First, even if conducted for the same species and the same region, differences across protocols in the evidence utilized may lead to inconsistencies in NNS rankings (Matthews et al. 2017). This hinders the acceptance and uptake of NNS biosecurity strategies by decision makers and the general public (McGeoch et al. 2012). Second, NNS impact reports typically derive from a wide range of sources. Although observational and experimental peer-reviewed studies are an important source of impact information, many impacts are only reported in the grey literature. Especially when data are scarce, all available information can be valuable, whether it comes from grey or peer-reviewed literature. This, however, makes it especially relevant to explicitly document which data are considered: for instance, assessments that include anecdotal information and assessments restricted to experimental data inherently differ in the quality of their underlying evidence. Accepting different quality levels may result in inconsistencies in assessment outcomes (White et al. 2019). A transparent and systematic classification of the evidence base, which allows going back to the sources, is crucial to avoid a 'data laundering' process (Strubbe et al. 2011), whereby stakeholders use the results of the impact assessments to draw conclusions without being aware of the potentially limited quality of the underlying evidence. Third, the rise of 'invasive species denialism' (Ricciardi and Ryan 2017; Russell and Blackburn 2017) challenges invasion biologists to better present the available evidence, because disagreements often arise when uncertainty about impacts is confounded by differences in personal values. Minimizing social conflict in NNS management will need more than evidence alone. For example, Crowley et al. (2017a) advocate the implementation of social impact assessments ('SIA') for identifying, evaluating and addressing social costs and benefits, and for enabling meaningful public participation in management planning. Hence, especially in our contemporary 'post-truth' world (Higgins 2016), compiling and presenting a transparent and publicly available evidence base for informed risk assessment, risk management and risk communication should be a core concern for invasion ecologists.

A framework to map variation in the evidence base
To address these challenges, we propose a framework by which all available information is systematically classified and catalogued, in order to create a transparent, objective and reproducible evidence base. Multiple impact assessment protocols already require expert evaluators to document the most severe impact case studies encountered (e.g. Bacher et al. 2018), but similar reporting of all literature sources assessed is typically not mandatory. Here, instead of following protocol-specific instructions regarding eligible evidence, we suggest separating out the initial construction of the evidence base of NNS impacts (Fig. 2), making it independent of the scoring protocol. This evidence base can subsequently be used for impact scoring in combination with a chosen impact assessment protocol (and provided to stakeholders, the general public and/or reviewers). We propose (a) that this evidence base must include each and every source documenting an invader impact, not only the most severe case study, and (b) that each and every source is catalogued using at least the following four variables (see Table 4 for a summary).
A first important yet variable decision NNS assessment protocols make (see Table 1) is deciding from which 'geographical area' invader impacts are included. Invasion impacts are context-dependent, as they depend, among other factors, on the environment in which the impact occurs (Pyšek and Richardson 2010; Bartz and Kowarik 2019). For a flexible implementation of this decision, we propose that invader impacts are classified in the evidence base depending on whether the information comes from (a) the geographical area for which the assessment is performed, (b) other non-native areas invaded by the species, (c) the species' native range, or (d) captivity or cultivation. We acknowledge that there may be border cases where it could be difficult to allocate a specific study to one category or the other. Such ambiguity could be commented upon in the database so that other assessors can investigate these cases and consider categorizing them differently. Assessors may also consider investigating how alternative allocations affect the final conclusions (i.e. a sensitivity analysis). Choosing which geographical areas are relevant can depend on the goal of the assessment. For example, the EICAT protocol is based only on impacts that have been observed in the invaded area under consideration. Other protocols explicitly aim at quantifying not only the actual, but also the potential impact invaders can have in the invaded area under consideration, by incorporating impacts recorded elsewhere as a proxy (Bomford 2008; Nentwig et al. 2016; see Table 1). Indeed, both Matthews et al. (2017) and Verbrugge et al. (2010) found that the geographical area considered was a root cause of variability among NNS impact classifications. As an example, depending on the European country where the impact information was taken from, the fish species Umbra pygmaea is classified either as a low priority 'non-invasive' introduced species or as a 'high risk' invader (Verbrugge et al. 2010). When impacts are clearly labelled based on their geographical context, assessors can transparently debate and decide which evidence is or is not incorporated into the specific assessment they are carrying out, and the consequences of using different criteria on invasive species impact rankings can be transparently assessed and discussed.

Second, evidence should be classified according to its 'source type', as either (a) peer-reviewed literature, (b) non-peer-reviewed ('grey') literature or (c) unpublished data (i.e. personal communication, personal observation, unpublished data). NNS assessment protocols again differ in which source types are included (Table 2), so an evidence base that is structured accordingly allows for flexible use and evaluation of this criterion. For example, the fact that Chlamydia psittaci (the bacterium causing the zoonotic infectious disease psittacosis) is present in Belgian non-native ring-necked parakeet (Psittacula krameri) populations is only known from a single line in a grey literature report (Vangeluwe et al. 2004). Thus, the type of publication that is allowed into the evidence base, or its weight, may have a marked effect on the assessment outcome and on the identification of which kinds of impact may be most threatening.

Table 1. Geographical areas considered by a set of commonly used NNS risk or impact assessment protocols. The protocols listed in the Tables below vary in their scope, from purely impact assessment (such as EICAT and GISS) to full risk analysis tools (e.g. GABLIS). We here focus on the 'impact assessment' component common to each protocol. To illustrate how available impact assessments differ in various ways, we here highlight protocols whose impact assessment module potentially meets the minimum standards set by the European Union (Roy et al. 2014). We additionally included the more recent EICAT protocol because it has been adopted by the IUCN as their tool of choice to rank invaders based on the magnitude of their impacts.
AquaNIS: AquaNIS considers two 'impact blocks'. 'Regional impacts' involves data on species impacts in the invaded region under consideration. 'General impacts' refers to impact data from 'any world region', whereby AquaNIS does not explicitly discriminate between impacts from the native range and impacts from other invaded areas. (Olenin et al. 2014)
EFSA: EFSA instructions require consideration of impacts in other invaded regions and also allow use of impacts from species' native ranges: "A review of the type and intensity of the current environmental impact in other invaded regions (outside the risk assessment area) is required. From this information, the prevalent ecological role and the ecological interactions the pest establishes (or is expected to establish) in the current area of distribution and in its different developmental stages can be defined. If the species has not invaded any other area, or if the invasion is too recent and too little is known about its ecology in the invaded areas, the ecological role of the species as a driver of ecosystem change can be evaluated in the native distribution area." (EFSA 2011)
ENSARS: ENSARS seems to rely primarily on information from species' native ranges: "Impacts on aquaculture are determined first through questions on the impact the agent has on aquatic animal production within the existing geographic range, and whether impact is likely to be comparable in the importing country." and "Similarly, social and environmental impacts are also assessed through comparison of the original geographic range with the RA area."
EPPO: EPPO relies primarily on impacts reported from the invaded area under consideration, but also allows evidence from elsewhere, although it is not clarified whether "elsewhere" includes both species' native ranges and other invaded areas: "As far as possible, evidence should be obtained from records of invasive behavior in the area under assessment or in the EPPO region. Information on invasive behavior elsewhere may also provide guidance." (Brunel et al. 2010)
FISK: FISK impact questions focus on impacts in the introduced range only (e.g. "In the taxon's introduced range, are there known adverse impacts to ecosystem services?"); no specific questions or guidance regarding impacts in other invaded areas or in species' native ranges are given.
GABLIS: GABLIS seems to allow impact information from any area, as it states "Data used for assessment (…) may refer either to a reference area or to climatically and ecologically similar areas." Indeed, GABLIS mentions that "(…) This "invades elsewhere" criterion is one of the most important and most appropriate to carry out predictive risk assessments (Pyšek and Richardson 2007).", but also warns that "(…) transferability must be assessed with caution and on a case-by-case basis." (Essl et al. 2011)
GB NNRA: GB NNRA considers both impacts in the invaded region under consideration and impacts 'within its existing geographic range', but does not explicitly discriminate between species' native ranges and other invaded areas. (Baker et al. 2008)
GISS: GISS allows information from the invaded region under consideration, other invaded areas and the species' native range: "The impact scored by the GISS should ideally be observed in the focal invaded range. However, if the species shows no impact, for example because its density is still too low or it has just started spreading, no published information can be expected. In such cases, impact reports from other invaded areas ("impact elsewhere") can be taken into consideration and in some cases, even including impacts from the native range is justified, especially for species that are vectors of parasites or are toxic or allergenic (i.e., possess features that are unlikely to change between ranges)." (Nentwig et al. 2016)
Harmonia+: Harmonia+ allows data from any geographical region to inform the assessment, as its key guidelines state: "Third, to use cases that are similar in biology or geography when direct evidence appears lacking (the higher the similarity, the better)." (D'hondt et al. 2015)
EICAT: For so-called 'non-global' assessments, EICAT allows use of data from any other invaded region: "Non-global assessments may be carried out, based on data from the focal region or from focal regions outside the particular country or region of interest (…)". It does, however, explicitly exclude data from the native range: "Data and observations from the native range are often important components of risk assessments, but such data should not be used in estimating Current or Maximum Recorded Impacts. The EICAT scheme is purely about impact in the alien range of a species."
NORWAY SCHEME: The Generic Ecological Impact Assessments of Alien Species in Norway allows use of data from elsewhere if there is no information available from the (invaded) region under consideration. It is not clear whether this includes the use of data from the native range, but this seems acceptable judging from the following statement: "Where data on a given species are not available, from the country or region for which it is assessed, data should, in this order, be sought from: other regions with comparable ecoclimatic conditions, other regions with different ecoclimatic conditions, other, preferable closely related, species with comparable ecological and demographic characteristics." (Sandvik et al. 2013)

Table 2. Types of information sources accepted by a set of commonly used NNS risk or impact assessment protocols.
ENSARS: ENSARS relies primarily on peer-reviewed literature, but also allows 'other sources of reliable information', although it does not clarify what criteria need to be met for 'other sources' to be 'reliable': "A key feature of ENSARS is that the risk assessments are, as far as possible, informed using peer-reviewed literature or other sources of reliable information, and there is therefore a 'paper trail' that enables the justification for a decision to be reviewed and subsequently be revised, should new information become available."
EPPO: EPPO (European and Mediterranean Plant Protection Organization) allows a wide range of data sources to be considered: "Available sources of information to run the process include: NPPO data, scientific literature, personal communications from scientists and botanists, websites and databases on invasive alien plants. Existing PRAs (Pest Risk Assessments) also need to be consulted (e.g. on the EPPO and NPPO (International Plant Protection Convention) websites)." (Brunel et al. 2010)
FISK: No guidance was found regarding the types of information that are acceptable for informing FISK impact questions. A 2013 background and guidance document prepared by the 'Salmon and Freshwater Team' mentions that when answering FISK questions, the assessor should "Provide a justification for that response (i.e. bibliographic source, background information, etc.)." This seems to suggest that a wider range of sources is accepted (i.e. not only peer-reviewed literature). (Salmon and Freshwater Team 2013; Copp et al. 2016)
GABLIS: GABLIS allows impact information from a wide range of sources, as it states "Data used for assessment may result from scientific reports and peer-reviewed publications as well as from expert judgement (…)." (Essl et al. 2011)
GB NNRA: No explicit guidance could be found on which data sources are considered acceptable for informing the GB NNRA. (Baker et al. 2008)
GISS: GISS seems to rely exclusively on peer-reviewed literature, as it states that "(…) the GISS relies on published evidence of the impacts caused rather than on expert knowledge (…)" and "If no publications on impact can be found, this species cannot be scored by the GISS." (Nentwig et al. 2016)
Harmonia+: Harmonia+ does not explicitly state which documents can inform the assessment, but seems open to a wide range of sources, as it states that: "Key guidelines are, firstly, to base answers as much as possible on evidence and not on a purely hypothetical or speculative basis." (D'hondt et al. 2015)
EICAT: EICAT mentions that different data types can be used, classifying data as 'Observed' (e.g. empirical observation, designed observational studies) versus 'Inferred' (e.g. outcomes of mathematical models), but does not explicitly mention what data sources can be used. An IUCN EICAT evaluation excel sheet, however, mentions that: "Information on the impacts of an alien species may be taken from a range of sources including journal articles, books, scientific reports, websites, grey literature (unpublished) and personal communications."
NORWAY SCHEME: The Generic Ecological Impact Assessments of Alien Species in Norway accepts a wide range of data sources: "Scientific publications, reports as well as unpublished data are accepted as documentation, as long as the latter are made available by the experts. Documentation also includes reporting the complete input values of models performed, not merely their output." (Sandvik et al. 2013)

Table 3. Reference to the precautionary principle (PP) by a set of commonly used NNS risk or impact assessment protocols.
AquaNIS: No references found. (Olenin et al. 2014)
EFSA: No references found. (EFSA 2011)
ENSARS: No references found. ENSARS only uses the wording 'precautionary approach' once, but it does not refer to the interpretation of impacts. ENSARS includes a pre-screening component which corresponds to the initial hazard identification phase of the risk analysis process. Here, 'precautionary approach' is used to justify that "(…) toolkits are based on the generally accepted premise that organisms invasive in other parts of the world have an increased chance of being invasive in new areas with similar environmental conditions", and thus seems to allow the use of information from other invaded ranges as well. (Copp et al. 2016)
EPPO: No references found. (Brunel et al. 2010)
FISK: FISK does not mention the PP explicitly, but the 2013 background and guidance document prepared by the 'Salmon and Freshwater Team' mentions that, for scoring uncertainty, "A question is counted as unanswered if any of these items is not completed - in such a case, a default (precautionary) score is given (i.e. the highest possible value)." FISK thus invokes the PP to assign the highest possible uncertainty score if an assessor cannot fully answer a given question, independent of the reason that the question cannot be answered. (Salmon and Freshwater Team 2013; Copp et al. 2016)
GABLIS: GABLIS first refers to the PP in the introduction: "Management opportunities for IAS are mostly restricted to early stages of invasions (…), hence the early, ideally ex ante identification of IAS is an urgent need. The priority of this precautionary principle is recognized by the Convention on Biological Diversity (CBD 1992)." GABLIS assigns IAS to a listing approach, and invokes the PP in this listing process by stating that: "The allocation to a list is based on the precautionary approach: if at least one criterion is assessed with "yes", the alien species is assigned to the Black List". Furthermore, GABLIS invokes the PP when discussing how uncertainty is treated: "Thus, any methodology for the assessment of future impacts inevitably includes a certain probability of error resulting from insufficient data or wrong data interpretation. GABLIS covers this uncertainty by placing alien species for which deleterious impacts on biodiversity are insufficiently known on the Grey List. This is also supported by the precautionary principle of the CBD (2000, 2002)." As Genovesi and Shine (2003) put it: "Where there is a threat of significant reduction or loss of biological diversity, lack of full scientific certainty should not be used as a reason for postponing measures to avoid or minimize such a threat". Lastly, GABLIS explicitly states that "In other words, the precautionary principle should be employed as a significant guideline for assessing the risks posed by IAS (Genovesi and Shine 2003). We have explicitly included the precautionary principle in GABLIS (…)". (CBD 1992, 2000, 2002; Genovesi and Shine 2003; Essl et al. 2011)
GB NNRA: No references found in the Baker et al. (2008) publication outlining this risk assessment. The NNS website (http://www.nonnativespecies.org) does refer to the Convention on Biological Diversity (CBD), which emphasizes the need for a precautionary approach towards NNS, and mentions that the NNRA "has been developed to help facilitate such an approach in Great Britain". (Baker et al. 2008)
GISS: GISS explicitly invokes the PP once, to justify why the highest impact score should be chosen when there is conflicting evidence: "If several studies report different impact levels in the same category, the maximum is chosen as a representation of the highest potential impact a species can reach (precautionary principle)." (Nentwig et al. 2016)
Harmonia+: Harmonia+ explicitly refers to the PP when describing its key guidelines: "Second, to always employ the precautionary principle; e.g., by taking the worst-case scenario when different scenarios are possible. This is in line with a primary principle from the Convention on Biological Diversity (COP 2002)." (COP 2002; D'hondt et al. 2015)
EICAT: EICAT invokes the PP multiple times, sometimes using the wording 'precautionary approach' as a synonym: "The EICAT scheme takes a precautionary approach: when the main driver of change is unclear, it should be assumed to be the alien taxon for the purposes of the EICAT process." "We note that invasion, and by extension impact, is a characteristic of a population, rather than a species: not all populations of a given taxon necessarily become invasive. It follows that the EICAT classification of a taxon will generally reflect impact recorded from one or a small number of populations, and hence that population level impacts translate into taxon-level assessments. This reflects the precautionary principle for alien impacts, as impact caused by one population suggests the potential for other populations of the same taxon to cause similar impact elsewhere if they were transported outside of their natural boundaries." "As most taxa that are alien and have impacts somewhere have not been introduced to many of the locations where they could potentially thrive and have impacts, the vast majority of assessments will use 'focal region' data to generate a global level species assessment. Again, this reflects the precautionary principle for alien impacts, which is important as there is evidence that many alien taxa can have strong impacts in at least part of their invaded range, if distributed sufficiently widely."
NORWAY SCHEME: The Generic Ecological Impact Assessments of Alien Species in Norway invokes and discusses the PP, mainly to justify a 'One Out, All Out' scoring: "(…) a species is categorized by selecting the highest risk category of which at least one criterion is met. Criteria used to assess species should not simply be summed, because this may result in an intermediate risk category for species that score extremely high on one criterion but low on others (cf. Makowski and Mittinty 2010)." (Makowski and Mittinty 2010; Sandvik et al. 2013)

Table 4. Proposed impact evidence variables and metadata recorded for each evidence entry in an impact assessment evidence base. When assignment to a single category is difficult, this can be flagged in the comments column or the entry can be given a dual coding.
Species: scientific name of the organism under assessment; criteria for including non-native species in the assessment.
Impact category or mechanism: specific to the impact assessment protocol chosen.
Geographical area: (a) area for which the assessment is performed; (b) other non-native areas invaded by the species; (c) species' native range; (d) captivity or cultivation.
Source type: (a) peer-reviewed literature; (b) non-peer-reviewed ('grey') literature; (c) unpublished data (personal communication, personal observation, unpublished data).
Study design: Experimental, a study using a qualitative/quantitative experimental manipulation (allows inference on causality of impact); Non-experimental, a study that uses a qualitative/quantitative, but non-experimental, scientific sampling design (allows inference on magnitude but not causality of impact); Anecdotal, casual observation acquired without a sampling design (only allows inferences on presence/absence of impact, not on magnitude or causality); Indirect report, impact not observed by the person reporting it, or sources that do not report primary data (impacts cannot be verified).
Impact direction: Deleterious, evidence entry explicitly reports a deleterious impact; Beneficial, evidence entry explicitly reports a beneficial impact; No impact, covers cases where no impact is explicitly reported.
Metadata: source identifier; evidence entry identifier (for entries coming from a source containing multiple pieces of evidence); year in which evidence was made available; source language; geographical region; country; detailed location of reported impact (e.g. nearby city or coordinates); full bibliographic reference of source; expert assessor name; and a short written description of relevant evidence.
Third, the evidence should also be explicit about the 'study design'. This is important, as even peer-reviewed studies can differ strongly in the amount and quality of the evidence they provide. We therefore propose to classify the study design as either (a) an experimental study, i.e. any study using a qualitative/quantitative experimental manipulation of the mechanisms by which the invader is presumed to have an effect, so that causality can be inferred; (b) a non-experimental study, i.e. any study using a qualitative/quantitative scientific sampling design to quantify associations between NNS and impacts, without being able to definitively establish causality; (c) an anecdotal report, i.e. any casual observation acquired without a qualitative/quantitative scientific sampling design, so that presence/absence of impact can be inferred but neither magnitude nor causality; or (d) an indirect report, i.e. data not observed by the person reporting it, or sources that do not report primary data, so that impacts cannot be verified.
Fourth, the evidence should include the direction of the impacts encountered (Schlaepfer et al. 2011; Tanner et al. 2017; Dickey et al. 2018; Hagen and Kumschick 2018). Impact assessments typically only consider deleterious impacts (Baker et al. 2008), and often do not take into account evidence of no impact. Besides deleterious impacts, NNS can also have beneficial impacts, and information on such beneficial impacts may be used by policy-makers in the subsequent risk management and risk communication steps. Indeed, recent European Union legislation aimed at combatting NNS explicitly states that risk assessments should include "a description of the known uses for the species and social and economic benefits deriving from those uses" (EU Regulation 1143/2014, Art. 5, 1(g)). Along similar lines, Branquart et al. (2016) mention that when NNS impacts are offset against perceived gains, cataloguing such gains belongs within the scope of the broader risk analysis. A concrete example of such possible beneficial effects is shown by the invasion by Scotch broom (Cytisus scoparius) in New Zealand, where this plant is considered valuable by beekeepers (Jarvis et al. 2006), whereas farmers and the forestry industry consider it a pest and opt for releasing biocontrol agents. Including direction of impacts therefore will also highlight that impacts (in either direction) may not be fully objective and can be "user-dependent": some impacts may be scored differently by distinct sections of the scientific community and the general public. We therefore strongly advocate that information on absent and (apparent) beneficial invader impacts is fully included in a transparent and systematic manner in the evidence database. By making evidence of beneficial impacts part of the evidence base, policy-makers or conservation managers can rely on this information in later stages of the risk analysis process, i.e. in the risk management step (Fig. 1, Fig. 2) where any other relevant economic, political or societal factors are considered.
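As a concrete illustration of how these four variables could structure an evidence base, the sketch below encodes one database record in Python. The field names and category labels follow the text above (cf. Table 4); the class itself is our illustrative assumption, not a published schema.

```python
# Minimal sketch of an evidence-base record using the four classification
# variables proposed in the text; an illustrative schema, not a standard.
from dataclasses import dataclass
from enum import Enum

class GeographicalArea(Enum):
    ASSESSMENT_AREA = "area for which the assessment is performed"
    OTHER_INVADED_AREA = "other non-native areas invaded by the species"
    NATIVE_RANGE = "species' native range"
    CAPTIVITY_CULTIVATION = "captivity or cultivation"

class SourceType(Enum):
    PEER_REVIEWED = "peer-reviewed literature"
    GREY_LITERATURE = "non-peer-reviewed ('grey') literature"
    UNPUBLISHED = "unpublished data"

class StudyDesign(Enum):
    EXPERIMENTAL = "experimental"           # causality can be inferred
    NON_EXPERIMENTAL = "non-experimental"   # magnitude but not causality
    ANECDOTAL = "anecdotal"                 # presence/absence of impact only
    INDIRECT = "indirect report"            # impact cannot be verified

class ImpactDirection(Enum):
    DELETERIOUS = "deleterious"
    BENEFICIAL = "beneficial"
    NO_IMPACT = "no impact"

@dataclass
class EvidenceEntry:
    species: str
    impact_category: str          # protocol-specific (e.g. a GISS category)
    area: GeographicalArea
    source_type: SourceType
    design: StudyDesign
    direction: ImpactDirection
    reference: str                # full bibliographic reference of the source
    comment: str = ""             # e.g. to flag ambiguous border cases
```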
Our framework aims at strengthening the existing standards of transparency and reproducibility in NNS impact assessments, in line with current scientific trends. For example, Galanidi et al. (2018) recently applied the EICAT protocol to prioritize marine invasive fishes in the Mediterranean and published the full underlying evidence base, detailing for each impact encountered (not only the worst ones) the geographical area where the impact was recorded and the study design of the manuscript reporting it. In that sense, our proposal to build and publish the evidence base responds to needs identified by the scientific community. The criterion 'geographical area' refers to the issue of transferability of non-native species impacts, while study design and source type are both associated with the credibility and reproducibility of scientific findings. Direction of impact is included because of its relevance for the broader risk analysis process, as this information is valuable in the later risk management stage. We aim to promote such organization and reporting of impact evidence. We stress that we do not make a judgement here about which evidence should or should not be considered in invasive species impact assessment, but we do call for an evidence base that includes all known information, allowing one to transparently track which decisions any study may have made. We further note that our framework would also facilitate the interchange and publication of data sets. This can prevent unnecessary replication of literature review efforts, facilitate rapid updating, enable comparison of outcomes of assessments across different assessment protocols, and promote the involvement of other stakeholders. Technical barriers to sharing data have recently been lowered by the emergence of digital platforms such as Data Dryad, Figshare, Zenodo and the GBIF Integrated Publishing Toolkit. Impact assessment databases can be made findable, shareable and citable using resolvable Digital Object Identifiers (DOI; Kahn and Wilensky 2006). Fig. 3 provides a hypothetical example illustrating that impact scores can depend strongly on whether certain source and evidence types, or geographical areas, are included or excluded in the final impact assessment. Classifying the evidence base according to the variables outlined above, prior to the actual scoring, allows one to transparently track why impact classifications may differ between scoring protocols. White et al. (2019) recently applied the evidence base scheme proposed here to first collate evidence on impacts caused by parakeets, and then leveraged this information to carry out a GISS-based impact assessment for all parakeet species introduced to Europe. They found that the types of evidence included in assessments strongly influenced outcomes, whereby, for example, including evidence from the native range or anecdotal evidence resulted in a switch from minimal-moderate to moderate-major overall impact scores (Fig. 4). Such transparency is important for the application of assessment outcomes by different users, and supports the communicability and acceptance of assessment results (Bartz and Kowarik 2019). We should, however, note that uncertainty in NNS impact assessments has many sources (reviewed by McGeoch et al. 2012). Our framework may help by addressing the 'epistemic' uncertainty due to data quality. Having all information on available evidence at hand will help assessors in assigning impact scores and the associated uncertainty. For example, an assessor could opt for maximum scoring (and assign the invader to a 'high impact' category) but report a large degree of uncertainty (because the evidence for the more severe impacts comes from grey literature studies). That is one of the ways in which we envisage our evidence base facilitating invasive species impact assessments.

Figure 3. Hypothetical example of an impact assessment carried out following the evidence mapping framework presented here. The figure shows how impact evidence can differ across dimensions, and that consequently, in- or excluding certain classes of evidence (and how they are scored) can strongly change the final outcome of an impact assessment. The size of the dots is proportional to the number of impacts reported in the literature. In this hypothetical example, including impact evidence from native-range grey literature would result in the species under consideration being assigned a far higher threat level compared to only considering peer-reviewed studies from the invaded area under consideration. Ranking NNS based on their threat level can be done either by averaging impact scores across impact categories ('mean impact scoring'), or based solely on the most severe impact recorded ('worst-case impact scoring').
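The hypothetical re-scoring exercise of Fig. 3 translates naturally into a query over such a database. The sketch below, with invented entries and scores, shows how the same evidence base filtered under two inclusion rules, and aggregated by worst-case versus mean scoring, can yield different outcomes; the function and field names are our own assumptions.

```python
# Minimal sketch: the same evidence base under different inclusion rules.
# Entries, categories and scores are invented for illustration.
from statistics import mean

def assess(entries, allowed_areas, allowed_sources, aggregation=max):
    """Filter the evidence base by inclusion criteria, then aggregate scores."""
    scores = [e["score"] for e in entries
              if e["area"] in allowed_areas and e["source"] in allowed_sources]
    return aggregation(scores) if scores else None

evidence = [
    {"area": "assessment_area", "source": "peer_reviewed", "score": 2},
    {"area": "assessment_area", "source": "grey", "score": 2},
    {"area": "native_range", "source": "grey", "score": 4},
]

# Strict criteria: peer-reviewed evidence from the invaded area only.
print(assess(evidence, {"assessment_area"}, {"peer_reviewed"}))        # 2
# Permissive criteria with worst-case aggregation.
print(assess(evidence, {"assessment_area", "native_range"},
             {"peer_reviewed", "grey"}))                               # 4
# Same permissive criteria with mean instead of worst-case aggregation.
print(assess(evidence, {"assessment_area", "native_range"},
             {"peer_reviewed", "grey"}, aggregation=mean))             # ~2.67
```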

A prudent use of the precautionary principle
Both according to scholars and to legal entities such as the EU, the precautionary principle is part of the risk 'management' step, allowing policy-makers to take certain decisions even if there is no full scientific certainty or agreement (European Commission 2000; Ahteensuu and Sandin 2012; Fig. 1, Box 1). Introducing the precautionary principle during the impact scoring in risk 'assessment' comes down to reframing this principle from a rule that guides how we should 'act or take decisions' into a principle about what we should 'believe' (Harris and Holm 2002). Such use of the precautionary principle may be problematic for two reasons. First, the precautionary principle is likely one of the underlying causes of systematic differences between assessment outcomes, as protocols that invoke this principle score invader impacts solely on the worst recorded impacts and consequently tend to produce higher impact scores than protocols that do not rely on the precautionary principle. Such disagreement between different protocols can be problematic and undermine the credibility of assessments (see e.g. Vanderhoeven et al. 2017; Turbé et al. 2017; González-Moreno et al. 2019). Second, instead of providing 'a scientific and objective evaluation' (European Commission 2000), basing impact assessment scoring on the precautionary principle 'may encourage assessors to select information that portrays alien species in the worst possible light' (Matthews et al. 2017). Dahlstrom et al. (2011) remark that variation in the incorporation of the precautionary principle can contribute to disparate outcomes of different risk assessment frameworks. Along similar lines, Heard et al. (2011), for example, note that it has become difficult to quantify how widespread a threat the non-native chytrid fungus Batrachochytrium dendrobatidis really is to the world's IUCN Red Listed amphibians because, motivated by the precautionary principle, assessors routinely list the disease as a contributing factor, mostly without any evidence on whether the disease has actually been detected (a more recent analysis does confirm that the chytridiomycosis panzootic is a leading cause of species extinctions at a global scale; Scheele et al. 2019).
Figure 4. Effect of allowing only impact evidence from the invaded area under assessment (orange lines) versus also including evidence from other geographical areas (brown lines) on impact assessment outcomes for ring-necked (left) and monk parakeets (right) introduced to Europe. Impacts were scored according to the GISS protocol (Nentwig et al. 2016), whereby the magnitude of impact is quantified on six levels ranging from 0 (no impacts known) to 5 (the highest possible impact; see Table 2 in Nentwig et al. 2016). Spider graphs are drawn using maximum scoring (i.e. based on the worst recorded impact for each impact mechanism), based on data from White et al. (2019).

The precautionary principle is an important and explicit part of several impact assessment schemes (Table 3). Some protocols, such as AquaNIS, EFSA and EPPO, make no reference to the precautionary principle, while others explicitly invoke the use of this principle when assessing risks and impacts (e.g. EICAT, GISS, Harmonia+; see Table 3 for full details). When referenced in NNS impact assessment, we argue, the precautionary principle is typically used (1) to limit the evidence base to only the most severe documented impacts, (2) to use the most severe impact recorded as the sole criterion for ranking NNS, and (3) to lower the evidence bar needed to accept an impact. In contrast, we contend that the evidence base accompanying impact assessments should include all relevant studies encountered, not only the most severe ones. Next, impact assessments result in impact scores, and these scores need to be integrated to allow ranking of invaders according to their overall impacts. While we reject the precautionary principle as justification for using maximum impacts as the sole acceptable criterion for categorizing invaders, maximum scoring can be useful for identifying NNS which may, depending on location or context, have a single, large impact. For example, in Europe, the ruddy duck (Oxyura jamaicensis) threatens the survival of the endangered native white-headed duck (Oxyura leucocephala) through hybridization (Munoz-Fuentes et al. 2007). This 'most severe' impact alone formed the justification for an eradication campaign targeting the species (Henderson 2009). Importantly, our evidence framework allows for the easy inclusion of alternative criteria. For example, when an invader is capable of damaging its environment in multiple ways, listing all impacts and averaging them (both within impact categories, and when summarizing impact category scores into an overall impact score) gives a more representative picture of how likely and how severe impacts currently are, even if no single, major impact is known (D'hondt et al. 2015; Nentwig et al. 2016). Consider, as a theoretical example, an impact assessment scheme that employs ten different impact mechanisms for which impact scores need to be derived from the literature. Imagine two invasive species: species A attains a 'moderate' impact score in only one category, while species B attains a 'minor' impact in four categories and a 'moderate' impact in another two. Maximum scoring will conclude that species A and B pose a similar overall threat, while averaging-based scoring will assign species B a higher threat level than A. Having an evidence base that includes all known impact case studies additionally allows one, for example, to provide histograms of impact scores, further clarifying the distribution of NNS impact evidence and severity.
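Worked numerically, with an illustrative 0-2 scale (0 = no impact, 1 = minor, 2 = moderate) that is not any specific protocol's, the species A versus species B example reads:

```python
# The species A versus species B example above, on an illustrative scale.
from statistics import mean

species_A = [2, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # moderate in one category only
species_B = [1, 1, 1, 1, 2, 2, 0, 0, 0, 0]  # minor in four, moderate in two

print(max(species_A), max(species_B))    # 2 2   -> maximum scoring: equal threat
print(mean(species_A), mean(species_B))  # 0.2 0.8 -> averaging ranks B higher
```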
Lastly, we argue that calling upon the precautionary principle encourages impact assessors, most likely unintentionally, to give greater weight to any evidence suggesting an impact, regardless of the origin, type and quality of the underlying evidence (Harris and Holm 2002). Quantifying invader impacts is fraught with difficulties (see above; Courchamp et al. 2017), yet, under the guise of the precautionary principle, invasion biology sometimes drifts into strong inferences that species have a greater impact than is objectively justified by the evidence (see for example Strubbe et al. 2011). This may be motivated by concerns that decision-makers will not act when a NNS suspected of damaging its environment is not unequivocally designated a high-impact invader. Yet invoking the precautionary principle in impact assessment risks over-emphasizing likely context-dependent impacts and can mask actual differences in impact between species. This not only leads to disagreements between experts on the magnitude of NNS impacts (Crowley et al. 2017b; Davis and Chew 2017; Russell and Blackburn 2017), but may also fuel public opposition to NNS management, especially for so-called charismatic invaders such as most birds and many mammals (Dana et al. 2013; Estévez et al. 2015). In addition, using the precautionary principle in impact assessments may lead to an inflation in the number of species classified as high-impact invaders, straining and potentially misallocating the resources available for NNS management (Matthews et al. 2017). We therefore argue against the use of the precautionary principle during impact assessment.

Conclusions
We recommend that NNS impact assessments (i) focus on constructing a transparent, complete, reproducible and preferably public database that maps all evidence according to a set of main criteria, (ii) explicitly state which (often protocol-specific) criteria have been applied to select 'admissible evidence' from the database, and (iii) do not involve the precautionary principle in database construction or scoring (Fig. 1). This improves the scientific basis upon which informed decision-making (sensu Fairbrother and Bennett 1999) can take place. We contend that adopting such an approach will promote better and more societally-supported policy and management of NNS, which is ultimately needed to reduce their ecological and socio-economic impacts.

May atyid shrimps act as potential vectors of crayfish plague?

Introduction
Invasive alien species (IAS) are considered one of the major threats to native biodiversity (Sala et al. 2000), due to their wide range of negative impacts on the functioning of whole ecosystems and their communities. Moreover, IAS represent a significant source of non-native pathogens whose transmission to susceptible hosts may have unforeseeable consequences. IAS may not only be responsible for the introduction of novel disease agents but may also facilitate the spread of those already occurring in their new ranges (Peeler et al. 2011; Strauss et al. 2012). In fact, one quarter of the IAS listed among the 100 "world's worst" (Lowe et al. 2004) cause environmental impacts linked to disease emergence, acting as disease agents, vectors or reservoirs (Hatcher et al. 2012).
The emergence in Europe of the oomycete Aphanomyces astaci Schikora, the causative agent of crayfish plague, exemplifies the devastating impacts that a novel pathogen may impose on native fauna. Its spread across the continent caused irreversible declines of native European crayfish populations and still threatens their remaining stocks (Alderman 1996; Holdich et al. 2009), leading to its inclusion among the worst IAS in Europe (Vilà et al. 2010) as well as worldwide (Lowe et al. 2004).
In Europe, the spread of A. astaci is mainly facilitated by its original hosts, North American crayfish species (Holdich et al. 2009; Rezinciuc et al. 2015). Thanks to their long co-evolutionary history with this pathogen, North American crayfish species are able to efficiently limit pathogen growth and thereby act as asymptomatic carriers. In contrast, European native crayfish, and presumably all other crayfish species that do not originate from North America, are considerably more susceptible to A. astaci (reviewed in Svoboda et al. 2017). This is reflected, for instance, in the mass mortalities of the endemic Japanese crayfish Cambaroides japonicus (De Haan, 1841) in Hokkaido, Japan (Martín-Torrijos et al. 2018), as well as of the farmed Australian redclaw Cherax quadricarinatus (von Martens, 1868) in Taiwan (Hsieh et al. 2016), both caused by A. astaci. As in Europe, the C. japonicus mortalities in Japan highlight that the spread of North American crayfish species on other continents may be followed by crayfish plague outbreaks with serious negative impacts. Therefore, this crayfish pathogen should be considered a serious threat to susceptible indigenous crayfish populations around the world.
Releases and escapes from aquaculture and the aquarium trade have been assessed as the most important entry pathways of non-native freshwater species in Europe (Nunes et al. 2015). Likewise, the first introductions of North American crayfish into European freshwaters are associated with stocking of open waters and aquaculture (Holdich et al. 2009), and in recent years with illegal stocking activities, bait introductions, garden pond escapes and aquarium releases (Chucholl 2015; Patoka et al. 2017 and references therein). Indeed, the trade in ornamental crayfish species is nowadays considered the main introduction pathway of non-indigenous crayfish species into European freshwaters (Chucholl 2015; Kotovska et al. 2016; Weiperth et al. 2017, 2019a; Hossain et al. 2018). Moreover, A. astaci-infected ornamental crayfish species have already been reported in the German, Czech and even Indonesian aquarium trade (Mrugała et al. 2015; Panteleit et al. 2017; Putra et al. 2018), and hence releases of infected crayfish may further contribute to crayfish plague spread.
A. astaci was long considered to be a specialist pathogen whose host range is limited to freshwater crayfish (Decapoda: Astacoidea and Parastacoidea). Recent studies, however, confirmed the assumptions of Benisch (1940) and Unestam (1972) about the carrier status of freshwater-inhabiting crabs (Decapoda: Brachyura). The Chinese mitten crab Eriocheir sinensis H. Milne-Edwards, 1853, Potamon potamios (Olivier, 1804) and Parathelphusa convexa de Man, 1879 were observed to carry A. astaci infections that they likely acquired from coexisting crayfish populations (Schrimpf et al. 2014; Svoboda et al. 2014a; Tilmans et al. 2014; Putra et al. 2018). Schrimpf et al. (2014) also demonstrated that A. astaci could be transmitted from infected E. sinensis to the susceptible noble crayfish, Astacus astacus (Linnaeus, 1758). Moreover, resistance to A. astaci has been tested in two freshwater shrimp species (Decapoda: Caridea), Macrobrachium dayanum (Henderson, 1893) and Neocaridina denticulata davidi (Bouvier, 1904) (Svoboda et al. 2014b). The experimental infection did not cause mortality in either shrimp species; however, their apparent resistance to the pathogen was attributed to the purging effect of their frequent moulting. The results also indicated that some growth of A. astaci might have occurred in non-moulting individuals of M. dayanum and their exuviae, highlighting the potential of at least some shrimp species to act as temporary A. astaci hosts. This assumption was further supported by the detection of A. astaci in the freshwater shrimp Macrobrachium lanchesteri (de Man, 1911) coexisting with infected red swamp crayfish Procambarus clarkii (Girard, 1852) in Indonesia (Putra et al. 2018). However, no infection was detected in marine and brackish-water crabs and shrimps in the Black Sea basin despite their proximity to infected populations of Pontastacus leptodactylus (Eschscholtz, 1823), supporting the assumption that the distribution and dispersal of A. astaci are restricted to freshwaters (Panteleit et al. 2018).
Apart from their ecological significance, many freshwater shrimps and crabs are involved in intensive aquaculture and the pet trade, and hence have considerable socioeconomic importance. Their potential sensitivity to the crayfish plague pathogen might thus have far-reaching consequences (Svoboda et al. 2014b). Moreover, even if A. astaci infection is not accompanied by mortality in shrimps, they may still serve, similarly to North American crayfish in Europe, as chronic carriers of the pathogen, representing a threat to wild populations and to farms culturing susceptible crayfish species. Indeed, recent reports attribute the presence of ornamental shrimp species in European freshwaters to releases by hobbyists who keep them as aquarium pets (e.g., Klotz et al. 2013; Jabłońska et al. 2018; Weiperth et al. 2019b). The lack of reported mass mortalities of E. sinensis in Europe, where it coexists with North American crayfish in many rivers, permits the assumption that at least this crab species is resistant to A. astaci infection (Schrimpf et al. 2014). Nevertheless, the situation is less clear for freshwater shrimp species.
The present study focuses on the interactions of freshwater shrimp species with A. astaci and experimentally tests two hypotheses evaluating the potential of shrimps to act as its alternative vectors: 1) the chosen shrimp species may host A. astaci, and 2) they may transmit this parasite to susceptible crayfish. Two widespread filter-feeding atyid shrimps (Decapoda: Caridea) frequently traded for ornamental purposes were chosen: Atya gabonensis Giebel, 1875, originating from West Africa, and Atyopsis moluccensis (De Haan, 1849), from South-East Asia (Hobbs and Hart 1982; Chace 1983; De Grave and Mantelatto 2013). Both these and other freshwater shrimps may plausibly come into contact with A. astaci vectors, particularly P. clarkii, in the pet trade as well as in the wild (Turkmen and Karadal 2012; Uderbayev et al. 2017; Putra et al. 2018).

Materials and methods

Studied decapods and A. astaci strains
A. gabonensis is relatively abundant in West Africa, occurring from the Democratic Republic of the Congo to Senegal. There are also reports of its presence in South America; however, these are probably erroneous and concern its congener A. scabra (Leach, 1816) (Hobbs and Hart 1982; De Grave and Mantelatto 2013). A. moluccensis has a wide distribution ranging from Sri Lanka to Thailand, Malaysia, Indonesia and possibly the Philippines (Chace 1983). Tested individuals of both species were caught in the wild, A. gabonensis in Niger and A. moluccensis in Thailand, and subsequently obtained in the Czech Republic from a wholesaler. The Australian yabby, Cherax destructor Clark, 1936, originated from an experimental culture kept at the Faculty of Fisheries and Protection of Waters, University of South Bohemia in České Budějovice (FFPW USB), Vodňany, Czech Republic. The European A. astacus was caught, with a permit for research purposes (permit no. KUJI 39435/2011 OZP 268/2011/Vac/6), in Pařez pond, Vysočina Region, Czech Republic. The animals were acclimated to the laboratory experimental conditions for a month prior to the beginning of the experiment. The total body length of the shrimps (from the tip of the rostrum to the end of the telson) ranged from 44 to 60 mm; C. destructor and A. astacus individuals had total lengths of 42-73 and 53-78 mm, respectively.
The experimental animals were exposed to zoospores of an A. astaci strain belonging to genotype group D (Svoboda et al. 2017). The strain, originating from an infected marbled crayfish Procambarus virginalis Lyko, 2017, was obtained from the German aquarium trade (Mrugała et al. 2015) and is kept at the Finnish Food Authority, Kuopio (culture code Evira10823/13). At present, an axenic culture of this A. astaci strain is also kept on RGY agar (Alderman 1982) at the Faculty of Science, Charles University, Prague.

Experimental design
The study consisted of two consecutive experiments conducted in the facilities of the FFPW USB in Vodňany. The infection experiment lasted 120 days between March and July 2016 and was followed after 20 days by a transmission experiment that lasted a further 130 days, until December 2016 (Fig. 1).

Infection experiment
Both shrimp species and C. destructor were used in the infection experiment. C. destructor served as a sensitive control to evaluate A. astaci virulence: this crayfish species has been reported to be susceptible to an A. astaci strain from genotype group D (Souty-Grosset et al. 2006) and to two other highly virulent A. astaci strains belonging to genotype groups B and E (Mrugała et al. 2016). The experimental animals were kept separately in plastic containers (163×118×62 mm) with 1 l of aged tap water under a natural light:dark regime. The weekly water change was preceded by manual cleaning of the containers. The animals were fed daily with 1-3 pellets (Sera Granugreen, Sera, Germany) depending on their food intake. Water temperature was 19.7±0.4 °C (mean±SD), and the concentration of dissolved oxygen was 8.3±0.3 mg·l⁻¹. To avoid airborne pathogen cross-contamination, no aeration was provided and each container was covered with a plastic lid. The animals were monitored daily; dead shrimps and crayfish as well as exuviae were removed immediately and stored in 96% ethanol.
The A. astaci zoospores were produced as described in Mrugała et al. (2016). The experiment comprised three treatments: no A. astaci zoospores (negative control group) and the addition of one of two spore doses differing in concentration by an order of magnitude. The doses added to the containers were 10 and 100 spores ml⁻¹ for C. destructor, and 100 and 1000 spores ml⁻¹ for both shrimp species (Fig. 1). The shrimps were exposed to higher spore concentrations because of their presumed higher resistance to A. astaci (Svoboda et al. 2014b). The water volume of 400 ml used during inoculation (owing to the limited amount of available zoospores) was increased to 1 l on the next day to ensure suitable conditions for the experimental animals. Ten individuals of each species were used per treatment. Apart from the 18 A. gabonensis individuals used in the subsequent transmission experiment, all surviving animals were euthanised and stored in 96% ethanol after 120 days of the infection trial.

Figure 1. The study consisted of two consecutive experiments: the infection experiment (120 days long), followed after 20 days by a transmission experiment (130 days long). Ten individuals of C. destructor, A. moluccensis and A. gabonensis were used in each of the three treatments: no A. astaci zoospores (negative control group) and the addition of one of two spore doses differing in concentration by an order of magnitude. Six A. gabonensis individuals from each treatment were subsequently used in the transmission experiment, and each individual was placed separately with one A. astacus. To avoid physical interactions and predation by crayfish, A. gabonensis were placed under perforated plastic cages.

Transmission experiment
Due to the high mortality of A. moluccensis, only A. gabonensis individuals (six from each treatment, i.e. the two infection treatments and the negative control group) were used in the transmission experiment. Each potentially infected A. gabonensis was kept individually for 20 days in a plastic container; subsequently, one A. astacus individual was placed in each container. To avoid physical interactions and predation by crayfish, A. gabonensis were placed under perforated plastic cages. The animals were handled in the same way as during the infection experiment. Water temperature was 18.7±0.3 °C for the first 100 days, followed by 23.5±0.2 °C for the final 30 days to trigger shedding of the shrimp exoskeleton, as zoospore concentrations have been observed to increase during crayfish moulting (e.g., Svoboda et al. 2013). The concentration of dissolved oxygen was 8.7±0.6 mg·l⁻¹, decreasing slightly to 6.8±0.8 mg·l⁻¹ during the final 30 days. Upon termination of the experiment, all surviving animals were euthanised and stored in 96% ethanol.

DNA isolation and A. astaci detection
All experimental animals were tested for the presence of A. astaci DNA in their tissues, presumably indicating infection. Due to the limited number of available animals, we did not test any additional individuals for A. astaci infection prior to the beginning of the experiment. The surfaces of all animals were thoroughly rinsed with tap water prior to DNA isolation to remove any attached cysts. The total body length of each specimen was measured, and each animal was examined for melanised spots on its body, which may indicate a localised infection; it should be noted, however, that melanisation is a common defence mechanism in crustaceans that can have various causes (Cerenius et al. 2008). As microscopic examination of shrimp tissues for the presence of A. astaci hyphae is an inefficient technique that usually yields poor results (Svoboda et al. 2014), we omitted this procedure. From each specimen, we dissected the soft abdominal cuticle, two uropods, two legs and, if present, any melanised tissues. These mixed-tissue samples were ground in liquid nitrogen, and 50 mg subsamples were subsequently used for DNA extraction with the DNeasy tissue kit (Qiagen), as in Mrugała et al. (2015). The same procedure was used for DNA extraction from whole exuviae.

Statistical analyses
The data analyses were performed in R version 3.4.3 (R Core Team 2017) with the package "survival" (Therneau and Grambsch 2000). Specifically, we used the "survdiff" function to evaluate differences in mortality rates: 1) between C. destructor exposed to the two zoospore doses, 2) between A. moluccensis exposed to the two zoospore doses, and 3) among all three A. moluccensis treatments, including the non-infected control. The significance level was set at 0.05.
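For readers unfamiliar with the package, the following minimal R sketch reproduces the structure of such a log-rank test; the data frame layout and all values are illustrative placeholders, not the authors' original data or script:

```r
library(survival)  # Therneau and Grambsch (2000)

# Illustrative layout: one row per animal; 'time' = day of death
# (or 120 if the animal survived the 120-day trial), 'status' = 1 for
# death and 0 for censoring, 'dose' = zoospore treatment.
dat <- data.frame(
  time   = c(42, 58, 87, 66, 120, 24, 66, 104, 90, 120),
  status = c(1, 1, 1, 1, 0, 1, 1, 1, 1, 0),
  dose   = rep(c("low", "high"), each = 5)
)

# Log-rank test of mortality between the two doses, analogous to the
# chi-squared statistics reported in the Results (alpha = 0.05).
survdiff(Surv(time, status) ~ dose, data = dat)
```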

Results

Infection experiment
No A. astaci DNA was detected in any shrimp or crayfish individual from the negative control groups. All C. destructor and A. gabonensis used in the control groups survived, whereas eight out of ten control A. moluccensis died before the end of the experimental trial.
Infection by A. astaci was detected in all C. destructor individuals from the two zoospore treatments. The infection reached moderate to very high agent levels in crayfish bodies (Table 1) and was mostly higher in crayfish exuviae (Appendix 1). Only two C. destructor survived in the low-dose treatment; mortality of the others occurred mostly 42-87 days post-infection (median: 58th day). In the high-dose treatment, all crayfish died between 24 and 104 days post-infection (median: 66th day). No statistically significant difference was found between these two treatments (χ² = 2.2, df = 1, p = 0.135). Moulting and/or loss of limbs occurred shortly before crayfish death in half of the above-described cases.
A. astaci DNA was detected in the bodies or exuviae of all A. moluccensis and of the majority of A. gabonensis exposed to A. astaci zoospores. The detected agent levels in the zoospore treatments ranged from very low to low (Table 1) and tended to be higher in the exuviae of moulted individuals (Appendix 1). Furthermore, the presence of A. astaci infection was no longer confirmed in most A. moluccensis bodies after moulting (except for two individuals), indicating the loss of A. astaci infection through shedding of exuviae (Appendix 1). Contrasting mortality rates were observed between the two shrimp species: all A. gabonensis survived until the end of the experiment, while A. moluccensis suffered high mortality. In contrast to the infected crayfish, shrimps did not lose limbs prior to death, and moulting was associated with only two deaths of A. moluccensis in the low-dose treatment. Among A. moluccensis, mortality occurred 14-101 days post-infection (median: 23rd day; one surviving individual) in the low-dose treatment and 15-86 days post-infection (median: 32nd day; four surviving individuals) in the high-dose treatment. No statistically significant difference was found between these two treatments (χ² = 0.6, df = 1, p = 0.439). High mortality, however, was also observed among the control individuals, which did not differ significantly from either infected A. moluccensis group (χ² = 0.6, df = 2, p = 0.737). Specifically, eight control A. moluccensis died 14-115 days after the experiment started (median: 29th day; two surviving individuals).

Table 1. Results of the qPCR analyses of crayfish and shrimp bodies after the experimental infection. N: number of individuals of each species exposed to zoospores. Semi-quantitative agent levels based on the estimated amounts of PCR-forming units (PFU) in the reaction (according to Vrålstad et al. 2009) are: A2 (5 ≤ PFU < 50), A3 (50 ≤ PFU < 10³), A4 (10³ ≤ PFU < 10⁴), A5 (10⁴ ≤ PFU < 10⁵), A6 (10⁵ ≤ PFU < 10⁶), A7 (PFU ≥ 10⁶).
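The agent-level scale in Table 1 is simply a logarithmic binning of estimated PFU counts. As a minimal illustration (our sketch, not code from the study), the categories listed in the caption can be assigned in R as follows; PFU values below 5 fall outside the listed A2-A7 range and return NA here:

```r
# Bin estimated PCR-forming units (PFU) into the semi-quantitative
# agent levels A2-A7 of Vralstad et al. (2009), as listed in Table 1.
agent_level <- function(pfu) {
  cut(pfu,
      breaks = c(5, 50, 1e3, 1e4, 1e5, 1e6, Inf),
      labels = c("A2", "A3", "A4", "A5", "A6", "A7"),
      right  = FALSE)   # intervals closed on the left: 5 <= PFU < 50, ...
}

agent_level(c(12, 7.5e3, 2e6))  # returns A2, A4, A7
```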

Transmission experiment
Similarly to the infection experiment, no A. astaci DNA was detected in the control A. astacus and A. gabonensis. The shrimp individuals had been exposed to A. astaci spores prior to the transmission experiment, and their infection status was confirmed only after its termination. In the low-dose treatment, A. astaci DNA was detected in two shrimps and in the exuviae of another individual, whereas in the high-dose treatment it was detected in all shrimps, either in their bodies or their exuviae (Table 2). The increased temperature during the last 30 days of the experiment induced moulting in the majority of shrimp individuals: four, three and four shrimps moulted in the control, low-dose and high-dose treatments, respectively. The A. astaci infection reached very low levels in the tested shrimp bodies, and very low to moderate levels in their exuviae (Table 2, Appendix 1).

Discussion
It was assumed for decades that crayfish are the only hosts of A. astaci. Recent studies, however, provided evidence that A. astaci not only grows within the tissues of freshwater-inhabiting crabs (Svoboda et al. 2014a; Tilmans et al. 2014; Putra et al. 2018) but can also be successfully transmitted from crabs to European crayfish species (Schrimpf et al. 2014). Whether freshwater shrimps may similarly act as resistant A. astaci carriers remained, however, unresolved (Svoboda et al. 2014b). Our study corroborated the results of Svoboda et al. (2014b) by demonstrating an elevated resistance to A. astaci infection in two further shrimp species. Furthermore, the outcomes of the exploratory transmission experiment suggest that shrimp individuals previously exposed to A. astaci zoospores might, under circumstances favourable for the release of zoospores, transmit A. astaci to susceptible crayfish hosts. The elevated resistance of North American crayfish hosts to A. astaci has been attributed to the rapid response of their immune system, which efficiently limits parasite growth in their cuticles; this defence mechanism is an outcome of the long co-evolutionary history between A. astaci and its North American crayfish hosts (Unestam and Weiss 1970; Cerenius et al. 2003). It is unlikely that freshwater-inhabiting crabs and shrimps are similarly well equipped against A. astaci; nonetheless, both groups seem resistant to the crayfish plague pathogen. Our results indicate that the tested shrimp species may be capable of resisting A. astaci infection; however, their responses to the experimental treatments and holding conditions differed. The African A. gabonensis was unaffected by either exposure to A. astaci or maintenance in small containers, while the Asian A. moluccensis suffered extensive mortality, likely caused by its considerably lower food intake, which led to depletion of energy reserves. Because the death rate of A. moluccensis control individuals was comparable to that of individuals exposed to zoospores, it is reasonable to assume that A. astaci infection was not the main cause of their mortality.
The progress and success of A. astaci infection may also be influenced by the frequent moulting of its hosts, especially those exhibiting increased resistance (Vrålstad et al. 2011; Svoboda et al. 2014b; Mrugała et al. 2016). In our experiment, the exuviae shed by both shrimp species were considerably more infected than the shrimp bodies. Indeed, the parasite penetrates host bodies through the exoskeleton cuticle (Oidtmann 2012), and higher concentrations of A. astaci DNA are thus expected in this part of the host's body before rather than after moulting. Nevertheless, A. astaci DNA was still detectable in shrimp bodies or exuviae after one or even two moulting events, indicating that A. astaci either had penetrated the soft cuticle or had re-colonized the hosts after moulting via zoospores released during that process. As shedding of the old cuticle would remove any attached spores, A. astaci DNA should only be detectable from growing A. astaci hyphae in these individuals. A. astaci spores have been observed to survive for at least 14 days under experimental conditions at 15 °C (CEFAS 2000), a temperature close to that provided during our experiments. Furthermore, Svoboda et al. (2014b) were still able to detect A. astaci DNA on filters after seven weeks at 20 °C; however, it remains questionable whether any active zoospores were still present or whether the assay only picked up non-viable cells or environmental DNA. Although the presence of active A. astaci zoospores or viable cysts persisting from the original inoculation cannot be entirely excluded in our experiment, it seems unlikely considering the substantial duration of both experimental trials, the weekly cleaning of the boxes, the water exchanges during the experiments, and the rinsing of shrimp bodies prior to DNA extraction. Finally, the detection of A. astaci DNA in moulted individuals from the transmission experiment more than 8 months after the zoospore exposure highlights that the pathogen must have been able to penetrate and grow in shrimp tissues.
The growth of A. astaci in host bodies and the subsequent production of motile zoospores is a prerequisite for its successful transmission to the next host. The horizontal transmission of A. astaci between different crayfish species has been widely documented in experimental settings and aquarium facilities as well as in the wild (e.g., Vey et al. 1983; Diéguez-Uribeondo and Söderhäll 1993; Mrugała et al. 2015; James et al. 2017). Our findings highlight that shrimps might also have the potential to transmit A. astaci to susceptible crayfish species. Although only one A. astacus individual tested positive for A. astaci, we might have been unsuccessful in detecting the parasite in lightly infected individuals. This was apparently the case for the cohabiting shrimp individual, which likely harboured such a low level of infection that only trace amounts of DNA were demonstrated in its exuviae. Schrimpf et al. (2014) also failed to detect A. astaci in the tissues of four crabs even though A. astacus cohabiting with them became infected. The patchy distribution of the parasite in host tissues may decrease detection success, especially in resistant hosts (Vrålstad et al. 2009; Schrimpf et al. 2014). Future research on the conditions of A. astaci sporulation in alternative hosts should be coupled with observations of the infection's development in their tissues. This would provide important information about the mechanisms behind the horizontal transmission of A. astaci between different decapod hosts and the likelihood that alternative hosts actually release zoospores in sufficient numbers for successful spread of the disease.