Model design and study sites
Field trials were carried out between 26th February and 17th March 2020, at two sites in Cornwall (UK): a woodland (Cosawes wood, 50°11’51” N, 5°7’33” W) and farmland managed for conservation purposes (Trelusback farm, 50°12’5” N, 5°12’26” W).
Targets were designed to be the shape and size of a hare (40 cm tall × 26 cm wide), chosen as an easily recognisable natural shape, much larger than typical targets in detection experiments but small enough to ensure the task remained difficult. The targets were laser-cut from 6 mm thick birch plywood (The Grain Ltd., Liskeard, UK) and inserted into the ground with a wooden spike. Each model was painted a uniform colour representing a microhabitat specialist or generalist camouflage strategy, based on real colours found in the natural environments in which the targets would be seen (Fig. 5).
Photography and image analysis
Calibrated photographs of field locations were used to identify specialist and generalist colours, and later to analyse colour differences between the models and the exact natural backgrounds against which they were seen. All photographs were taken with a SONY A7 camera fitted with a 28–70 mm lens (SONY, Tokyo, Japan) with fixed settings (RAW, f/8, ISO 400, white balance set to cloudy), in diffuse lighting conditions. Each image included a Classic ColorChecker® chart (X-Rite Inc., Grand Rapids, USA) to enable normalisation with respect to light levels and to provide a scale bar. Camera calibration and all image analyses were carried out with the Image Calibration and Analysis Toolbox (MicaToolbox) [63, 64] in ImageJ. Cone catch models for human vision were created from photographs of the colour chart, using custom plugins in the toolbox. Images were linearised, normalised and converted to coordinates in human CIE XYZ space, and from there into the CIELab colour space, a representation of human colour discrimination that approximates a perceptually uniform colour space and is widely used to assess human colour perception [67, 68]. CIELab coordinates capture both achromatic and chromatic information, defining a colour along three axes representing lightness (L) and colour from green to red (a) and from blue to yellow (b).
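The XYZ-to-CIELab step above follows a standard published transformation; as an illustration, a minimal Python sketch is given below. It assumes a D65 reference white, which the study does not state explicitly (MicaToolbox handles the calibration internally), so treat the white point as an assumption.

```python
# Sketch of the standard CIE XYZ -> CIELab conversion used to move
# calibrated camera measurements into an approximately perceptually
# uniform colour space. The D65 reference white below is an assumption;
# the study's own pipeline (MicaToolbox) performs this step internally.

def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    """Convert CIE XYZ tristimulus values to CIELab (CIE 1976)."""
    def f(t):
        delta = 6 / 29
        return t ** (1 / 3) if t > delta ** 3 else t / (3 * delta ** 2) + 4 / 29

    Xn, Yn, Zn = white
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    L = 116 * fy - 16    # lightness: 0 (black) to 100 (white)
    a = 500 * (fx - fy)  # green (-) to red (+)
    b = 200 * (fy - fz)  # blue (-) to yellow (+)
    return L, a, b
```

By construction, the reference white maps to L = 100 with a = b = 0, and the cube-root compression of each channel is what gives the space its approximate perceptual uniformity.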
The hare-shaped targets were designed to represent camouflage strategies that either matched a specific microhabitat type within a habitat (“microhabitat specialist”) or adopted a compromise, global-matching solution resembling the average colour of the visual landscapes (hereafter “generalist”). To identify appropriate colours at our field sites, we photographed likely target locations in each area (Nwood = 22, Nfarm = 20) in January and February 2020. In each habitat, two common types of natural element were selected as the basis for microhabitat specialist targets: leaf litter and bramble/other dark green shrubs in the woodland, and grass and bracken/other dried, brown vegetation in the farmland. In each image, transformed to CIELab space, square selections representing 10 cm² areas of the two relevant specialist elements were taken using the rectangle selection tool. Generalist colours were based on the average colour of the entire visual scene in these images, across both farm and woodland, excluding colour standards, sky and large man-made objects.
Ideal target colours were then compared to 578 paint samples from the Valspar® range (Valspar, Wokingham, UK). Sample cards were photographed and analysed using the same technique described above. We first identified paints whose CIELab values fell exclusively within the range of values for a single set of target colours (grass, bracken, leaf litter, bramble or whole images). If more than two colours fit this criterion, we selected the two best matches in terms of distance in CIELab space (∆E) between the paint colours and the median target colour, provided that the paints did not match the colours of any other target group equally closely. ∆E was calculated according to the CIEDE2000 formula [68,69,70], an adjustment to Euclidean distance officially adopted by the International Commission on Illumination (CIE) in 2001, which accounts for some remaining perceptual non-uniformity in the CIELab space and, under appropriate viewing conditions, better predicts colour discrimination by humans than previous formulations [69, 73, 74]. This protocol yielded a total of ten paint colours: two for each type of microhabitat specialist or generalist treatment (Fig. 5; Additional file 4: Table S6). Samples of the chosen paints were then applied to plain birch plywood to check that the actual paints fulfilled these criteria. Painted squares were photographed outdoors and analysed as above, and ∆E values (CIEDE2000) between the paints and the target selections were calculated (Additional file 4: Table S6). After the field trials, we re-analysed the colour difference between the painted hare models and all natural areas of interest, based on photographs of the models in situ, to verify that the microhabitat specialist targets were indeed best matched to the areas they were intended to represent [see Additional file 3].
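The CIEDE2000 distance used throughout can be illustrated with a pure-Python transcription of the published CIE formula. Note that the study itself used the DeltaE function in R's ‘spacesXYZ’ package; this sketch assumes the standard parametric weights kL = kC = kH = 1 and is offered only to make the computation concrete.

```python
import math

# Illustrative transcription of the CIEDE2000 colour-difference formula
# (the study used R's spacesXYZ::DeltaE). Standard weights kL=kC=kH=1.
def ciede2000(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2

    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    Cbar = (C1 + C2) / 2
    # Chroma-dependent rescaling of a* to correct neutral-axis behaviour
    G = 0.5 * (1 - math.sqrt(Cbar ** 7 / (Cbar ** 7 + 25 ** 7)))
    a1p, a2p = (1 + G) * a1, (1 + G) * a2
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)

    h1p = math.degrees(math.atan2(b1, a1p)) % 360 if C1p else 0.0
    h2p = math.degrees(math.atan2(b2, a2p)) % 360 if C2p else 0.0

    dLp, dCp = L2 - L1, C2p - C1p
    dh = h2p - h1p
    if C1p * C2p == 0:
        dhp = 0.0
    elif abs(dh) <= 180:
        dhp = dh
    else:
        dhp = dh - 360 if dh > 180 else dh + 360
    dHp = 2 * math.sqrt(C1p * C2p) * math.sin(math.radians(dhp) / 2)

    Lbp, Cbp = (L1 + L2) / 2, (C1p + C2p) / 2
    if C1p * C2p == 0:
        hbp = h1p + h2p
    elif abs(h1p - h2p) <= 180:
        hbp = (h1p + h2p) / 2
    else:
        hbp = (h1p + h2p + 360) / 2 if h1p + h2p < 360 else (h1p + h2p - 360) / 2

    T = (1 - 0.17 * math.cos(math.radians(hbp - 30))
           + 0.24 * math.cos(math.radians(2 * hbp))
           + 0.32 * math.cos(math.radians(3 * hbp + 6))
           - 0.20 * math.cos(math.radians(4 * hbp - 63)))
    dtheta = 30 * math.exp(-(((hbp - 275) / 25) ** 2))
    RC = 2 * math.sqrt(Cbp ** 7 / (Cbp ** 7 + 25 ** 7))
    SL = 1 + 0.015 * (Lbp - 50) ** 2 / math.sqrt(20 + (Lbp - 50) ** 2)
    SC = 1 + 0.045 * Cbp
    SH = 1 + 0.015 * Cbp * T
    RT = -math.sin(math.radians(2 * dtheta)) * RC

    return math.sqrt((dLp / (kL * SL)) ** 2
                     + (dCp / (kC * SC)) ** 2
                     + (dHp / (kH * SH)) ** 2
                     + RT * (dCp / (kC * SC)) * (dHp / (kH * SH)))
```

The weighting functions SL, SC and SH, and the rotation term RT, are the corrections for perceptual non-uniformity mentioned above; with them removed, the formula collapses to plain Euclidean distance in CIELab.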
A total of 24 volunteers, aged 19 to 47 (Nfemale = 17, Nmale = 7), participated in the experiments; 15 performed the search task in both the wood and the farm, while the others were only able to visit a single location, yielding a total of 39 trials (Nfarm = 18, Nwood = 21); no further data collection was possible due to restrictions linked to the Covid-19 pandemic. Of those who completed both trials, 12 of 15 participants were tested in the farm before the wood. A colour vision test using Ishihara plates (24-plate edition, Kanehara Trading Inc., Tokyo, Japan) was carried out in the field prior to the search tasks; a single participant did not pass this screening, and subsequent analyses were carried out both with and without their trials [see Additional file 5]. Volunteers were recruited by word of mouth, compensated for their time according to university guidelines for participation payments to research volunteers, and provided written consent for their results to be used in this project.
At each field site, both woodland and farmland, 20 model hares (two of each of the ten colours: eight specialist and two generalist) were set out at fixed positions, a minimum of 30 m apart, on either side of a predetermined path. Models were placed in a random order at the start of each day of field trials, then moved along by one position after each participant had completed the trial, so that every volunteer experienced a different combination of model colours and background locations. Along each transect, equal numbers of targets faced left and right, in a randomised order. Each target was visible from a minimum of 30 m away as volunteers walked along the route, but, due to differences in topography, path layout and the presence of occluding vegetation, the maximum and minimum detection distances for targets varied between positions; these distances were recorded and used to standardise detection distance in subsequent analyses. Volunteers were tested individually, with an experimenter walking behind to guide them without influencing their search. They were encouraged to walk at a comfortable pace and search as they moved, without stopping to scan the scene; when they spotted a model, they stopped and the experimenter recorded the distance between the subject and the model (detection distance) using a laser rangefinder (MLR01, Tacklife, USA). Participants also took a photograph of the target from where they stood, using the same equipment and settings as the photography for image analysis, to preserve a record of the viewing conditions. Based on these images, weather conditions were later classified as overcast or sunny for each detection event; experimenters noted conditions when targets were missed.
Analyses of target camouflage
To analyse landscape coloration in the specific locations in which targets were seen by volunteers, a new set of field photographs was taken at the end of the experiment. Each target location was photographed with a model hare in place (painted pink – “Pinkberry Passion”, R65E – to stand out against the natural backgrounds), from 10 and 30 m away, with the same camera equipment and settings as above and a 70 mm zoom lens. Images were scaled to 0.8 and 0.3 pixels/mm, respectively, then processed and transformed to CIELab space as described above. In each image, we selected the hare target using the colour threshold tool in ImageJ and defined two further areas for analysis: the immediate surrounds, a band as wide as the hare target is tall (40 cm) around the target, and the whole visual scene, excluding the skyline and large man-made structures (see Additional file 6: Fig. S5), from which we measured mean CIELab values. A narrow band of pixels (4 and 2 pixels wide for the 10 and 30 m images, respectively) was excluded around the outline of the model to ensure that no model pixels were mistakenly included in the background zones. The painted targets of the different colours used in the experiment (Nmodels = 10) were also photographed and analysed in the same way: a large area in the centre of each hare was selected using the polygon tool in ImageJ, from which the average colour was extracted. Colour differences between every target type and all background areas were once again measured in ∆E (CIEDE2000).
Photographs of the pink model hare in situ, taken from 10 m away, were also used to quantify the relative size of the hare-shaped targets and of the areas resembling each microhabitat (grass, bracken, bramble and leaf litter) in the visual scenes. Using the MicaToolbox Quantitative Colour Pattern Analysis (QCPA) framework, images in human CIE XYZ space (N = 40) were first smoothed with the receptor noise limited (RNL) ranked filter tool (Weber fraction = 0.05, Weber fraction for luminance = 0.1, kernel radius = 3, falloff = 2, 3 iterations) to facilitate clustering. They were then transformed to CIELab space, and the built-in Naïve Bayes classifier tool from the MicaToolbox QCPA was applied to segment the image (excluding the target and exclusion zone), based on the means and standard deviations of the four microhabitat areas initially selected to design the targets. This process assigns each pixel to a microhabitat cluster based on its similarity to the colour of that microhabitat type. Finally, cluster particle analysis was used to measure the area of every individual patch belonging to each cluster, and the proportion of the total area occupied by pixels corresponding to each microhabitat was calculated for both the whole image and the near area band around the targets.
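The pixel-classification step above can be sketched as a Gaussian naive Bayes classifier: each CIELab pixel is assigned to the microhabitat whose per-channel mean and standard deviation (estimated from the training selections) maximise its likelihood. The microhabitat statistics in this sketch are invented for illustration; they are not the values measured in the study.

```python
import math

# Minimal Gaussian naive Bayes pixel classifier in the spirit of the
# MicaToolbox QCPA tool. The (mean, sd) statistics per L, a, b channel
# below are hypothetical placeholders, NOT values from the study.
MICROHABITATS = {
    "grass":       [(55, 8), (-18, 5), (30, 6)],
    "bracken":     [(45, 9), (10, 4),  (25, 7)],
    "leaf_litter": [(40, 7), (8, 4),   (22, 6)],
    "bramble":     [(30, 6), (-12, 5), (15, 5)],
}

def log_likelihood(pixel, stats):
    """Sum of independent per-channel Gaussian log-densities."""
    ll = 0.0
    for x, (mu, sd) in zip(pixel, stats):
        ll += -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))
    return ll

def classify(pixel):
    """Assign a CIELab pixel to its most likely microhabitat cluster."""
    return max(MICROHABITATS, key=lambda m: log_likelihood(pixel, MICROHABITATS[m]))

def cluster_proportions(pixels):
    """Proportion of an image (iterable of Lab pixels) in each cluster."""
    counts = {m: 0 for m in MICROHABITATS}
    for p in pixels:
        counts[classify(p)] += 1
    n = len(pixels)
    return {m: c / n for m, c in counts.items()}
```

The final `cluster_proportions` step corresponds to the per-microhabitat area proportions computed for the whole image and the near band; the QCPA tool additionally applies the RNL ranked filter beforehand, which this sketch omits.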
Online experiment design
When playing the game, participants were shown a set of 20 slides, each with a single hare target to locate: each set included an equal number of backgrounds from the wood and farm environments, with no repeats, along with an equal number of target hares facing left or right, and of every colour, in a randomised order. To create variation in difficulty and maintain interest, 8 out of 20 slides featured small crop size images, where the hare was larger and thus easier to find, and the remainder were large crop images. Participants were given 10 s to find the target in each slide, and received feedback on their success. If they correctly clicked on the target, a ‘positive’ sound was played and a green circle appeared around the hare before the background faded away. By contrast, clicks in incorrect locations triggered a ‘negative’ sound, and, if the target was not located within the time limit, a red circle highlighted its position. The timing and position of all clicks, including misses in incorrect locations, were recorded.
All colour matching analyses and statistical analyses were carried out in R, version 3.5.2 (“Eggshell Igloo”). The DeltaE function in the ‘spacesXYZ’ package was used to calculate ∆E (CIEDE2000).
The effect of camouflage strategy on the probability of detection was tested using Cox mixed effects survival models, implemented with the package ‘coxme’. In all models for the field trials, detection distance was used as a measure of detection risk for the camouflaged targets. To account for variation in how far models at different transect positions could physically be seen, this distance was scaled as a proportion of the maximum possible viewing distance at each location, providing a measure of relative detection distance. To match typical survival model outputs, in which increasing time to capture indicates better survival, we then took the inverse of this relative distance, so that increasing values represent an increase in the relative distance participants walked towards the target model from its theoretical maximum viewing distance before detecting it. The full survival model included strategy, habitat (farm or woodland) and weather, with their pairwise interactions, as well as presentation order, as fixed effects, and subject ID and position as random effects, to account for variation in ability between participants and in difficulty between specific sites along the transect. Model simplification using likelihood ratio tests was performed to determine the significance of fixed effects on survival probability. Final survival models were relevelled to provide a hazard ratio (HR) for the effect of being a generalist rather than a specialist, where an HR greater than 1 indicates that the probability of detection increases with this strategy, and an HR below 1 that it decreases. This analysis was then repeated with the strategy variable replaced by the specific paint colour applied to the models (two generalist and eight specialist colours), to check that testing for an overall effect of strategy did not mask the success of particular specialist colours.
Where a significant interaction with habitat was found, separate models were then run for the farm and woodland in turn.
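The detection-distance standardisation described above can be made concrete with a short sketch. This follows one plausible reading of "took the inverse": the survival-like response is one minus the scaled detection distance, so that a target detected immediately at its maximum viewing distance scores 0 and a target approached very closely scores near 1. Variable names are illustrative.

```python
# Sketch of the detection-distance standardisation for the survival models:
# scale detection distance by the maximum possible viewing distance at each
# transect position, then invert so larger values mean the participant
# walked further before detecting the target (analogous to surviving longer).
# This encodes one plausible reading of the text; names are illustrative.

def relative_survival(detection_distance, max_viewing_distance):
    """Return 1 - (detection distance / maximum viewing distance)."""
    rel = detection_distance / max_viewing_distance
    if not 0 <= rel <= 1:
        raise ValueError("detection distance must lie within the viewing range")
    return 1 - rel
```

For example, a target first visible from 40 m but only detected at 10 m yields 1 - 10/40 = 0.75, i.e. the participant covered three quarters of the available approach before detection.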
To further investigate the factors affecting detection risk, we performed two additional series of analyses. To test the effect of similarity between the colours of the targets and the areas they were seen against, the mixed effects Cox survival models described above were re-run with quantitative measures of colour differences, in ∆E, between models and different background areas (near zone and whole image) as explanatory variables. Their performance in explaining variation in detection risk was compared by computing the Akaike Information Criterion (AIC) for each model; models with ΔAIC values greater than 6 were considered to differ substantially in their explanatory power. Finally, we tested whether the proportion of the scene visually similar to each microhabitat affected detection risk for the microhabitat specialist targets in the field. Survival analyses were re-run on the field data restricted to only microhabitat specialist target types, with the proportion of area occupied by the same microhabitat cluster as the target, in either the whole image or near area band, as a continuous explanatory variable. In all survival models, subject number and position were included as random effects, and the proportional hazards assumption was verified.
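The AIC comparison above can be sketched as follows, using AIC = 2k - 2·lnL and the stated ΔAIC > 6 threshold. The model names, log-likelihoods and parameter counts here are hypothetical, purely to show the mechanics.

```python
# Illustrative AIC-based comparison of candidate models, mirroring the
# comparison of colour-difference predictors described above.
# Log-likelihoods and parameter counts are hypothetical placeholders.

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2*lnL."""
    return 2 * n_params - 2 * log_likelihood

def compare_models(models, threshold=6.0):
    """models: dict name -> (logLik, k).
    Returns dict name -> (AIC, delta-AIC vs best, substantially worse?)."""
    aics = {name: aic(ll, k) for name, (ll, k) in models.items()}
    best = min(aics.values())
    return {name: (a, a - best, a - best > threshold)
            for name, a in aics.items()}

models = {
    "deltaE_near_zone":   (-512.3, 5),  # hypothetical fit
    "deltaE_whole_image": (-518.9, 5),  # hypothetical fit
}
result = compare_models(models)
```

With these made-up values the whole-image model has ΔAIC = 13.2 relative to the near-zone model and would be judged substantially worse under the stated threshold.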
For the online game, data downloaded from the server on 15th February 2021 recorded a total of 2906 plays. Participants playing on mobile devices were excluded due to the small screen size, as were an additional three plays in which results for only 19 of the 20 slides were recorded, leaving 2804 plays for analysis from 1955 unique players. As for the field trials, results were analysed with Cox mixed effects survival models, using the package ‘coxme’. The main model included strategy and slide number as fixed effects, with participant number and image ID (corresponding to its position in the field) as random effects, to account for variation in individual performance, including differences in the devices used to play the game, and for variation in task difficulty based on the specific position of the target in the photographs. Target location relative to the centre of the screen has been shown to be a significant predictor of detection times in similar experiments, with more central targets easier to find, and the size of the target, determined by image crop size, was also expected to be important. Including distance from the screen centre and crop size in the model led to non-proportional hazards, so the model was instead stratified by crop size (large or small) and by distance from the centre, discretised into quartiles, to account for these effects. A second model was then fitted with strategy replaced by hare colour. For all coxme models, for both field and computer results, we verified that the proportional hazards assumption was satisfied, using diagnostic plots and the cox.zph function in the package ‘survival’, applied to the equivalent coxph models with no random effects and, where possible, to coxph models with random effects included one at a time as frailty terms; deviations flagged by the cox.zph test were tolerated depending on inspection of plots of the Schoenfeld residuals.