Once, I decided to participate in an online citizen science project. I was surfing the Web and noticed that some Internet-based projects require registration to join and some do not. I picked Tomnod, one where I didn't have to sign up, and started tagging satellite images that showed crabeater seals. The instructions were very simple: 'Yes' if I see seals, 'No' if I don't. Time went by, and after analyzing about 20 images I got curious: "What will happen with my answers?" "Are my answers correct?" "Have I analyzed more images than anyone else?" "Will I get feedback on my work?" "Will I be informed about new projects?" Interestingly, I even got curious about the lives of the seals in the images: "How old are they?" "Have they had enough food today?"
And here is the problem: some citizen science platforms don't give feedback to their users. I didn't even get a "Thank you for your work for Tomnod," so I didn't feel good about the platform in general. I shared my thoughts with Keith Larson, my supervisor at Umeå University and the leader of the group that works on technology implementation for citizen science within the ARCS project. We agreed that the absence of feedback in a citizen science project doesn't feel right. Engaging people and collecting data without a login resembles the business marketing model of brand-customer relations. In this model, the citizen scientist is a shopper, or a consumer if you prefer, and the citizen science project is a company. If customers are committed to your brand, to your company, they come back again and again until the project ends. Do we really want to create a structure where citizen science uses all these marketing tools? Some citizen science platforms are great, but they run a big risk: without fundamental feedback that helps people understand why and what they are doing, the platforms will lose their audience. Moreover, people will probably lose interest in participating not only in a specific project but in citizen science in general. Yes, it might be difficult to provide individual feedback to each participant in a project, especially when thousands of volunteers are collecting data, but is there a way to optimize the process of giving feedback?
I think computers can help scientists a lot in improving two-way communication with the individuals motivated to participate in citizen science projects. Some computer-based processes can help retain volunteers in citizen science initiatives. I was looking for solutions in the literature database of the ARCS project, of which I'm a member, and I found a very good case. Van der Wal et al. (2016) give an example of how computing science frameworks allowed the automated generation of informative feedback to citizen scientists, fostered learning and volunteer engagement, and laid the foundation for effective and long-lived citizen science projects. Participants in the BeeWatch program got not only a "Thank you" for submitting photos of bumblebees, but also feedback on whether their identification of a specimen was correct, the likely reasons for a misidentification, and the key features that can facilitate correct identification in the future. When you participate in such a program, you realise that you are not just helping to collect data: you also learn more about different species, share that knowledge with friends, and simply have fun.
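To give a feel for how such automated feedback might work, here is a minimal Python sketch. To be clear, this is not BeeWatch's actual system (which uses natural language generation over expert-verified records, as Van der Wal et al. describe); the species names, feature descriptions, and message wording below are invented for illustration only.

```python
# Toy sketch of automated identification feedback for a citizen science
# platform. A volunteer submits a species name for a photo; an expert (or
# model) later verifies it, and we generate a personalised message that goes
# beyond a bare "thank you". All data here is hypothetical.

# Hypothetical lookup of distinguishing features per species.
KEY_FEATURES = {
    "Bombus terrestris": "buff-white tail and dark yellow bands",
    "Bombus lucorum": "pure white tail and lemon-yellow bands",
}

def build_feedback(submitted: str, verified: str) -> str:
    """Return a feedback message for one photo submission.

    submitted -- the species name the volunteer entered
    verified  -- the species name confirmed by an expert
    """
    if submitted == verified:
        return (
            f"Thank you! Your identification of {verified} is correct. "
            f"Key features you may have spotted: {KEY_FEATURES[verified]}."
        )
    # On a mismatch, explain the correction and highlight the features
    # that help tell the species apart next time.
    return (
        f"Thank you! This specimen was verified as {verified}, not "
        f"{submitted}. Look for the {KEY_FEATURES[verified]} "
        f"to recognise it in the future."
    )
```

The point of the sketch is that the expensive part, expert verification, happens once per record, while the per-volunteer message is generated automatically, so even a project with thousands of participants can give everyone individual, educational feedback.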
Van der Wal, R., Sharma, N., Mellish, C., Robinson, A., & Siddharthan, A. (2016). The role of automated feedback in training and retaining biological recorders for citizen science. Conservation Biology, 30(3), 550–561. doi:10.1111/cobi.12705