When patients are directly involved in research, the results better reflect the patient's perspective and clinicians can make more informed decisions. Researchers recently found that crowdsourcing platforms, such as Amazon's Mechanical Turk (MTurk), are an efficient way to reach large groups of individuals for input on prevalent conditions, including back pain.
In a study published in the Journal of Medical Internet Research, researchers conducted surveys on MTurk to assess participants' back pain and to collect the research topics participants considered priorities. Respondents also completed the 24-item Roland Morris Disability Questionnaire (RMDQ), which was used to categorize them as having or not having back pain, and then ranked the research topics they found most important.
“Modern healthcare decision making incorporates expert opinion, practice standards, and the individual preferences and values of patients themselves,” the authors wrote. “The patient’s voice is essential to ensuring that treatment plans address what is most important to them. In support of patient-centered care, patient-centered outcomes research equally seeks to engage patients and the public in designing and implementing research studies.”
The study consisted of 2 screening waves. The first screened 2819 individuals for back pain over 33 days; of these, 350 who screened positive completed the research prioritization activity. In the second administration, 397 participants without back pain completed the prioritization. Demographics, including age, education, marital status, and employment, were similar between those with and without back pain.
Additionally, the 2 groups agreed on 4 of the top 5 and 9 of the top 10 research priorities. However, participants with back pain ranked “treatment—self-care” as their top research topic, whereas those without back pain ranked “diagnosis—causes of back pain” first. Despite these differences in rank position, the top research priorities were largely consistent between the 2 groups.
“We found that, while the groups ranked research topics similarly, there were subtle differences in the content and quality of free-text comments,” the authors concluded. “Given these differences, we suggest that supplemental efforts may be needed to augment the reach of crowdsourcing in obtaining the patient’s voice, especially from specific populations.”
Crowdsourcing platforms are effective for reaching large groups of individuals; however, further research is needed to understand how well crowdsourcing captures the perspectives of specific populations.