FSA Quality Assurance Toolkit

4. Introduction

Last updated: 03 March 2023

When the FSA produces research (for example, designs research studies, collects data and reports the results), assesses research (for example, evaluates research reports written by third parties, including the final outputs from commissioned research) or procures research (for example, writes tender specifications or assesses tender bids written by third parties), it intends to: i) be transparent about how the evidence is assessed and used to develop its evidence base, policy recommendations and risk communication; ii) assess evidence in its proper context using the principles of quality, trust and robustness; iii) seek to minimise bias in its assessments of evidence by using professional protocols, its Scientific Advisory Committees, peer review and/or multi-disciplinary teams; and iv) be open and transparent about the conclusions it has reached about any evidence submitted to it (FSA Science Council, 2021).

Aligned with this commitment, the FSA, in collaboration with its Advisory Committee for Social Science, set out to develop a ‘good science’ Quality Assurance Toolkit (QAT) to support its members to produce, assess and procure high-quality research. This is necessary for ensuring that the time and resources devoted to producing, assessing and procuring research have the optimal impact on policy recommendations and risk communication, while avoiding any unintended consequences.

The FSA QAT is intended to be easy to use whilst also supporting transparency in how scientific evidence is produced, assessed and procured, ensuring that quality assurance is applied consistently across projects and by FSA staff. The QAT is not itself intended to provide comprehensive information about all aspects of the research process; rather, it is designed to signpost key internal and external guidelines. The FSA QAT contains three interlinked components: a ‘Guidance’ section, a series of ‘Checklists’ and a series of ‘Case studies’.

4.1 How the QAT was developed

The FSA QAT was developed between February and August 2022 through a process of co-creation. The suppliers (Dr Olga Perski and Dr Danielle D’Lima, with critical feedback from Prof Jamie Brown, University College London) worked with the FSA and the Assurance Working Group to collaboratively develop the topics to be covered within the QAT. A series of focus groups was held with FSA staff in March 2022, followed by a scoping review of internal and external guidance to help populate the QAT. The QAT was then iteratively refined through multiple rounds of feedback from the Advisory Committee for Social Science and through piloting on several study protocols, research reports and tender specifications between April and July 2022.

4.2 How to use the QAT

The ‘Guidance’ section has three parts. Part 1 contains guidance for producing, assessing, and procuring research. Part 2 contains guidance for research management and dissemination (primarily relevant for producing and procuring research). Part 3 contains additional guidance for procuring research. The Guidance has been structured in this way to ease navigation, as not all parts are relevant to every use case; for example, when assessing research reports written by third parties, aspects relating to research management, dissemination and procurement are typically not relevant.

Separate ‘Checklists’ have been provided and should be selected according to the relevant use case: producing, assessing or procuring research.

The Checklists can be used to transparently document how well the different Guidance aspects have been addressed in a research protocol, research report, or tender specification. For most Checklist items, the response options are: ‘No’, ‘Yes – partly’, and ‘Yes – fully’. Ratings should be made based on the expert evaluators’ knowledge, drawing on the Guidance and the linked internal and external resources, as well as on the ‘Case studies’, which contain moderate-to-high-quality research reports across the most common social science research methods used within the FSA (for example, focus groups, surveys and behavioural intervention trials).

It is recommended that expert evaluators first read the Guidance and Case studies and then complete the relevant Checklist, drawing on the linked internal and external resources, which contain more detailed information, where needed. If some (but not all) aspects mentioned in the Guidance have been considered or implemented, a ‘Yes – partly’ rating should be used; if most or all aspects have been considered or implemented, a ‘Yes – fully’ rating should be used. To provide an audit trail of the rationale behind Checklist item ratings, expert evaluators should also briefly justify each rating via a free-text entry.

Some Checklist items may not be relevant to the project at hand (for example, because of the specific analytical approach used). If an item is not judged to be relevant, expert evaluators should select the ‘No’ rating and use the free-text entry to document why the item is not relevant (for example, “Not appropriate for the analytical approach used”).

Some Checklist items refer to there being a good match between, for example, the research question/aim, the research design, and the analytical approach, as this is key to good-quality science. Here, expert evaluators should draw on the Guidance and Case studies to inform their rating, acknowledging that there is typically more than one legitimate way to combine research designs and analytical approaches to address the research question/aim.

4.3 Embedding the QAT within the FSA workflow

For each project, the relevant Checklist should be completed by the project officer and checked by a team leader prior to sign-off. This supports the reliability of the Checklist application, with any discrepancies resolved through discussion between the two expert evaluators. The check should also be used as an opportunity to discuss whether particular aspects of the project could be modified to improve its quality (for example, changing the research method or conducting a sample size calculation).

The decision to move forward with a project depends on the overall quality impression (that is, a judgment made by the two expert evaluators following completion of the relevant Checklist) and a consideration of the local context (for example, whether there is an urgent need for the research and what resource is available for the project). In some cases, it may therefore be preferable to move forward with a lower-quality project because of an urgent need for the research. Such considerations should be documented prior to sign-off, and a copy of the completed Checklist should be stored alongside the project materials for transparency.