Confirmability questions how far the research findings are supported by the data collected. It is a process to establish whether the researcher has been biased during the study, given the assumption that qualitative research allows the researcher to bring a unique perspective to it. An external researcher can judge whether this is the case by studying the data collected during the original inquiry.
To enhance the confirmability of the initial conclusion, an audit trail can be completed throughout the study to demonstrate how each decision was made.
Credibility involves establishing that the results of research are credible or believable. Sometimes, especially in qualitative research, it is only the participants of the research who can legitimately judge the credibility of the results.
This is why it is important to give evidence and to show the reader that the research is credible; demonstrating credibility is what earns the reader's trust in the research. A thorough understanding of bias and how it affects study results is essential for the practice of evidence-based medicine.
Those who were not working in that ED during the implementation period were not eligible to participate, even if they had previous working experience in other EDs.
We used maximum variation sampling to ensure that the sample reflected a diverse group in terms of skill level, professional experience and policy implementation [28]. In summary, over a period of 7 months (August to March), we identified all the potential participants and conducted interviews (5 were unable to participate due to workload availability). The overall sample comprised a cohort of people working in different roles across 16 hospitals.
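Maximum variation sampling of this kind can be approximated with a simple greedy selection that keeps picking the candidate who adds the most not-yet-covered attribute values. The sketch below is illustrative only; the attribute names and candidate records are hypothetical and not drawn from the study.

```python
def maximum_variation_sample(candidates, attributes, n):
    """Greedily pick up to n candidates that maximise coverage of attribute values.

    candidates: list of dicts, e.g. {"role": "nurse", "hospital": "H1"} (hypothetical fields).
    attributes: the keys to diversify on.
    """
    chosen, covered = [], set()
    pool = list(candidates)
    while pool and len(chosen) < n:
        # Score = how many not-yet-covered (attribute, value) pairs a candidate adds.
        best = max(pool, key=lambda c: sum((a, c[a]) not in covered for a in attributes))
        chosen.append(best)
        covered |= {(a, best[a]) for a in attributes}
        pool.remove(best)
    return chosen
```

With ties, this picks the first best-scoring candidate; a production version might randomise tie-breaking to avoid ordering bias.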
Table 2 presents the demographic and professional characteristics of the participants. We employed a semi-structured interview technique. All the hospitals provided a quiet interview room that ensured privacy and confidentiality for participants and investigators. All the interviews were transcribed verbatim by a professional transcriber with reference to a standardised transcription protocol [ 29 ].
The data analysis team followed a stepwise process for data cleaning and de-identification. Transcripts were imported into the qualitative data analysis software NVivo (version 11) for management and coding [30]. The analyses were carried out in three stages. In the first stage, we identified key concepts using content analysis and a mind-mapping process from the research protocol, and developed a conceptual framework to organise the data [31].
The analysis team reviewed and coded a selected number of transcripts, then juxtaposed the codes against the domains incorporated in the interview protocol, as indicated in the three stages of analysis with the conceptual framework (Fig.: conceptual framework with the three stages of analysis used for the analysis of the qualitative data). In this stage, two cycles of coding were conducted: in the first, all the transcripts were revised and initially coded, and key concepts were identified throughout the full data set.
The second cycle comprised an in-depth exploration and the creation of additional categories to generate the codebook (see Additional file 2: Appendix 2). This codebook was a summary document encompassing all the concepts identified at primary and subsequent levels.
It presented a hierarchical categorisation of key concepts developed from the domains indicated in Fig. A summarised list of key concepts and their definitions is presented in Table 3. We show the total number of interviews for each of the key concepts, and the number of times each was coded (i.e. references).
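The bookkeeping behind such a codebook (a hierarchy of key concepts, each with a definition, the set of interviews in which it appears, and the number of coded references) can be sketched as a small data structure. This is a generic sketch, not the study's actual tooling, and the concept names used below are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """One node of a hierarchical codebook: a key concept with its definition."""
    name: str
    definition: str
    interviews: set = field(default_factory=set)   # IDs of interviews in which it was coded
    references: int = 0                            # total number of times it was coded
    children: list = field(default_factory=list)   # sub-concepts (subsequent levels)

def record_coding(concept: Concept, interview_id: str) -> None:
    """Register one coded passage for this concept in a given interview."""
    concept.interviews.add(interview_id)
    concept.references += 1
```

Counting distinct interviews separately from total references reproduces the two figures reported in Table 3 for each concept.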
The second stage of analysis compared and contrasted the experiences, perspectives and actions of participants by role and location. The third and final stage of analysis aimed to generate theory-driven hypotheses and provided an in-depth understanding of the impact of the policy.
At this stage, the research team explored different theoretical perspectives such as the carousel model and models of care approach [ 16 , 32 , 33 , 34 ]. We also used iterative sampling to reach saturation and interpret the findings. Ethics approval was obtained for all participating hospitals and the qualitative methods are based on the original research protocol approved by the funding organisations [ 18 ].
This section described the FDC and provided a detailed description of the strategies used in the analysis. It was adapted from the FDC methodology described by Lincoln and Guba [ 23 , 24 , 25 , 26 ] as the framework to ensure a high level of rigour in qualitative research. In Table 1 , we have provided examples of how the process was implemented for each criterion and techniques to ensure compliance with the purpose of FDC.
All the investigators had the opportunity for continued engagement with each ED during the data collection process. They received a supporting material package comprising background information about the project, consent forms and the interview protocol (see Additional file 1: Appendix 1). This process allowed the investigators to check their personal perspectives and predispositions, and to enhance their familiarity with the study setting.
This strategy also allowed participants to become familiar with the project and the research team. In order to increase credibility of the data collected and of the subsequent results, we took a further step of calibrating the level of awareness and knowledge of the research protocol.
The research team conducted training sessions, teleconferences, induction meetings and pilot interviews with the local coordinators. Each of the interviewers conducted one or two pilot interviews to refine the overall process using the interview protocol, time-management and the overall running of the interviews. The semi-structured interview procedure also allowed focus and flexibility during the interviews.
The interview protocol (Additional file 1: Appendix 1) included several prompts that allowed the expansion of answers and the opportunity to request more information, if required. Theoretical knowledge and skills in conceptualising large datasets: investigators had post-graduate experience in qualitative data analysis, including the use of NVivo software to manage data and the qualitative research skills needed to code and interpret large amounts of qualitative data.
Ability to take a multidisciplinary approach: the multidisciplinary background of the team in public health, nursing, emergency medicine, health promotion, social sciences, epidemiology and health services research enabled us to explore different theoretical perspectives and to use an eclectic approach to interpret the findings.
These characteristics ensured that the data collection and content were consistent across states and participating hospitals. These materials were collected and used during the different levels of data analysis and kept for future reference and secure storage of confidential material [ 26 ].
They were asked at different stages throughout the analysis to reflect and cast their views on the conceptual analysis framework, the key concepts identified during the first level of analysis and, eventually, the whole set of findings (see Fig.). We also reported and discussed preliminary methods and general findings at several scientific meetings of the Australasian College for Emergency Medicine.
This study was developed from the early stages through a systematic search of the existing literature about the four-hour rule and time-target care delivery in the ED. A detailed draft of the study protocol was developed in consultation with the PMC. After incorporating all the comments, a final draft was generated for the purpose of obtaining the required ethics approvals for each ED setting in different states and territories.
To maintain consistency, we documented all changes and revisions to the research protocol, and kept a trackable record of when and how changes were implemented. Steps were taken to keep a track record of the data collection process [24]: we maintained sustained communication within the research team to ensure the interviewers were abiding by an agreed-upon protocol to recruit participants.
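An audit trail of this kind is, in essence, an append-only log of who changed what and when. A minimal sketch, assuming a simple list-of-records representation (the field names and the example entry are our own, not from the study):

```python
from datetime import datetime, timezone

def log_change(trail: list, author: str, description: str) -> None:
    """Append a timestamped, attributed entry to the audit trail."""
    trail.append({
        "when": datetime.now(timezone.utc).isoformat(),  # when the change was made
        "who": author,                                   # who made it
        "what": description,                             # how/why it was implemented
    })

# Hypothetical usage: record a protocol revision as it happens.
trail = []
log_change(trail, "analyst A", "Added prompt on waiting-time perceptions to interview protocol")
```

Because entries are only ever appended, the trail preserves the order of decisions, which is what lets an external auditor reconstruct how the protocol evolved.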
As indicated before, we provided the investigators with a supporting material package. We also instructed the interviewers on how to securely transfer the data to the transcriber. The data-analysis team systematically reviewed the transcripts against the audio files for accuracy and clarifications provided by the transcriber. All the steps in coding the data and identification of key concepts were agreed upon by the research team.
The progress of the data analysis was monitored on a weekly basis. Any modifications of the coding system were discussed and verified by the team to ensure correct and consistent interpretation throughout the analysis. The codebook see Additional file 2 : Appendix 2 was revised and updated during the cycles of coding.
Back-up files were kept in a secure external storage device, for future access if required. To assess the interpretative rigour of the analysis, we applied inter-coder agreement to control the coding accuracy and monitor inter-coder reliability among the research team throughout the analysis stage [ 37 ].
This step was crucially important in the study given the staff changes that our team experienced during the analysis stage. At the initial stages of coding, we tested the inter-coder agreement using the following protocol: Step 2 — the team discussed the interpretation of the emerging key concepts and resolved any coding discrepancies. Step 3 — the initial codebook was composed and used for developing the respective conceptual framework.
Step 4 — the inter-coder agreement was calculated and found a weighted Kappa coefficient of 0. With the addition of a new analyst to the team, we applied another round of inter-coder agreement assessment. We followed the same steps to ensure inter-coder reliability along the trajectory of the data analysis, except for step 3, where the a priori codebook was used as a benchmark to compare and contrast the codes developed by the new analyst.
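A linearly weighted Cohen's kappa of the kind reported here can be computed directly from the two coders' category assignments. This is a generic sketch of the statistic, not the study's actual computation, and the ordinal category labels are placeholders.

```python
def weighted_kappa(coder_a, coder_b, categories):
    """Linearly weighted Cohen's kappa for two coders over ordinal categories."""
    assert len(coder_a) == len(coder_b) and coder_a
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(coder_a)
    # Observed joint proportions obs[i][j]: coder A chose category i, coder B chose j.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(coder_a, coder_b):
        obs[index[a]][index[b]] += 1.0 / n
    # Marginal proportions for each coder.
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Linear disagreement weights: 0 on the diagonal, 1 for maximal disagreement.
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    observed = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w[i][j] * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - observed / expected
```

Perfect agreement yields 1.0, chance-level agreement yields 0.0, and linear weighting penalises near-miss disagreements less than distant ones, which is why it suits ordinal coding schemes.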
The calculated Kappa coefficient 0. The analysis was conducted by the research team who brought different perspectives to the data interpretation. To appreciate the collective interpretation of the findings, each investigator used a separate reflexive journal to record the issues about sensitive topics or any potential ethical issues that might have affected the data analysis.
These were discussed in the weekly meetings. After completion of the data collection, reflection and feedback from all the investigators conducting the interviews were sought in both written and verbal format.
To assess the confirmability and credibility of the findings, four triangulation processes were considered: methodological, data source, investigator and theoretical triangulation. Methodological triangulation is being implemented through the mixed-methods approach with linked data from our 16 hospitals. We expect to use data triangulation with linked data in future secondary analysis.
Investigator triangulation was obtained by consensus decision making through collaboration, discussion and participation of a team holding different perspectives. This approach enabled us to balance out the potential biases of individual investigators and enabled the research team to reach a satisfactory level of consensus.
Theoretical triangulation was achieved by using and exploring different theoretical perspectives such as the carousel model and models of care approach [ 16 , 32 , 33 , 34 ]. As outlined in the methods section, we used a combination of three purposive sampling techniques to make sure that the selected participants were representative of the variety of views of ED staff across settings.
This representativeness was critical for conducting comparative analysis across different groups. We employed two methods to ensure data saturation was reached, namely: operational and theoretical. The operational method was used to quantify the number of new codes per interview over time. It indicates that the majority of codes were identified in the first interviews, followed by a decreasing frequency of codes identified from other interviews.
Theoretical saturation and iterative sampling were achieved through regular meetings where the progress of coding and the identification of variations in each of the key concepts were reported and discussed. We continued this iterative process until no new codes emerged from the dataset and all the variations of an observed phenomenon were identified [39] (Fig.: data saturation gain per interview added, based on the chronological order of data collection in the hospitals).
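The operational quantification described above (the number of new codes contributed by each successive interview) reduces to a running set difference over interviews taken in chronological order. A minimal sketch, with made-up code labels:

```python
def new_codes_per_interview(interviews):
    """For interviews in chronological order, count codes not seen in any earlier one."""
    seen, gains = set(), []
    for codes in interviews:
        fresh = set(codes) - seen          # codes first identified in this interview
        gains.append(len(fresh))
        seen |= fresh
    return gains

# Saturation is suggested when the gains flatten to zero.
gains = new_codes_per_interview([
    {"overcrowding", "time pressure"},     # interview 1
    {"time pressure", "teamwork"},         # interview 2
    {"teamwork"},                          # interview 3
])  # gains == [2, 1, 0]
```

Plotting these gains against interview order produces exactly the kind of saturation curve the figure describes: an early burst of new codes followed by a tail of zeros.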
Assessing trustworthiness and scientific rigour in qualitative research is not new; qualitative researchers have used rigour criteria widely [40, 41, 42]. The novelty of the method described in this article rests on the systematic application of these criteria in a large-scale qualitative study in the context of emergency medicine.
According to the FDC, similar findings should be obtained if the process is repeated with the same cohort of participants in the same settings and organisational context. By employing the FDC and the proposed strategies, we could enhance the dependability of the findings. As indicated in the literature, qualitative research has often been questioned for its validity and credibility [3, 20, 43, 44].
Nevertheless, if the work is done properly, based on the suggested tools and techniques, any qualitative work can become a solid piece of evidence. This study suggests that emergency medicine researchers can improve their qualitative research if conducted according to the suggested criteria. Abiding by a consistent method of data collection e.
Employing several purposive sampling techniques enabled us to have a diverse range of opinions and experiences which at the same time enhanced the credibility of the findings. We expect that the outcomes of this study will show a high degree of applicability, because any resultant hypotheses may be transferable across similar settings in emergency care. The systematic quantification of data saturation at this scale of qualitative data has not been demonstrated in the emergency medicine literature before.
As indicated, the objective of this study was to contribute to the ongoing debate about rigour in qualitative research by using our mixed methods study as an example.
In relation to innovative application of mixed-methods, the findings from this qualitative component can be used to explain specific findings from the quantitative component of the study.
In addition, some experiences from doctors and nurses may explain variability of performance indicators across participating hospitals. The robustness of the qualitative data will allow us to generate hypotheses that in turn can be tested in future research.
Careful planning is essential in any type of research project, including the allocation of sufficient resources, both human and financial. Precise arrangements are also required for building the research team, preparing data collection guidelines, and defining and obtaining adequate participation.
This may allow other researchers in emergency care to replicate the use of the FDC in the future. This study has several limitations. Those of the qualitative component include recall bias and a lack of reliable information collected about interventions conducted before the implementation of the policy.