Tag Archives: Data analysis

Member checking vs dissemination focus groups in qualitative research

27 Nov

Historically, member checking (also known as member or participant validation) of qualitative research findings has been viewed as an important aspect of establishing accuracy, credibility and validity (Koelsch 2013). Simply put, member checking occurs when the researcher returns to participants to seek confirmation that the researcher has accurately reported their narratives and to gain further comments.

I hadn’t given member checking much thought (I conducted focus groups with members of the public and healthcare professionals, in addition to one-to-one interviews with newspaper journalists and editors). It wasn’t until I had finished my preliminary data analysis that it was suggested to me by my supervisors. This, I admit, wasn’t a welcome suggestion, mainly because of the challenges it would likely cause. However, not being one to dismiss supervisor suggestions, I took myself off to explore the concept further. The outcome of this exploration was that I would not conduct member checks, as I could not see a clear benefit. Here was my rationale:

  • As some of my focus groups were opportunistically recruited from already-formed social groups, locating the same participants was likely to be impossible. They were also conducted quite some time ago
  • Geographically, my focus groups were conducted in another part of Scotland. I didn’t have the time or the energy at this stage of my research to travel back there for this purpose, especially when I questioned the effectiveness
  • Even if I were able to locate my participants, I could potentially cause them discomfort by having them listen to sensitive issues being discussed, especially around my interpretation of their narratives
  • My participants could also feel uncomfortable hearing their own words
  • Exposing my preliminary findings and interpretation to my participants could make me feel uncomfortable (not a big deal, but nevertheless the potential is there)
  • My participants may have forgotten what they said and therefore be unable to validate it. Alternatively, they may unintentionally misremember what they said and change the nature of the discussion that actually took place
  • My participants might request the removal of valuable data from the focus group. They may also have changed their perceptions about something and ask for their narrative, or part of it, to be removed
  • The same group dynamics can never be recreated. Since group dynamics and interaction are a key component of my focus group data analysis, recreating them was deemed impossible

However, this then left me with a gap. Although I had made the decision not to conduct member checks, it didn’t mean I could ignore the issue. This meant further reading and exploration. I also took to Twitter for help and received some excellent responses, in particular from Dr Bronwyn Hemsley (@BronwynHemsley), who had had similar experiences.

Taking into consideration all the above points and, importantly, keeping my epistemological stance of weak social constructionism and my methodological approach (interpretive descriptive methodology) at the forefront of my mind, I knew I wasn’t looking to ‘validate’ my findings, nor did I want to seek confirmation of a ‘truth’. Rather, I wanted to present my conceptual thinking and seek thoughts and ideas as to how I could further develop it. Equally, I wanted to explore whether I had missed something important. I therefore went down the route of dissemination focus groups. This is advocated by Rose Barbour (2005) as a more useful way to feed back preliminary findings than member checking.

So what I did was convene one focus group of 6 people (two of whom were original participants), made up of a mixture of members of the public and healthcare professionals reflecting the characteristics of my participants. I prepared a Prezi, presented the key categories from my findings and then asked specific questions for further discussion. The focus group was recorded with permission from the group and lasted just over one hour. I also provided light refreshments and gave small gifts as a token of my appreciation. Overall, I found this experience hugely beneficial as it:

  • Helped me explain and contextualise my study as a whole concisely and succinctly (something which has never come easy for me!)
  • Enhanced my analytical and interpretational sophistication through agreement and offers of further considerations
  • Crystallised similar and different perspectives from both the public and healthcare professionals
  • Helped me further consider my findings in terms of what they mean for informing practice and policy, today and in the future
  • Was fun for me and those who took part

If you are considering member checking for qualitative research, I would definitely recommend dissemination sessions as an alternative. I’m not, however, saying this is the right way and member checking is the wrong way, or indeed that the way I did it was the right way – we know there is no right or wrong in qualitative research. What I am saying is that this was the right way for me and my research. I imagine there are various and innovative ways in which this can be done, but hopefully sharing how I did mine gives food for thought. I would be very interested to hear from others about their experiences of either member checks or dissemination sessions (interviews or focus groups). Were they helpful or a hindrance?


Qualitative data analysis: data display

20 Oct

The first thing I want to say is that data display was lots of fun!

So my last blog post finished after I had developed and played around with my propositions, before moving on to data display.

Miles et al (2014) dedicate 6 chapters to data display (part 2 of their book). I read and re-read these chapters a number of times before I could get my head around everything. Had I not done this, I can see how I may have gone down an inappropriate avenue. Miles et al provide various suggestions, along with some smashing examples of how data can be displayed – mainly through matrices and network displays.

For my study, I created matrices (with defined rows and columns). Miles et al describe matrix construction as “a creative yet systematic task that furthers your understanding of the substance and meaning of your database” (p.113). A key point that resonated with me was that it’s not about building correct matrices – it’s about building ones that will help give answers to the questions you’re asking. To do this, they advise us to “adapt and invent formats that will serve you best” (p.114).

An important conclusion I came to? I didn’t need to use (or fully understand) all the matrices and network displays. I took what I needed (role-ordered matrices) and combined it with a little of something else (framework matrices) to allow me to display my data in a way that helped me move on with analysis and progress through to interpretation – always with my research questions at the forefront of my mind (and pinned to my office door).


So here’s what I did: I created a matrix for each main theme (n=4) within each focus group (n=15). In total I created 60 matrices.

Each participant was given a row, and within each participant’s cell I also identified key demographic characteristics. Each subtheme was a column heading. I can’t provide an example of one of my own matrices in NVivo as the data is legible, so the image below is a QSR example from their volunteering study.

The beauty (and massive time saver) of NVivo is that when you click in each cell (number 4), the data that you have coded (for that individual within that theme) is displayed on the right of your matrix (number 3). This is referred to as the ‘associated view’. Obviously, when you first create your matrix, all the cells in the middle will be empty, so a summary needs to be entered into each cell from the coded data (the associated view).

For my study, I read through all my coded data and my summaries were developed using the following:

  • Including sufficient detail that was understandable and not overly cryptic
  • Retaining my participants’ language
  • Sometimes including short verbatim excerpts if I thought it was necessary. All quotes were kept in italics
  • Including my commentaries (in a different colour) about context and focus group interaction

A simple but important thing I noted when writing my summaries was that not all cells contained coded data, so no summary was required for those. I always wrote ‘NC’ in those cells so I knew they were not empty due to an unintended oversight.
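
For anyone who finds it easier to picture, here is a minimal, hypothetical sketch of that structure outside NVivo – one row per participant, one column per subtheme, a short summary in each cell and ‘NC’ wherever nothing was coded. The participant labels, subtheme names and summaries below are invented purely for illustration; they are not from my study.

```python
# A hypothetical framework-style matrix: rows = participants (with key
# demographics), columns = subthemes, cells = summaries of coded data.
import pandas as pd

matrix = pd.DataFrame(
    {
        "Trust in sources": [
            "Relies on GP advice over newspaper headlines",
            "NC",
        ],
        "Perceived risk": [
            "NC",
            "Feels risk is often exaggerated in print coverage",
        ],
        "Role of media": [
            "Sees newspapers as agenda-setting (short quote kept in italics)",
            "Distinguishes news reporting from editorial comment",
        ],
    },
    index=["P01 (public, female, 40s)", "HCP03 (nurse, 50s)"],
)

# 'NC' marks cells with no coded data, so an empty-looking cell records a
# deliberate decision rather than an oversight.
print(matrix.to_string())
```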

Not surprisingly, as with all stages of data analysis, this process was extremely time consuming. However, by the time I completed it, I had so much more insight into what my data was telling me – for example, the similarities, the differences, the unsurprising and the surprising.  I generally gained a much deeper understanding of what was going on.

However, it didn’t end there. I wanted to compare and contrast my data not only within focus groups, but between focus groups. I found this difficult on a computer screen as I had to jump back and forth across so many matrices. So, just as with my propositions, I left my PC and went back to flipchart paper. To be honest, it was a nice break from sitting at my PC.

Another beauty of NVivo is that the matrices can be exported into Excel. I did this and then transferred them again into a Word document (I like prettifying my tables with colours etc. and could only do that the way I wanted in Word). It cost me a little more time, but nevertheless, it was worth it. I then printed my matrices out (all 60 of them). For each main theme and its subthemes, I sellotaped 3 sheets of flipchart paper together (so that they were long enough to display all 8 focus group matrices down both sides) and glued my public focus group matrices down the left-hand side and my healthcare professionals’ focus group matrices down the right-hand side.
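
For anyone who would rather stay at the computer than reach for the sellotape, the exported Excel files could, in principle, be lined up side by side programmatically instead. This is not what I did; the sketch below is purely illustrative and the file names are invented placeholders.

```python
# A rough sketch of pairing two exported matrices for one theme: the
# public group's matrix on the left, the healthcare professionals' on
# the right. File names are invented placeholders.
import pandas as pd

public = pd.read_excel("theme1_public_fg1.xlsx", index_col=0)
hcp = pd.read_excel("theme1_hcp_fg1.xlsx", index_col=0)

# Prefix the column headings so the two groups stay distinguishable
# once the matrices sit next to each other.
public.columns = [f"Public: {c}" for c in public.columns]
hcp.columns = [f"HCP: {c}" for c in hcp.columns]

# Participants differ between groups, so cells with no counterpart in
# the other matrix simply stay blank in the combined sheet.
side_by_side = pd.concat([public, hcp], axis=1)
side_by_side.to_excel("theme1_comparison.xlsx")
```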

These matrices on the flipchart paper then became my focus for a few weeks. I read them, compared them, returned to the literature, returned to my memos, reflected and took time away to think (long dog walks on the beach helped hugely with this). While I did this, I used the white space in the centre of my flipchart paper (between the matrices) to scribble down my thoughts and concepts. For me, this stage enabled the progression from description to interpretation. I even took them to one of my supervision sessions so I could talk through some of my thoughts and illustrate the process I took to get there. I can show you an image of this as the text is not legible – this is one main theme, with its 4 subthemes as column headings (in blue). The peach rows are my participants:

[Image: flipchart display of matrices for one main theme – subtheme column headings in blue, participant rows in peach]

 

So, in a nutshell, my data display process helped me to get my creative thinking underway for interpretation. I then used these matrices to help me write up the first draft of my findings.

I hope this has been helpful. Qualitative data analysis is so diverse and complex, and depends upon a number of variables, particularly your methodological approach, so there really is no ‘one size fits all’. Please do respond to this post and share your experience of the process you took and how it worked for you. Or did you do something similar to me? 🙂

Qualitative data analysis: data condensation (aka reduction)

12 Jul

Rather than trying to squeeze my thoughts on, and experience of, all my data analysis into one blog post, I intend to write shorter (!) posts about the different stages as I progress. My last blog post was about developing my analysis plan. Knowing what I know now, I am so thankful I spent the time doing that!

I am following Miles and Huberman’s approach to data analysis and have the 3rd edition (Miles et al., 2014). There are lots of similarities between the editions, but also some differences. I prefer this edition.


The purpose of this post is to share with you my first step of data analysis – data condensation. This used to be called data reduction (Miles and Huberman 1994), but it was changed because data reduction implies “weakening or losing something in the process”.


So, immediately following my focus groups and interviews, I took extensive notes about salient factors (more about that in another blog post). From these notes I created a contact summary form, as advocated by Miles and Huberman (and one of my supervisors), which synthesised all this information. This is a very simple and highly valuable thing to do. I have repeatedly referred back to my contact summary forms throughout this process (if anyone wants the template, just ask). I also transcribed verbatim all my focus groups and interviews myself as soon as I could after data collection. This was a very, very long and, at times, laborious task, but again highly valuable for really getting to know my data.


I listened to each audio recording (listening only ~ no note taking). Then I read each transcript (reading only ~ no note taking). Then I listened to each audio recording again whilst reading my transcripts. This time I scribbled notes down on a pad and drew various mind maps and diagrams. After all that, I was pretty sure I had immersed myself in my data (even though I hated listening to myself!).

I prepared my transcripts for importing into NVivo 10. This involved ensuring a consistent format and style and anonymising my participants by allocating each of them a pseudonym and a code to differentiate public, healthcare and media professionals (there is a blog post about this here). This process took quite a bit of time, but if it is not done thoroughly, I can see how it could cause many problems later on.
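
To give a flavour of what that preparation can involve, here is a minimal, hypothetical sketch of a pseudonym key with group codes. Every name, pseudonym, code and file name in it is invented for illustration – it is not my actual key or coding scheme.

```python
# A hypothetical pseudonym key: each participant gets a pseudonym plus a
# group code (PUB = public, HCP = healthcare professional, MED = media
# professional). The resulting identifier can then be used consistently
# in the transcripts before they are imported into NVivo.
import csv

participants = [
    {"real_name": "Participant A", "group": "PUB", "pseudonym": "Anna"},
    {"real_name": "Participant B", "group": "HCP", "pseudonym": "Brian"},
    {"real_name": "Participant C", "group": "MED", "pseudonym": "Cara"},
]

# A real-name-to-pseudonym key like this should live in a separate,
# securely stored file, away from the anonymised transcripts.
with open("pseudonym_key.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["real_name", "group", "pseudonym", "identifier"]
    )
    writer.writeheader()
    for p in participants:
        p["identifier"] = f"{p['group']}-{p['pseudonym']}"
        writer.writerow(p)
```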

1st level coding: I developed a starting list of codes based on my theoretical framework and the wider literature (an initial deductive approach). I listed these codes in a coding framework with clear operational definitions so I had a clear understanding of what type of data needed to be assigned to each code. Throughout this stage, codes were revised or removed, and additional codes and subcodes were created as new themes emerged from the data (an inductive approach). As I revised my codes, each transcript was re-read and re-coded. I made sure at this stage that I didn’t try to force my data into anything and that codes and sub-codes were all kept very descriptive.
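
To illustrate what a coding framework with operational definitions can look like, here is a small, hypothetical sketch. The code names, definitions and sub-codes are invented examples, not the framework from my study.

```python
# A hypothetical starting coding framework: each code has an operational
# definition stating what kind of data should be assigned to it, plus any
# sub-codes. Codes can be revised, removed or added as analysis proceeds.
coding_framework = {
    "media_trust": {
        "definition": "Talk about how far newspaper health coverage is believed or doubted",
        "subcodes": ["trust_in_journalists", "trust_in_quoted_experts"],
    },
    "information_seeking": {
        "definition": "Descriptions of where participants go for health information",
        "subcodes": [],
    },
}

for code, details in coding_framework.items():
    print(f"{code}: {details['definition']} (sub-codes: {details['subcodes']})")
```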

Pattern coding: this was about working with the 1st level codes and sub-codes so that they could be grouped into more meaningful and general patterns. This process was a little more challenging for me because at times I was aware that my thinking was going a little too fast and that I needed to remain fairly descriptive. I was also frightened of condensing too much and losing some of what I had. However, the beauty of NVivo is that you have an audit trail, so if you do need to go back, everything is still there (I saved a copy of my NVivo project at the end of every day). While pattern coding, I examined my data carefully and asked a number of key questions, such as: What is happening here? What is being conveyed? What are the similarities? What are the differences? In doing so, I explored not only the similarities but also the idiosyncrasies and differences. This process took quite a number of iterations before I was happy to move on.

Memoing: to help me through the process of coding, I created LOTS of memos which captured a wide range of my thoughts and concepts. I was also able to link my memos to my data and any external resources such as websites or literature. Again, I cannot stress enough how valuable this has been (and still is). My research journal was also created as a memo.

Propositions: it took me a little while to get my head around what I needed to do here, as I have always associated propositions with case study research. This was another lengthy process, but it has helped so much as I started to gently move from the descriptive stage to a more conceptual and interpretive stage. I went through all my coded data and developed propositions from it – basically a summary or synthesis of my data. I initially developed 613 propositions, then reduced this to 479 following the removal of duplications. In order to visualise these better, I left my computer and turned to flipchart paper. I printed each proposition (within its pattern code) on different coloured post-it notes and arranged and re-arranged them (lots of times!). This then led me to revise my pattern codes again. Of course, with that, I revisited all my data and yet again re-coded into my final revised pattern codes and sub-codes. Just to say at this point – this doesn’t mean these pattern codes are set in stone. They can be (and will likely be) altered again as my interpretation progresses.
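
For anyone curious what the duplicate-removal step boils down to, here is a rough, automated sketch of the same idea. The propositions below are invented; mine were developed from my coded data.

```python
# A rough sketch of removing duplicate propositions while keeping the
# original order: wording is lightly normalised (case and whitespace) so
# identical statements are only counted once.
propositions = [
    "Members of the public judge a headline before reading the full story.",
    "Members of the public judge a headline before reading the full story.",
    "Healthcare professionals worry about patients acting on single studies.",
]

seen = set()
unique_propositions = []
for p in propositions:
    key = " ".join(p.lower().split())
    if key not in seen:
        seen.add(key)
        unique_propositions.append(p)

print(f"{len(propositions)} propositions reduced to {len(unique_propositions)}")
```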

So, in a nutshell – that was my data condensation. Obviously, we know that qualitative data analysis is not a linear process and requires many, many iterations. While at times this may be frustrating, it’s necessary and can be fun!

On a final note, if anyone is thinking about a CAQDAS programme, I cannot recommend NVivo 10 enough – I absolutely love it (I cannot comment on any other CAQDAS programme as I have only used NVivo). I know many people prefer manual analysis for a number of reasons, which is absolutely fine. NVivo has helped me hugely to store, manage and interrogate my data (of course, it won’t interpret or write up my findings for me!). The support you receive from QSR International, through many channels, is also first class.

I am now in the throes of data display and developing lots of framework matrices. Another really exciting stage, and one that continually challenges my current thinking. That will be my next blog post. If you want to ask me any questions about my experience of data condensation, please ask away. Any comments would also be very welcome! I’m really trying to keep my blog posts short but, as you can see, I’m not doing well with that!
