
Qualitative data analysis: data display

20 Oct

The first thing I want to say is that data display was lots of fun!

My last blog post finished after I had developed and played around with my propositions, before moving on to data display.

Miles et al. (2014) dedicate six chapters to data display (part 2 of their book). I read and re-read these chapters a number of times before I could get my head around everything. Had I not done this, I can see how I might have gone down an inappropriate avenue. Miles et al. provide various suggestions, along with some smashing examples, about how data can be displayed – mainly through matrices and network displays.

For my study, I created matrices (with defined rows and columns). Miles et al. describe matrix construction as “a creative yet systematic task that furthers your understanding of the substance and meaning of your database” (p.113). A key point that resonated with me was that it’s not about building correct matrices – it’s about building ones that will help give answers to the questions you’re asking. To do this, they advise us to “adapt and invent formats that will serve you best” (p.114).

An important conclusion I came to? I didn’t need to use (or fully understand) all the matrices and network displays. I took what I needed (role-ordered matrices) and combined it with a little of something else (Framework matrices) to display my data in a way that helped me move on with analysis and progress through to interpretation – always with my research questions at the forefront of my mind (and pinned to my office door).


So here’s what I did: I created a matrix for each main theme (n=4) and each focus group (n=15). In total I created 60 matrices.

My participants were each entered as a row, and within each participant cell I also noted key demographic characteristics. Each subtheme was a column heading. I can’t show one of my own matrices in NVivo as the data would be legible (and my participants identifiable), so the image below is a QSR example from their volunteering study.
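For anyone who likes to think of these structures in code, one of these role-ordered matrices can be sketched as a simple grid of participants by subthemes. This is a hypothetical Python illustration of the layout only – the participant labels and subthemes are invented, and this is not how NVivo stores anything internally:

```python
# Hypothetical sketch of one role-ordered matrix for a single theme:
# participants form the rows (with key demographics noted in the label)
# and subthemes form the columns. All names here are invented examples.
subthemes = ["awareness", "barriers", "information sources"]

participants = [
    "P01 (female, 45, public)",
    "P02 (male, 62, public)",
]

# Build the empty matrix: one summary cell per participant per subtheme.
matrix = {p: {s: "" for s in subthemes} for p in participants}

# Each cell later holds a summary written from the coded data, e.g.:
matrix["P01 (female, 45, public)"]["barriers"] = "Worried about cost."
```

Each cell starts empty; the summaries are filled in by reading the coded data for that participant and subtheme.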

The beauty (and massive time saver) of NVivo is that when you click in each cell (number 4), the data that you have coded for that individual within that theme is displayed to the right of your matrix (number 3). This is referred to as the ‘associated view’. Obviously, when you first create your matrix, all the cells in the middle will be empty, so a summary of the coded data (from the associated view) needs to be entered into each cell.

For my study, I read through all my coded data and my summaries were developed using the following:

  • Including sufficient detail so summaries were understandable and not overly cryptic
  • Retaining my participants’ language
  • Sometimes including short verbatim excerpts where I thought it necessary (all quotes were kept in italics)
  • Including my commentaries (in a different colour) about context and focus group interaction

A simple but important thing I noted when writing my summaries is that not all cells were coded, so no summary was required for those. I always wrote ‘NC’ in such cells so I knew a cell wasn’t empty because of an unintended oversight.
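In code terms, the ‘NC’ rule is simply a final pass over the matrix, marking every cell that has no coded data. Again, this is a hypothetical Python sketch with invented data, not an NVivo feature:

```python
# Hypothetical sketch: after summarising, mark any cell with no coded
# data as "NC" so an empty cell can't be mistaken for an oversight.
matrix = {
    "P01": {"awareness": "Knows the campaign well.", "barriers": ""},
    "P02": {"awareness": "", "barriers": "Cites time pressure."},
}

for participant, row in matrix.items():
    for subtheme, summary in row.items():
        if not summary.strip():
            row[subtheme] = "NC"
```

After this pass, every cell is either a summary or an explicit ‘NC’, so nothing is ambiguously blank.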

Not surprisingly, as with all stages of data analysis, this process was extremely time consuming. However, by the time I had completed it, I had so much more insight into what my data was telling me – the similarities, the differences, the unsurprising and the surprising. Overall, I gained a much deeper understanding of what was going on.

However, it didn’t end there. I wanted to compare and contrast my data not only within focus groups, but between focus groups. I found this difficult on a computer screen as I had to jump back and forth across so many matrices. So… similar to my propositions, I left my PC and went back to flip chart paper. To be honest, it was a nice break from sitting at my PC.

Another beauty of NVivo is that the matrices can be exported into Excel. I did this, then transferred them again into a Word document (I like prettifying my tables with colours etc. and could only do that the way I wanted in Word). It cost me a little more time but, nevertheless, it was worth it. I then printed my matrices out (all 60 of them). For each main theme and its subthemes, I sellotaped 3 flipchart sheets together (so that they were long enough to display all 8 focus groups’ matrices down both sides) and glued my public focus group matrices down the left-hand side and my healthcare professionals’ focus group matrices down the right-hand side.

These matrices on the flip chart paper then became my focus for a few weeks. I read them, compared them, returned to the literature, returned to my memos, reflected and took time away to think (long dog walks on the beach helped hugely with this). While I did this, I used the white space in the centre of my flipchart paper (between the matrices) to scribble down my thoughts and concepts. For me, this stage enabled the progression from description to interpretation. I even took them to one of my supervision sessions so I could talk through some of my thoughts and illustrate the process I took to get there. I can show you an image of this as the text is not legible – it shows one main theme with its 4 subthemes as column headings (in blue). The peach rows are my participants:



So, in a nutshell, my data display process helped me get my creative thinking underway for interpretation. I then used these matrices to help me write the first draft of my findings.

I hope this has been helpful. Qualitative data analysis is so diverse and complex, and depends upon a number of variables, particularly your methodological approach, so there really is no ‘one size fits all’. Please do respond to this post and share your experience of the process you took and how it worked for you. Or did you do something similar to me? 🙂

Qualitative data analysis: data condensation (aka reduction)

12 Jul

Rather than trying to squeeze my thoughts and experience of all my data analysis into one blog post, I intend to write shorter (!) posts on the different stages as I progress. My last blog post was about developing my analysis plan. Knowing what I know now, I am so thankful I spent the time doing that!

I am following Miles and Huberman’s approach to data analysis and have the 3rd edition (Miles et al., 2014). There are lots of similarities between the editions but also some differences. I prefer this edition.

The purpose of this post is to share with you my first step of data analysis – data condensation. This used to be called data reduction (Miles and Huberman, 1994) but was changed because data reduction implies “weakening or losing something in the process”.


So, immediately following my focus groups and interviews, I took extensive notes about salient factors (more about that in another blog post). From these notes I created a contact summary form, as advocated by Miles and Huberman and one of my supervisors, which synthesised all this information. This is a very simple and highly valuable thing to do. I have repeatedly referred back to my contact summary forms throughout this process (if anyone wants the template, just ask). I also transcribed all my focus groups and interviews verbatim myself, as soon as I could after data collection. This was a very, very long and, at times, laborious task, but again highly valuable for really getting to know my data.


I listened to each audio recording (listening only ~ no note taking). Then I read each transcript (reading only ~ no note taking). Then I listened to each audio recording again whilst reading my transcripts. This time I scribbled notes down on a pad and drew various mind maps and diagrams. After all that, I was pretty sure I had immersed myself in my data (even though I hated listening to myself!).

I prepared transcripts for importing into NVivo 10. This involved ensuring consistent format and style and anonymising my participants by allocating each of them a pseudonym and a code to differentiate public, healthcare and media professionals (a blog post about this here). This process took quite a bit of time, but if not done thoroughly, I can see how this could have caused me many problems later on.

1st level coding: I developed a starting code list based on my theoretical framework and the wider literature (an initially deductive approach). I listed these codes in a coding framework with clear operational definitions, so I had a clear understanding of what type of data should be assigned to each code. Throughout this stage, codes were revised or removed, and additional codes and subcodes were created as new themes emerged from the data (inductive approach). As I revised my codes, each transcript was re-read and re-coded. I made sure at this stage that I didn’t try to force my data into anything and that codes and sub-codes were all kept very descriptive.
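A coding framework of this kind is essentially a mapping of code names to operational definitions. Here is a hypothetical Python sketch (the codes and definitions below are invented for illustration, not taken from my study) of how such a framework evolves as codes are added, revised and removed:

```python
# Hypothetical starting (deductive) coding framework: each code is
# paired with an operational definition to keep coding consistent.
coding_framework = {
    "information_needs": "Any talk about information participants want or lack.",
    "trust_in_sources": "References to trusting or distrusting information sources.",
}

# As new themes emerge from the data (inductive approach), codes are added...
coding_framework["media_influence"] = "Mentions of media shaping participants' views."

# ...definitions are revised...
coding_framework["trust_in_sources"] = (
    "References to trusting or distrusting any source, including people."
)

# ...or codes are removed, after which transcripts are re-read and re-coded.
del coding_framework["information_needs"]
```

The point of the operational definitions is that every decision about which code a piece of data belongs to can be checked against an explicit rule.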

Pattern coding: This was about working with the 1st level codes and sub-codes so that they could be grouped into more meaningful and general patterns. This process was a little more challenging for me because at times I was aware that my thinking was going a little too fast and that I needed to remain fairly descriptive. I was also frightened of condensing too much and losing some of what I had. However, the beauty of NVivo is that you have an audit trail, so if you do need to go back, everything is still there (I saved a copy of my NVivo project at the end of every day). While pattern coding, I examined my data carefully and asked a number of key questions, such as: What is happening here? What is being conveyed? What are the similarities? What are the differences? In doing so, I explored not only the similarities but also the idiosyncrasies and differences. This process took quite a number of iterations before I was happy to move on.

Memoing: to help me through the process of coding, I created LOTS of memos which captured a wide range of my thoughts and concepts. I was also able to link my memos to my data and any external resources such as websites or literature. Again, I cannot stress enough how valuable this has been (and still is). My research journal was also created as a memo.

Propositions: it took me a little while to get my head around what I needed to do here, as I have always associated propositions with case study research. This was another lengthy process but has helped so much as I started to gently move from the descriptive stage to a more conceptual and interpretive stage. I went through all my coded data and developed propositions from them – basically a summary or synthesis of my data. I initially developed 613 propositions, then reduced this to 479 after removing duplications. In order to visualise these better, I left my computer and turned to flip chart paper. I printed each proposition (within its pattern code) on different coloured post-it notes and arranged and re-arranged them (lots of times!). This then led me to revise my pattern codes again. Of course, with that, I revisited all my data and yet again re-coded it into my final revised pattern codes and sub-codes. Just to say at this point – this doesn’t mean these pattern codes are set in stone. They can be (and likely will be) altered again as my interpretation progresses.
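Mechanically, the de-duplication step (613 propositions down to 479) is just removing repeats while keeping the first occurrence of each proposition. The real work was of course done by reading and judging, not by script, but as a hypothetical Python sketch (with invented example propositions):

```python
# Hypothetical sketch of removing duplicate propositions while
# preserving the order in which they first appeared.
propositions = [
    "Public participants want clearer information.",
    "Professionals doubt the campaign's reach.",
    "Public participants want clearer information.",  # a duplicate
]

seen = set()
unique = []
for p in propositions:
    if p not in seen:
        seen.add(p)
        unique.append(p)
```

In practice, of course, spotting a “duplicate” proposition often means recognising the same idea in different wording, which is exactly why this step can’t really be automated.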

So, in a nutshell – that was my data condensation. Obviously, we know that qualitative data analysis is not a linear process and requires many, many iterations. While this may at times be frustrating, it’s necessary and can be fun!

On a final note, if anyone is thinking about a CAQDAS programme, I cannot recommend NVivo 10 enough – I absolutely love it (I cannot comment on any other CAQDAS programme as I have only used NVivo). I know many people prefer manual analysis for a number of reasons, which is absolutely fine. NVivo has helped me hugely to store, manage and interrogate my data (of course, it won’t interpret or write up my findings for me!). The support you receive from QSR International, through many channels, is also first class.

I am now in the throes of data display and developing lots of Framework Matrices. Another really exciting stage, and one that continually challenges my current thinking. That will be my next blog post. If you want to ask me any questions about my experience of data condensation, please ask away. Any comments would also be very welcome! I’m really trying to keep my blog posts short but, as you can see, I’m not doing well with that!
