Wednesday, March 26, 2014

Post-class reflection...the interpretative coding act

I really enjoyed class yesterday, learning about NVivo and thinking through the act of 'coding' qualitative data. Two things have stayed with me from our discussion:

First, I've been reflecting on the ways in which NVivo sets you up to think about coding in a particular way. Obviously, you can 'manipulate' the package to do what you need; however, this is a good reminder about the importance of working toward coherence across your methodology, methods, analytical approach, and selection of digital tools. Coherence requires that we compare and contrast packages in light of our analytical focus and purpose. This, of course, requires time and even playing with packages across one project to see what feels most intuitive and allows us the MOST flexibility.

Second, I've been really struck with this idea around not having adequate opportunities to explore software packages that support the analysis process, particularly as graduate students. There is a long history of minimal attention being given to the place of CAQDAS packages in qualitative methods training. I would argue that the early experiences with technology and qualitative inquiry training (or the lack thereof) create long-lasting patterns in subsequent generations of scholars. Of course, it is not simply about 'training' in CAQDAS use; rather, it is the dual task of 'training' and 'practicing reflexivity' around technology choices. Nonetheless, we have a ways to go in this area. I hope this semester is one small step forward in your training process. We'll keep at it...

Wednesday, March 12, 2014

Post-class reflection...Transcription & Digital Tools

Really great questions posed in class yesterday and on the 'tickets'! I'm going to attempt to respond to your questions, while inviting you all to share your insights as well.

**
Why do I prefer Transana over InqScribe? 

I use both. Transana is the tool I sometimes prefer when I want to engage in analysis in a more systematic way while transcribing. I work with fairly 'large' interactional data sets, and it is often helpful for me to NOT construct a Jeffersonian transcript for all 300 hours, for instance. Rather, I create 300 hours of verbatim transcripts and then go through and carefully select segments that I will come back to and apply the Jeffersonian process to. This means that I'm making some pretty significant analytical decisions, and I want to systematically document these. I've found that Transana allows me to do this in a way that is convenient for me. I view InqScribe as a transcription tool -- not useful for supporting the type of analysis decisions I describe above. However, I have other work where I may be transcribing 15 life narrative interviews, which will later be thematically analyzed. There, I would use InqScribe. In that case, I typically create a verbatim transcript and then move to work within a package like ATLAS.ti or NVivo. More recently, though, I've started just doing that type of transcription within ATLAS.ti. In this way, the bulk of my research process is occurring within one package. It's a balancing act -- one which requires creativity and flexibility.

**
What about moving across two packages? (e.g., Transana --- ATLAS.ti)
Sometimes it is needed and actually serves to support the research process. There are certain projects that will call for you to move across packages. I'll send you all a chapter in which I describe my own use (and rationale for it) across two packages. Part of this move to work across packages is grounded in the need to 'manipulate' the package to do what I (as the analyst) need it to do. 

**
Are there packages that support PDFs for literature reviews?
I would recommend exploring NVivo and/or ATLAS.ti. Both of these packages support PDF files and are quite useful packages for systematic literature reviews.

**
How much space (in a paper) should one give to discussing how digital tools are being used within the research process? 
This is a challenging issue. There has been SO little discussion in the qualitative community around what should be shared and why. I would argue that we need to share more than we typically have shared. Why? The process becomes more transparent when we make explicit how we used particular tools. For instance, rather than stating, "I used ATLAS.ti throughout," it would be more helpful to say what features were used. Perhaps there is minimal space or reason for sharing the details of a given feature, but as a reader and evaluator of what I'm reading I want to know what features were used (e.g., coding, memoing, etc.). In general, we've tended to do a decent job talking about the tools that support data collection; however, when discussing reflexivity (did we maintain an audio diary or a blog, for instance) and data analysis, there has been far less explicit discussion. One of the critiques of qualitative research is that the analysis occurs in a 'black box.' This is a fair critique. However, it is up to us as qualitative researchers to make explicit to others what happened in this 'box,' as we maintain a commitment to make the process transparent. Transparency and rigor are very much linked (and I would suggest ethical practice is central to the process, as ethics is made evident through transparent reporting).

**

Wednesday, March 5, 2014

Post-class reflections: The process of generating data

I really enjoyed class yesterday. There were so many insightful questions posed, highlighting the ways in which qualitative researchers must continually position themselves as 'questioners'/reflexive practitioners.

As I reviewed the "tickets-out-the-door," I noted that one of the overarching 'themes' was around naturally-occurring/researcher-generated data. Questions included...
  • Why focus on these data types in a course on digital tools? 
  • What are the affordances of considering these two types of data? 
  • Do these types of data (when categorized this way) result in generating/producing validity?
I'll begin by reiterating that these two constructs (researcher-generated and naturally occurring) are not viewed as mutually exclusive, but rather as a tool for understanding differences between data types. So, why spend time thinking through these differences in a course focused on digital tools? First, in general, emergent technologies are affording researchers opportunities to explore new forms of naturally-occurring data (e.g., online communities). Yet, these 'new' forms of data bring with them emergent ethical dilemmas and the need to carefully examine the 'place' of the researcher. As was mentioned in class yesterday, the researcher is always already present. Our power as researchers is in place, yet how this plays out may look different across data types. So, it is important to begin thinking through data types, recognizing that these 'types' will also be linked to our methodological positions/orientations.

Second, what might these new forms of naturally occurring data afford us? Access is one particular gain. We may be able to learn with and from new communities. However, we must also keep in mind that this 'access' is often a privileged access. Technologies typically require resources. As such, we need to remain reflexive about who has access to participate in work that is bound by technological access. What socio-economic and geographical bounds might limit participation? Another gain may be the opportunity to engage in the study of social life in contexts not examined previously. New contexts lead to new understandings. This highlights the importance of thinking about the Internet as not only a tool for research, but a research context as well. As one of the readings noted this week, contexts like Facebook, Twitter, and any other 'new' social media site on the horizon are one of many social spheres. Qualitative researchers often ask questions about how meaning comes about and the place that everyday life plays within this meaning-making process. As such, exploring across social spheres is often a central part of our work. These are just a few ways to think about the affordances.

The question around validity and new data types fascinates me. I would be interested in hearing all of your thoughts around this. Indeed, being able to access varied data types would allow for the pursuit of variability, which is important in relation to validity. Other thoughts?

****
There were a few comments/questions around the differences between online and face-to-face data collection. I copied a table below that I think is helpful for thinking through the gains/losses. I encourage you to fill it out, particularly if you are considering how you might collect data.


| Observations | Gained | Lost |
| --- | --- | --- |
| Face to face | | |
| Researcher-initiated online discussion (using blog, social networking site, email discussion list, or other tool) | | |

| Interviews/focus groups | Gained | Lost |
| --- | --- | --- |
| Face to face | | |
| Phone | | |
| Email/asynchronous | | |
| Instant messenger/text-based synchronous chat | | |
| Video conferencing (2-way audio/video, e.g., Skype) | | |

****
There was another question about how to balance the ethical concerns of participants' views of themselves with the researcher's interpretation.

"Ethical balancing" around around our interpretative practice is always going to be central to our work as qualitative researchers. In fact, some scholars have highlighted how this very 'balancing act' becomes a validity move within our work. For instance, perhaps it is at this moment in our research process that we return to the participants with our initial interpretations. Yet, what if there is a vast difference between how we see what is happening? These are validity questions. One thing we need to consider is how we will report this difference, as reporting it highlights how we are going about validating our findings. This is also a space for us to lean into the methodological assumptions that drive our work. These relate to researcher power (who has the last word?) and the epistemological and ontological claims we make related to 'truth' and 'meaning.' Are we reporting one truth or one of many possibilities? How do we share this with participants? Should we? I would argue that the methodological positions you take up will also inform how you answer these questions. If you all would like to explore these issues further, let me know! I'd be happy to share some readings around validation strategies and dilemmas in qualitative research.