Comments: Gated by NETNEWS@AUVM.AMERICAN.EDU
Path: sparky!uunet!paladin.american.edu!auvm!PSUORVM.BITNET!HJDM
Message-ID: <QUALRS-L%92122701092899@UGA.CC.UGA.EDU>
Newsgroups: bit.listserv.qualrs-l
Date: Sat, 26 Dec 1992 22:08:22 PST
Sender: Qualitative Research for the Human Sciences <QUALRS-L@UGA.BITNET>
From: David Morgan <HJDM@PSUORVM.BITNET>
Subject: Pt. 2 Quant Journals
Lines: 125

In the previous posting, I argued that rhetorical issues were
more important than reliability issues in getting qualitative work
published in quantitative journals. The reason is that if your
rhetoric is convincing enough, the reviewers may even congratulate
you on using new methods to tackle difficult problems. Of course,
it also helps if you are working on a problem that is central to
their field. But there is no guarantee that this will be enough,
so it also helps to convince them that you have done your work in
a careful and systematic fashion--what I am terming the
reliability issue.

There is no getting around the fact that numerical evidence is
the most effective way to convince quantitative reviewers that
your procedures are acceptable. Thus, I would recommend, if at all
possible, starting the presentation of your results with a couple
of tables. I want to be clear, however, that I am *not* advocating
that anyone should throw together some tables just to sell
qualitative results. If nothing else, you should forgo this
cynical exercise because there is very little likelihood that it
will work. If your numbers don't really have much to do with your
insights, this will be both obvious and annoying to a
quantitatively trained reviewer. If you really do not have a
numerical way of summarizing your argument, then you are better
off relying on rhetoric than trying to fool your reviewers in
their acknowledged area of expertise.

The simplest way to produce the kinds of numbers that I am
advocating is to do some counting as part of your coding
procedures. I can think of both a strong and a weak argument in
favor of including this kind of counting in a qualitative
analysis. The weak argument is that it is all right to use numbers
in summarizing qualitative analyses because we are not talking so
much about how conclusions are reached as about how they are
presented. In other words, if counts are just an alternative way
of presenting what you feel is worth knowing in your data, then
why not use them to communicate with your intended audience?
(This is similar to points I made in the first posting.)
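
(For those who keep their coded transcripts on disk, here is a
minimal sketch in Python of the sort of tallying I mean. The file
name, layout, and code names are hypothetical rather than taken
from any particular project; the point is only that a frequency
table can fall straight out of the coding pass.)

    from collections import Counter

    # Hypothetical input file: one coded segment per line, laid out as
    #   participant_id <TAB> code <TAB> segment text
    # e.g. "P07\tcaregiver-stress\tI just never get a night off..."
    def tally_codes(path):
        counts = Counter()
        with open(path) as f:
            for line in f:
                participant, code, text = line.rstrip("\n").split("\t", 2)
                counts[code] += 1
        return counts

    # Print a simple frequency table, most common codes first.
    for code, n in tally_codes("coded_segments.txt").most_common():
        print(f"{code:30s} {n:5d}")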

I should be clear that I am not talking about using the counts
as the primary summary of one's findings. Instead, what I have in
mind is to "motivate" the rest of the analysis through a
preliminary demonstration of "what is in the data" using tables of
counts. When I do this in my own work, the heart of my "results
section" still consists of stating and elaborating the central
themes that I found in the data, and I usually devote more total
page space to quotations than to tables. But I use the tables to
start the presentation, and then follow them with a much thicker
description of not just *what* the numbers showed but also of
*why* those were the patterns in the data.

This leads me to the strong argument in favor of counting
things in qualitative data. If you in fact have something that is
countable, counting can be a useful way of uncovering patterns in
your data--provided, of course, that you then pursue the deeper
question of why those patterns exist. I have a brief piece coming
out in _Qualitative Health Research_ in January that makes the
case for this approach, which I call qualitative content analysis.
In that article, I go to some effort to distinguish this approach
from traditional forms of content analysis, which use computerized
searches of textual data to produce tables that constitute the
major results of the research. In my view, almost any coding that
is worth doing requires doing your own reading, and the numerical
patterns that one finds in text are just a means of reentering the
data to ask more sharply focused questions.
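
(To continue the hypothetical file layout from the sketch above:
once a count flags a pattern, the same file can serve as an index
for pulling every segment that carries a given code back up for
rereading. The code name in the example is, again, invented.)

    # Same hypothetical layout: participant_id <TAB> code <TAB> segment text.
    def segments_for(path, wanted_code):
        hits = []
        with open(path) as f:
            for line in f:
                participant, code, text = line.rstrip("\n").split("\t", 2)
                if code == wanted_code:
                    hits.append((participant, text))
        return hits

    # Reread everything tagged with the (invented) code "caregiver-stress".
    for participant, text in segments_for("coded_segments.txt", "caregiver-stress"):
        print(f"[{participant}] {text}")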

(I might as well add that I wrote this article in response to a
first round of reviews on a substantive article that will also be
coming out in a later issue of that journal. In this case, two
reviewers were very concerned that I had used numerical procedures
in a qualitative analysis. Admittedly, what I was doing was
unfamiliar enough to require some further explanation, but it does
point out that problems surrounding the use of numbers are not
limited to quantitatively oriented reviewers who are uncomfortable
with qualitative analyses.)

As a final topic, I want to say a few words about numerical
assessments of inter-rater reliability. This is an issue that
shouldn't really concern qualitative researchers who aren't using
countable code categories. But when we do use such codes, whether
through the weak argument that they can be a useful way of
summarizing insights that were arrived at through other means or
through the strong argument that counts can be useful in analyzing
some forms of qualitative data, then we do need some certainty
about the integrity of our coding efforts. More to the point, your
quantitative reviewers will probably demand it. (Good discussions
of how to calculate these various "coefficients" can be found in
textbooks on the more numerical variant of content analysis.)
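
(For concreteness: Cohen's kappa is one such coefficient. Here is
a minimal sketch in Python, assuming two coders have each assigned
a single code to the same list of segments; the example codes are
invented.)

    from collections import Counter

    def cohens_kappa(codes_a, codes_b):
        """Cohen's kappa for two coders' codes on the same segments."""
        n = len(codes_a)
        assert n == len(codes_b) and n > 0
        # Observed agreement: proportion of segments coded identically.
        p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
        # Chance agreement, from each coder's marginal code frequencies.
        freq_a, freq_b = Counter(codes_a), Counter(codes_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)  # 1.0 is perfect agreement

    # Two coders, ten segments, three invented codes.
    a = ["stress", "support", "stress", "coping", "support",
         "stress", "coping", "coping", "support", "stress"]
    b = ["stress", "support", "coping", "coping", "support",
         "stress", "coping", "stress", "support", "stress"]
    print(f"kappa = {cohens_kappa(a, b):.2f}")  # prints kappa = 0.70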

Do we really know much more after we compute these
reliabilities? In principle, the answer is yes, because they
demonstrate the consistency of our coding principles. In practice,
however, I doubt that they tell us much, because a determined
researcher can almost always reproduce his or her coding
procedures. One of my clinical colleagues laughingly dismissed
inter-rater reliability as a "folie à deux." And what researcher
would pay good money to a coder who wasn't able to use the
project's coding system (i.e., to apply the codes in the same way
that the researcher would)?

In the limiting case, these calculations undoubtedly do weed
out a few efforts at coding that are so personal that the
researcher cannot find anyone else who is capable of reproducing
them. I personally find a more valuable use for these procedures
in the early stages of coding: to point to code categories that
need more thought and to alert coders when they are in fact seeing
different things in the data. Truth to tell, however, quantitative
researchers do attach considerable importance to these numbers,
so, when we go to their journals, it behooves us to do likewise.

Inter-rater reliabilities are a good place to close because they
can indeed be an example of what qualitative researchers fear
most: a requirement that we produce numbers which we feel are
virtually meaningless in order to get published. I personally do
not find this to be a serious problem because it does not force me
to distort my data or to make misstatements about how I did my
analyses. In my case, it simply asks me to do a little something
that I personally do not feel should be necessary. But if I
started complaining about all the things that journals expected me
to do just to get published, it would take up a lot more bandwidth
than I've already wasted with this discussion!


==>David Morgan hjdm@PSUORVM