Methods

From this theory of hegemony online, I deduced 14 hypotheses that could be tested quantitatively with data gathered from users of online communication.  Each hypothesis is listed in the results section of this paper.  I tested them with a triangulation of methods: a unique digital diary, a questionnaire, and content analysis.  In brief: I recruited 137 unpaid volunteers from three Washington-area universities that draw a diverse student body from across the nation and the world: the University of Maryland, College Park; Howard University; and The American University.  Each volunteer signed an informed consent form, which had been approved by the Human Subjects Review Committee at the University of Maryland College of Journalism.  Each person participated in the study once, between Oct. 28 and Dec. 19, 1998.

The study subjects created digital diaries of their online movements for 20 minutes, either by copying the URL of each Web page they chose to visit and pasting it into a spreadsheet I gave them on an anonymous disk, or by entering onto the spreadsheet a brief description of each e-mail message they chose to read or write. (Appendix C is a printout of one completed diary and questionnaire.)  They also noted on the spreadsheet the elapsed time, in minutes and seconds, at which they opened each new Web page or e-mail message, using a software program I gave them that kept a digital clock visible in one corner of the screen.  Before the subjects began their online session, I took about 10 minutes to train them on the mechanics of completing the digital diary and to answer questions.  I told them to use the 20 minutes to do whatever they would normally do online for that amount of time, whether in e-mail or on the Web.  And I asked them to use each Web page or e-mail message until it no longer held their interest; i.e., not to race through it because they were conscious of the elapsed time.
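For illustration, a single diary entry might be recorded as follows. The field names and values here are my paraphrase of the spreadsheet layout described above, not the actual column headers or a real entry from the study:

```python
# One hypothetical diary row per Web page or e-mail message.
# Field names are illustrative; the study's spreadsheet headers may differ.
diary_row = {
    "elapsed_time": "04:37",                     # mm:ss on the on-screen clock when the item was opened
    "item": "http://www.example.com/news/",      # pasted URL, or a brief description of an e-mail
}
```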

After 20 minutes online, the subjects answered questions about their online experience and their off-line demographics.  Lastly, they responded to a 30-item gender scale (Bem, 1981) and a seven-item alienation scale (Travis, 1992).

Subjects responded to the gender and alienation scales, and rated the quality of each Web page or e-mail message they chose, by using a “temperature scale” that I displayed for them (Appendix D).  Rather than use a Likert-type scale with bounded, discrete, ordinal categories, I devised an unbounded scale that yields interval data, which can be analyzed with more robust statistical tools, such as linear regression.  Along with collecting interval data, my intention in using this scale was to give subjects as much freedom as possible to distinguish among the online texts they chose and to express the strength of their feelings.  With Likert scales, much precision is lost between choices such as “occasionally” and “some of the time,” while “never” and “always” are choices that almost never literally apply.

Leaving the scale unbounded meant that each subject used a different range for his or her ratings, but I retrospectively standardized the scores by transforming each person's ratings into Z scores, based on the mean and standard deviation of that person's own ratings.
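A minimal sketch of this within-subject standardization, assuming the familiar Z-score formula (subtract the subject's own mean, divide by the subject's own standard deviation); the function name and sample ratings are hypothetical:

```python
from statistics import mean, stdev

def standardize(ratings):
    """Convert one subject's raw temperature-scale ratings to Z scores,
    using that subject's own mean and standard deviation."""
    m, s = mean(ratings), stdev(ratings)
    return [(r - m) / s for r in ratings]

# A hypothetical subject who rated four items on an unbounded scale:
print(standardize([-20.0, 10.0, 40.0, 95.0]))
```

Because each subject is standardized against his or her own distribution, the transformed ratings become comparable across subjects despite their differing ranges.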

The 137 digital diaries yielded a sample of 993 Web pages, which I content-analyzed for latent hegemony, based on whether they exhibited dominant culture or counterculture and closed architecture or open architecture.  Appendix E diagrams the coding criteria, which were pretested, then twice simplified and refined.  Appendices F, G, H, I, and J contain examples from the sample showing each of the five types of Web pages in my coding schema: negotiated hegemonic, negotiated counterhegemonic, closed hegemonic, closed counterhegemonic, or pluralistic, respectively.
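The decision rules themselves are in Appendix E; as a rough sketch of how the two dimensions combine into the five categories, the following assumes that "negotiated" pages are the open-architecture ones and that pluralistic pages give both cultures roughly equal voice. Those two mappings are my inferences from the category names, not the appendix's actual rules:

```python
def code_page(culture, architecture):
    """Rough sketch of the 2 x 2 (+1) coding schema.
    culture: "dominant", "counter", or "mixed" (assumption: "mixed" -> pluralistic)
    architecture: "open" or "closed" (assumption: "open" -> "negotiated")
    """
    if culture == "mixed":
        return "pluralistic"
    prefix = "negotiated" if architecture == "open" else "closed"
    suffix = "hegemonic" if culture == "dominant" else "counterhegemonic"
    return f"{prefix} {suffix}"

print(code_page("dominant", "closed"))  # -> "closed hegemonic"
print(code_page("counter", "open"))     # -> "negotiated counterhegemonic"
```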

Intercoder reliability was analyzed by drawing a random subsample of 125 Web pages from the overall sample of 993, based on a formula by Lacy and Riffe (1996, p. 968).  Five colleagues coded the subsample, and their coding was compared with mine.  Our coding matched on 105 of 124 pages (one Web page in the random sample had expired), yielding an agreement rate of 84.7% and a kappa value of .72 (p < .0005), which indicates good intercoder reliability.
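A minimal sketch of how these two figures are computed: the agreement rate follows directly from the counts reported above, while the kappa illustration uses hypothetical labels (the study's actual page-by-page codes are not reproduced here) and scikit-learn's implementation of Cohen's kappa as a stand-in for whatever software was used at the time:

```python
from sklearn.metrics import cohen_kappa_score

# Raw agreement from the counts reported above: 105 matches on 124 pages.
print(round(105 / 124 * 100, 1))  # -> 84.7

# Kappa corrects raw agreement for chance. Illustration only: these
# labels are hypothetical, not the study's actual codes.
mine      = ["closed hegemonic", "pluralistic", "negotiated hegemonic", "closed hegemonic"]
colleague = ["closed hegemonic", "pluralistic", "closed hegemonic", "closed hegemonic"]
print(cohen_kappa_score(mine, colleague))  # chance-corrected agreement
```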