Council of Communication Associations Meeting
April 28, 2003
Washington, DC
The meeting was called to order by Chair Linda Putnam, Texas A&M University. Attendees introduced themselves.
The bulk of the meeting centered on the National Research Council’s project to include communication in the next study of research doctorates. The Council heard updates about the project, as well as reports from several schools that conducted a pilot study for NRC in the spring.
Several schools raised questions about the objective measures, such as how books are counted and the size of the journal list. James Watt, RPI, noted issues with administering the study in a combined social science and humanities department. Many of the humanities faculty felt the study followed the “big science” model.
Ed Fink, University of Maryland, raised queries about who should be included as faculty when the criterion covers members of dissertation committees, faculty who teach doctoral students, and faculty who supervise them. Questions centered on the specific time period in which this occurs or occurred and on whether faculty in cognate areas outside the department should be listed. Should emeritus faculty be listed? What about adjunct faculty?
Chuck Salmon, Michigan State, and Steve McDowell, Florida State, mentioned technical problems in the study: 1) some people listed did not receive the survey, 2) some thought it was spam because of the subject line, 3) some faculty had trouble opening and completing the survey, 4) faculty had trouble checking off the appropriate box, 5) concerns about subfields and taxonomy, and 6) problems with tracking people in interdisciplinary programs.
Salmon noted that faculty raised questions about why NRC was asking for demographic data, including last employment, marital status, number of children, etc. The timing was bad, and some faculty were on spring break. Faculty were also confused about what happened to their survey responses after they hit “save file.”
The Council heard a presentation by Jim Voytuk, Senior Staff Officer of NRC, concerning the project. Voytuk reported that NRC has 4 major committees that are working in different areas of the study:
1. Reputational measure and presentation of data
2. Student outcomes and processes
3. Taxonomy of fields and subfields
4. Objective measures for faculty and programs
He reported that eight pilot schools were included in the study to test the institutional, program, and faculty questionnaires. The dean of the graduate school at most institutions served as the main contact, and surveys were distributed through Offices of Institutional Research. NRC originally planned for three types of student surveys, but due to privacy issues for students, it may be difficult to collect this data. NRC is now moving to have only one questionnaire, for students admitted to candidacy as determined by departments. When asked how to identify a doctoral student for counting purposes, Voytuk said that NRC is relying on the institution’s best judgment of the intent of the student. In some programs, all doctoral students will have an M.A., and in other programs entering graduate students will be doctoral. Institutions need to do a good job with the data given.
In 1993, NRC relied on institutions to determine programs. Institutions were rated in a particular program area if the area had produced three doctoral students in the past three years. NRC asked institutions to identify which programs on the list existed at their school. This did not produce the best results. It might be better if academic associations helped provide this list and gave a cross-reference for the real Ph.D. programs recognized by the field. In 1993, most programs were aggregated at the institutional level. This is open for discussion and decision, particularly if different programs are housed in different colleges.
In response to questions raised about which faculty to include, Voytuk responded that faculty who serve on dissertation committees outside the department should not be included. Faculty should be supervising doctoral students and should be included if they are part of a program’s Ph.D. training. This determination should be based on the current assignments of faculty members. In most cases, deans are involved in deciding which faculty to include in a program; however, members of the programs should be able to review this list.
Voytuk outlined several parts in the process:
Reputational Survey — Raters will be randomly selected from the faculty pool at the institutional level, with the aim of having 200 raters for each school. Each rater will assess 50 programs according to the following criteria:
a. How well the rater knows the program
b. The quality of the program’s research faculty
c. How much the program has changed in the last five years: improved, stayed the same, or declined
Faculty Raters — Raters will be sampled from the ranks beginning with full, then associate, then assistant professors, based on the number of faculty and the number of Ph.D. students in a program; raters will not rate their own programs or the institution from which they received their Ph.D.s.
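A minimal sketch of how these selection rules could be implemented, assuming a simple faculty roster (the data, field names, and function below are hypothetical illustrations, not NRC's actual procedure):

```python
import random

# Hypothetical faculty roster; NRC's actual files would come from
# the institutions' Offices of Institutional Research.
faculty_pool = [
    {"name": "Smith", "rank": "full", "program": "Comm-A", "phd_from": "Univ-1"},
    {"name": "Jones", "rank": "associate", "program": "Comm-B", "phd_from": "Univ-2"},
    {"name": "Lee", "rank": "assistant", "program": "Comm-C", "phd_from": "Univ-3"},
]

def sample_raters(pool, program, institution, n):
    """Sample up to n raters for one program: full professors first,
    then associates, then assistants, excluding anyone in the program
    itself or whose Ph.D. came from the institution being rated."""
    eligible = [f for f in pool
                if f["program"] != program and f["phd_from"] != institution]
    chosen = []
    for rank in ("full", "associate", "assistant"):
        group = [f for f in eligible if f["rank"] == rank]
        random.shuffle(group)  # random selection within each rank
        chosen.extend(group[:n - len(chosen)])
        if len(chosen) == n:
            break
    return chosen

raters = sample_raters(faculty_pool, program="Comm-A", institution="Univ-1", n=2)
```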
Sub-fields — The aim is to match raters with programs of the same type or in the same sub-field areas to produce better ratings. NRC will not report findings by subfield, but subfields will aid in describing what a program is, whether to aggregate it or not, and what comprises very different areas of the field.
Data analysis and output — The plan is to use a bootstrapping technique, or random halves, in which random samples of the ratings are drawn repeatedly to produce a range of rankings for each program rather than a single rank, e.g., 10th to 30th or 15th to 21st. Data analysis will center on fields as a whole for these rankings. Raw data will be placed on a CD, which can also provide essays about different fields: constraints, future directions, sub-fields. Academics can then use the raw data as needed to produce different information for fields and sub-fields. In the last survey the CD had two data files, one for institutions and one for individual raters with no institution attached.
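A minimal sketch of the random-halves idea, assuming a simple ratings matrix (the sizes, values, and variable names are hypothetical, not NRC's data or code): repeatedly draw a random half of the raters, recompute each program's mean rating and rank, and report the spread of ranks across draws.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ratings matrix: 50 programs rated by 200 raters.
n_programs, n_raters = 50, 200
ratings = rng.normal(loc=3.0, scale=1.0, size=(n_programs, n_raters))

n_resamples = 1000
half = n_raters // 2
ranks = np.empty((n_resamples, n_programs), dtype=int)

for i in range(n_resamples):
    sample = rng.choice(n_raters, size=half, replace=False)  # random half
    means = ratings[:, sample].mean(axis=1)
    order = np.argsort(-means)               # rank 1 = highest mean rating
    ranks[i, order] = np.arange(1, n_programs + 1)

# Report a range of ranks per program (here the interquartile range),
# e.g., "10th to 30th," rather than a single point estimate.
lo, hi = np.percentile(ranks, [25, 75], axis=0)
for p in range(5):  # first few programs
    print(f"Program {p}: ranked roughly {int(lo[p])} to {int(hi[p])}")
```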
Timetable — NRC is not able to begin this fall as planned. Fundraising is still ongoing, and NRC is trying to raise $1.5 to $2 million in difficult economic times. The current goal is to conduct the study in 2004-2005, with the report available in 2006.
Objective Measures — The following will be used:
1. Grants — typically from federal agencies, but this study will also include foundations, travel grants for the humanities, etc. (e.g., the National Endowment for the Humanities).
2. Awards — typically national awards and fellowships; a list of about 20-25 was included in the past study. None of the hard science awards are included.
3. Books — ISI has some data on books. NRC plans to count books in this study and is still deciding how to do so: via the Library of Congress, Books in Print, or academic presses.
4. Book Chapters — these are identified in ISI files (one of 15 identifiers) and will be included. This includes proceedings as well as edited books from commercial publishers.
5. Authorship — NRC counts up to seven authors on a publication.
6. Quality Indicators — when asked whether NRC weights publications by quality, Voytuk responded:
a. Weighting is typically based on citations or impact data. The last study located a span of data from 1981 to 1991 but did not use it because prior employment of faculty was not available as a check.
b. With information on prior employment, it would be possible to track faculty records for more than a five-year period, but ten years is likely to produce an unmanageable amount of data.
In response to the report, the Council appointed a task force to monitor the NRC project. Linda Putnam was appointed to chair the task force. Other members are Ed Fink of the University of Maryland, Bill Balthrop of the University of North Carolina, Ted Glasser of Stanford University, Charles Self of the University of Oklahoma, Scott Poole of Texas A&M University, and Charles Salmon of Michigan State University. The task force will coordinate efforts to prepare the field for the NRC study, including working on a comprehensive list of Ph.D. programs, working with ISI to add journals to its database, preparing letters and materials for distribution to deans and department heads about the NRC study, and continuing to work with NRC on any updates to the study.
The task force will work through CCA to provide NRC with a list of Ph.D.-granting universities and programs in the field (a list of programs, by university, that grant the Ph.D. in communication). The group will get the list of the 20-25 types of national awards that NRC used in the last study and circulate it to CCA members. The task force will also work on a typology of the subfields to bring back to CCA for discussion; CCA will eventually submit it to NRC for inclusion in the final study.
The task force will contact the Society for Cinema Studies for a list of film studies Ph.D. programs and input on how to classify film studies in a typology. It will also discuss with association members ways to help NRC with the objective indicators, e.g., which foundations that have provided grants to the field should be included, and how to get objective measures of books (which academic and commercial presses to include).
The Council then discussed a report from Tim Stephen of CIOS regarding a new database of full-text journals being created by EBSCO. He expressed concern about a commercial group trying to create a monopoly on these materials. Stephen is proposing a new, non-profit database called ASCUS and encouraged CCA members to consider it instead of EBSCO. Discussion followed. The consensus was that this is a decision each group will have to make on its own; CCA will take no formal position on this issue. Putnam will write Stephen with this information.
The Council then conducted elections for chair. Charles Self of the University of Oklahoma was elected chair for a one-year term. The executive directors of the member associations will conduct a conference call of all the groups to select a new executive director. [Michael Haley of ICA and Jennifer McGill of AEJMC/ASJMC were elected to serve as co-executive directors of CCA for three years. McGill will handle the administrative details, while Haley will serve as the Washington, DC, liaison (for hotel arrangements or issues related to NRC).]
Suggestions for other agenda items that CCA might consider in the future include obtaining a Communication Directorate within the National Science Foundation, working with COSSA to lobby Congress for money for communication research, and coordinating the efforts of the field in response to EBSCO’s request to digitize our journals.
Related to the NSF directorate, the field would need at least 50 proposals a year coming through the program director. It would also take about $5 million in new congressional dollars to support such a program.
The next meeting date will be Friday, Oct. 24, 2003, in Washington, DC. [This was later changed to Monday, Oct. 27, 2003, to accommodate representatives’ schedules.] There being no further business, the meeting was adjourned.