Interview with Cara Kaufman

Cara Kaufman and Alma Wills are the partners behind the Kaufman-Wills Group and the authors of the new study, The Facts About Open Access: A Study of the Financial and Non-Financial Effects of Alternative Business Models for Scholarly Journals. The study was sponsored by the ALPSP, AAAS, and HighWire Press and officially released on October 11.
SPARC Open Access Newsletter, issue #91
November 2, 2005
by Peter Suber
Cara Kaufman is also my sister. If her report contained more good news for OA journals, then I'd be glad of her support. But because OA journals took some hits in her report, I can be glad of the evidence that she and I are independent. (Yes, I'm an optimist, and yes, we're on good terms.)
OA activists have criticized some of the methods and conclusions of the new Kaufman-Wills report, and I offered Cara a chance to respond to these criticisms in an email interview. I appreciate her willingness to do so.
For the interview, I drew criticisms of the report primarily from the following three sources, all posted to SOAF on October 14, 2005. When I quote Jan Velterop, Fred Friend, or BioMed Central during the interview, I'm quoting their SOAF "reviews".
From Jan Velterop, October 14, 2005 (Open Access Director at Springer, formerly publisher of BioMed Central)
From Fred Friend (JISC consultant and former director of scholarly communication at University College London)
From BioMed Central, October 14, 2005.
These three reviews raise objections that are civil and fact-based, and lend themselves to a civil and fact-based discussion of the issues that would benefit everyone. My interest is simply to get at the facts about open-access journals, just like the Kaufman-Wills Group, ALPSP, AAAS, and Highwire.
Cara asked me to include this opening statement: Before answering your questions, Peter, we want to thank you and SPARC for providing this forum for responding to questions raised about our study. We also want to thank those individuals who took the time to read the survey and ask questions, offer critiques, and provide alternate interpretations. It is very important to us that the work we do supports the broad dissemination of quality scholarly content. Our professional lives have been spent helping scholarly publishers achieve this goal. There are thoughtful, well-intentioned individuals on both sides of the Open Access "argument," if you will. The report's main objective was "to inform the Open Access discussion". Toward this aim, we welcome post-publication peer review. Although time may not always permit a direct response, we hope that we and other interested groups will try to collect any and all ideas for the benefit of future research.
PS: Why the title, "The Facts About Open Access"? I'm sure that you, Alma Wills, and your sponsors are all aware that OA is much broader than OA journals and includes OA archives or repositories. Was there a reason not to make clear in the title that the study was limited to OA journals?
CK: No decision was made to exclude the phrase "Open Access Journals" in the title; the phrase "scholarly journals" appears in the sub-title of the report. When the survey was initiated in March 2004 --before the discussion about Open Access was publicly broadened to discussion of repositories by the NIH, the Wellcome Trust, and the RCUK--it was our impression that in most scholarly publishing circles the phrase "Open Access" was used more often in connection with journals than with archives or repositories. We did not anticipate any confusion.
PS: Fred Friend points out that "the 'online usage' statistics in Table 23 do not appear to have been adjusted to reflect the fact that DOAJ journals usually contain fewer articles, an adjustment which would give a fairer picture of the article download and full text page view situation." Do you agree?
CK: First, I should point out the "Special note" that appears on page 37 and cautions readers about the response error evident in the general characteristic questions dealing with manuscript statistics, circulation and usage, number of paid subscribers, and citations (tables 22-25). For these questions, the response rate was so low that the data cannot be regarded as statistically significant. Therefore, it is impossible to tell whether the results are or are not characteristic of the cohort as a whole. The question about usage was difficult to answer, as standardized usage stats such as those from COUNTER were not in widespread use. Also, to answer the question accurately, respondents had to know how to access and interpret certain usage statistics and then add together the usage statistics from all web sites containing their journals' articles --proprietary web sites, online hosts, third party licensors, institutional repositories, author web sites, and national archives.
By comparing usage by cohort, we hoped to measure the relative usefulness of each cohort's journals to scholars. Below table 23, we observed, "Of those journals responding, online usage among ALPSP non-profit, AAMC, and HW subset journals appeared to be much higher than ALPSP for profit and DOAJ journals." If the findings had been statistically significant, then it might be that Subscription Access and Delayed Open Access journals were of greater use to scholars than Full Open Access journals. If, over time, Full Open Access journal usage was found to be growing relative to Subscription Access journal usage, it might point to the growing prominence of Full Open Access journals. In our opinion, adjusting the data as Fred Friend suggests would help answer a different aspect of the usage question. If we could see that the per article usage was just as high for Full Open Access as for Delayed Open Access journals, for instance, we might conclude that each article was of equal use to end users. This would tell us something about the relative usefulness of each cohort's individual articles on average. We believe that comparing usage in either manner is useful and fair.
PS: Jan Velterop points out that "[t]he ALPSP, AAMC and HighWire journals considered are well established; on average, they published their first print issues 40 years ago. The DOAJ (Directory of Open Access Journals) titles are much newer; half started publication during the last decade. Might this mean that the comparison was not exactly 'like-for-like'?" Can you tell from your data, or can you estimate, how OA journals would stack up against subscription journals if you had compared journals of the same age?
CK: On page 7 of the published report, we write "The ALPSP, AAMC, and HW subset journals studied were well established; on average, they published their first print issues 40 years ago (table 16, page 33). Full Open Access journals were much newer; half started publication only in the last decade."
We agree with Jan Velterop that the comparison of cohorts was not necessarily "like-for-like". Indeed, we acknowledge the differences in cohorts throughout the report. On page 7, we add that "It is important to remember [Full Open Access journals'] relatively younger age when looking at factors such as manuscript submissions, number of articles published, impact, and distribution statistics." To moderate questions about viability raised in the report's Introduction, we pondered on page 24 whether the newness of DOAJ journals was a more significant factor in determining a journal's long-term viability than its business model.
At the same time, if we cannot talk about a cohort as a group for statistical purposes, then we cannot talk about the same group for the purposes of making other claims such as widespread roots and success. And, as others have pointed out, if these are early days (and they are), then just as it is too early to declare non-viability, it is too early to declare success. Given the differences in terminology used to measure success by different journal groups, it was understandable why one group of journals' success ended up being measured by the standards of the other group. We hope that future studies will find ways to adjust for this.
It would be interesting to know, as you say, how OA journals would stack up against subscription journals if we had compared journals of the same age. The data from all the surveys were entered into the SPSS statistical software package, which does support cross tabs. If the sponsors would like to compare several journal characteristics looking only at each cohort's journals established up to five or seven years ago, we could do so.
In the absence of data, you asked whether we could estimate how the younger journals in each cohort would compare. From our experience in launching new journals and our close study of various data elements from this study, we think that the majority of Full Open Access journals --those published by the smaller publishers-- still would differ in several regards from the comparably young journals from other cohorts studied. For one thing, we think that Full Open Access journals on the whole would be more likely to be published by an academic department or individual than younger journals in the other cohorts. We might also find that Full Open Access journals overall would be more likely to be supported by volunteer labor and grants than other young journals in other cohorts. We think that the newer journals would be more likely to be Full Open Access, Delayed Open Access, or Optional Open Access than the more established journals --no matter the cohort. In short, we think that we would find that some differences are a function of age and some are not.
In completely separate research, we have heard from relatively small publishers that they have started new journals as Open Access and explain their decision by adding that they think the demand for publication by authors will be greater than the demand for subscriptions by readers. We also have talked with long-established small, niche journal publishers that have said they are transitioning from Subscription Access to Open Access because they are losing so many subscribers and need to find an alternate revenue stream. One more observation: in our years of journal acquisitions and development, we often have seen that new journals begin with one or two revenue streams, but those journals that are most successful (in terms of readership, number of submissions, impact factor, financial stability, and years in existence) eventually are supported by multiple revenue streams, with more than one revenue stream contributing a substantial portion of total revenues.
From the study (tables 29 and 30), we see that all journals get a proportion of their funds from some subset of these: advertisers, sponsors/foundations, research funders, authors (whether article payments or page/color charges), readers, and institutions (via subscriptions or memberships). It will be interesting to track how the proportions (we started looking at these in tables 31 and 32) change over time. In our opinion, it is logical that the successful strategies of either "camp" will be imitated by the other "camp"; for example, if memberships are a good idea for Open Access, then we will see them for traditionally non-Open Access journals as well. The idea that there will be an even greater merging of the various characteristics of various publishing models in the future seems likely. After all, both Subscription Access and Open Access journals were planning to test or adopt a different online business model in the near future (table 27).
PS: Jan Velterop observes that you cover full OA models to the exclusion of hybrid OA models which he believes are best suited for established journals. Can you comment on this decision and how it might have affected your results?
CK: Hybrid OA journals were not excluded. We purposely expanded the study to include additional cohorts to ensure that virtually every type of model was represented. As we said on page 3, "Initially Phase 1 of the survey was to be limited to journals offering different models of Open Access: for Full Open Access, those included the Directory of Open Access Journals and for Delayed Open Access, journals published on the HighWire Press platform." Especially to satisfy our questions about the prevalence and impact of various types of access and financial models, we added AAMC and ALPSP populations to embrace the broadest possible cross-section of journals.
Table 26 (page 40) shows access models by cohort. One of the options provided was "Most articles by subscription; free if author-side paid." For Phase 1, only one journal, a for-profit ALPSP member journal, responded. As you know, Optional Open Access is a relatively new but seemingly growing model. I would venture to say that it was even less common when the surveys were mailed. Either publishers with Hybrid Open Access journals did not respond to the survey, or their numbers (at the time) were such that their absolute response was minimal.
In phase 2, in-depth interviews with five publishers offering Optional Open Access were conducted, and most of these were with established journals. This was done because we thought we saw a trend emerging and also to capture some of the commercial publishers and university press publishers that did not respond to the survey. Case study 9 (page 67) discusses a journal publisher's experimentation with Optional Open Access. Optional Open Access also is discussed in case studies 18 (page 83), 20 (page 88), 21 (page 89), and 22 (page 91).
PS: BioMed Central objects that the report "incorrectly states that BioMed Central does not operate external peer review on most of its journals. In fact, all of BioMed Central's journals operate full peer review using external peer reviewers." How do you respond? If you accept BMC's correction of your understanding of its peer review methods, how far does that cut against your conclusion that, in general, subscription journals use more rigorous forms of peer review than OA journals? This conclusion seems largely based on the way you interpret BMC journals. I quote the report at p. 25: "Although all cohorts reported submitting their original research articles to peer review, AAMC [Association of American Medical Colleges] and HW [HighWire] subset journals appeared to have the most rigorous peer review, as measured by their reliance on external reviewers. Full Open Access journals tended to depend heavily on editorial staff only for peer review; this practice was extremely uncommon among the other journal cohorts. When BMC and ISP journals were excluded, the peer review practices of the remaining Full Open Access journals were much more similar to those of the other journal populations."
CK: In post-publication interviews, we learned from BMC that it does "operate full peer review using external peer reviewers." In light of this, we revisited the data and prepared an addendum to the report <http://www.alpsp.org/publications/pub11.htm>, with additional and corrected findings and interpretation. A summary of the addendum follows:
All journal populations used some form of peer review. Full Open Access journals published by smaller publishers (not BMC or ISP) were less likely to rely on external peer review than were other cohorts. A greater percentage of Full Open Access journals than other journal populations studied offered post-publication peer review. Virtually none of the Subscription Access or Delayed Open Access subset journals relied solely on editorial staff for review. Of the Full Open Access journals responding to the survey, a greater percentage relied on editorial staff exclusively for peer review than any other cohort. When the data were analyzed more deeply, however, it became clear that the higher percentage was almost totally due to the response from one set of journals. Without ISP journals in the mix, the percentage of Full Open Access journals relying on editorial staff for peer review was not significantly different than any other cohort. If one takes the position that external peer review is more rigorous than review by editorial staff, then one might still conclude that those cohorts that relied more heavily on external peers exclusively had more rigorous peer review. This would mean that the AAMC and HW subset journals practiced more rigorous peer review. It is difficult to tell for sure, because there is no universally accepted definition of terms such as external reviewers and internal staff and no widespread agreement as to how the terms translate into rigor of review. Plus, such a large percentage of journals use a mix of internal and external peers, and we do not know whether those journals relied more or less on internal reviewers. The journal's acceptance rate may also have an impact on how rigorous the journal's peer review appears. In other words, does a higher rejection rate necessarily signify more rigorous peer review? Perhaps peer review is a function of the journal's size, youth, or another characteristic. More research is needed.
PS: BMC also objects to the way the report classifies its journals. "The study groups BioMed Central together with Internet Scientific Publications (ISP) as a cohort, and indicates that this was done because over half of the responding open access journals were from these two publishers. ISP and BioMed Central have little in common as publishers, and so the conclusions drawn about BioMed Central by looking at this cohort are not meaningful and are often misleading. For example, the BioMed Central/ISP group of journals is reported to offer online manuscript submission on a lower percentage of journals than other journal groups. The report picks up on this as a surprising finding, suggesting implicitly that open access journals are lagging behind in this regard. In fact, BioMed Central offers online submission of manuscripts on every one of its journals. Not only that, but BioMed Central's manuscript submission system is widely praised by authors, many of whom tell us that it is the best online submission system they have used." Your response?
CK: The reason for reporting separate results for a subset of DOAJ journals was explained on page 27 of the study: "Note: Of the 248 DOAJ Full Open Access journals that responded, 60 were published by BioMed Central (BMC), sharing the same general editorial principles and access policies, and another 63 were published by Internet Scientific Publications (ISP), again with common principles and policies. This means that 123 of the 248 Full Open Access responses, or just fewer than 50%, were represented by two publishers. Findings are reported with and without data from these two large Full Open Access publishers --BMC and ISP. Without BMC and ISP data included, one can better understand those Full Open Access journals being published by other, typically smaller organizations."
The last line of this explanation is the reason the data were separated out. Although we can appreciate and are sympathetic to the problems that our groupings raised, we did not in fact lump BMC and ISP together in a cohort. What we did was show the data for all Full Open Access journal respondents and show data for all Full Open Access journal respondents excluding BMC and ISP.
When the data first were entered, we did not separate out BMC or ISP. We had not wanted to draw attention to any particular journal or journals. But presenting data for all DOAJ respondents also was imperfect. The combined data raised several questions, especially regarding financial support, which was a major area of exploration. BMC and ISP journals not only represented 50% of the total respondents but their respective journals also shared certain characteristics. The fact that the 60 BMC journals were known to rely on author-side fees and the 63 ISP journals were known to rely on industry support made it seem that DOAJ journals relied more heavily on author-side fees and industry than any other sources. By removing BMC and ISP responses from the data, we could show that the majority of (small-publisher) Full Open Access journal respondents did not rely on author-side fees. This was an interesting and surprising finding to many people.
If you refer to table 29 (page 43), you can see the frequent and wide variations in responses between the DOAJ and the DOAJ excluding BMC and ISP. We thought that it was worth reporting on both the total Full Open Access respondents and the subset of journals published by smaller organizations to expose how differently Full Open Access journals from smaller publishers behaved.
Regarding each cohort's offering of online manuscript submission, we reported on page 11 and in table 38 (page 50) that a larger percentage of AAMC and Delayed Open Access journals offered online submission than Full Open Access journals. When counting only the Full Open Access journals from smaller publishers, the percentage of journals offering online article submission rises from 59% to 69%. Knowing what we know now about BMC, it is clear that ISP journals must not offer online article submission and weighed down the full DOAJ percentage. We did not report that a BMC/ISP group of journals offered online manuscript submission on a lower percentage of journals than other journal groups.
We did express surprise at this finding because Full Open Access journals are newer and overwhelmingly published online only. We would have expected Full Open Access journals to have among the highest if not the highest percentage of journals using online article submission. On page 11, we wrote that it was surprising that only slightly more than half "offered" online manuscript submission. In the final editing of the report, the word "offered" was changed to "could handle."
PS: Your Table 30 shows that many more subscription journals charge author fees (such as page charges, color charges, and reprint fees) than OA journals. In fact, you show that a majority of non-OA ALPSP journals charge author fees (76%) while a majority of OA journals do not (53%). Two questions follow. First, when you point out that author-side fees are "less tried and true" than subscription fees (p. 24), are you saying that dependence on them is a factor in making a journal business model less viable? Second, when you point out that journals that "depend heavily on author charges" may create "some upward pressure on [their] acceptance rate" (p. 68), are you saying that this is a factor in making a journal's peer review process less rigorous?
CK: On page 24, drawing on data from tables 29 and 32, we summarized that "Full Open Access journals rely heavily on revenue streams such as grants, author-side fees, and institutional memberships along with a substantial amount of personal or departmental funding and volunteer labor." By calling this grouping of revenue sources "less tried and true than subscription and advertising revenue," we are not suggesting that dependence on them is a factor in making a journal business model less viable. The remarks were made under the heading "It is too early to tell whether Full Open Access is a viable business model." The conclusions were added in the last draft of the report to call attention to the relative youth of Open Access journals and the business models they represent and to moderate interpretations of the data. By talking about the revenue streams of many Full Open Access journals as being newer, we are trying to point out not that they are less viable but that they are as yet unproven, and so it is too early to tell.
On page 68, we have paraphrased the publishing professional interviewed for case study 9, who expressed to us his or her belief that "There may be some upward pressure on acceptance rate if Open Access publishers depend heavily on author charges." As disclosed in the report, this university press publisher relies mostly on paid subscriptions for revenues but is experimenting with Full Open Access for two journals and Optional Open Access for a third. I cannot know for sure whether this individual meant that that upward pressure on the acceptance rate made a journal's peer review process less rigorous. The publisher was offering an opinion in response to our request for predictions about the future. Each of the case studies concluded with perspectives and predictions either directly quoted or paraphrased by the interviewee.
PS: Jan Velterop observes that "[m]ost ALPSP journals made a surplus (75%). 41% of the full OA journals made a loss; 24% broke even, and 35% made a surplus. Might this mean that 1/4 of the long-established ALPSP journals still doesn't make a profit and that almost 60% of the new open access journals at least break even, and well over half of those are profitable?"
CK: Table 33 (page 46) shows, as you report above, that 41% of Full Open Access journals had a loss, 24% broke even, and 35% had a surplus. Saying, as Jan Velterop did, that almost 60% of Full Open Access journals at least broke even, and noticing, as you have, that almost a quarter of ALPSP journals operated with a shortfall over the last fiscal year, is another valuable way of looking at the data. Following that train of thought, we can say that a much higher percentage of journals in other cohorts at least break even --78% of ALPSP, 94% of AAMC, and 91% of HW subset journals (compared with the 60% of OA journals noted above). As suggested by Fred Friend and discussed earlier, it would be interesting to see whether the age of the journals is a greater determinant than business model as to a journal's current financial success.
In fact it seems possible that the age of the journals may have been a significant factor in the financial health of both Full Open Access and ALPSP journals. When we revisit the study's frequency tables, we can see that 14% of the ALPSP journal respondents were from journals with start dates within the last five years. There are 12 new journals (2004 start dates) and a total of 17 (with start dates in the last five years) included in the ALPSP totals.
PS: You've summarized one conclusion of your report by saying, "It is too early to tell whether full open access is a viable business model" (p. 24). I have four questions about this conclusion. First, doesn't this way of putting it undercut your important showing that there are many business models for OA journals, not just one? For example, you show that, contrary to common belief, most OA journals do not charge author-side fees. If your conclusions apply to more than one business model, wouldn't it be better to say so than to suggest that there is just one OA business model?
CK: We agree with you that a major finding in the report is the wide range of business models in existence and that even more change is anticipated. One underlying objective of the study --as reflected in its subtitle, "a study of the financial and non-financial effects of alternative business models for scholarly journals"-- was to determine what impact different business models had on various aspects of the journals studied. We looked at the data as a snapshot in time. Having said that, I acknowledge that we could have done a better job stepping back and reflecting on the data.
PS: Second, if it's too early to tell whether the business model of charging author-side fees is viable, is it also too early to tell whether it's *unviable*?
CK: I agree that it is too early to tell about the continued viability or unviability of any of the business models; in the same breath I would say that several data points support the idea that many FOA [full open-access] journals seem to be on uneven footing.
PS: Third, do we need more time to assess the viability of this business model because the data are scanty or because most OA journals are still very young and may stabilize their financial footing over time, for example, by establishing their reputations, improving their impact, and attracting authors? (BMC journals are just one set of OA journals that can show significant improvements in financial security over time.)
CK: We are unaware of data that show the aforementioned significant improvement in financial security over time. Also, the OA publishing models are new and we need more time to determine how age, access model, business model, or other variables play into the viability of Full Open Access journals.
If this is true --that it is too early to declare success OR failure-- then can we ask rhetorically why so many people are so sure at this stage that OA is a success, or a failure? A more constructive approach that has been suggested to us is to shift the discussion so we can figure out which increased-access models will thrive, and when.
PS: Fourth, Mary Waltham's June 2005 study for JISC reported that among the society journals in her study, "[t]here is heavy reliance on institutional subscriptions which for all but one journal fell in number" during the three-year period 2002-2004 (p. 3). Fred Friend concludes from Waltham's study that "the current dependence upon institutional subscriptions cannot be relied upon to provide a secure future." Your study shows that "most journals surveyed are planning to test or adopt a different business model in the next three years" (p. 24 and Table 27). In light of Mary Waltham's findings and your own, can we conclude that it's too early to tell whether traditional subscription models will remain viable?
CK: We take your point. We can conclude (not necessarily from this study) but from experience in the field and the conclusions of other studies, that it is too early to tell whether traditional subscription models will remain as they are today. Many of the publishers studied were trying or planning on trying new models because of the interest in Open Access, not necessarily as a result of their own model's failing.
Certainly our professional experience is that there is continued downward pressure on subscription units. Small journals especially are being squeezed. According to HighWire, most of the journals they work with are seeing 3% - 5% attrition per year in institutional subscriptions by number, but NOT drops in readership --online journals are still growing 50%-100% per year in usage or subscription income. As shown in table 28 (page 42), Open Access proponents are influencing journal publishers to either test (11%-59%) or adopt (4%-9%) a new business model in the next three years. At the same time, many journals are experiencing increasing usage and increasing subscription revenues as mentioned above. Without taking into account information and trends noted outside the study, journals still steeped in traditional subscription models seemed to us remarkably healthy and optimistic about the future. Just as with Full Open Access journals, it is too early to tell about the viability of the Subscription Access model.
We should also mention that the subscription-based business model has undergone tremendous change in recent years as publishers attempt to respond to changing needs and wants of the marketplace. Fred Friend's comments indicate that he is not optimistic about the industry's ability to evolve and adapt. It seems to us too early to take that position.
PS: BioMed Central draws on your data to suggest that the future may be brighter for OA journals than for conventional ones: "92% of open access journals were meeting or exceeding revenue expectations, in comparison to 91% of AAMC journals, 83% of ALPSP journals and 76% of surveyed HighWire journals. Similarly, the study finds that revenues from the last fiscal year to the current fiscal year are 'trending upward' for 71% of 209 surveyed open-access journals, compared to between 27% and 67% of subscription-based publishers that were surveyed." Your response?
CK: These additional readings of the data are useful to consider. To draw conclusions, we tended to look at the responses each cohort had to the set of questions about current financial health, trends, and expectations. In most cases, journals other than the Full Open Access journals responded most positively...at the top of the charts. They had the greatest percentage surplus, which is needed by any enterprise to sustain growth. They also had the highest percentage of journals whose revenues exceeded or far exceeded expectations. The bulleted points (pages 46-47) highlight these data as well. The other issue is absolute dollar revenue. Many of the small OA journals had zero costs and small revenue. The study did not address dollar volume, just the journal's expectations. The fact that a greater percentage of Full Open Access journals than other journal cohorts reported upward revenue trends is positive for Full Open Access journals and indicates that they are in a growth mode. As can be seen by the data, however, this upward trend is driven by BMC and ISP journals. Full Open Access journals from smaller publishers had the highest percentage of journals whose revenues trended downward.
Positive revenue surplus and revenue expectations with more static revenue trends also can be explained by the product life cycle and a publisher's experience with a particular journal --again, reflecting the more established age of most of the cohorts and the younger age of the Full Open Access journals. Also, the fact that Full Open Access journals report breaking even has to be viewed in light of the comments from so many Open Access journals that are run by editors asking what a business model is, or that are supported by volunteers, departments, or individuals.
The Facts About Open Access, full report, October 11, 2005
Post-publication addendum to The Facts About Open Access, October 24, 2005.
Report overview (first 32 pages)
ALPSP press release
* Here's some news coverage of the report:
Sophie Rovner, Status Report On Open Access, Chemical & Engineering News, October 17, 2005.
Open-Access-Journale mit Startschwierigkeiten [Open access journals with start-up difficulties], Web-o-rama, October 14, 2005.
Randy Dotinga, Open-Access Journals Abound, But Will They Survive? Forbes, October 13, 2005.
Astara March, Open Access Journals Struggling, UPI, October 13, 2005
Lila Guterman, Survey of Open-Access and Subscriber-Based Journals Finds Changes Afoot in Both Business Models, Chronicle of Higher Education, October 12, 2005.
Open Access publishing 'unprofitable', The Bookseller, October 12, 2005.
Stephen Pincock, Open access report published, The Scientist, October 11, 2005.
Read this issue online
SOAN is published and sponsored by the Scholarly Publishing and Academic Resources Coalition (SPARC).
Additional support is provided by Data Conversion Laboratory (DCL), experts in converting research documents to XML.
This is the SPARC Open Access Newsletter (ISSN 1546-7821), written by Peter Suber and published by SPARC. The views I express in this newsletter are my own and do not necessarily reflect those of SPARC or other sponsors.
To unsubscribe, send any message (from the subscribed address) to <SPARC-OANewsemail@example.com>.
Please feel free to forward any issue of the newsletter to interested colleagues. If you are reading a forwarded copy, see the instructions for subscribing at either of the next two sites below.
SPARC home page for the Open Access Newsletter and Open Access Forum
Peter Suber's page of related information, including the newsletter editorial position
Newsletter, archived back issues
Forum, archived postings
Conferences Related to the Open Access Movement
Timeline of the Open Access Movement
Open Access Overview
Open Access News blog
SOAN is licensed under a Creative Commons Attribution 3.0 United States License.
Return to the Newsletter archive