Social Media, Freedom of Speech, and the Future of Our Democracy
Lee C. Bollinger and Geoffrey R. Stone
DOI: 10.1093/oso/9780197621080.001.0001
Paperback printed by Lakeside Book Company, United States of America
Hardback printed by Bridgeport National Bindery, Inc., United States of America
To Jean and Jane
And to the next generation:
Katelyn, Colin, Emma, Cooper, and Sawyer
Julie, Mollie, Maddie, Jackson, and Bee
CONTENTS
Acknowledgments xi
List of Contributors xiii
Opening Statement xv
Lee C. Bollinger and Geoffrey R . Stone
Regulating Harmful Speech on Social Media: The Current Legal
Landscape and Policy Proposals xxiii
Andrew J. Ceresney, Jeffrey P. Cunard,
Courtney M. Dankworth, and David A. O’Neil
12. Platform Power, Online Speech, and the Search for New
Constitutional Categories 193
Nathaniel Persily
18. Profit Over People: How to Make Big Tech Work for
Americans 301
Amy Klobuchar
ACKNOWLEDGMENTS

This is our fourth major book project together, all centered around major turning
points or seminal issues involving the First Amendment. We are always profoundly
impressed, first, by the importance and complexity of the role played by
the principles of freedom of speech and press in American society and, second,
by just how illuminating it can be to draw together some of the most informed
and original minds of the time to try to understand these matters better. We,
therefore, begin our thanks to our authors and to members of the Commission
for their commitment to this project. We are also grateful to Andrew Ceresney,
Courtney Dankworth, Morgan Davis, and their colleagues at Debevoise &
Plimpton for the extraordinary help in providing the background paper and in
shepherding the work of the Commission to arrive at its recommendations. We
believe that the contributions of all of these individuals have collectively yielded
a valuable volume of ideas.
There are others who deserve recognition. Jane Booth, General Counsel of
Columbia University, provided the bridge to Debevoise & Plimpton as well as her
own substantive interventions. The Office of the President at Columbia, under
the direction of Christina Shelby, was simply outstanding in organizing just
about everything. We make special note of Joanna Dozier in this regard and freelance
editor Lori Jacobs.
We continue to be thankful for the deep relationships we have with Oxford
University Press, and especially with Dave McBride and his entire staff.
As always, we benefit in inexpressible ways from Jean Magnano Bollinger
(Lee’s spouse) and Jane Dailey (Geof’s spouse). Nothing happens without their
counsel and support (and constructive criticism).
And, lastly, we feel beyond fortunate to have Carey Bollinger Danielson as
our general editor, on this as well as our prior two books. As we have said each
time and do so again, Carey has been the perfect colleague in these ventures.
OPENING STATEMENT

Lee C. Bollinger and Geoffrey R. Stone
One of the most fiercely debated issues of the current era is what to do about
“bad” speech on the internet, primarily speech on social media platforms such
as Facebook and Twitter. “Bad” speech encompasses a range of problematic
communications—hate speech, disinformation and propaganda campaigns,
encouragement of and incitement to violence, limited exposure to ideas one
disagrees with or that compete with preexisting beliefs, and so on. Because the
internet is inherently a global communications system, “bad” speech can arise
from foreign as well as domestic sources. No one doubts that these kinds of very
harmful expression have existed forever, but the premise of the current debate
is that the ubiquity and structure of this newest and most powerful communica-
tions technology magnifies these harms exponentially beyond anything we have
encountered before. Some argue that, if it is left unchecked, the very existence
of democracy is at risk.
The appropriate remedies for this state of affairs are highly uncertain, and this
uncertainty is complicated by the fact that some of these forms of “bad” speech
are ordinarily protected by the First Amendment. Yet the stakes are very high
in regard to how we answer the question because it is now evident that much of
public discourse about public issues has migrated onto this new technology and
is likely to continue that course into the future.
Current First Amendment jurisprudence has evolved on the premise that,
apart from certain minimal areas of well-established social regulation (e.g.,
fighting words, libel, threats, incitement), we should place our trust in the
powerful antidote of counter-speech to deal with the risks and harms of “bad”
speech.1 Of course, that may well turn out to be the answer to our contempo-
rary dilemmas. Indeed, one can already see the rise of public pressures on in-
ternet companies to increase public awareness of the dangers of “bad” speech,
and there are discussions daily in the media raising alarms over dangerous
speech and speakers.2 Thus, it may be that the longstanding reliance on the self-
correcting mechanisms of the marketplace of ideas will work again.
But perhaps not. There is already a counter risk—that the increase in “ed-
itorial” control by internet companies will be biased against certain ideas and
speakers and will effectively censor speech that should be free.3 On the other
hand, even those who fear the worst from “bad” speech being uninhibited often
assert that internet company owners will never do enough on their own to ini-
tiate the needed controls because their basic, for-profit motivations are in direct
conflict with the public good and the management of civic discourse.4 There is
understandable concern that those who control the major internet companies
will have an undue and potentially dangerous effect on American democracy
through their power to shape the content of public discourse. On this view,
public intervention is necessary.
It is important to remember that the last time we encountered a major new
communications technology we established a federal agency to provide over-
sight and to issue regulations to protect and promote “the public interest,
convenience, and necessity.”5 That, of course, was the new technology of broad-
casting, and the agency was the Federal Communications Commission. The
decision to subject private broadcasters to some degree of public control was,
in fact, motivated by some of the very same fears about “bad” speech that we
now hear about the internet.6 People thought the risks of the unregulated pri-
vate ownership model in the new media of radio and television were greater
than those inherent in a system of government regulation. And, like today, those
who established this system felt unsure about what regulations would be needed
over time (in “the public interest, convenience, and necessity”), and they there-
fore set up an administrative agency to review the situation and to evolve the
regulations as circumstances required. For a fuller description and analysis of
these issues, see Images of a Free Press and Uninhibited, Robust, and Wide-Open: A
Free Press for a New Century, by Lee C. Bollinger.7
On multiple occasions, the Supreme Court has upheld this system under
the First Amendment. The formal rationale for those decisions may not apply
to the internet, but there is still plenty of room for debate about the true prin-
ciples underlying that jurisprudence and their continued relevance. In any
event, the broadcasting regime stands as arguably the best example in our his-
tory of ways to approach the contemporary concerns about new technologies
of communication. But, of course, it may be that government intervention in
this realm is so dangerous that social media platforms should be left to set
their own policies, just as the New York Times and the Wall Street Journal are
free to do.
It has been a joy and an education to work with all the participants in this
project, and we hope the sum of what is presented here will be as helpful, as chal-
lenging, and as stimulating to our readers as it has been to us.
REGULATING HARMFUL SPEECH ON SOCIAL MEDIA: THE CURRENT LEGAL LANDSCAPE AND POLICY PROPOSALS

Andrew J. Ceresney, Jeffrey P. Cunard,
Courtney M. Dankworth, and David A. O’Neil
Social media platforms have transformed how we communicate with one an-
other. They allow us to talk easily and directly to countless others at lightning
speed, with no filter and essentially no barriers to transmission. With their enor-
mous user bases and proprietary algorithms that are designed both to promote
popular content and to display information based on user preferences, they far
surpass any historical antecedents in their scope and power to spread informa-
tion and ideas.
The benefits of social media platforms are obvious and enormous. They foster
political and public discourse and civic engagement in the United States and
around the world.1 Social media platforms give voice to marginalized individuals
and groups, allowing them to organize, offer support, and hold powerful people
accountable.2 And they allow individuals to communicate with and form
communities with others who share their interests but might otherwise have
remained disconnected from one another.
At the same time, social media platforms, with their directness, immediacy,
and lack of a filter, enable harmful speech to flourish—including wild conspiracy
theories, deliberately false information, foreign propaganda, and hateful rhetoric.
The platforms’ algorithms and massive user bases allow such “harmful speech”
to be disseminated to millions of users at once and then shared by those users
at an exponential rate. This widespread and frictionless transmission of harmful
speech has real-world consequences. Conspiracy theories and false informa-
tion spread on social media have helped sow widespread rejection of COVID-
19 public-health measures3 and fueled the lies about the 2020 US presidential
election and its result.4 Violent, racist, and anti-Semitic content on social media
has played a role in multiple mass shootings.5 Social media have also facilitated
speech targeted at specific individuals, including doxing (the dissemination
of private information, such as home addresses, for malevolent purposes) and
other forms of harassment, including revenge porn and cyberbullying.
These negative effects of social media discourse are profound and en-
demic, and they are not going away. Policymakers and stakeholders around the
world have called for reform, including by regulating social media companies’
approaches to the dissemination of harmful speech on their platforms. This
introductory essay sets the context in which policymakers might think about
how to approach the challenges of harmful speech on social media platforms,
including by making such reforms. It first briefly summarizes the US statutory
and constitutional framework for approaching the regulation of social media
platforms. It then reviews potential US policy approaches, with a particular focus
on proposed and implemented legislation concerning social media platforms in
the European Union.
The First Amendment, as
applied to the users who post on social media platforms and to the social media
platforms themselves, has been interpreted to substantially limit government’s
ability to regulate lawful speech.
The rationale for Section 230(c)(2) (the so-called Good Samaritan provision)
was to obviate any legal objection to interactive computer services removing
such materials, lest they be deemed a publisher (or otherwise have some legal re-
sponsibility for the distribution of content) and, as such, be liable for removing
the content. This immunity for removing all manner of material, including
any that is “otherwise objectionable,” gives social media companies broad au-
thority, if exercised in good faith, to remove content that is potentially defama-
tory, harassing, bullying, invasive of privacy, inflicting of intentional emotional
distress, misinforming, or originating from a foreign government or other
source. Although some providers of content have objected to social platforms’
decisions to delete or moderate such content (which they are otherwise generally
permitted to do under their own terms of service), courts generally have
rejected those objections.
The First Amendment may well protect from regulation social media
companies’ moderation of user-generated content, including deciding what con-
tent to feature or promote and how to rank or filter user-generated content, if they
are characterized as editorial decisions protected by the Constitution. Whether
such regulations could withstand challenge under the First Amendment will de-
pend on (i) the type of speech being regulated; (ii) if the platform is regulated,
the type of activity by the platform; (iii) whether the regulation discriminates
based on type of speech or can be characterized as content-neutral; and (iv)
whether courts view content moderation by social media platforms as more
closely akin to editorial decisions by print media (which have historically been
accorded the strongest First Amendment protection), broadcasters (which have
received weaker protection), or others such as cable networks or broadband in-
ternet providers.
Platforms have argued that they make decisions “about the type of community
the platform seeks to foster and, as such, exercise
editorial discretion over their platform’s content.”41
Social media platforms might also be analogized to common
carriers like telephone service providers. The Supreme Court has treated regulation
of each of these forms of media differently under the First Amendment,
and thus courts’ analysis of social media regulations may depend on which older
media they find most closely analogous to social media. Courts would not be
writing on a blank slate, however: In 1997, the Supreme Court in Reno v. ACLU
held that a content-based restriction on the internet (there, the regulation of in-
decency) could not survive strict scrutiny.49
Supporters of enhanced regulation of social media platforms might urge
broadcast media as an apt analogy for social media because broadcasters histor-
ically have been subject to content-based regulation that would not be constitu-
tional if applied to traditional print media and the editorial decisions made by
cable television operators.50 Beginning in 1927 with the creation of the Federal
Radio Commission, which was replaced in 1934 by the Federal Communications
Commission (FCC), Congress empowered a new federal agency to license and
regulate broadcast media to promote the “public interest, convenience, and ne-
cessity.”51 The system resulted in a number of regulations, including the so-called
fairness doctrine, which, until 1987, required broadcasters to cover public issues
and to give balanced coverage to competing viewpoints on those issues.
In 1969, the Supreme Court unanimously held that the fairness doctrine was
constitutional in Red Lion Broadcasting Co. v. FCC.52 The Court reasoned that the
“scarcity of broadcast frequencies” made the spectrum a “resource” that justified
government intervention to “assure that a broadcaster’s programming ranges
widely enough to serve the public interest.”53 In doing so, the Court recognized
that the government had a proper role in securing the public’s right to be in-
formed.54 Nine years later, the Court elaborated in FCC v. Pacifica Foundation
that the “uniquely pervasive” nature of broadcasting, which may be seen or
heard without viewer consent, supported the constitutionality of the regulation
of indecent speech on broadcast media.55 During that same period, the Court
repeatedly rejected similar government regulations as applied to other media,
including newspapers56 and cable television.57 Only a few years after Red Lion,
the Court in Miami Herald Publishing Company v. Tornillo struck down a state
law that granted political candidates who were criticized in a newspaper the right
to have the newspaper publish their response to that criticism.58 The Court also
declined to extend Red Lion to cable television in Turner Broadcasting Systems
v. FCC, reasoning that cable television lacks the scarcity of frequencies that
characterizes broadcast media.59
The rationales behind the Court’s deference to regulation of broadcasters—
the scarcity of frequencies and the unconsented-to pervasiveness of the
medium—do not map cleanly onto social media platforms. In Reno, the Court
considered the broadcast analogy and its prior precedents and expressly declined
to extend Red Lion to cyberspace. Its decision was based, in part, on its view that
the internet involves neither the spectrum scarcity nor the invasiveness that
justified regulating broadcast media.
Without Section 230, platforms could face liability for hosting user speech that is defamatory or invasive
of privacy.84 The high cost of liability and of monitoring and removing content
potentially could drive smaller platforms out of business or deter new entrants
to the market, perhaps contributing to an increase in the dominance of large
platforms.
Apparently recognizing these risks, many legislators and other policymakers
have proposed amending or supplementing Section 230 rather than eliminating
it entirely. Some of these proposed amendments are aimed not at curbing
harmful speech but, instead, at doing the opposite: disincentivizing platforms
from moderating content, for instance, by limiting the types of content that inter-
active computer services may remove under Section 230.85 The extent to which
such proposals would actually have their intended effect is doubtful, because
users do not have a legal right to disseminate speech—even lawful speech—via
a social media platform. Accordingly, under current law and the platforms’ terms
of service, users do not now have, and would not have in the future, a particularly robust
cause of action against a platform for removing content absent some legal basis
(such as racial discrimination).
Other proposals seek to incentivize platforms to remove unlawful speech by
eliminating Section 230 immunity for civil claims based on violations of certain
state or federal laws. Such proposals are necessarily limited to unlawful speech,
because Section 230 applies only where the aggrieved party can bring a viable
legal claim against the platform. Even within these limitations, these proposals
could reach online harassment (including revenge porn or doxing to the extent
that it meets the elements of state-law torts or might be outlawed by statute), cer-
tain forms of hate speech, and potentially tortious speech, such as defamation.86
A more modest approach, which is included in the Platform Accountability
and Consumer Transparency Act (PACT Act) proposed by Senators Brian
Schatz (D-HI) and John Thune (R-SD), is to amend Section 230 to create a
notice-and-takedown process.87 This would be similar to the notice-and-takedown
regime of US copyright law, 17 U.S.C. § 512, which limits the copyright
liability of an online service provider if it receives a notice that it is storing
infringing material and then takes it down promptly. A measure of this kind
would not put the onus on platforms to identify and remove unlawful content.88
Instead, under that model, an internet content provider that did not otherwise
know it was hosting or distributing illegal content would be entitled to immu-
nity from suit for intermediary liability if and to the extent it acts expeditiously,
on receipt of notice, to take down such content. By putting the burden on users
to report unlawful content, a notice-and-takedown regime also alleviates at least
some concerns about the compliance costs for platforms to monitor the enor-
mous volume of content they store and disseminate for potentially illicit con-
tent and determine whether it is, in fact, unlawful. At the same time, this regime
is vulnerable to abuse, and a meaningful counter-notice regime would need to
be instituted to ensure that content that ought not to have been taken down is
restored.89 In addition, policymakers would have to determine what would or
should happen if the uploading user, in a counter-notice, asserts that the content
is lawful. Finally, this approach, as noted, would reach only unlawful content,
not merely harmful content.
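The notice-and-takedown flow described above can be pictured as a toy state machine. This is only an illustrative sketch: the class and status names are inventions for this example, not anything defined in the PACT Act or 17 U.S.C. § 512.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Status(Enum):
    LIVE = auto()
    TAKEN_DOWN = auto()
    RESTORED = auto()


@dataclass
class HostedItem:
    item_id: str
    status: Status = Status.LIVE
    notices: list = field(default_factory=list)
    counter_notices: list = field(default_factory=list)


class NoticeAndTakedown:
    """Toy model of a notice-and-takedown flow: the host acts only on
    notice, and a counter-notice can trigger restoration."""

    def __init__(self):
        self.items = {}

    def host(self, item_id: str) -> HostedItem:
        # The platform stores user content with no duty to screen it first.
        item = HostedItem(item_id)
        self.items[item_id] = item
        return item

    def receive_notice(self, item_id: str, claim: str) -> Status:
        # On notice of allegedly unlawful content, the host takes it down
        # expeditiously to preserve its immunity from intermediary liability.
        item = self.items[item_id]
        item.notices.append(claim)
        item.status = Status.TAKEN_DOWN
        return item.status

    def receive_counter_notice(self, item_id: str, assertion: str) -> Status:
        # If the uploader asserts the content is lawful, a meaningful
        # counter-notice process restores it; what should happen after
        # that is the open policy question the text raises.
        item = self.items[item_id]
        item.counter_notices.append(assertion)
        if item.status is Status.TAKEN_DOWN:
            item.status = Status.RESTORED
        return item.status
```

The sketch makes the essay's two observations concrete: the burden of identifying unlawful content falls on the notifier, not the platform, and the regime is only as fair as its counter-notice path back to restoration.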
vulnerable groups such as children.104 In this respect, the Digital Services Act
could create tensions with the fundamental right to freedom of expression
safeguarded by Article 10 of the European Convention on Human Rights (the
ECHR),105 just as proposed social media regulations in the United States could
conflict with the First Amendment. With that said, the ECHR permits some-
what broader limitations on free expression than does the First Amendment, for
instance, by allowing for the criminalization of Holocaust denial.106
The Digital Services Act stops short of requiring platforms to actively search
for and remove harmful content; instead, it would create a notice-and-takedown
regime for unlawful content107 and impose a range of transparency and
reporting requirements on platforms. These include requirements to describe
content-moderation policies in user terms and conditions, including algo-
rithmic decision-making and human review;108 publish annual reports regarding
the platform’s content moderation practices109 and “significant systemic risks”
of operating their services in the EU, including with respect to dissemination
of illegal content and intentional manipulation of the service; and implement
“reasonable, proportionate and effective measures” to address those risks.110 The
Digital Services Act is awaiting final approval from the European Parliament and
EU Member States, and will apply from the later of fifteen months after its entry
into force or January 1, 2024. For very large online platforms and search
engines, however, the Digital Services Act will apply from a potentially earlier
date: four months after their designation.
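The transparency and risk-mitigation duties listed above can be pictured as a simple compliance checklist. This is a hypothetical sketch only: the field names are inventions for illustration, and the Act itself states these obligations in legal, not programmatic, terms.

```python
from dataclasses import dataclass


@dataclass
class Platform:
    """Hypothetical compliance record; each flag paraphrases one
    obligation described in the text above."""
    name: str
    discloses_moderation_policy: bool = False  # terms describe moderation, incl. algorithmic and human review
    publishes_annual_report: bool = False      # annual reports on moderation and "significant systemic risks"
    mitigates_systemic_risks: bool = False     # "reasonable, proportionate and effective measures"
    offers_notice_and_takedown: bool = False   # channel for flagging unlawful content


def outstanding_obligations(p: Platform) -> list:
    """Return the obligations the platform has not yet satisfied."""
    checks = {
        "describe content-moderation policies in terms and conditions": p.discloses_moderation_policy,
        "publish annual content-moderation and systemic-risk reports": p.publishes_annual_report,
        "implement risk-mitigation measures": p.mitigates_systemic_risks,
        "provide a notice-and-takedown channel for unlawful content": p.offers_notice_and_takedown,
    }
    return [label for label, met in checks.items() if not met]
```

A checklist like this also suggests why, as the next paragraph argues, compliance may spill over borders: once a platform builds these processes for EU users, applying them globally is often the cheaper engineering choice.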
Once adopted, these reforms may have some influence on the US approach
to regulation of social media platforms—given the cross-border nature of their
services—and may have a more direct effect on US users’ speech. In partic-
ular, the Digital Services Act may establish new soft norms that social media
platforms will apply voluntarily to US users. After the adoption of the General
Data Protection Regulation (GDPR), for example, Facebook extended many of
its privacy protections to US users.111 Similarly, Reddit allows users to request a
copy of information that Reddit has about their account, as required under the
GDPR and the California Consumer Privacy Act, even if the user is not in the
EU or California.112 Social media companies that are required to comply with
the Digital Services Act may ultimately adopt a global approach to content moderation:
notifying users of content removal, providing the reasons for removal,
permitting users to challenge that removal, and providing transparency
with respect to these measures. Their reasons for doing so might be both
legal and pragmatic: to ensure that they get the benefit of EU-based liability
protection even with respect to content uploaded from outside the EU that is the
subject of a complaint from an EU-based service recipient.
PART ONE

AN OVERVIEW OF THE PROBLEM
The popularity of social media has exposed fault lines in the law of the First
Amendment. These fault lines are principles that are familiar, established, and
foundational to First Amendment law. They present no difficulties in a wide
range of cases. But, for one reason or another, they don’t entirely make sense.
Sometimes they rely on questionable factual premises. Sometimes their norma-
tive bases present an unresolved conflict. Sometimes they extend a reasonable
claim to the point of implausibility. Sometimes their scope is chronically unclear.
They are routinely stated as axiomatic, but they actually allow for exceptions,
and the exceptions are not well defined.
These fault lines reflect genuine problems, not a failure of clear thinking on
the part of courts or anyone else. They exist because of genuine dilemmas or
tensions; any system of freedom of expression would have to deal with them and
would have trouble solving the problems they reflect. And just as people can live
for a long time without incident near geological fault lines, the constitutional
law of free speech works reasonably well even though it has been built on top
of these fault lines. But, like geological fault lines, these foundational aspects of
the law can come apart when placed under stress. The widespread use of social
media places them under stress. That does not mean the system will fracture.
Relatively modest shifts in the law might relieve enough of the stress to allow
things to continue pretty much in the way they have been. But the alternative
outcome is also possible. The development of social media might require a large-
scale rebuilding of some or all of these foundations.
To put the point another way, many of the problems that social media present
for the First Amendment are not new problems. They were already there, and we
were already dealing with them, in one way or another, although maybe not in a
Line-drawing problems are ubiquitous in the law, and clear rules will
produce distinctions that seem arbitrary; those are familiar points. So it
is not surprising to find principles like these that are often stated in cat-
egorical terms but that, in fact, allow for unacknowledged or vaguely de-
fined exceptions. In particular, there is a tendency—understandable and
beneficial—to try to distill First Amendment principles into clear rules,
because clear rules make individuals more confident about when they can
speak freely and make it easier for officials, including judges, to resist the
pressure to suppress unpopular speech.
Clear rules can be fragile, though. Circumstances can change in ways that
make the rules seem too arbitrary and that increase the pressure to create
exceptions. That seems to be what is happening to these First Amendment prin-
ciples with the growth of social media.
Another random document with
no related content on Scribd:
Nova Scotia 1.00
“Christian Herald,” New York 55,000.00
Total $1,034,073.74
HOW NEW YORK RAISED FUNDS
FOR ITALY
The experience of the New York State Branch in raising relief funds
for a considerable number of disasters shows that several simple but
indispensable things must be done in order to ensure adequate
contributions—adequate, that is to say, to the emergency needs,
and, as it will no doubt interest many Red Cross members to know
what these things are and how they have been done, a brief
description of the last appeal is offered.
When on the morning of December 29th last word came to the
State Headquarters in New York City from Mr. Magee, the national
secretary, authorizing and directing an appeal to the public for funds
wherewith to meet the needs of stricken Sicily and Calabria, the
secretary of the State Branch, Mrs. William K. Draper, and the state
field agent were with the office secretary. For such an emergency
there is a recognized program of work. The first thing to be done, of
course, was to publish the appeal. At once, within an hour, notices
were sent to all of the local newspapers. This notice stated that the
American Red Cross had appealed to the people of the United
States in behalf of the earthquake sufferers; that all funds sent to the
State Treasurer, Mr. Jacob H. Schiff, at the State Headquarters
would be forwarded with the utmost expedition through the federal
state department to the Italian Red Cross, and that all persons
sending their contributions in this way would have the fullest
assurance that the money would reach the desired destination, and
would learn later from official Red Cross reports how it was spent.
Subsequently three ladies, members of the State Branch, visited all
of the newspaper offices in the city and enlisted the co-operation of
the editors in keeping before the public the function and record of the
Red Cross, and the name and address of its local treasurer. It was
realized that in order to get the best results the name and address
ought to be printed every day by the papers in a conspicuous
position. Unless this were done day after day, many persons inclined
to give would forget this detail and let the occasion pass.
The chairman of the state executive committee, Mr. Cleveland H.
Dodge, had meantime been notified. He satisfied himself by personal
inquiry that all necessary measures were being taken to give
publicity to the appeal and handle the contributions when received.
The State Branch has twenty subdivisions, and these in case of
similar disasters have been informed by letter, the small saving of
time generally not justifying the expense of telegraphing. In this
important instance, however, the chairman directed that the
subdivisions should be notified by telegraph. Within an hour or two,
therefore, every subdivision secretary in the state was advised of the
appeal, and the morning papers in each locality published it, together
with the name and address of the local treasurer, and a statement
that the Red Cross, as the official emergency relief organization, was
the proper channel for the transmission of funds to Italy. These
telegraphic messages were followed by letters of formal direction.
The Branch’s responsibilities were not discharged by these efforts.
We all know that a large portion of the public does not realize the
significance of the Red Cross, even in time of the most important
functions. Confused by the many claims on its attention, this portion
of the people hesitates as to the advisable course to take and ends
by waiting for fuller information. It was, therefore, of the greatest
assistance to the cause of practical relief that the President of the
United States, in his proclamation of the disaster, should point out
the Red Cross as the proper depository for popular contributions.
When Governors and Mayors do the same the representation is
impressive and convincing. One of the earliest acts of the Secretary
of the State Branch, therefore, was to write to Governor Hughes to
request him to follow the example of the President and direct the
public to the Red Cross, though naming the Treasurer of the State
Red Cross. Communication with the Governor’s secretary by long-
distance wire followed. The Governor readily appreciated the
wisdom of the proposal and issued the following proclamation:
“To the People of the State of New York:
“The calamity which has visited Southern Italy and Sicily
must not only excite our deep sympathy with those so
suddenly stricken, but our desire to aid in the relief of their
pressing necessities. To this we are prompted by humane
impulse and by our friendly interest in the people so largely
represented among our citizens.
“I recommend that contributions be made through the New
York State Branch of the American National Red Cross, which
is in communication with the Italian Red Cross and has
undertaken to receive and forward funds offered for relief.
“It may be hoped that the generosity of our people, which
has had such beneficent illustration in the past, may again
have abundant expression.
“Given under my hand and the Privy Seal of the State at the
Capitol in the city of Albany this thirtieth day of December, in
the year of our Lord one thousand nine hundred and eight.
“(Signed) CHARLES E. HUGHES.
“By the Governor:
“ROBERT H. FULLER,
“Secretary to the Governor.”
“The New York State Branch of the American National Red
Cross has offices at 500 Fifth avenue, New York City, and
contributions may be made to its Treasurer, Mr. Jacob H.
Schiff, there or at the address of Kuhn, Loeb & Company, 52
William street, New York City.”
So from their origin the Red Cross seems to have a special right to
these stamps. Their success will be apt to cause various
organizations to desire to copy this idea. This will lead to an
unfortunate result. Such repetitions will tire the public and the
multiplicity of the stamps will create a lack of interest and destroy
their usefulness not only for these other charities, but for the purpose
for which they were revived in this country—the anti-tuberculosis
work of the American Red Cross. It is to be hoped that our
unfortunate American habit of “running a good thing into the ground”
will not lead in this case to the destruction of the usefulness of the
Red Cross Christmas Stamp by the overproduction of these charity
stamps.
[Illustration: Some Charity Stamps of Sweden.]
California—
California Red Cross Branch and its Subdivisions $4,530.49
To be applied to Sanatoria, educational work,
Day Camps, District Visiting Nurses, etc.
Colorado—
Associated Charities of Colorado Springs $684.62
To be applied to establishing free sanatorium
for Tuberculosis patients.
Connecticut—
Connecticut Red Cross Branch $5,677.18
To be applied to the establishment of Day
Camps and for individual cases of tuberculosis
among the poor.
Delaware—
Delaware Red Cross Branch $1,152.17
To be applied to purchasing site for
dispensary and salaries of two nurses.
District of Columbia—
District of Columbia Branch $2,906.06
To be applied to maintenance of Day Camp.
Florida—
General Federation of Women’s Clubs,
Jacksonville $1,397.23
To be applied to an anti-tuberculosis
campaign.
Georgia—
Atlanta Committee on Tuberculosis $1,500.00
To be applied to salary of local efficient
secretary and educational work of anti-
tuberculosis society.
Augusta Committee on Tuberculosis $90.76
To be applied to day camp for Richmond
County.
Illinois—
Chicago Tuberculosis Institute $7,417.51
To be applied towards support of dispensary
department consisting of seven tuberculosis
clinics and small appropriations towards
sanatorium patients’ milk and egg fund, etc.
Indiana—
Indiana Red Cross Branch $3,831.58
To be applied to the aid of two specific cases
of tuberculosis and balance will probably be
expended in aiding existing anti-tuberculosis
organizations.
Iowa—
Burlington Red Cross Division $237.00
To be applied to Iowa tuberculosis fund.
Kansas—
Kansas Red Cross Branch $154.46
To be applied to educational work.
Kentucky—
Kentucky Anti-Tuberculosis Organization,
Louisville $2,300.00
To be applied to educational work in
Louisville and general promotion and
organization throughout the State.
Maine—
Maine Red Cross Branch $2,500.00
To be applied to Day Camps, tuberculosis
classes, educational work, State Sanitarium.
Maryland—
Maryland Association for the Prevention and
Relief of Tuberculosis $5,201.24
To be applied in educational work and in the
support of four special tuberculosis nurses and
the special tuberculosis dispensary maintained by
the Association.
Massachusetts—
Massachusetts Red Cross Branch $13,000.00
To be applied to Day Sanatoria, visiting
nurses, etc.
Michigan—
Michigan Red Cross Branch $3,344.17
To be applied to the erection of a Day Camp
for tubercular children on property owned by the city.
Civic League, Bay City $394.15
To be used in supplying nurses to
tuberculosis patients.
Minnesota—
State Board of Health (St. Paul) $1,506.86
To be applied to educational work of State
Anti-Tuberculosis Association.
Missouri—
Missouri Red Cross Branch $475.00
General work of organization, education and
relief.
Nebraska—
Nebraska Association for the Study and
Prevention of Tuberculosis (Omaha) $237.08
To be applied in educational work.
Eaton Laboratory (Lincoln) $33.70
To be applied in educational work.
New Hampshire—
New Hampshire Red Cross Branch $1,300.00
To be applied to educational work and
expenses of traveling nurse.
New Jersey—
New Jersey Red Cross Branch $464.53
To be applied to the support of a Red Cross
tent, should the State have a camp for
tuberculosis patients; otherwise the proceeds will
probably be given to the New Jersey State
Tuberculosis Society.
L. S. Plaut & Co., Newark $235.00
Proceeds given to local Anti-Tuberculosis
Society.
Mrs. S. C. Comstock, Montclair, by authority of
New Jersey Branch $927.25
To be applied to support of Summer Day
Camp or to support of patients in other camps.
Anti-Tuberculosis Committee of the Oranges $2,200.55
To be applied in Educational work.
Plainfield Society for the Relief of Tuberculosis $37.35
To be applied to general fund being raised
for establishment of a camp.
New York—
New York Red Cross Branch $21,174.67
To be applied generally to maintenance of
Day Camps.
North Carolina—
Wilmington Red Cross Subdivision $415.00
To be applied to educational work.
Ohio—
Cincinnati Subdivision $1,203.02
To be applied to educational work through