Section 512 Study Roundtable – Section 2

>>Kevin Amer: Okay. Welcome back, everyone. We are about ready to get
started on our second session of the day which again focuses on domestic case law
developments since the close of the comment period. I’ve been asked just
to remind everyone — all of our panelists to
please remember to speak into the microphone when
you’re making a comment, and then if you could please
turn off your microphone when you finish speaking. That will help from the
audio recording standpoint. So, again, as before, I’d
like to invite our panelists to introduce themselves, and
to state their affiliation. And then, again, we invite you to just make a very
brief summary statement, and we urge you,
again, to please try to keep those to
about 45 seconds. So we’ll start with Mr. Band.>>Jonathan Band: So
I’m Jonathan Band, and I represent the
Library Copyright Alliance. We’re most concerned with
the Ninth Circuit’s decision in Mavrix v. LiveJournal. In Mavrix, the court found that a service could lose
Section 512 Safe Harbor by virtue of moderating
the content being uploaded to its site. Thus, a service
provider that complies with the EU’s new filtering
requirements could find itself losing its DMCA safe harbor. This is a perverse result. The Motherless decision may
undo some of the damage, but Mavrix remains a
potential landmine. Thank you.>>Sofia Castillo: Sofia
Castillo from the Association of American Publishers. AAP continues to believe
that a legislative fix to Section 512 is necessary
to ensure that ISPs that rely on copyright infringement as their business
model are not eligible for safe harbor protection. The decisions in Cox and
Grande provide helpful elements that the Copyright Office
might include in its report with respect to defining a
reasonably implemented repeat infringer policy. Similarly, to address
the contours of platform responsibility,
the Copyright Office might look to the rulings in LiveJournal
and Motherless, which clarify that screening material for potentially infringing
content does not expel an ISP from the 512(c) Safe Harbor.>>Stephen Carlisle:
Stephen Carlisle of Nova Southeastern University. I need to preface my remarks by saying that all my comments here today
are my personal opinion, and do not necessarily
represent those of Nova Southeastern University. Through the good graces
of Nova, I have managed to [inaudible] maintain
one client from my previous law practice,
a small music publisher of jazz who has perhaps 100 songs. From our viewpoint, 512
is simply unworkable. The whack-a-mole problem on its
own makes it simply unaffordable from a time standpoint and a
financial standpoint to send out the number of
notices required. As to the red flag knowledge
which was discussed earlier, I did this last night courtesy of my good friend sitting
next to me, Google. I put the name of my
artist into a Google search. What I got back was
recommendations for two videos. Both of these videos
consisted of nothing more than my client’s song
with a static image of the album cover,
top two hits. Now, as far as red
flag knowledge, I think when you have an entire
work that is not modified in any way so it can’t
possibly be fair use, and all there is is the
album cover on there, I think that’s sufficient to
confer red flag knowledge. Thank you.>>Caleb Donaldson: I’m
Caleb Donaldson for Google. The DMCA framework
provides a balanced approach to intellectual property
enforcement. And we can see that
in the flourishing of not only the tech sector, but
the creative industries as well. We’ve talked — we’ve heard
already about the volume of searches and the volume of video
that is uploaded to YouTube; we haven’t heard that those videos have paid six
billion dollars in ad revenue to the music industry alone. But the creative sector’s
not just established industry players. A study last year showed that
in 2017 there were almost 17 million American independent
creators offering their works for money online. So this just shows the
variety of creativity that the DMCA could support. It’s also laid the foundation for Google’s Best-In-Class
rights management tools, not only our 100-million-dollar
investment into Content ID, but also our bulk removal
tools from search results. We processed 693 million
requests to remove URLs from search results last year. And we did it very quickly, and
with a high degree of accuracy. And we’re very proud
of those tools. And all of that rests on
the framework of the DMCA. Thank you.>>Kevin Amer: Thank you.>>Kenneth L. Doroshow:
I’m Ken Doroshow. I’m with the Recording
Industry Association of America. And I know there’s going to be
a certain amount of redundancy from this morning, so I’ll
try to edit on the fly to keep that to a minimum. But from the perspective
of the recording industry, little has changed
over the two years since the last comment period. And we stand by our comments
from a couple of years ago. Cases like Motherless, for
example, continue the trend of judicial opinions that
read the red flag knowledge requirement out of the statute. We are pleased to see those
repeat infringer cases that the office has noted, the BMG v. Cox case and
the Grande decision. But we hesitate to take too
much comfort in those decisions, because, as was discussed
this morning, they don’t really
teach us very much. These were very extreme cases. And it shouldn’t be
controversial or newsworthy that a service provider that effectively has no repeat
infringer policy is not entitled to the safe harbor. And, of course, the
Motherless court’s willingness to excuse evident problems with that particular service
provider’s repeat infringer policy suggests that
even BMG versus Cox and Grande can’t be
taken for granted.>>Kevin Amer: [inaudible] –>>Kenneth L. Doroshow: So –>>Kevin Amer: I
think we should –>>Kenneth L. Doroshow: Okay.>>Kevin Amer: Thank you.>>Kenneth L. Doroshow: Sure.>>Douglas T. Hudson: Hi. My name is Doug Hudson
from Etsy. We have two million micro
businesses and creators that might not be
fully represented in some of these discussions. And in that, I think there’s
some points that they see that might become —
that might come together between the two sides. One is that I’ve heard from
IP owners and marketplaces and others that there’s a
dramatic increase in the amount of fraud in the process. Fraud in terms of false
takedowns, in terms of phishing and scamming, and in terms
of, like, gaming the system. On the other side,
they’re seeing — people are seeing fraud
in counter notices. I think we need to seriously
look at 512(f), and find a way to put some more
teeth into the process to protect both copyright
owners, marketplaces and end-users. Second, I think we need to
work on simplifying the DMCA for small IP owners,
for micro businesses, for people who have a
small library of materials. It’s hard for them
to use the process. And, finally, there have been
solutions proposed here and in Europe; people have recommended
pre-filters. Pre-filters don’t
work for everyone. That’s not a one-size-fits-all
solution. When you deal in physical
goods, when you deal in creative services that
don’t match digital content, it’s not a viable solution. And so we need to understand
how the flexibility of a system like 512 helps create things in
all sorts of creative endeavors. They’re not just
digital, audio or video.>>Kevin Amer: Thank you.>>Keith Kupferschmid:
So I’m Keith Kupferschmid with the Copyright Alliance. When it passed Section
512, Congress intended to encourage copyright owners
and OSPs to work together to combat existing and future
forms of online infringement. However, over the past
20 years, court rulings and other unanticipated changes in the online environment have
rendered these provisions less effective, creating an ecosystem where mass copyright
infringements are an unfortunate and regular occurrence, and
ISPs are routinely shielded from liability and encouraged to avoid responsibility
and accountability. Over the past two
years in particular, we have experienced
more of the same. The courts have effectively
written the red flag knowledge standard out of the statute. And while there have been some
good recent decisions relating to the repeat infringer standard, these decisions are
not the panacea that some would make
them out to be. The force of the Fourth Estate
decision has compounded these problems by effectively adding
a new requirement to the DMCA that the works be registered
before sending a DMCA notice. If you combine these
decisions with the new limits on the Whois database,
there can be no doubt that we are clearly
worse off than we were when we attended these
roundtables two years ago.>>Kevin Amer: Okay. Thank you.>>Arthur Levy: Art Levy, Association of Independent
Music Publishers. Since the last roundtable
with some narrow exceptions, problems with the
DMCA have gotten worse for independent music publishers
and songwriters, not better. Courts continue to write
copyright owner protections out of the DMCA, most
recently in cases that have interpreted
the act’s provisions on constructive knowledge and misrepresentation,
among other issues. As a result, service providers
have less incentive to work to prevent infringement, and
it’s even more burdensome for copyright owners to do so. The whack-a-mole problem has not
been solved, yet ISPs continue to benefit from the safe harbor. And from the perspective of
indie publishers, songwriters and other small copyright
owners lacking the resources to enforce their
rights under the DMCA, the DMCA essentially
offers them no remedy. The Copyright Office should
promote significant DMCA reform, seeking a rebalancing
of the DMCA.>>Peter Midgley: My
name is Peter Midgley. I’m the director of the
Copyright Licensing Office at Brigham Young University. We’re a private non-profit
educational institution. And because we’re private, we
don’t enjoy sovereign immunity from copyright infringement
lawsuits. We’re here because we believe that universities are somewhat
unique in the DMCA ecosystem. Obviously, our primary role
is to educate our students, many of whom are dreaming about
careers in creative industries. At BYU, our animation program and advertising programs are
among the most highly rated in the world. And so we definitely
recognize the value of a robust copyright system. We’re, by no means,
copyright abolitionists. But at the same time, we
are also service providers, and we manage a very large
network to support our students, our faculty, staff and even
visitors to our campus. And in that context, we’ve received numerous
512(c) notices. And the imposition that
it presents for us, the administrative burden
in processing those notices, and the uncertainty associated with them,
particularly following the Cox and Grande cases, are
somewhat problematic for us as universities.>>Kevin Amer: Thank you.>>Sasha Moss: Thank you to
the Copyright Office staff for inviting us all
to speak here today. My name is Sasha Moss. And I’m here on behalf of
the R Street Institute, a center [inaudible]
think tank based in Washington, D.C.
and the state. So as the internet has grown, so has the number
of takedown requests. As such, the burden
has heightened for both rights holders
and service providers to combat infringement. Now consider this. Consumption of legal content is
continually rising, as R Street and the Center for Democracy and Technology articulated in
our last round of statements. In 2015, audiences legally
consumed 3.5 billion hours of movies online. So as we see, as legal
options become available, users will generally
gravitate towards that option. Now, motivation may
differ from user to user. Some users are afraid that pirated
content might come with malware. Others may fear having
their internet cut off for the entire household. Regardless, legal options, as they become available,
will be used. And we need to continue to gravitate towards
that direction. And while nothing’s ever easy, the DMCA as written is not
perfect, but, as we know, perfect is the
enemy of the good. I want to thank you
for your time, and look forward
to your questions.>>Mary Rasenberger: Hi. Mary Rasenberger from
the Authors Guild. The Authors Guild is a
membership organization and advocacy organization
with 10,000 members. At
least a third of our members, maybe half, do some
self-publishing. So they are trying to deal
with piracy themselves. In the last two years, ebook
piracy has blossomed, bloomed. It is becoming a real issue. Frequent readers are
more frequently reading from piracy sites. The new cases in the last couple of years only affirm the
collapse of the actual knowledge and the red flag standards
into notice and takedown. And as we all know, notice and takedown is an absurd
way to deal with piracy. 512 is not incentivizing
cooperation, as it was intended to do. And for authors, the main
issue we’re dealing with is that under 512 we cannot
address the ebook piracy sites. That is, the sites that are
devoted to ebook piracy. They hide behind 512.>>Kevin Amer: Okay. Thank you.>>Mary Rasenberger:
We need to rethink 512 and switch the burdens
to the ISPs. And I just want to suggest that
we look to the EU directive as some kind of model.>>Kevin Amer: Okay. Thank you very much. I’d like to start this session
with repeat infringer policies. Mr. Doroshow, I think you said that the recent cases don’t
have much to teach us. But, nevertheless, I’d
like to throw the question out there, a general question. And that is to what extent
have recent decisions on repeat infringer affected
or clarified the state of the law in this area? And, more specifically, does anyone see any
conflict among the decisions? And I’m thinking particularly
conflict between Cox and Grande Communications
on the one hand, and Motherless on
the other hand. Mr. Midgley?>>Peter Midgley: Yeah. So I actually tend to agree that
the recent case law has not been as helpful for those of us
who are earnestly seeking to implement repeat
infringer policies. I think what we have now
are a couple of cases that make it clear that —
it actually isn’t even clear if a 13-strikes-and-you’re-out
policy is an adequate policy under the statute. What is clear is that
if you don’t enforce it, you’re not eligible for the safe
harbor, which really isn’t all that helpful for those
of us who are trying to implement whatever
is an acceptable repeat infringer policy. We heard a lot of talk
in the earlier panel about what constitutes
red flag knowledge. And somebody said they would
love to know how to put somebody on red flag knowledge. Well, as an ISP, I
would love to know how to implement a repeat infringer
policy that’s going to be held to be adequate, and
what it means to reasonably enforce
such a policy. I can just tell you in a
university setting, again, we’re somewhat unique because
we have pretty close proximity to our subscribers. It’s pretty easy when — I
mean, what our policy is, at BYU at least,
is to forward — to do our best to try to
identify whoever was associated with a given IP address
included in a notification of claimed infringement, which
can be a very difficult process. And we don’t — we can’t do it
all the time given the dynamic nature of our network. But when we can, we
forward the notices on to the people involved. And just anecdotally, I can
tell you that, you know, these are students
that they’re — you know, they see this
big scary legal notice, they show up in my
office, and they say, “I have no idea what
you’re talking about. I don’t know what this is.” And so I have, you
know, some rights holder on one hand telling me
somebody has a problem. I have a student on the other
hand saying I have no idea. And now the question is,
“Well, what’s my obligation?” You know, I guess I’m
one of the few ISPs that actually has a [inaudible]
courtroom on my campus, so I suppose I could
start holding hearings, and have the students show up and invite the rights
holders in. But I don’t know if Cox
or Grande or, you know, who else could take
advantage of that process, and really whether it should
be expected of us as ISPs, what level of adjudication,
what burden do we bear as ISPs to ferret out what are actual
instances of infringement.>>Kevin Amer: Could you
elaborate a little more on sort of what’s taking
place with respect to these notices the
students are getting? I mean, what sort of activity
are these targeted towards? Is it sort of peer-to-peer
activities that go to you, and then you forward
them to the students? Or how does the process
typically work?>>Peter Midgley: Yes. So in almost all instances,
we get these notices. They purport to be under 512(c), but they’re really aimed
at 512(a) activity. Which, again, makes it very
difficult, because in order to even do any kind — you know,
we’re different than YouTube. We don’t have a copy on a
server that we own and operate that we can go and check. This is just content that has
flowed through our network. And to be eligible for 512(a),
we can’t keep copies of it. We have no way of
verifying whether or not — you know, the only thing we
have to go on is the fact that a rights holder has
sent us a notification. So we do our best to
identify who’s involved. We forward the notice on. And what we do at our
campus, we refer the matter over to our Honor Code Office, which does have some
fact-finding capability that we don’t have
in our office. And so, you know, to the extent
a student wants to dispute it, they can go and avail
themselves of that process. But –>>Regan Smith: So it sounds
like [inaudible] under 512(a), you take these notices as,
you know, indicative data as to whether or not there’s
an infringement problem. Right?>>Peter Midgley: That has
been our current policy. You know, again, we’re
looking at the Cox case and the Grande cases,
and, you know, we’re left wondering what
precisely is an adequate repeat infringer policy. I don’t think the
courts have told us that, or the statute certainly
doesn’t appear to tell us that. And so we’re just, you know, doing what we think
is reasonable, and hoping that it will — that
if and when we’re challenged, that we will be eligible for
the safe harbor under 512, given the policy
that we’ve adopted.>>Kevin Amer: So it sounds like you would favor the
statute having more specificity. Is that correct? And I guess, you
know, the second part of that question
is that, obviously, we’ve heard from a lot of people that the repeat infringer
policy was not intended, and should not, sort of impose
a one-size-fits-all policy. And so I wonder what
your response is to that, and what your sort of
suggestions are for ways that would provide more clarity
to universities and others.>>Peter Midgley: Yeah. So I do think — I
agree with the notion that a one-size-fits-all
policy does not work very well. You know, again, I’m here as
one example of a large number of organizations for whom
internet service is not our primary function. It’s an ancillary
function that we provide. You know, if I were the general
counsel of Cox or Comcast or some other more traditional
ISP, I would be paying very, very careful attention.
The same is true for us as a university. And there are lots
of organizations that have broadband
access, you know, to supplement some other
service they’re providing. And so, you know, one of
the things that we have to consider is whether or not the potential
liability associated with providing internet
access is justified by the benefits that
are provided. And, you know, again,
with statutory damages and all these other
things looming out there, I think that’s a very
real conversation. So that’s something the office
needs to consider is, you know, if it’s difficult for the
Coxes and Grandes of the world to adopt and reasonably
implement repeat infringer policies, how much more
difficult is it for those of us that are not in traditional ISP
businesses to, you know, wade through all of the statutes
and the case law on this. My final point, I
guess, would be, at least in the
university-specific context, there is 512(e), which is — I don’t know what its
original intent was, but I can just tell
you as somebody who is a copyright
officer at a university, it’s virtually useless. So if you want it to
do something specific for non-profit educational
institutions, which I do think would be
a worthwhile thing to do, I would encourage you to
consider clarifications and revisions of 512(e). And I’d be happy to
talk about that further.>>Kevin Amer: Thank you.>>Kimberley Isbell: So
I just want to ask sort of a practical rather
than a legal question. Does your university either post
what its repeat infringer policy is, or communicate that to
rights holders once they’ve complained about a
particular student or particular traffic
on your network?>>Peter Midgley: Yes. Our repeat infringer
policy, it’s available on our internal university
policy network. It’s publicly viewable
through our website at copyright.byu.edu. I encourage everyone to visit. And so — and we —
one other issue, again, this is a university-specific
issue, but we also have to deal with the Higher Education
Opportunity Act, which includes provisions
specific to copyright infringement. And so in compliance
with the HEOA, we send out an annual
notice to every member of our university community,
all faculty, students and staff, that make them aware of our
repeat infringer policy, direct them to legal
alternatives, and, you know, the other provisions
that are in the HEOA. So that’s another area,
again, if you’re looking to do revisions and
specifically aimed at the non-profit
educational sector, I would encourage the office
to consider the interplay between the HEOA and 512. Which it’s not clear to me
that that was considered in the original implementation
of those two statutes.>>Kevin Amer: Thank you. Mr. Band?>>Jonathan Band: So the Library Copyright Alliance filed an
amicus brief in the Cox case. And what we were concerned
about was exactly sort of this one-size-fits-all
problem. And we wanted to make sure that,
you know, the court didn’t sort of say, “Okay, this
is the standard and this is the standard that’s
going to apply to everyone. Everyone needs to have
this kind of policy.” Because exactly as Peter
was describing, I mean, certainly universities are
one kind of service provider, libraries are another
kind of service provider. For many Americans, you know,
who aren’t in school, I mean, the place where they
get internet access is at the library. And so it’s very
important that, you know, that sort of the standards
that apply to Verizon and Cox and Comcast not necessarily
be the standards, you know, for a policy — or a
repeat infringer policy for a university
or for a library. I mean, we don’t see a need
for a statutory amendment. We think the language as is
provides enough flexibility, especially because of the idea
of appropriate circumstances. And let me just also add,
just to take a step back, and this sort of
connects to points made in the previous session. Maybe you’ll be getting
to that here, too. Less about the constitutional
dimension about the importance of internet access, but
more the practical concern. So as I indicated, for many
— you know, for some, like, 30 or 40% of the
population, the only place where they can get broadband
access is at the public library. I mean, we all walk
around with our iPhones, but a lot of people don’t,
or they’re in regions where there isn’t good coverage. And the access to the internet,
I mean, in a sense it sort of goes beyond the
First Amendment. I mean, it really
goes to the ability to function in this society. I mean, as we read stories,
I mean, you can’t apply for Medicaid in places, or you can’t meet the Medicaid
work requirements unless you file things routinely,
you know, on a website. So that assumes that you know
that you have internet access, and that you know
how to use a website, and that you can know how
to apply for things online.>>Regan Smith: Do you think
libraries should educate — I mean, they probably
do, right, about the need to not infringe copyright if
you’re depending upon this, right, repeat infringer? You have more than one — you
have to repeatedly infringe in order to be potentially
terminated?>>Jonathan Band: Right. No. And, you know, certainly
libraries, particularly in the high-risk situation,
take that very seriously. And they have the
same Higher Ed. Opportunity Act requirements. But my point is this, is that when we’re balancing
the issues relating to terminating internet access,
we need to be aware it really — it goes beyond — I mean, even though the First Amendment
is important, you know, I’m saying it goes — you know,
it’s the, you know, right — you know, life, liberty and
the pursuit of happiness. I mean, you can’t
do those things in this country unless
you have internet access.>>Kevin Amer: So –>>Kimberley Isbell: I –>>Kevin Amer: Go ahead.>>Kimberley Isbell: I just want to ask a very maybe
hyper-technical question. Does LCA view the fact that libraries provide
the physical facilities to access the internet as
making them 512(a) ISPs?>>Jonathan Band: Yes, we do,
because we’re — you know, we — you know, we feel
we fall squarely within the definition of 512(a).>>Brad Greenburg:
Related to that, then I have two follow
up questions. One is that in LCA’s
initial comments you wrote that service providers have
been applying a repeat infringer policy that was actually
at a higher standard than the law requires. I’m curious if you
still feel that way, and exactly what
that standard is. And I’ll add to that. And we haven’t actually
talked about this yet. This might be a good time
to begin talking about it. But at the last round
of roundtables, the service providers
were largely saying that repeat infringer means
adjudicated repeat infringer. That is not what the
court said in Cox. And I’m wondering if that
understanding among service providers has changed,
or if they think that the court just
got that wrong.>>Jonathan Band: Well, as
this conversation indicates, there’s lots of different
kinds of service providers. And I’m sure, you know, the different service providers
have different opinions. You know, it would seem
to me that, you know, an infringer is an infringer,
not an alleged infringer. That seems to me what the plain
language of the statute is. But I agree with you. The courts seem to be going
in a different direction. And, you know, so that — you know, I’m not an
article III judge, so I guess the law’s what
they say, not what I say.>>Kevin Amer: So just — and
one last follow-up question. Just to sort of drill down. I mean, so — I mean,
we — I think, you know, we take your point about the
importance of internet access. I guess the sort of
bottom-line question is so then what are you sort
of suggesting in terms of either a potential change to
the repeat infringer provision to accommodate these
sorts of concerns? Is that what you’re suggesting? Should there be a
different statutory provision for non-profit institutions? Should the repeat
infringer policy not apply in those situations? I mean, what is the sort of bottom-line proposal
that you would favor?>>Jonathan Band: No. I think the statute as written,
you know, it’s a little awkward, that provision, you know,
because it’s also talking about terminating infringers, which I don’t think
we want to do. We want to terminate the
subscriptions of infringers.>>Kevin Amer: I’ll
just [inaudible] it. If –>>Jonathan Band: You know,
just saying, I don’t think that was that well drafted.>>Kevin Amer: I just –>>Jonathan Band: But I
don’t think it needs — I don’t think that
oddity is enough to require congressional
intervention. I think as long as
courts continue — or courts don’t sort of start
imposing additional restrictions on what appropriate
circumstances are, so that we — you know, a library can
decide what’s an appropriate circumstance, a university can
decide what’s an appropriate circumstance, and say, “Look —
” — you know, because again, if you’re a university student, and you don’t have
access to the network –>>Kevin Amer: Right.>>Jonathan Band: You can’t
get your homework, you know, you can’t get your
assignments, you can’t –>>Kevin Amer: Right.>>Jonathan Band: You
can’t take your exams.>>Kevin Amer: So but, I
mean, I guess sort of the — the statute, obviously,
contemplates that at some point
people will be terminated if they are repeat
infringers, and they — right. [inaudible] their
subscriptions –>>Jonathan Band:
[inaudible] that.>>Kevin Amer: That
their subscriptions will be terminated. And I take your point about
the need for that standard or, you know, appropriate
circumstances to vary depending on the nature of the
service provider, particularly given the
importance of internet access. But on the other hand, that’s what the statute
seems to contemplate. So are you suggesting that, you
know, for certain institutions that requirement
should not apply?>>Jonathan Band: No, no. What I’m suggesting is that
in your report you talk about how appropriate
circumstances could be interpreted in different — you
know, that what’s appropriate at a public library
or a public — what’s appropriate for a
university, may be different from what’s appropriate for a large commercial
service provider.>>Kevin Amer: Okay. Thank you. Ms. Castillo?>>Sofia Castillo:
Well, first of all, I would like to push back
or disagree with the notion that it was difficult
for Cox and Grande to implement a repeat
infringer policy. In those cases, it was clear
that it was not difficult. They just decided not to do it. Cox had a policy, and it just
decided not to implement it. And Grande didn’t even have
a policy, and it just decided to ignore all the millions
of notices that it received for repeat infringement. So I don’t think it’s about difficulty
levels in those cases. I think, for the purposes of
the Copyright Office study, there are a couple of
elements in the Cox decision that are helpful about repeat
infringer policies in general. And it’s what Mr. Greenberg
was alluding to before about the concept
of repeat infringer. The court said that a
repeat infringer is someone who infringes copyright
more than once, and there is no need
for adjudication. And I think that is something
that the court got right, and that the Copyright Office, in its recommendation,
should stand for. Secondly, the Cox
decision also ruled that a repeat infringer
policy should be assessed from an ISP’s general practices. And I think that is also
the correct interpretation of the law. Then in terms of what
constitutes a reasonable implementation of a
repeat infringer policy, there are three things that the
Copyright Office can include in its report. First, is that an ISP
should meaningfully and consistently
enforce its own policies, whatever that policy is. It’s true that we don’t have
guidance from the courts on that, but at least we do
have guidance on meaningful and consistent enforcement
of such policy. This is from the Cox decision. And then from the
Grande decision, it’s clear that an ISP
should be keeping a log of repeat infringers in
order to be able to say that it has reasonably
implemented a repeat infringer policy. And third, AAP believes that ISPs should prevent
terminated subscriptions or terminated users from opening
a new account using simply a different email address
or a different user name while still being the same person. These decisions are
also helpful in pointing out what is not a
reasonable implementation of a repeat infringer policy. So, for example, in Cox and
Grande, the courts concluded that refusal to terminate known
repeat infringers is one way to not comply with the statute. So I think that’s an
easy recommendation for the Copyright Office
to follow through on. Another one would be
the termination followed by the immediate
or thereafter — shortly thereafter reactivation
of repeat infringers. That also seems to
be inconsistent with a reasonable
implementation of the statute. And then on the question of
the contradictions between Cox and Grande on the one hand
and Motherless on the other, there are at least two problems with the Motherless
decision regarding the repeat infringer policy. The first one is that
512(i)(1) requires ISPs to inform their users
and subscription — or account holders of the
repeat infringer policy. In Motherless, the policy was
simply anything legal stays. And that hardly conveys to a
user that there is a potential for termination if
they repeatedly submit infringing content. Then another thing that the
Motherless court got wrong was that it ruled that
implementation of a repeat infringer
policy based on the operator’s personal
judgment, and without a log of repeat infringers, was
reasonable under the statute. We believe that Judge
Rawlinson’s dissent is very illustrative
of why this is problematic. And we also think that the Cox
decision, with its standard of meaningful and consistent
enforcement, is actually more in line with Congress’s intent
in implementing the DMCA as a system of shared
responsibilities between ISPs and
copyright owners.>>Kevin Amer: Thank you. Mr. Midgley, did you
want to follow up?>>Peter Midgley: Yeah. Just a couple quick points. First of all I am interested
if there is any guidance on — you know, unlike Cox or Grande,
at least in our university, we’re forwarding
these notices on, and we do receive actual
notice from the subscriber, to the best of our ability,
that there is no infringement. And so what is an ISP to
do if in the implementation of their policy they get
conflicting information? Is that now considered an
infringer when they’ve denied that they’re an infringer? And whose word are
we supposed to take? And how do we deal with that? That’s a question. I would just also like to
say that 512(i)(1) refers to a service provider
system or network. This is a very important
distinction in a university setting. We provide a network, which has
the First Amendment implications we talked about. We also provide the system, which is how our students
access our university. And if the statute isn’t clear
about what precisely we have to terminate once we’ve decided
that there’s a repeat infringer, whether it’s the system or
the network, that’s a very, very important distinction
for us. And we would appreciate
some clarity on that. I can just tell you that non-profit educational
institutions are notoriously risk-averse. And so if — uncertainty is
going to make it very difficult for non-profit educational
institutions to continue to provide the robust
environments that I think we all depend on to provide the socially
beneficial functions of those institutions if there
is uncertainty around how to avoid, you know, potentially
catastrophic liability.>>Kevin Amer: Thank you. I think I’m going to
go to Mr. Donaldson, and then Mr. Doroshow. And then, unless there
are further comments on repeat infringer, we’re
going to move to the next topic. Mr. Donaldson?>>Caleb Donaldson: Sure. I just wanted to say that Cox
on the one hand and Motherless on the other show that the
courts are getting involved in whether these policies
are appropriate to the nature and purpose and size
of the platform. And that truly one
size doesn’t fit all. And these are sort of
— they provided — the cases taken together provide
a good example of why it’s hard to write a regulation that
would cover all of this. Even putting aside
512(a) providers, the number of different
kinds of 512(c) platforms, and the different
resources available to them, dictates that repeat
infringer policies will have some variation. And that’s true not only from
the perspective of a big company to a little company, but even
within Google’s 512(c) products, of which there are many. You know, we tailor
repeat infringer policies to the appropriate
circumstances given, you know, what the purpose
of the platform is.>>Kevin Amer: Mr. Doroshow?>>Kenneth L. Doroshow:
This may help with the segue to other aspects of the 512. But I just want to make a
comment about the importance of the repeat infringer policy, and the termination
requirements and so on. Very important, obviously. These are important developments with the BMG and
Grande decisions. But they’re not the be-all and
end-all for a couple of reasons. First, if you have to — if you look at the facts of
these cases, in order just to make the point
to prove the case that there was a failure
here, the rights’ owners had to send millions of notices. So there’s a sort of an
upfront burden that’s put on the copyright owner that
is really unreasonable, even to make this
threshold bare minimum case that these ISPs had not
implemented a repeat infringer policy reasonably. And then I think — this is
to echo Ms. Castillo’s point from earlier, even if you
have a perfect situation and a perfect system of
repeat infringer policy and terminations and so on,
you still have the problem of users finding other means — these infringing users
finding other means of access to the internet, whether it’s
through a different service or because there’s a lack
of know your customer rules, they can show up again using
different identification, different account information. So, again, the real action,
it seems to me, is the issue of red flag knowledge and
the representative list, and the sort of — the more
substantive obligations that go to the knowledge of the ISP, and then what obligations
they have upon acquiring that sort of knowledge.>>Kevin Amer: So that, I
think, picks up on a question that I had during
the last panel. Which is how do content
owners typically go about notifying conduit
service providers of infringement on
their platforms? Because that was, obviously
— you know, it sounds like, Mr. Midgley, you still are
receiving notices that purport to be 512(c) notices
in some cases. There was an issue, you know,
previously in the Cox case about service providers
rejecting those sorts of notices. I wonder if you have any sort of
insight that you could provide as to the practice
in your industry of how rights holders
typically go about providing this
information.>>Peter Midgley: I mean,
it’s somewhat variable, depending on the
nature of the service. But we do send DMCA
compliant notices both to 512(a) providers
and 512(c) providers.>>I wanted to — did Ms.
Moss and Ms. Rasenberger, did you have comments on
repeat infringer or — you did? Okay.>>Mary Rasenberger: Yeah.>>Kevin Amer: Because
I think — okay.>>Mary Rasenberger: So real
quick, since I listened to –>>Brad Greenburg: Mic.>>Mary Rasenberger: The issues
raised here — can you hear me?>>Kevin Amer: Oh, no. Please turn –>>Brad Greenburg:
Your mic’s off.>>Mary Rasenberger:
Oh, thank you. As I listen to the issues raised
here and in the earlier session, it occurs to me that a role that
the Copyright Office might have if 512 isn’t completely
revamped, as I earlier suggested
it might be, it would be to provide
best practices to convene the different
industries, the different types of service providers,
and have best practices for both adequate repeat
infringer policies, and also for — going into
probably the next question, for red flags and
red flag knowledge, where it really differs
by industry. And in that way, the service
providers couldn’t say, “Oh, the watermark doesn’t
necessarily mean infringement.” I mean, they would
be educated industry by industry.>>Kevin Amer: Sure.>>Sasha Moss: Just to
quickly branch off that note. Something that R
Street has been looking into with the Legislative Branch
Capacity Working Group is this idea of capacity within
the legislative branch, so the first branch, which
copyright [inaudible] is part of the Library of Congress. I see you all included that. And something the PTO has
instituted is the PTO Inventor Assistance Center, almost
like a toll-free call number where I can call PTO
and ask a question. We have that for basic
services, like the internet. I can call my internet provider
and say I have a question, how to fix something, and because I am paying
a service provider, they have to offer me an answer. I think there could be
an interesting avenue, maybe through the registration
process and fees allotted to offer this kind of
assistance to rights holders.>>Regan Smith: So we have
the Public Information Office. And they answer hundreds
of thousands of questions every year. So that might be something
where they could call.>>Sasha Moss: They could look
into it in the Copyright Office. And I just think that’d
be an easy way not to solve the problem by any
means, but to offer avenues for the legislative branch to continually beef
up its capacity. If the PTO could have it within
the administrative branch, there’s no reason in my mind why
the legislative branch can’t do the same.>>Kevin Amer: Thank you. I’m going to go to
Mr. Kupferschmid, and then I think we’re going to
have to move to the next topic.>>Keith Kupferschmid:
Yeah, I’ll try to be brief, and sorry for getting in
the way of you moving on. But there’s a lot of
discussion so far on this panel, and also in the first panel, about this one-size-fits-all
does not work. And I don’t disagree with that. But if we’re going to consider
that for ISPs, we really need to consider that for the
other side of the equation, which is the creative community. Right? One-size-fits-all
for the DMCA doesn’t work for the notice system
either, for the little guys, the small businesses,
the individual creators. It just doesn’t work. They can’t afford to bring
these expensive suits against these ISPs, these
repeat infringer suits. They can’t afford to be
sending these millions of takedown notices, you know, that the music industry
might be sending, or anyone else for that matter. They are truly sort of —
if you watch “Star Trek,” they’re the guys
with the red shirts. Right? They get beamed down to
the planet, and they’re toast with — you know, immediately
the guys [inaudible]. You know? They’re the expendable
group here, if you will. So I think that needs to
be taken into account. If we’re going to take into
account how the DMCA works, or doesn’t work, for the small
platforms, we also need to take into account how it works
or, frankly, doesn’t work, for the smaller creators.>>Kevin Amer: Thank you. So I’d like to turn to the issue
of storage at the direction of the user, and
how that relates to the no duty to
monitor provision. And, Mr. Band, I know you
mentioned the Mavrix case. So, obviously, we’ve had two
recent Ninth Circuit cases, Mavrix on the one hand and
Motherless on the other. Both of which involved
service providers that provided some level
of human monitoring. And so I wonder what your
views are on the extent to which these cases have
clarified the law with respect to 512(c) eligibility,
particularly on the issue of when something should
be considered storage at the direction of the user.>>Jonathan Band: Well, I don’t
think they’ve clarified the law. I think they’ve muddled it. And, you know, like I
said, I think Mavrix sort of went in a bad direction. Motherless sort of
improved it a little bit. But, I guess, it just
really seems to be treading in a very dangerous area,
especially as was indicated on the previous panel. I mean, this issue
of moderation, what is appropriate
moderation, it really — it’s a very fundamental issue
that goes way beyond copyright, and it gets into — you
know, and it gets into 230, but then it also gets into, you
know, these broader issues of, you know, what do we want
the internet to look like. And –>>Regan Smith: Well,
[inaudible] –>>Jonathan Band: It seems –>>Regan Smith: It also
goes to copyright, though. Right? Because it says — 512(c) it says “ability
to control such activity.” And some of the cases
[inaudible], we have to reconcile
copyright [inaudible] –>>Jonathan Band: Right. But –>>Regan Smith: You know, in
thinking about it and kinds of these other issues
that are very important that we heard about [inaudible].>>Jonathan Band: Right. But 512(m) says, you know, you can’t condition
eligibility on monitoring. And so it really — you know,
sort of the sense of Congress, both in 1996 when the CDA was
being discussed and then in 1998 when the DMCA was being
discussed, was, you know, that there wasn’t going to
be requirements to monitor, but that people were going
to be encouraged to do it, because it was this recognition that monitoring was
a good thing, and moderating was a good thing. I mean, moderating
not monitoring. But that you wanted to have,
if possible, human involvement, because, you know, you couldn’t
make all these determinations algorithmically and so forth. And so we just — that’s
what’s so troubling. Now, it could be that
in the specific facts of Mavrix you could sort of
say, “Well, okay, they were — there was so much human
involvement,” and, you know, that they sort of numerically
were filtering out, you know, two-thirds, three-quarters
of the content, so it really was sort of a
situation like a publisher that — where, you know,
a hundred submissions — you know, a hundred
authors submit novels, and only one gets published.>>Kevin Amer: Well,
that’s what they –>>Jonathan Band: But it’s –>>Kevin Amer: I’m sorry
to interrupt, but, I mean, that seems to be the
point that, you know, I think we were trying
to get at. When — I mean, at the beginning
you said that, you know, Mavrix — I don’t know if you
said it was wrongly decided, but, you know, you said that
it has sort of muddled things. And, you know, it seems to
me that there was quite a lot of content-based selection
going on in that case. You know, if that
doesn’t constitute storage at the direction of the service
provider — I don’t know, can — it’s hard to think of
examples that would. Is it?>>Jonathan Band: [inaudible] and I guess — and
I don’t remember. My recollection was
that in Mavrix that there wasn’t anything that
— you know, I’m sure there was in the trial court,
but I don’t remember in the appellate decision, if there was any sense
of quantification. But I — and I, obviously, don’t want to have
hard and fast rules. But I think quantification does
give you a sense of how much –>>Kevin Amer: Quantification
in terms of how much –>>Jonathan Band: Well, right.>>Kevin Amer: What
percentage [inaudible] –>>Jonathan Band: Like, so
at what point do they stop — does it really — if I’m the
service provider, and, you know, I’m a platform, and I’m
getting in, you know, hundreds or thousands of submissions
a day, and I’m just kind of doing this very quick and
dirty, you know, cat video, yes, you know, something else, no. You know, that kind of —
and, again, it’s not even me. It might even be, again, the
community that’s sort of — or volunteers who
are doing that. That’s one thing. And then if you end
up with, let’s say, 80% or 90% of the content
that is submitted by users, that ends up going
up, then it’s — I think it’s pretty easy to
say, “Yeah, that is storage at the direction of the user.” On the other hand, if you have
a situation where, you know, 90% gets screened out for a
variety of reasons, and not — you know, including that
it’s not appropriate or the quality isn’t
good enough, I mean, you really do have these
sort of editorial decisions, then it starts looking a
lot more like a publisher. And then you could sort of say,
“Well, that starts looking — ” and, again, it is a continuum. And in the specific facts
of Mavrix, I don’t know. I don’t know if it was —
were they screening out 10% or were they screening out 90%, or was it somewhere
in the middle. But all I’m saying is
that some of the language in that decision was
troubling, and reflected a lack of sensitivity that,
again, Motherless will to some extent correct. But still, you know, the bigger
point is that we don’t want to make — we don’t
want to put platforms in this impossible decision —
you know, in a possible place where if they try to moderate
or try to look at and make sure that the stuff is really
appropriate, that they end up losing their safe harbors.>>Kevin Amer: Thank you. Ms. Castillo?>>Sofia Castillo: I
think it would be helpful for the Copyright Office to
look at these two cases to stand for the proposition
that screening material for potentially infringing
content is an activity that enhances public
accessibility of content stored at the direction of the user,
and does not expel an ISP from the 512(c) Safe Harbor. The types of screening in these
two cases were very different. Motherless was screening
for illegal content. Their policy was, again,
anything legal stays. And so the court there found
that this was an activity that was acceptable for
purposes of the safe harbor, because it would still render
the content to be stored at the direction of the user. The type of screening in
LiveJournal was different. It was for substance. The court called it manual,
extensive and substantive. And it was a much closer call. And one thing they were not
screening for was infringement. And so at that point —
I think what’s helpful from these two decisions is
that if an ISP is screening for substance, and it’s not
screening for infringement, then it is possible that it
will lose its safe harbor. In Motherless, the ISP
was simply screening for illegal content, including
copyright infringement. I think these decisions are
helpful in that they attenuate, to some extent, the ISP’s
incentive not to look at user submissions
for infringement. And they also clarify
that screening content for substance is not an
accessibility-enhancing activity. And that the ISP might
lose its safe harbor if it engages in this behavior. I think one other point I would
like to make is that we disagree with the courts’ interpretation
of Section 512(m) so far. The title of that provision
is Protection of Privacy. And both the Senate and the
House report make it clear that Congress’s intent with this
provision was to prevent ISPs from violating privacy laws, such as the Electronic
Communications Privacy Act, when they were pursuing
efforts to address infringement. This section was not meant to
say that ISPs have no obligation to monitor whatsoever when it
comes to copyright infringement.>>Kevin Amer: Thank you.>>Brad Greenburg: Yeah, I just want to ask a quick
follow-up question on that. If I understood correctly, you
were saying that to the extent that a service provider is
screening for illegal content, they should also be screening for copyright infringement,
which is illegal. So the question there I
have is does that mean that if a service provider is
screening for child pornography and snuff films only that
they are going to suddenly be out of the safe harbor? And if it’s not what
you’re saying, what is sort of the limiting principle
between screening for no illegal content
and all illegal content?>>Sofia Castillo: No, no,
that’s not what I’m saying. What I’m saying is what the
court said in Motherless was that screening for illegal
content of any kind, so child pornography and
copyright infringement, were things that Congress could
not have meant to discourage by eliminating the safe harbor. So what I’m saying is
that if ISPs are screening for illegal content, including
copyright infringement, then they shouldn’t lose
their 512(c) Safe Harbor. Does that make sense?>>Brad Greenburg: It does. My question is what if
they’re only screening for some illegal content, but
not copyright infringement?>>Sofia Castillo: Hm,
that’s a closer question. Right? Because according
to the LiveJournal decision where there isn’t any discussion
of screening for any kind of illegal content, in that
case the court seemed to think that on remand the ISP
might lose its safe harbor.>>Kevin Amer: Mr. Carlisle?>>Stephen Carlisle: I think that content moderation
is a good thing, and that it should
definitely be encouraged, because the alternative to
that is no moderation at all. And it just becomes
an absolute, you know, free-for-all in a cesspool. I think that perhaps by getting
better practices out there that we could solve a lot of
these particular problems. And I’ll reference them
from my own experience. I used to be a musician
and write songs. Now, in order to get
these heard, I placed them on a website called
ReverbNation. Now, according to
ReverbNation’s terms of service, I had to warrant that I was
the author of the material, or I had to license —
properly license the material, or it would not go up at all. And I think that a lot of the
problems that we’re experiencing with red flag knowledge, and a
lot of experience about well, it’s got a watermark
on it, but who owns it, we could have better practices
along these lines before we get it posted at the
direction of the user. Perhaps the threshold question
is who owns the material? Is the user who’s posting
the material claiming to be the owner of the material? Are they the proper
licensee of the material? Or is the material
in the public domain? I think that these factors would
go a long way to eliminate a lot of the guess work and the
problems that we’re experiencing between the Mavrix case
and the Motherless case about how much content is — you
know, moderation is required.>>Kevin Amer: Doesn’t
Google already — Mr. Donaldson, maybe
you can answer this. But, I mean, doesn’t Google
require people to affirm that they have the rights to upload whatever it
is they’re uploading?>>Caleb Donaldson: Yeah,
our terms of service include that you have the right to
upload what you’re uploading.>>Kevin Amer: So, Mr.
Carlisle, I wonder sort of are you suggesting
something kind of from the regulatory
standpoint –>>Stephen Carlisle: Yes.>>Kevin Amer: That
would — okay.>>Stephen Carlisle: Yes.>>Maria Strong: Yeah,
actually, if I can follow up with Mr. Donaldson,
so, I mean — with following up with what
Mr. Carlisle was saying. Could you maybe explain a little
bit if the situation happens, you know, when folks
are using Content ID, and I understand there are a
variety of additional products that are offered on the YouTube
platform, to answer the question of your connecting the copyright
owner and the alleged infringer to take their dispute offline,
to go to the contract question that Mr. Carlisle raised, is
there anything you can share about maybe some of the
experience you guys have seen in the use of both not only
just Content ID, but also some of the other tiers of
service [inaudible]?>>Caleb Donaldson:
Yeah, sure, absolutely. Content ID resolves 98%
of the copyright disputes that arise on YouTube. So it’s been very effective. We’ve also just recently
introduced the Copyright Match tool, which you’ve alluded to,
and that allows smaller creators to easily find matches
to their works, and to file takedown notices
in a much more streamlined way. We’ve rolled that out now
to 400,000 smaller creators. And we’re continuing to
expand the universe of people who are eligible for a
Copyright Match tool. So, you know, we’ve
seen good results. You know, to circle back to the
Beyonce question from earlier, those songs are a demonstration that the record label wants
those songs on the platform, Beyonce’s record labels. They’re licensed. And, you know, in
most of the cases, if you can recognize the
song, so can Content ID. And if, you know, Beyonce
or some other artist chooses to monetize some fan’s upload of
the printed lyrics and the song, we’re happy to help with that.>>Regan Smith: Can I ask
you is that always clear — is that always going to
be clear that the Beyonce, or whoever the rights owner
has opted to leave that up? Or how do we know
that that’s true? I mean, YouTube may know, but –>>Caleb Donaldson:
Yeah, it’s complex. I don’t think there’s an easy
way for the public to find out. It’s true, though, that
YouTube has, you know, more than a thousand deals
with music rights holders, including all of the
largest music rights holders. So the vast majority of content on the service that’s
music is licensed. I’ll say further that in general in the music industry
there’s a huge problem with incomplete data. That publishing houses and
record labels, to some degree, and collecting societies can’t or won’t reliably tell
you exactly what the list of works is that they represent. And so we’re working with
incomplete information.>>Regan Smith: Mr. Levy, did
you want to engage with that, because I think sometimes
independent musicians have a slightly different perspective?>>Arthur Levy: I
absolutely did. Yeah — sorry. I absolutely did. Content ID and Content Match
rely on representative lists. And it’s fine for publishers that have direct
arrangements with YouTube. But a lot of our
independent publishers, and certainly songwriters,
don’t have those direct deals. And, therefore, as
far as I know, are unable to submit
a representative list that would keep their
content off of YouTube. Is that right?>>Caleb Donaldson:
Content ID doesn’t rely on a representative list. It relies on ingesting a
copy of the music to make — a fingerprint is
even too simplistic, to make a statistical
representation of that song.>>Arthur Levy: Which is
submitted by the labels, which is essentially
the same thing. Right? It’s here’s a list of
content that we want to protect.>>Caleb Donaldson: It’s
not a representative list. It’s a complete list of the
things that we’ll protect.>>Kevin Amer: Well, what
about the broader point? You know, in the
last roundtables, we did hear from individual
music creators in particular who were concerned that Content
ID wasn’t available to them. Has that changed in
the intervening years? And is that — you know, have
there been efforts made to sort of expand the universe of
rights holders who are eligible?>>Caleb Donaldson: There
has been some growth in third-party aggregators
of claimants, so there are people who work with smaller
rights holders to send Content ID notices. There's the Copyright Match
tool, which I just mentioned, that we’re very proud of. That’s a tool better
tailored to smaller creators. And –>>Kevin Amer: Why is it that Content ID doesn’t work
well for smaller creators?>>Caleb Donaldson: Content
ID is inordinately powerful. It’s very complicated to
operate and administer. It allows sophisticated, larger
partners to specify amounts of their material that
they’re willing to use, for example, thresholds. And, you know, we’ve
seen examples where even from those Content ID partners,
a user who isn’t as experienced at Content ID can,
you know, take down or wrongly monetize a
broad swath of content.>>Regan Smith: Is
there an obligation — this may already [inaudible]. But is there an obligation
to monetize a certain amount of material through Content ID? Or could you just use
it all for takedown?>>Caleb Donaldson:
If you were a — if you were a Content
ID partner, you could take it all down.>>Kevin Amer: Mr. Doroshow?>>Kenneth L. Doroshow: Yeah. Just returning to the
discussion about moderation and if a service
provider chooses to screen certain
illegal content, but not copyright infringement, should they lose
the safe harbor. I think it’s our position
is if the means to screen for copyright — that
copyrighted material exists, and they do, then there is
that obligation that in — you know, if a service provider
is interacting with the content on its site for the purpose
of improving its bottom line, and making it a more
appealing site, and benefiting from the presence of the
copyrighted material, and it has the means
available to screen that copyrighted material
out, then we would say yes, you would lose the safe
harbor for that reason. And this goes — there was
enough discussion, I suppose, in the first panel,
so I won’t belabor it, that the availability of these
tools — now, Content ID, obviously, Google
invested a lot of money and built its own solution,
there are other solutions out there that are not
so expensive and costly. And if those are reasonably
available, then we think that that is an appropriate
condition of the safe harbor.>>Brad Greenburg: Well, so I want to follow
up on that question. But, firstly, I do
want to go back to Mr. Donaldson just
to clarify a point. At the outset, you said that Content ID was
a $100-million system. The last time we did
these roundtables, now it has been three years,
it was a $60-million system that everybody was
saying should be given to every single ISP
in the world. Is that because of
subsequent investments? Or does that include
other things like Content Match and stuff?>>Caleb Donaldson:
I think we said more than 60 million, but I’d have –>>Brad Greenburg: Okay.>>Caleb Donaldson:
To check the record. And as far as I know,
the number’s accurate and it’s $100 million.>>Brad Greenburg: Okay. So I just want to follow
up, though, on this. The last time around we
heard that everybody needs to be using filtering
technologies, it can’t — maybe they can’t afford things
as sophisticated as Content ID. They certainly would take years to develop [inaudible],
even if they could. But since then, even
at the time, numerous large service
providers had some sort of filtering technologies
they were using. I’m sure more have
been developed in the three years
since, or two years since the last round
of comments. So I’m curious to
hear a little more of what has been added
to the ecosystem. And whether or not — what
the feelings are as to whether or not we've reached a point
where filtering technologies, whether it’s Content ID or something a little more
rudimentary, are STMs?>>Caleb Donaldson:
Just to quickly follow up on your first
little question. It’s inaccurate to refer to — to think of Content
ID as a static entity. And it’s the subject
of, you know, major ongoing investment
all the time at Google. So, you know, an additional
$40 million of investment, give or take, in the last three
years sounds reasonable to me. It’s the work of many,
many people at the company. As to whether they’ve become
standard technical measures, I mean, under the
statute, I’d say no, because they’re not
in widespread use. And so that — you know,
that’s something we would — you know, we’d have to consider.>>Kevin Amer: Mr. Hudson?>>Douglas T. Hudson: So
I think some of this fails to account for the long tail. That, you know, when you’re
dealing with long tail content, non-digital content,
small creators, this filtering technology for the foreseeable future
isn’t going to be comprehensive. Just like with the
repeat infringer policy, you’re now faced with
a question of how much of a filtering technology
is sufficient for you to stay inside the — to stay
inside the protection of 512. It’s not going to
be comprehensive — and if it's not comprehensive enough,
do you now lose protection? We're just kind of — we're
moving the question over, but the uncertainty
still remains. I think that’s why
the flexibility of the current regime and the
ability to tailor it based on the size of the entity,
the type of content, the type of content provider, and content creators' needs
need to be taken into account. And simply just changing it to add a filtering
requirement isn’t going to solve the problem.>>Kevin Amer: What’s your
response to the argument that, you know, at a minimum you
could filter entire works, for example, and that
the universe of instances where the uploading of a full
work is going to be licensed or fair use or otherwise
permitted is relatively small? I mean, why couldn’t
filtering technology at a minimum capture full works?>>Douglas T. Hudson: What
if the full work is a quilt? How do you — we’re —
our minds are set for, like, digital content. And a lot of the content
that’s being shared or discussed isn’t digital. It may be a picture of
a digital [inaudible]. The picture of it may
be digital, but it is — it gets inordinately
complex when you’re going one or two levels beyond that. If you’re talking about, you
know, a full copy of a movie or an audio work, I think that’s where there’s been
technological work done here to help solve that problem. But my point is that
there’s a huge long tail, and that long tail, when you
add it up, is significant, that the technology that
everyone has been talking about just doesn’t work for.>>Brad Greenburg: I don’t want to lose the forest
for the trees here. But just so we’re
talking about the kind of content that might be
uploaded to Etsy. Let’s say, a full image of a movie poster
printed on a t-shirt. Right? Like, why isn’t that
the kind of [inaudible] that could be screened out?>>Douglas T. Hudson: I
think it depends on the type of technology available, and
how reverse image search, how other technologies
could be applied. You know, I can’t speak to
any one particular instance. And there are also issues where
there are things that are old and things that are new. We can’t determine — for example, there could
be a vintage t-shirt that has something on a
— or a vintage poster. So we’re not generally in
the position to know whether that vintage is correct or
not, unless we get assistance from the copyright holder.>>Regan Smith: Has Etsy
changed its policies at all following this
[inaudible] decision? Or do you view that as just, like, qualitatively entirely
different, because in that case, they’re printing and
they’re producing it themselves [inaudible]?>>Douglas T. Hudson: We do view
that as qualitatively different. Etsy is kind of a pure
marketplace, a pure platform. We don’t handle goods. We don’t do drop shipping. We don’t print on demand. Users are responsible
for their own content. So we’ve used a different set
of facts and a different issue. That said, we do have a
repeat infringer policy. We do have a kind of a
comprehensive set of policies to deal with intellectual
property issues. And [inaudible] into a
[inaudible] property issue such as counterfeiting, which
we view as slightly different.>>Kevin Amer: Mr. Kupferschmid?>>Keith Kupferschmid:
Thank you. So on the issue of filtering, I’m going to take a line
Sasha said earlier which is about the DMCA, that
perfect doesn’t need to be the enemy of
the good here. And we tend to talk — when
we talk about filtering and screening and monitoring, we seem to just focus
on the extremes. And there’s a huge
middle ground there. Right? And this isn’t just
a black-and-white issue. That there can be monitoring
and screening and filtering, or whatever you want to call
it, that can be done in a way that takes into account
different concerns and different types of examples. And just to identify a few — I think, Brad, you had mentioned
sort of the full movie example. What about a test that is never
licensed to anyone, is held sort of in secrecy, never
found online? I mean, if you notify a platform
that that test shouldn’t be up, that should be good, so it
never sees the light of day. I mean, that’s just,
you know, one example. The example that Mickey
[assumed spelling] was talking about earlier today about
metadata on a photo, and then the response
question was, “Well, what if that photo is licensed?” Well, just because you find
metadata on a photo doesn’t mean that that photo is
automatically just taken down. Why isn’t there a
middle ground here? Right? Why isn’t
the question asked to the person who’s
posting or trying to post? That photo is — do you
consider this fair use? Are you the copyright —
are you a copyright owner? Are you licensed
because your name differs from the name that’s
on the metadata? There’s a middle ground. I mean, hell, man, every
time I go to a website, I’m asked if I’m a robot. You’d think that
you could say — you could come back and
just ask a question, right, ask a question in that regard. I think, in short,
what this roundtable and all the roundtables
are about is not solving
the problem. It’s about getting us closer
to solving the problem, getting us away from the place
where we are right now, where things are really,
really not working — we need to close that gap. We really do, because, I mean,
there’s a desperation out there, I’ve spoke on behalf
of little guys, but it’s not exclusive
to the little guys. It’s across the board. And, you know, when it comes
to filtering and screening and monitoring, there's
absolutely more that can be done.>>Kevin Amer: I'm going
to jump back, if it’s okay, to Mr. Band just to see
if you had a response to that last point
about filtering.>>Jonathan Band: Right. Well, and I think this gets
to the whole moderation point. And certainly, you know, the
example that we heard from Etsy, I mean, if you have an image
from a movie on a t-shirt, I mean, that very well
might be fair use. It all depends on what the
— you know, if there’s — on the context and the
purpose of the image. And that could — obviously, if
you have an automatic filter, that could be a problem. But with respect to sort of
getting back to Sofia’s point about moderation, so imagine you
have institutional repositories. So that’s a lot of the kinds of
platforms that in the libraries and universities
have where they — you know, they have a
platform where people can up — you know, for a department
or whatever. Now, it could be
that in some cases, especially if you start having
a very large repository, that you want to have
some degree of moderation to make sure that the stuff
that’s being uploaded really belongs there. Well, why should you — it doesn’t make sense that you
would lose your 512 Safe Harbor by virtue
of making sure that the stuff that is uploaded there is
appropriate to that website, as opposed to just
looking to make sure that it’s not illegal content. And I’m even thinking of
a repository like SSRN, which is owned by one of your
members, [inaudible] also here. You know, they have huge amounts
of content, and, you know, a lot of us in the room have
probably uploaded content to that site. It doesn’t go up automatically. At first, you know, it has to
be reviewed by someone at SSRN who is — and I don’t know
what they’re screening for, but among other things
they’re deciding where it — you know, where it’s
appropriate to go. But it’s also –>>Kevin Amer: But
doesn’t that — I mean — I’m sorry to interrupt. But, I mean, that sounds like
volitional conduct to me. I mean, that sounds like, you
know, someone making a choice, the intermediary, the service
provider making a choice about whether or not
to post something. I mean, if I were just to email
you, you know, some materials, and you had your own
website, and you decide — you know, even if you post
100% of them, it seems to me that there’s an argument
that, you know, while I have expressed my view that I think they should
be uploaded, ultimately, you’re the one who
kind of says yes or no.>>Jonathan Band: Right. But I still think, at least
for purposes of 512(c), that uploading is at the
direction of the user.>>Kevin Amer: But if you
have the ultimate choice, then how is it at the
direction of the user? I mean, I take your
point about, you know, a sort of high-level
filtering for illegal content, or something of that nature. But I just wonder how we sort
of draw the line properly if we at some point are talking about whether the content is
suitable for the platform.>>Jonathan Band: Again, it
just seems to me that if it’s, you know, under the terms of
the statute, I mean, you know, it is the user that is sending
the stuff, and if, basically, everything was going to end up
on the site, so long as it is, you know, the kind of thing
that should be on that site, that’s a very different
situation from the traditional
publishing model, where you really could
sort of say okay, because they are making that
kind of qualitative decision that only, you know, one
submission out of a hundred or of a thousand is
going to be disseminated. You know, I think
that that just falls on a different place
on the spectrum.>>Kevin Amer: Oh, I’m sorry, Ms. Rasenberger,
you’ve been patient. Thank you.>>Mary Rasenberger:
Thank you very much. Thank you. A couple points. I want to go back to what Mr.
Carlisle said about, you know, having some sort of “I
affirm that I own this, I licensed it — I licensed
it or it’s fair use.” And terms of service
are not enough for that. I mean, we all know nobody
reads the terms of service. So to echo what Keith said, I
think it would be really good if whenever you uploaded
something to any site, you have to say “I own
it,” or “I licensed it,” or “I believe it’s fair use.” Why not? I mean, not only do you
have to now say I’m not a robot, but you have to identify bikes
or storefronts or something. And if you don’t have very good
eyesight, that’s sometimes hard to do in those photos. So I also, though — one
thing we haven’t talked about, and I want to make sure we do, is the question of
who the user is. And I know that the
Mavrix case touches on it. But I want to — we
also haven’t talked about the bad actors here,
and how ineffective 512 is against the really bad actors. So I want to give an
example of a site right now that we’ve been dealing with
for a couple of years already. It’s Ebook Bike. It’s owned by a gentleman
named Travis McCrea, who founded the Pirate
Party in Canada. He also is one of
the principal members of a religion called
Copynism [assumed spelling], and hosts their website. The sacrament for Copynism is
that copying is a sacred duty. So he owns this site. He hides behind Section 512. And I will say for most of the
ebook piracy sites that is true. They say, “Oh, we don’t
know anything about it. It’s all user uploaded content.” Now, to upload content, you
have to become a member. Most of the members
we know are part of the Pirate Party or related. If you're a member, you
are instructed on how to buy an ebook,
strip out the DRM, upload it to the site,
and then return it. So you don’t even have
to pay for the ebook. So I just want to make
sure that we think about these kind of cases. We have sent notices to —
authors have independently, and we organized groups
of authors to do this, send notices to Ebook Bike. Sometimes it works,
sometimes it doesn’t. I mean, sometimes the
site doesn’t even work, the notice form. We have sent notices to Google. We have sent notices
to the server providers. Now, the server provider
did take it down. But, of course, he just
went and got another server. Right? And this has been over
two years we’ve been struggling with this, completely
ineffectually. When we started, it was mostly
the independent authors’ books that were there. Now every — all
fiction works are there, particularly any popular
fiction books can be found on EBook Bike now. And we are left without
anything that we’re able to do, other than to bring a lawsuit
and litigate whether 512 — whether they’re protected
by 512. But with all of these open
issues, we can’t do that. Those cases cost millions
and millions of dollars, which authors can’t
afford to do. And who knows what
the outcome would be.>>Kimberley Isbell:
I just want to follow up on that a little bit. You know, in the first
roundtables, we also heard a lot about these sort of
pirate and bad actor sites. But do we really think
Congress ever intended to cover those types of sites –>>Mary Rasenberger:
Of course not.>>Kimberley Isbell: In 512?>>Mary Rasenberger:
Of course not. But the way that the courts have ruled, particularly in the Viacom/YouTube
case, and the [inaudible] case, which have now become
the ingrained law in all of the circuits,
makes it possible for the bad actors
to be protected. Or, I mean, it’s possible that
we could win a litigation. But you have to go — because
the burden has been put on the copyright owners, and
pretty much every aspect of 512, except for the repeat
infringer policy, those cases are very,
very hard to prove.>>Kimberley Isbell:
But what is the answer? Is the answer to pull
back 512 for everyone, including the good actors? Is it to have a clearer off
ramp for the bad actors? I mean, how do we deal with this
without blowing the system up?>>Mary Rasenberger: Well,
that’s a really good question. So as I said before, I think that we should have best
practices that are in the law, or at least regulations for
what red flags knowledge is. And I think Congress
should step in and say — and clarify that knowledge and red flags knowledge
do not mean only knowledge of a specific infringing
item at a specific location. That is the problem. Knowledge that your site is a
place for piracy, that it’s — that pretty much everything
on the site is pirated, should take you out of 512. And that you should
be able to win, you know, on summary judgment.>>Kevin Amer: Thank you. So we’re running low on time. So I’m going to ask Ms. Moss — if you have a comment
on this topic. And then I’m just to —
because we’re short on time, I’m going to sort of
introduce the next topic. One or two of you during the
introductions mentioned 512(f). And so I invite folks to state
their views about, you know, the state of the law
with respect to 512(f), particularly post-Lenz and
post the denial of cert in Lenz. Feel free to answer. [ Inaudible Speaker ] Yes, of course.>>Sasha Moss: So three really ->>Kevin Amer: Oh, I think
your mic is — oh, it is. Okay.>>Sasha Moss: Three
brief notes. The first is regarding my
friend, Ms. Rasenberger, and Mr. Kupferschmid’s point
regarding this [inaudible] idea of verification
when you upload. That’s putting onus on the user. And I don’t know about you, most users don’t have
a legal education. And they don’t know
what fair use is. And they might not read the
terms of service to find out what fair use
may or may not be. The second brief point is
regarding filtering technology and upload filters. Upload filters are not
working as well as I think many around the table
like to say they would be. For example, in the EU
[inaudible] had her video taken down off of YouTube because
it said there was infringing content in her video. It was a speech on the
floor of the EU Parliament. And the third note, I just
briefly want to mention, is the moderator’s dilemma. This idea of seeking
after content that may or may not be infringing. As we saw with the passage
of [inaudible] and CDA 230, this creates the
[inaudible] raised on what I can or cannot take down, or am
I taking down legal content versus the infringing
content as I intended to do? And that’ll wrap me up, so
you can start doing 512(f).>>Kevin Amer: Okay. Thank you. Mr. Levy?>>Arthur Levy: Yeah. Now we're on 512(f)?>>Kevin Amer: Yes.>>Arthur Levy: Excellent. So Lenz is still a major problem
for us, because it appears to require — it does require that a copyright owner
consider fair use before it sends a takedown [inaudible]. It doesn't give any real
guidance as to what that means, what considering fair use is. It’s kind of hanging out there
as a potential time bomb for us. Again, for small publishers
and certainly for songwriters who may have just massive
amounts of infringing examples of their works out on the
internet, to have to engage in a four-factor analysis of fair
use prior to sending a notice for each and every one of
those, would be truly burdensome and potentially expensive,
if you have to have staff to do it.>>Regan Smith: Well, do
you interpret that case as imposing a one-size-fits-all
standard on everyone? Or can you look at whether it
is an individual copyright owner or an individual user
filing a copyright — a counter notice who may
be less sophisticated than the defendant in that case?>>Arthur Levy: I don’t think
the ruling really helps us make that determination.>>Brad Greenburg: It’s,
like, you mentioned that Lenz is a problem,
but didn’t actually — I didn’t hear you guys
talk about automation. In fact, whether or not
you can use automation in making a fair use assessment. The last time we did these
roundtables, we heard some — there was some sense that
there probably was still some room there. And my question is your
thoughts on what — with the Lenz cert
denied, whether and how the Ninth
Circuit has left in place room for automation?>>Arthur Levy: Well,
again, I’m not sure if it addressed it directly. It seems as if language
regarding automation has been taken out of the second
version of the opinion. That’s a concern for us. It might very well mean that
they’re going to interpret it so that we cannot use
automation, which, again, increases our cost burden and the ability to
protect our works.>>Kevin Amer: So we’re
running out of time. So I think we’re just going to
do kind of a lightning round. I urge you all to be brief. Mr. Hudson, do you have a
response to Mr. Levy’s concerns>>>>Douglas T. Hudson: Only that
you just need to keep in mind that the platforms that
are in the middle there. We’re not in a position to
make various determinations. You have to rely on
the data [inaudible] by the copyright owner. The response is provided
by users. And in the world of filters,
where does liability live for the intermediary trying to just enforce the
system as it is? So this is why an
enforceable mechanism in 512 for when either side
violates their duty to follow the laws is
important for platforms to enable copyright owners to
get protection, and enable users to express their own
creative content.>>Kevin Amer: Mr. Carlisle?>>Stephen Carlisle: Yeah. I think that for small
creators, independent musicians, Lenz becomes a
good-news/bad-news joke. Fair use is incredibly complex. We can’t even get the courts
of this nation to agree on a simple standard
for fair use. Everything has to be
examined on its merits. And for an independent musician
to be required to make that kind of assessment before sending
a takedown notice is really burdensome, especially
when the alternative — I mean, you take
red flag knowledge, we have these very sophisticated
companies professing that they have no idea what
red flag knowledge is, or whether something's
infringement. From my standpoint as a musician
and a creator, it’s much easier to figure out whether
something’s infringing or whether it’s — and whether
something is, in fact, fair use.>>Kevin Amer: But, I mean,
isn’t the statute sort of premised on the idea that
really, you know, it should — it’s going to be the copyright
owners who ordinarily are — you know, have the
most knowledge about whether the
use is authorized, and in general should
have the responsibility for monitoring platforms? I mean, isn’t that sort of the
basic bargain that was struck?>>Stephen Carlisle: Yes. And I think it was the
wrong bargain to strike. I think putting the sole onus on policing the entire vast
internet on copyright owners — some of them are very, very
small and don’t have the money or the time or the ability to monitor the entire
internet 24/7. I think it was the
wrong balance to strike.>>Regan Smith: Let's drill in
on the misrepresentation part of the statute, 512(f). I mean, this is only
liability for knowingly, materially misrepresenting
that something’s infringing, or doing, you know, the
same type of representation for a counter notice. So if you are someone, and
you have an honest mistake as to whether something
is fair use or not, why is that a problem? If it’s complicated,
and you do your best, and you’re not [inaudible] new
material and misrepresenting, is 512(f) really a risk?>>Stephen Carlisle: I think
that the problem, again, is it goes back to
material misrepresentation, and what the ultimate
standard on that’s going to be. It seems to me that even the
court in Lenz struggled mightily with whether what Ms. Lenz
was doing — or rather what Universal
Music Group did — was a material misrepresentation there. And you have a very
sophisticated, you know, actor. I think that — you know, I
think it’s a very gray area. And I think it’s a problem.>>Regan Smith: Yeah. I think it’s suggesting
for the little guy where knowingly is very helpful on both sides of,
you know, the system.>>Stephen Carlisle: Yes. But, again, you’re
dealing with somebody who may be a creative person. They may know something
about copyright, but, again, that knowingly part of it — musicians can get incredibly
aggressive when it comes to asserting their rights. And sometimes they’re right,
and sometimes they’re wrong. And especially in
an area with music where there is a
lot of homogeneity. And there’s a lot of
musicians out there who will hear any similarity
as being infringing.>>Kevin Amer: Okay,
Ms. Castillo?>>Sofia Castillo: Just a quick
response to Ms. Moss’ concern with the taking down
of legal content by overly aggressive filtering. I think that's what
you were referring to. I think for those cases, we
have the counter notice system. And that is working. So I think concerns with
the accidental takedown of legal content should
not be a reason not to implement filtering, or
not to look at filtering as a solution for rampant
infringement online.>>Kevin Amer: Okay. We’re going to go to Mr. Band,
and then I think we’re going to have to close things down. But we do have our
open mic at the end, so if there are things
left unsaid, feel free to sign up for that.>>Jonathan Band: So this is in
response to Mary’s point about, you know, litigation
being expensive. Yes, it is. And many lawyers in
the room like the fact that litigation is expensive. But putting that aside, the
point is courts are very good at figuring out who’s a good
guy and who’s a bad guy. And if you’re a bad guy, courts
find a way to hold you liable. Napster lost. Grokster lost. I think sometimes the — sometimes rights holders
either they’re not as careful as they should be in
selecting their defendants, or they have a misperception of who’s a good guy
and who’s a bad guy. It didn’t really make
sense to go after YouTube. It really didn’t make
sense to go after Google. It doesn’t make sense to
go after [inaudible] Trust. You know, courts, you know, when
they look at these defendants, you know, they look at the
balance of what’s going on, and they will — usually
they’re very good at figuring out who’s abusing the system. And they will find a
way to shut them down.>>Kevin Amer: We had, I think, a reference to the
[inaudible] Trust case. So I think we’re going to let
Ms. Rasenberger [inaudible].>>Mary Rasenberger: Thank you. I appreciate it. I won’t talk about [inaudible]
Trust, which I was not at the office [inaudible]
when that was brought. I do want to just
mention good actors, because I mentioned
bad actors before. Good actors who want
to keep pirated ebooks and audiobooks off
their site can do it. And they do do it. Amazon uses fingerprinting. And they are pretty successful at keeping pirated
copies off their site. And when something slips
through, they work with us. They take it down.>>Kevin Amer: I think we’re
going to have to leave it there. Thank you all very much. We will start back
up at one o’clock.
