Net Neutrality Will Require Us to Shine the Light on Internet Providers
A neutral
Internet—one where Internet service providers (ISPs) can’t unfairly limit our
access to parts of the Net, create special fast lanes for some services, or
otherwise handle data in non-neutral ways—will require more than just rules
that prohibit bad conduct. We’re also going to need real transparency.
Transparency is the crucial
first step toward meaningful network neutrality. Without a detailed and
substantive window into how providers are managing their networks, users will
be unable to determine why some webpages are slow to load. New
services that hope to reach those users will have a harder time figuring out if
there is some artificial barrier in place, and competitors won’t know whether
and how they can offer better options (assuming some kind of competitive environment exists).
Fortunately, the FCC realizes how important transparency will be in
ensuring a neutral Net. A key section of the network
neutrality proposal released by the FCC last month asks for comments
on how the agency should require Internet providers to disclose how they manage
traffic over their networks. Here are some initial thoughts.
Today, we’re in the dark
The FCC’s current transparency requirements are too vague to catch most of the harms of non-neutral behavior. At
the moment the only thing an ISP has to do to be “transparent” by FCC
standards is “publicly disclose accurate information regarding the network
management practices, performance, and commercial terms of its broadband
Internet access services.”
For most Internet providers, this means a quick paragraph or two on their websites describing at a very high
level how they deal with congestion, and perhaps some statistics about how
close their advertised speeds are to the true speeds users experience.
In order to generate these statistics, many of the largest ISPs take part
in an FCC study called Measuring
Broadband America. This ongoing study uses third-party white boxes (router-like devices that
users plug into their home Internet connections) distributed to volunteers
across the country to measure broadband speeds. The study averages data about
download and upload speed and latency over the period of a month. (Latency is
the time it takes for a packet of data to travel from one point on the network
to another.)
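As a rough illustration of what a latency measurement involves, here is a minimal Python sketch that uses TCP connection setup time as a stand-in for round-trip time. This is not the study’s actual methodology, and the test-server hostname below is a placeholder:

import socket
import statistics
import time

def tcp_latency_ms(host, port=80, samples=10):
    """Measure TCP connection setup time to a host, in milliseconds."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        # Time only the connection handshake, then close immediately.
        with socket.create_connection((host, port), timeout=5):
            pass
        results.append((time.perf_counter() - start) * 1000)
    return results

# Hypothetical test server; the real study uses dedicated measurement hosts.
latencies = tcp_latency_ms("test-server.example.net")
print(f"mean latency: {statistics.mean(latencies):.1f} ms")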
Unfortunately, Measuring Broadband America in its current form can’t detect
most of the harms of non-neutral network practices. That’s because most
of its tests only measure the speed of a connection to artificial testing
servers, not connections to popular websites that people normally access in the
course of their browsing. Current testing would never capture, for example, the
recent problems with slow Netflix download speeds for Comcast and Verizon subscribers.
The only current test that
does measure how long it takes to access popular websites isn’t very rigorous
and is limited to webpage loading time, not capturing other essential factors
that indicate forms of ISP misbehavior, like application-specific traffic
discrimination or content modification.
We need more sunshine
If the FCC plans to issue net
neutrality rules that actually make a difference, the agency needs to expand on
its transparency requirements and demand that ISPs disclose more details about
the management of their networks.
More specifically, in addition
to measuring download and upload speed and latency, ISPs should also disclose
statistics on jitter, uptime, packet loss, and packet corruption, among other
details. Here’s what those terms mean (a sketch of how they might be computed follows the list):
· Jitter is the variability in the latency of packets, i.e., how much the delay between a packet being sent from its source and being received at its destination changes over time. Low jitter is important for applications like VoIP and video-chat, because if packets take different lengths of time to travel, the resulting audio or video stream can appear jumpy.
· Uptime is the percentage of time a user’s Internet connection is actually available. Uptime is important because even if your connection is ridiculously fast, it’s not very useful if it’s down most of the time.
· Packet loss is the percentage of packets that never make it to their destination, usually as a result of being dropped due to congestion.
· Packet corruption is the percentage of packets that are corrupted while in transit.
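To make these definitions concrete, here is a rough Python sketch of how some of these statistics might be computed from raw measurement samples. All of the sample data is invented, and real measurement standards define jitter more carefully (RFC 3550, for example, uses a smoothed estimate of successive delay differences rather than a simple standard deviation):

import statistics

# Hypothetical per-packet latency samples in ms; None marks a lost packet.
samples = [20.1, 22.3, None, 19.8, 35.0, 21.2, None, 20.5]

received = [s for s in samples if s is not None]
packet_loss = 1 - len(received) / len(samples)

# A crude jitter estimate: standard deviation of delivered-packet latency.
jitter = statistics.stdev(received)

# Uptime: fraction of periodic connectivity probes that succeeded.
probes_ok = [True, True, False, True, True, True]  # invented probe results
uptime = sum(probes_ok) / len(probes_ok)

print(f"packet loss: {packet_loss:.1%}")
print(f"jitter:      {jitter:.1f} ms")
print(f"uptime:      {uptime:.1%}")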
This data also needs to be reported in a more granular form than it is
currently. Right now ISPs only report one-month averages, released every six months.
We need data on an ongoing basis so that users and the FCC can catch harmful
changes to ISP network management procedures more quickly.
Transparency will also require ISPs to do more than just test against their
own servers. We know all too well that ISPs can offer wildly different
qualities of service depending on their peering arrangements. And in a growing number of cases, websites are paying ISPs directly for interconnection instead of going through a web backbone company, as in the deal struck between Comcast and Netflix.
Reports of network quality
need to capture the experience the customer will get when talking to a large
set of end points that are (1) well-connected to the Internet backbone and (2)
unwilling or unable to pay ISPs for special peering arrangements. In other words, we need to know the kinds
of service received by companies that have special peering or interconnection
deals, as well as what type of service ISPs give to startups that cannot
afford special deals.
For instance, if an ISP hosts
its own material or its own services, performance metrics for those services
should be tabulated separately from those for servers hosted in unaffiliated
data centers. Expanding testing this way will capture any discriminatory tiers
that ISPs are implementing in their peering, hosting, and content delivery
network arrangements. The FCC should require the disclosure of a range of
statistics about these metrics, as well as their average values.
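As a concrete illustration of this kind of expanded testing, here is a Python sketch that times full HTTP fetches to real endpoints rather than only dedicated test servers. The URLs are placeholders, and a real test suite would need a carefully chosen and publicly documented endpoint list:

import time
import urllib.request

# Placeholder endpoints: one standing in for a well-connected site,
# one for a site without any paid peering arrangement.
ENDPOINTS = [
    "https://www.example.com/",
    "https://www.example.org/",
]

def fetch_time_ms(url):
    """Time a full HTTP GET, including DNS, TCP, TLS, and transfer."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000

for url in ENDPOINTS:
    print(f"{url}: {fetch_time_ms(url):.0f} ms")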
Levels of detail
Ideally, all of this reporting would take the form of a cumulative distribution function: a graph that would
allow endpoint service providers and consumer watchdogs to estimate the worst
network problems consumers would experience 1% of the time, 5% of the time,
etc., so that the public can get a sense of how variable they should expect
their service to be.
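As a small illustration of reading those values off an empirical distribution, here is a Python sketch using invented latency samples; with 100 quantile cut points, the 95th and 99th approximate the latency exceeded 5% and 1% of the time:

import statistics

# Invented latency samples in ms; a real report would have thousands.
latency_ms = [20, 21, 22, 25, 30, 32, 40, 55, 90, 250]

# statistics.quantiles with n=100 returns 99 percentile cut points.
q = statistics.quantiles(latency_ms, n=100)
print(f"latency exceeded 5% of the time: {q[94]:.0f} ms")  # 95th percentile
print(f"latency exceeded 1% of the time: {q[98]:.0f} ms")  # 99th percentile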
In addition to reporting how
often various levels of service are achieved, these statistics should also be
reported as a function of what percentage of subscribers achieved those
statistics on average (i.e., what percentage of individual customers had average
speeds, latencies, uptimes, etc. at a given value) so that regulators can
verify that ISPs are providing the same level of service to all of their
customers.
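A toy example of that kind of per-subscriber reporting, with invented figures, might look like this:

# What fraction of customers individually averaged the advertised speed?
advertised_mbps = 25  # hypothetical advertised tier

# Invented per-subscriber average download speeds.
subscriber_avg_mbps = [26.1, 24.8, 25.5, 10.2, 27.0, 23.9, 25.1]

meeting = sum(1 for s in subscriber_avg_mbps if s >= advertised_mbps)
print(f"{meeting / len(subscriber_avg_mbps):.0%} of subscribers "
      f"averaged at least the advertised speed")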
Finally, although
not strictly a transparency requirement, we believe that consumer watchdogs
should begin testing ISPs for other forms of non-neutral behavior, specifically
application blocking, throttling, and content modification. This sort of
discrimination can be just as damaging as unfair peering or interconnection
agreements, and we will need to be on guard that ISPs do not attempt to skirt
any net neutrality rules this way.
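One simple content-modification check of this kind, sketched below with a placeholder URL and hash, fetches a known reference file over the ISP’s connection and compares its hash against a value obtained out of band. Plain HTTP is used deliberately, since traffic protected by TLS can’t be silently altered in transit anyway:

import hashlib
import urllib.request

TEST_URL = "http://test-files.example.net/reference.bin"  # hypothetical
EXPECTED_SHA256 = "0" * 64  # placeholder; published by the test operator

with urllib.request.urlopen(TEST_URL, timeout=10) as resp:
    body = resp.read()

if hashlib.sha256(body).hexdigest() != EXPECTED_SHA256:
    print("content was modified in transit (or the reference is stale)")
else:
    print("content arrived intact")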
Transparency must be part of the rules
Of course, disclosure alone is
not enough to protect the promise of an open Internet. But transparency—when
properly implemented—can be a powerful tool.
To make the tool effective,
transparency rules must result in information that can be used by both experts
and everyday users. And user-facing transparency shouldn’t be shallow, even when it’s presented in layman’s terms.