"But New York City, now considered the epicenter of the virus in the U.S., saw download speeds drop by 24% last week, compared to the previous 10-week range. That said, NYC home network connections, which have a median speed of nearly 52 Mbps, are managing."
It took me a few reads to understand what this is trying to say. What they mean is that many of the cities showing slowdowns already had higher-than-average median speeds, while many of the cities that aren't seeing slowdowns were pretty slow to begin with. E.g. Chicago has only slowed down 10%, but its median speed is only 26.79 Mbps.
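The relative-vs-absolute point is easy to see with the Chicago number. A quick sketch (treating 26.79 Mbps as the pre-slowdown median, which is an assumption; the report doesn't say which side of the drop it measures):

```python
# Illustrative only: a modest percentage drop on an already-slow connection
# is a tiny absolute loss. Assumes 26.79 Mbps is the pre-slowdown median.
median_mbps = 26.79
slowdown = 0.10  # 10% drop

lost_mbps = median_mbps * slowdown
print(f"{lost_mbps:.2f} Mbps lost")  # ~2.68 Mbps, barely noticeable
```

Compare that with a 24% drop on a ~50 Mbps connection, which is several times more absolute bandwidth lost, yet still leaves a faster connection than Chicago started with.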
If I want to download a particularly big file, I use tethering via my phone. Seriously: about 90 Mbit/s download on the phone, which is faster than most places I've worked and also faster than my home connection.
It was so obviously better that I switched my home connection from ADSL to 4G, because 4G was both cheaper and faster. It's working out well, but the phone still manages to be faster than the router.
In Norway, if you have a copper phone line into the house, the speed is typically capped around 40 Mbit/s. And it's not just rural areas. The apartment I live in is close to the center of Oslo, but as it's an old building with few flats, it's not profitable to lay fiber even though it's only an extra 10 meters from the nearby office building. With 4G and a good phone or modem you get 50-300 Mbit/s, depending on the provider and coverage.
True, but QoS is usually terrible and there is no SLA to speak of on the consumer offerings.
The most common offer (Vodafone cable) has frequent outages where your only hope is to call a robo-center, and the optimal outcome is a trouble ticket and maybe a refund.
Upload speed on gigabit cable is also only 50 Mbps...
If your business relies on connectivity, best to avoid it.
A dedicated 50mbps/50mbps fiber circuit for a business will be better quality than a home cable (coaxial) “100mbps” connection, shared between a million houses with 2mbps upload. Notice how you can’t even find upload bandwidth advertised for residential cable internet, much less other factors affecting connection quality.
Maybe not in your case, but in my case, that big pipe is shared with a lot of people, and full of MITM, shaping, "threat detection" etc., which makes it slow. And currently fronted by a shite VPN for all of us working from home :)
Your office connection is probably scaled appropriately. If it were as fast as your home broadband, it would mean your company overpays for it and actually loses productivity.
Office work does not require fast Internet access, and it certainly does not require that your YouTube videos load instantly.
We did some research and it seems broadband Internet access does not improve productivity and may actually reduce it.
Any minuscule increase in productivity (I mean a total of 5 minutes of loading time per employee per day) is meaningless. People can't focus on their real work for 8 hours straight. When the Internet works faster, people just use more of it and are less selective about how they use it. People also quickly get used to increased performance and will complain at almost any performance level.
Other research on build times showed that hugely reduced build times don't improve productivity either. At first, yes -- people get excited. But then, when build times are very short (say 5 s compared to 10 min), most (but not all) developers just stop staring at their IDE and thinking about how to structure their code, and instead reduce their iterations to a minimum (say, a couple of lines of code) and just rebuild to see if it works. This seems to actually reduce the quality of the produced code.
Checking the actual report (broadbandnow.com, not the linked article, which is blogspam) they looked at the weekly median download speeds for the previous ~10 weeks. Last week, 88/200 cities had a median download speed lower than any of the previous 10 weeks.
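That comparison is simple to sketch. Assuming each city is a list of weekly median download speeds (the numbers below are made up, not from the report), the check is just:

```python
# Sketch of the report's methodology: a city counts as "slowed down" if
# last week's median download speed fell below every one of the previous
# ~10 weekly medians. Numbers below are hypothetical.

def slowed_down(weekly_medians):
    """weekly_medians: oldest-first list; the last entry is last week."""
    *previous, last_week = weekly_medians
    return last_week < min(previous)

# Ten stable weeks around 52 Mbps, then a drop:
nyc_like = [51.8, 52.0, 51.5, 52.3, 51.9, 52.1, 51.7, 52.4, 52.0, 51.6, 39.5]
print(slowed_down(nyc_like))  # True: 39.5 is below all prior weeks
```

One nice property of this test is that it's insensitive to how fast the city was to begin with, which is exactly the point made above: a slow-but-stable city doesn't count as "slowed down."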
I'm having trouble imagining a failure mode that would cause total collapse under load rather than just reduced per-user speeds. Networks have to deal with spikes in demand pretty frequently anyway, usually during the evenings when lots of people are streaming video. A sudden increase in overall demand throughout the day doesn't seem like it should be that big of a deal; and so far it seems like it isn't.
How about the mode where video streaming makes normal browsing impossible for some users? Their packets just don't survive the public net in the blizzard of video packets. That would cause apparent collapse, for some at least.
Not sure how 'net neutrality' would factor into that - a free-for-all might mean it happens more, but pay-as-you-go would price out a whole class of users. A hard problem.
When the network gets full (as it is in some places), the only strategy is to drop packets. If they are dropped systematically (by some rule), then some demographic loses part of their internet entirely. That's also well understood.
It's a product of non-neutral nets, which by definition do something by a rule (instead of, for instance, randomly).
And even randomly dropping packets (neutrality?) stresses certain subsets of network traffic more than others - e.g. video can recover from dropped packets; TCP traffic, not so much. Again, stymying certain classes of activity more than others.
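A toy simulation of the random-drop case (all numbers made up) illustrates why even "neutral" loss hurts flows unevenly: both flows lose roughly the same fraction of packets, but video codecs conceal loss while TCP retransmission stalls a browsing session.

```python
import random

# Toy sketch with assumed numbers: a congested link randomly drops 20% of
# all packets, with no per-flow rules. Each flow loses about the same
# *fraction*, but a loss-tolerant video flow and a small TCP browsing
# flow experience that 20% very differently.
random.seed(0)

DROP_RATE = 0.20
flows = {"video": 10_000, "browsing": 100}  # packets offered per flow
results = {}

for name, offered in flows.items():
    delivered = sum(1 for _ in range(offered) if random.random() > DROP_RATE)
    results[name] = delivered
    print(f"{name}: {delivered}/{offered} packets delivered")
```

Both flows come out around 80% delivered, which looks "fair" per packet, yet the browsing flow's TCP connection will back off and retransmit on every loss while the video player just skips a frame.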
It seems almost funny now, but in the late 90s there was a lot of talk about how the internet could fail under its own load. The arguments never made that much sense to me, but they attracted attention...
So, thoughts on rural areas: my experience is that they generally have more workers who will be considered essential than urbanized areas do - agricultural workers, truck drivers, people in the supply chain, etc. That might provide some moderating force on the lower-bandwidth options they are often stuck with.
That being said, schools doing distance learning over low bandwidth dsl or fixed wireless is going to be interesting...
I'm on rural wireless broadband about an hour outside of SF. While the service has never been amazing (3Mbit on a good day, 10Mbit at best), it's been sub-1Mbit (and sometimes sub-0.1Mbit) for quite some number of days now. The rural wireless ISPs are swamped.
I had to do a temporary move, and ended up in the only rural county in NC with county-wide municipal fiber. It's working great, FWIW. I'm hopeful this situation will finally prove to the rest of the state that rural fiber is a worthwhile thing to do at scale. I've seen people argue it's not worth it b/c it doesn't attract enough jobs to pay for itself, but the additional network resiliency and quality of living improvements alone are worth it. Especially in times like this. If Wilkes County can do it, so can others.
FYI, "lower bandwidth" can effectively mean dial up connections only, depending on how far someone lives from a major population center.
My uncle lives about 50 miles west of Minneapolis/St. Paul, and unless he wants to pay more than his entire (fixed) monthly income, his only option is dial-up. Cell data isn't even very good out there.
Lots of people in the US in rural areas are going to be entirely shut out of schools and businesses for the duration.
Agreed, I don't want to minimize the impact on them. I was responding to the article's DSL population.
For your uncle- I know that both suburban and rural providers in my area are providing various "COVID19 lifeline" deals, including fixed wireless operators serving some super rural locations. Might be worth looking into.
If it was overloaded handoffs or connections, the problem would be cheap and easy to fix. Unfortunately the problem is overloaded last-mile. Way more expensive to fix, because it costs money and cuts into profit margins.
I used to read about how ISPs were overselling and would be in trouble if everyone used their connection to the fullest. It was always treated as a hypothetical, because it used to be pretty unlikely that everyone would be home and using the internet heavily at the same time. Now that so many people are home and needing the internet, it's no longer hypothetical, and we get to see how ISPs actually handle the load: as many predicted, they can't keep up. I think this isn't news in the sense that everyone thought it would be fine; it's news because this was a known but unaddressed situation. It will be interesting to see whether this leads to capacity upgrades, or to more pleas for major content producers and consumers to constrain their usage to keep the infrastructure running. I think people would like to see upgrades so we can get back the previous speeds and quality, but we'll see.
The hypothetical already exists: it's called evening peak hours. I'm not familiar with every part of the US, but in Europe most ISPs saw no need to throttle video, since the current situation merely makes the usual peak last longer.
Why would it be 2x the normal evening peak hours? Yes, normally maybe second shift workers and the rolling shifts of people going out to eat would reduce normal peak usage slightly, but I can’t imagine it being 2x.
It is common for ISPs to have 2000+ customers sharing a 1 gig line. Regardless of the speeds offered, end users consume about the same amount of bandwidth. End users all get "full speed" until the link is 95%+ utilized.
This is about the contention ratio and service delivery. With many users you can still deliver great service even if the contention ratio seems high. There are two main factors for planning the bandwidth needed: the average bandwidth use, and the maximum utilization during the peak, typically 6-8 pm, when the most users are on the network. If you are selling 100 Mbps plans, then as long as you keep at least 100 Mbps of headroom during peak times, any individual user can still burst up to the plan max - that means running the link at up to 90% on a 1 gig line, or 95% on a 2 gig line. One gigabit per second sustained for a month is about 325k gigabytes total, or about 160 GB per month per customer for 2000 customers sharing a 1 gig line at near 100% use. There are also multiple points where the contention ratio or bandwidth use matters, ranging from very local last-mile issues to international ones related to peering.
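The back-of-the-envelope arithmetic above checks out; a quick sketch using the same numbers from the comment:

```python
# Contention-ratio arithmetic from the comment: how much data a fully
# saturated 1 Gbit/s shared line can move in a month, and what that
# works out to per customer for 2000 customers sharing it.
SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.59 million seconds
link_gbits_per_sec = 1              # shared 1 Gbit/s line
customers = 2000

# Divide by 8 to convert gigabits to gigabytes.
total_gb = link_gbits_per_sec * SECONDS_PER_MONTH / 8
print(f"{total_gb:,.0f} GB/month on the shared link")  # 324,000 GB

per_customer_gb = total_gb / customers
print(f"{per_customer_gb:.0f} GB/month per customer")  # 162 GB
```

So roughly 325k GB total and ~160 GB per customer, matching the figures above - a reminder that even a heavily contended link supports typical monthly usage, as long as everyone doesn't peak simultaneously.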
In the US, most providers operate under a near-total government-granted and government-protected monopoly. They like to pretend there's competition in the market, but that's mostly smoke and mirrors, designed explicitly to preserve their special status as the only game in town.
The fact that they are oversubscribed to the point where it doesn’t work well in an emergency would not be such a big deal - except that they are granted a moat.
They can and should be held to a higher standard as a result. The term “critical infrastructure” comes to mind.
Seems to be a worldwide thing. I have a 50 Mbps symmetric optical fiber line in Bengaluru, and servers hosted outside India have slowed to a crawl these days, even while latency and bandwidth within the city are as good as before.
I wonder if the undersea cables have become overwhelmed in the past few days.
I understand wireless is tied to the internet backbone, but I'd imagine that with so many people at home, cellular data congestion must be at an all-time low, with everyone (probably) on their home wireless networks rather than using cellular data.
I think by default some Android-based devices and iPhones in recent years (since about 2014, I believe, but don't quote me on that) can use cellular data even when on WiFi, if the WiFi signal is weak - like when you've wandered too far from the access point.