I note that a number of phone company shills have tried to discredit your statement, so I will respond here instead of trying to correct each one.

First of all, there are different places in a network that can be oversubscribed, and of course they're different for cable, DSL, and other architectures. The two most important points are the Internet backbone feeds and the neighborhood distribution networks. P2P has much different financial and technical effects in the two environments.
For cable modems and DSL, the local distribution transmission technologies are asymmetric, but the upstream media from the head end or DSLAM on up normally have more slack, so the technology tends to limit the amount of resources P2P can consume. It's obviously better if you're uploading material that's being downloaded by somebody on your local distribution network, but for general applications that's unlikely - too few people want too many different files. (Large universities are a special case, where the bulk of the traffic is probably for relatively popular material, students have more shared tastes than random neighborhoods, and upstream is usually faster and often symmetric.)

The "backbone" bandwidth, which is what costs broadband companies money based on traffic levels, is going to be more affected financially than technically - it's a small number of locations, and broadband companies can monitor it fairly easily, so they can keep up with growth. The scalability issues are really critical here: if people usually upload material to other users of the same carrier and in the same geographical area, they're not touching the backbone for high-volume media, only for tracker support. And since _everybody_ on the consumer broadband networks is primarily an information consumer, not a producer, the traffic's more likely to stay local, and the traffic ratios which affect what the broadband company pays for traffic are very skewed; P2P balances them a bit rather than exacerbating them. Overall backbone downstream traffic can still increase, but carriers that care about that should be encouraging their customers to use protocols that download locally when possible, and can put up their own P2P caching servers (i.e. fast user machines) if they want to reduce imports from outside.
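To put rough numbers on that ratio argument, here's a quick sketch; the traffic figures are invented for illustration, not taken from anywhere:

```python
# Illustrative only: how on-net P2P seeding shifts a consumer ISP's
# traffic ratio at the peering edge. All numbers are made up.

downstream_gbps = 10.0  # pulled in from other networks (consumers mostly download)
upstream_gbps = 1.0     # pushed out to other networks

print(f"without local P2P: {downstream_gbps / upstream_gbps:.0f}:1 in:out")

# If 30% of downloads get served by peers on the same carrier instead of
# crossing the peering links, inbound traffic drops by that share.
local_share = 0.30
print(f"with local P2P:    {downstream_gbps * (1 - local_share) / upstream_gbps:.0f}:1 in:out")
# 10:1 becomes 7:1 -- still skewed, but closer to the balance that
# traffic-ratio-based peering settlements reward.
```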
Napster had centralized databases tracking who was downloading what songs, so if they wanted to, they could easily enough have made sure that users stayed within their local networks whenever possible, especially for universities that had scaling problems. BitTorrent trackers can provide somewhat the same capability, if they want to.

The fancy way to do it is to look at BGP autonomous system numbers to determine who's sharing with whom, but even just trying to keep systems in the same /19 or /16 together is a good start. Most of the P2P protocols support a cruder approach - checking ping times or other TCP or UDP packet transmission latencies - and even these work pretty well, because local stuff tends to stay local. You can do a bit better for scalability if you weight IP addresses or BGP ASNs as well - usually there's enough correlation that overall performance doesn't change much, and it helps your ISP a lot. There's some variance in that - a fast university user who's network-wise near one of the exchange points your ISP uses may be more attractive than a user who's geographically farther away but on your carrier's network - but in general being crude and greedy isn't as bad as you'd expect.
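A minimal sketch of the crude client-side version - this isn't any particular client's algorithm, just an illustration of preferring peers by shared address prefix and falling back to measured ping times:

```python
import ipaddress

def shared_prefix_bits(a: str, b: str) -> int:
    """Length of the common leading prefix of two IPv4 addresses."""
    xor = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
    return 32 - xor.bit_length()

def rank_peers(my_ip: str, peers: list[tuple[str, float]]) -> list[str]:
    """Prefer peers sharing a long IP prefix with us (same /16 or /19
    usually means the same ISP region); break ties with measured RTT."""
    def key(peer):
        ip, rtt_ms = peer
        return (-shared_prefix_bits(my_ip, ip), rtt_ms)
    return [ip for ip, _ in sorted(peers, key=key)]

# Hypothetical swarm: (address, measured ping in ms)
peers = [("203.0.113.7", 180.0), ("198.51.100.9", 95.0), ("198.51.77.2", 40.0)]
print(rank_peers("198.51.100.20", peers))
# 198.51.100.9 shares a long prefix with us, so it sorts first even
# though 198.51.77.2 has the lower ping.
```

A tracker could apply the same ranking server-side, sorting its peer list by prefix match against the requesting client before answering - which is the "somewhat the same capability" mentioned above.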
Oversubscription is what makes it possible for ISPs to offer 10Mbps service under $80; without it, the same service would cost closer to $200, with $50 of both amounts being the ISP's operating income for the service class. Because the top ~5% of customers (ab)uses ~90% of the bandwidth, oversubscription reduces the ISPs' infrastructure costs for typical users by >90%. The same OC48 can accommodate little more than 250 wire-burning, non-oversubscribed 10Mbps customers - more than $100/month in uplink cost per customer. Many ISPs have "reasonable use" clauses in their otherwise "unlimited" service plans, and this cap appears to be around 250GB in many cases, which would theoretically allow ISPs to fit roughly 3000 high-bandwidth 250GB/month customers per ~$30k/month OC48. I have calculated that a fair price for true unlimited access would be ~$150/month: rent for ~1/300th of an OC48, plus other operating/service costs and profit.
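The arithmetic behind those round numbers is easy to check - a sketch using the post's own figures (OC48 ≈ 2.5 Gbps at ~$30k/month):

```python
# Checking the paragraph's arithmetic with its own round numbers.

OC48_MBPS = 2488      # OC48 payload rate, roughly 2.5 Gbps
OC48_COST = 30_000    # ~$/month, per the post

# Non-oversubscribed 10Mbps customers: the wire itself is the limit.
full_rate_users = OC48_MBPS // 10                                # ~248 customers
print(f"${OC48_COST / full_rate_users:.0f}/month uplink each")   # ~$121, i.e. >$100

# A 250GB/month cap is a much lower *average* rate:
avg_mbps = 250e9 * 8 / (30 * 24 * 3600) / 1e6                    # ~0.77 Mbps
print(f"{OC48_MBPS / avg_mbps:.0f} capped customers per OC48")   # ~3200, "roughly 3000"

# "Fair" true-unlimited price: 1/300th of the pipe plus ~$50 of
# operating costs and profit.
print(f"${OC48_COST / 300 + 50:.0f}/month")                      # $150
```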
The recent stories about heavy users getting either kicked off or pushed onto higher-margin business/special service plans show that ISPs are starting to push the extra operating costs down to the relevant customers. But none of that quite excuses ISPs from interfering with their customers' traffic unless the customer has specifically requested it.