From a post on a different site...
January 21, 2017
Dear President Trump:
Congratulations on your Inauguration as President of the United States.
During your speech you mentioned space, and I know from reading the news that
a Mars mission is something your team might be considering. While there are
ample technically capable people at NASA, I thought you might like to hear a
suggestion about what might fire up ordinary people in the US, and I dare say
all over the world, about embarking on such a project.
Instead of being just a US project, make it a competition between the US, Russia, and any other government that wants to have a go at it. The governments each build one or more Earth-Mars ferries using technology that has been relatively well tested but "forbidden" and mothballed. Included as part of the
government side of the project would be emergency vehicles that can transport
people between the ferries, or keep them alive till they can be picked up.
Private companies (like SpaceX) using well tested technologies can build the
systems that get stuff between the ground and Earth orbit and a ferry, and down
from the ferry to Mars. They can do this for whatever projects they care to take
on, and can take a ride on any government's ferry. The governments get to claim
any territory on Mars first accessed from one of their ferries.
NASA should be able to confirm that this is really possible, almost today!
Just ask them about NERVA - ORION is stupid, but NERVA really makes sense on a
ferry, as opposed to being used as a launch technology. The Russians have
something equivalent; the Chinese probably not, but could quickly catch up.
Others would have a harder time.
War has often been the competitive element that brought new technology out of
the shadows into the real world, instead of being quietly buried in some private
hands. Let's try a different way to introduce the competitive element at the top of society, and so get the animal spirits going throughout world society that bring on huge economic growth. Let's find a way to allow technology that already exists to be used!
<<redacted>>
Good luck and best wishes!
Sincerely
Niket Patwardhan
Comments on Technology and Business
Tuesday, October 7, 2014
HTTPS is less secure than HTTP
HTTPS is less (not more!) secure than HTTP
- Niket Patwardhan
OK, the title is supposed to be provocative, and maybe unbelievable. But just hear me out...
Just what is security? In an information centric world, it means not divulging information to "untrusted" parties.
So, what happens when you communicate with a web server from your handy cell phone (they even call it "Handy" in Germany!)?
Your cell phone first has to tell the phone company it wants to talk, and because the phone company is not giving out data (or phone) service for free, the phone has to identify itself as associated with a paying account. In the US this typically means the phone company knows who the caller is, since most accounts are on a monthly cycle and the phone company wants to keep billing you. Of course, you could be using a prepaid account, in which case they may only know which SIM card is making the call; on TV action shows they call this a "burner" phone. Because the connection to the phone company is over the air, the phone company does not know precisely where you are, but it does know which cell tower your phone is talking to, so it knows approximately (within a couple of miles) where you are. And because the connection is over the air, anybody with the right equipment can listen in, or jam the communication.
Once the phone company is satisfied, it sends your message to the phone company or ISP servicing the webserver. But because of the shortage of IPv4 addresses, the message goes out using one of the IP addresses owned by the phone company. Because the phone company needs to send a response back to you, it remembers which address it used for you, although it can change. This feature is called NAT (Network Address Translation). Thus the webserver typically does not know who the sender is, nor can it correlate messages sent by the same person unless that person chooses to expressly identify himself or herself to the webserver. You do that by logging in and after that your browser sends a bit of data with every message that lets the webserver know who you are. There is a lot of detail about how this is done without allowing somebody else to counterfeit your identity, but that is basically what happens.
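To make the address-translation step concrete, here is a toy sketch of NAT as described above (the addresses and port numbers are made up for illustration; real carrier-grade NAT is far more involved): the carrier rewrites the source of each outgoing flow to one of its own public addresses and remembers the mapping so replies find their way back, which is exactly why the webserver only ever sees the carrier's address.

# Conceptual sketch of the NAT step; addresses are from documentation ranges.
nat_table = {}                    # (private ip, private port) -> public port
next_port = 40000
PUBLIC_IP = "203.0.113.7"         # an address the carrier owns, not yours

def outbound(private_ip, private_port):
    """Rewrite an outgoing flow; the webserver only ever sees PUBLIC_IP."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def inbound(public_port):
    """Map a reply back to the phone that started the flow, if any."""
    for key, port in nat_table.items():
        if port == public_port:
            return key
    return None                   # unsolicited traffic has nowhere to go

print(outbound("10.20.30.40", 51515))   # -> ('203.0.113.7', 40000)
print(inbound(40000))                   # -> ('10.20.30.40', 51515)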
In between your ISP (or phone company) and the webserver's ISP is the Internet, where anybody along the transmission path is presumed to be able to look at the messages going back and forth and do whatever they want with them, including throwing them away, altering them, duplicating them or inserting their own messages. In reality, given the commercial requirement of good service, most messages make it through unaltered, and the ISPs work hard to make sure that is true. Any path containing slow or malicious nodes is quickly blacklisted and an alternate path is found. Given the business need not to let the ISP at the other end (or especially the ones in the middle) steal their customers, they will do anything they can not to place their customer information on the Internet. It took a law to get them to post caller ID, and another law to incorporate GPS and report location information.
OK, now you are Ms. Average Jane. You have to trust your phone company with your identity, otherwise you are not going to get service. You bought the phone from your phone company (or maybe you are leasing it) so you have to trust them about the workings of the phone, including any encryption it may do for you. The phone company does not care what messages you send or receive unless a government gets involved, in which case it is going to respect the interests of the government rather than yours. The government does not care what messages you send and receive, so while they may be looking at your messages, they won't be doing anything about it. So you can trust them all. Who you cannot trust is your neighbor with listening equipment, who can embarrass you, or steal your identity and thereby your wealth; but that is unlikely, because such equipment costs real money and requires significant technical expertise, which they are unlikely to have. The other party who is likely to break security is the webserver itself - especially if you are browsing and collecting information before a purchase - because it is exceedingly likely to spy on your browsing and alter the terms of the transaction based on what it observes. And this is where HTTPS is less secure than HTTP (for you, Ms. Average Jane), because at this point there is no difference between HTTP and HTTPS in the availability of the content information. But there is a big difference in the ability of the webserver to identify your phone as being the one that did all the browsing prior to the transaction - because HTTPS reuses the session/encoding key for efficiency, the webserver can connect the browsing to the transaction even if you (Ms. Average Jane) did not log in while doing the browsing, and wiped your cookies and DOM storage before embarking on the transaction. It won't matter if you switch ISPs or locations, or your ISP switches your address, or you use Tor, because you are still using the same phone, browser, and session/encoding key.
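The linking mechanism this relies on can be seen with a few lines of Python's standard ssl module. This is a minimal sketch under assumptions (example.com stands in for the webserver, and details vary by TLS version - with TLS 1.3 the session ticket may only arrive after some data has been exchanged): the client saves the TLS session from one connection and offers it on the next, and the server can recognize both connections as the same client even though no cookie was sent.

import socket, ssl

HOST = "example.com"                 # stand-in for the webserver
ctx = ssl.create_default_context()

def fetch(session=None):
    raw = socket.create_connection((HOST, 443), timeout=10)
    s = ctx.wrap_socket(raw, server_hostname=HOST, session=session)
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n")
    s.recv(4096)                     # read a little so any session ticket arrives
    saved, reused = s.session, s.session_reused
    s.close()
    return saved, reused

first, _ = fetch()                   # full handshake, no prior state
_, reused = fetch(session=first)     # offer the saved session again
print("second connection was recognized as the same client:", reused)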
The original purpose of the HTTPS system was to ensure that the client (you, Ms. Average Jane) could be certain you were talking to the right webserver, have a private conversation with that webserver, and that you could identify yourself to the server only when you chose to do so within that private conversation. This was extremely inconvenient for many webservers conducting real transactions, because a lot of profit rides on knowing your interests before the terms of the transaction are determined. Webservers therefore first responded by requiring you to login and identify yourself before showing you any information, but ran into the wall of average street smart humans who are reluctant to browse and visit sites where they have to identify themselves first. Imagine being asked for your driver's license before you can look at cars in a car dealership, and having a salesman follow you around while you are looking! In an old style market, a boy would be assigned to follow a prospect when she entered a market, and paid to bring her in and report to the store owner her interests while she was bargaining for an item. Google is that boy in the modern world.
OK, now you are Mr. Not So Average Joe, working on some large or important deal. That nosy parker neighbor is now an industrial spy, with significant resources and technical acumen. You appreciate the 2048 bit encryption HTTPS that the webserver of your deal partner is using to prevent leaks. But your phone came preloaded with garbage root certificate authorities who will issue a certificate for any purpose (including modifying the software on your phone) to anybody who pays enough money, and they paid your phone manufacturer to install their root certificate in your phone, when it was manufactured and before they knew you were going to get such a phone. Mr. Industrial Spy identifies Garbage Root as a certificate authority your phone uses regularly, pretends to be your deal partner and pays Garbage Root for a certificate that says his webserver is your deal partner's web server and then uses a fake cell tower site near your home to redirect your communication with your deal partner to his webserver, for a man-in-the-middle attack. You, confident in the 2048 bit encryption, fail to take the precaution of direct face to face communication or land line for frank discussions, physical handoff for data transfers and code speak in other scenarios. Your competitor walks off with your most valuable secrets and you are blithely unaware.
Even if your deal partner is aware of the risks of using low-grade certificate authorities and gives you a private "public key" certificate to authenticate their server, you cannot delete Garbage Root from your phone, because your bank uses them to authenticate their server certificates. The bank also rotates its certificates regularly - in the name of "perfect forward secrecy" - so you can't just store their certificate once and be done. When your phone sees Mr. Industrial Spy's certificate claiming to be your deal partner's webserver, it uses Garbage Root to validate that certificate, and happily carries on, blithely unaware that it already has a valid, different certificate for your deal partner.
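The "privately distributed certificate" idea can be carried all the way into the application if you control the client software. Here is a minimal sketch (Python standard library; the host name and fingerprint are placeholders, and skipping CA validation is deliberate because the whole point is not to rely on the preloaded roots): compare whatever certificate the server presents against a fingerprint your deal partner handed you in person, so Garbage Root never gets a vote.

import socket, ssl, hashlib

HOST = "partner.example"             # hypothetical deal partner
PINNED_SHA256 = "00" * 32            # replace with the fingerprint exchanged out of band

ctx = ssl.create_default_context()
ctx.check_hostname = False           # we are not using the CA system at all...
ctx.verify_mode = ssl.CERT_NONE      # ...the pin below is the only check that matters

with socket.create_connection((HOST, 443), timeout=10) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as s:
        der = s.getpeercert(binary_form=True)   # the certificate actually presented
        if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
            raise ssl.SSLError("presented certificate does not match the pinned one "
                               "- possible man in the middle")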
There are other ways of redirecting messages to a fake webserver, I just chose the fake cell tower scenario because that does not require low integrity on the part of anybody except Mr. Industrial Spy.
The possibility of doing this is what the HTTPS system was designed to guard against - to prevent a man-in-the-middle attack. Unfortunately, commercial reality kills the foundation of the HTTPS public key system - "trusted" root authorities are fallible and the trust placed in them can be misplaced.
The reason I say HTTP is more secure for this scenario is twofold. One, the existence of a strong public key system makes people careless even in scenarios where the attacking entities have significant resources to mislead or corrupt, while a simple private key distribution system would reduce the number of players that have to be trusted; and two, the HTTPS system as designed enhances the discoverability of "trusted" resources that can be corrupted.
Finally, pretend you are an entity like Ms. Merkel, against whom government resources are directed.
I should really pick a more noxious personality for my example, with the restraint level of the attacker as close to zero as possible. But I pick Ms. Merkel because it actually happened to her.
In this scenario, most of the actors in the little communication scenario we started with have been corrupted (or influenced, if you want to state things positively) into participating in the attack. The only two non-participants are the webserver and the end client, who are trying to keep their communication secure. My contention is that the HTTPS system as it exists today has been purposely designed to fail in this scenario. Put another way, it has been designed to enhance the security of the existing world order against rebels, by making completely secure communication as difficult as possible, at multiple levels.
1) If the trusted root authorities are corruptible, no communication using them can be secure.
2) The webserver picks the root authority for a certificate, so the man-in-the-middle can choose a corruptible root authority if one exists.
3) The root authorities trusted by a client can be identified by simple observation of his/her communication over a short period of time, allowing corruption efforts to be targeted on those root authorities (a sketch of making this observation follows the list).
4) It used to be feasible to conceal the trusted set of root certificate authorities by using offline-validated certificates. This is being shut down in several ways:
a) It is no longer possible to designate a privately distributed certificate as the only valid certificate for a site.
b) With certificates being rotated constantly (again in the name of "perfect forward secrecy"), it is no longer feasible to mistrust all public root authorities by preloading the certificates of every commercial website one wishes to communicate with.
c) Caching services like Akamai use their own certificate authority for the websites they serve from their own servers, making it impossible to rely on a single root authority for a site.
5) "Deep Inspection" technology allows routers to react to more than just the IP address when routing packets. Specifically they can target the public keys used in HTTPS as alternate routing information.
Packets sent using HTTP do not have this issue, although other information can be targeted.
6) It is becoming harder for the client to see what root authorities are being used for a specific website, especially on mobile equipment.
7) EAP increases the trackability of the client, reducing his/her security, and can no longer be turned off by wireless systems.
8)"Public Key Encryption" is possibly a con. The density of usable keys drops off as the width of the key increases suggesting that it might be feasible to store a list of all private/public key pairs. Also arguments from reversible computing engines suggest that it might be possible to build a private key generator for any specific public key encryption algorithm.
9) Communication equipment providers and shipments are tracked. Orders can be intercepted and the hardware can be corrupted before delivery.
10) It is illegal to sell systems where the set of encryption methods supported by the system is extensible by the purchaser.
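As a small illustration of points 3 and 6, here is a sketch (Python standard library only; the site list is an arbitrary stand-in for a client's traffic) that completes a handshake with each site and prints which authority issued its certificate. An observer who can do the equivalent passively learns exactly which roots are worth corrupting, and a curious user can at least see on a desktop what a mobile browser increasingly hides.

import socket, ssl

SITES = ["example.com", "wikipedia.org", "python.org"]   # stand-ins for observed traffic
ctx = ssl.create_default_context()

for host in SITES:
    with socket.create_connection((host, 443), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as s:
            cert = s.getpeercert()                       # the validated certificate
            issuer = dict(item[0] for item in cert["issuer"])
            print(host, "is vouched for by", issuer.get("organizationName"),
                  "/", issuer.get("commonName"))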
So, the broad-based move to HTTPS going on at this point may not be enhancing the security of web clients - it may in fact be decreasing it. Whether it enhances the security of the US government is debatable.
Monday, April 16, 2012
Monopoly Regulation - A Success Story
- Niket Patwardhan
The 1970s through the 1990s saw the flowering of the Information Age in the United States, where information processing services and equipment were made available to individuals and became a major sector of the economy.
While the microprocessor was the seed, a major facilitator of this boom was the regulatory influence of the 1956 consent decree[1] on IBM when it introduced the personal computer[2](PC) in 1981, allowing that seed to grow. The introduction of the microprocessor made personal and minicomputers a feasible proposition, with a lot of small time operators building their own machines and having some success selling them. IBM clearly expected to sell a lot of PCs, and still does. I think IBM also expected their offering to clear the market of competitors, and that also happened initially. However, the 1956 consent decree forced them to design and market it under conditions that really made the information processing industry in particular, and technology generally, take off.
Let's start by taking a look at some of the key terms of this decree, their effect in the early 1980s, and how things have changed since then.
Key Terms of 1956 IBM Consent decree
1) IBM was required to offer to sell its equipment in addition to leasing it, on terms no more favorable to IBM than the terms for leasing that equipment.
2) IBM was required to provide parts and information to third-party providers of maintenance services for the equipment it sold, on the same terms as its own divisions/companies providing such service.
3) IBM could not require information from purchasers about what use was being made of its machines.
4) IBM could not restrict a purchaser's use of, modifications to, or attachments to the machines it sold. Lessees could be restricted only to the extent reasonably necessary to facilitate servicing of the machines.
5) IBM could not lend its name or employees to any servicer of its equipment, including wholly owned subsidiaries of IBM!
6) Applicability was limited to the US.
The effect of the introduction of the PC was electric. While it validated the market, practically every PC manufacturer of that time went out of business. IBM's purchasing power effectively made it impossible for smaller manufacturers to acquire components as cheaply as IBM could. Apple held on with the Mac and a famous Super Bowl ad, but was relegated to a small niche serving the artistic community. Minicomputer manufacturers, who had been threatening IBM's mainframe business from the low end, were knocked off. Without the decree, this would have been the end of the story.
What the decree did
IBM was quite used to dealing with the decree by 1980. In spite of its restrictions, it continued to dominate the information processing market. All equipment it manufactured and sold (especially input/output processors) came with clearly defined interfaces and documentation about those interfaces, as well as the specifications necessary for maintaining that equipment. The personal computer was not going to be any different. It came with a BIOS that was specifically designed to allow the PC to be easily extended with additional I/O equipment manufactured by third parties, and the complete specification was publicly available. The processor was a third-party chip manufactured by Intel. The memory interface was public, and you could design and add your own memory. Even the operating system was built by a third party, Microsoft, which supplied a knockoff to others. In current terminology, it was the ultimate "open" platform.
The decree effectively made the PC a publicly usable standard and a huge market for third-party vendors. An immediate consequence was the decimation of the US memory business. The huge market, the public standard, and the small size made it feasible for manufacturers in Taiwan, Korea, the PRC, and Japan to invest in and build factories to supply add-on memory chips and cards for the PC and ship them to the US. Companies like Intel that had made a living supplying memory chips were forced to abandon memory, focus on maintaining their lead in microprocessor technology, and pour vast sums into R&D to do so. Anybody who takes a serious look is always amazed at just how far ahead Intel is in microprocessor manufacturing technology. Disk drive manufacturers and display terminal manufacturers flourished. Anybody could build a special-purpose I/O card or device with complete assurance that there was a processor and platform to make use of it. A user did not even need to load software to use the card; the BIOS specification included a mechanism for the PC to discover the memory provided on the I/O card and run the software loaded into that memory by the card's manufacturer.
The history of computing is littered with examples where a newly introduced product killed an established and profitable market for another product - very fast. The HP-35 calculator killed the market for slide rules - the year I joined engineering college (1972) you had to have a slide rule, the next year it was a decoration. WYSIWYG word processors killed the typewriter. Commercial organizations keep tabs on competition, and will initially try to prevent or suppress innovation that could kill a profitable business. So, very importantly, the decree made it hard for IBM to track applications, see what was taking off, and decide to cut off its competition by introducing its own product in that space - at least not before the market had a chance to reward the competitor addressing the new application with significant business and a foothold or lock on it. Once an application was validated, assembly of the new product quickly moved offshore to take advantage of lower labor costs. And as offshore suppliers developed their capabilities and technology, the assembly of the PC itself moved offshore. The suppliers started making their factories and technology available to smaller companies for their own products, and the virtual fab was born. Then companies could take advantage of the public standard, the lowered costs, and the available infrastructure to challenge IBM on the manufacture of the PC itself. IBM no longer manufactures PCs; it buys them from Lenovo.
What the decree did not do
1) The decree did not prevent IBM from entering any space in the market.
2) The decree did not prevent IBM from leasing its equipment to those who felt that was the best option.
3) The decree did not actually prevent IBM from maintaining a dominant position in the market! Existing PC and minicomputer manufacturers died very quickly when IBM introduced its PC. But new competitors kept trying, and IBM had to out-compete them and provide a significantly better product to stay on top.
4) The decree did not force IBM to give up its intellectual property.
5) The decree did not force IBM to conform in countries where US law did not apply.
Fundamentally, the decree worked because it took away the power of a dominant player to block access to equipment and technology by competitors.
Where are we now?
Microsoft (like practically every software vendor) "licenses" rather than sells its products. To get a new application supported by a modern operating system, one has to inform the OS supplier - whether it is Microsoft with Windows drivers, Apple with iOS, or Google with Android. Electronic equipment manufacturers write restrictive licensing agreements and are generally cagey about providing interface information, so if you want to use their equipment on a different operating system or computer, or with other equipment, you have a hard time. In the 1960s most electronic equipment was sold with the circuit diagram pasted on the back - this has completely disappeared. Copyright has been extended to almost 100 years, making new development mostly unnecessary - the prime example here is Mickey Mouse and Walt Disney. One would think it is about time Walt Disney Inc. came up with some new characters! Defense and national security regulations force you to inform many suppliers of the particular use you are making of their products, and make it difficult for an individual to acquire some products, or even technical data, for experimentation or design purposes. There is a general lack of publicly available documentation about most modern computer hardware, and definitely about computer software. Worst of all, extra-territorial reporting regulations at both the state and federal levels, and mandated back doors and tracking, make use of US products, and even US financial institutions, fraught with hazard for non-US organizations. One can envision a world that rejects US-related products, companies and individuals simply for being too much trouble. This is already happening with European banks; they do not like taking on customers who are US residents.
We are now a society that is attempting to rest on its past achievements - we want to stamp out the competition instead of out-competing them. We are willing to sanction the use of simple, blunt instruments by our government that do a lot of collateral damage. SOPA, PIPA, ACTA and the 100 year extension of copyright are prime examples of this - courtesy of RIAA, MPAA, and Walt Disney (although SOPA and PIPA have been abandoned, for now).
In order to preserve innovation and our competitiveness, it might be worthwhile to consider enshrining some of the terms of the decree in law as basic standards of doing business.
1) Every product must be marketed with a "sale" option - where the buyer can do whatever they want with the product.
2) The interface specifications must be published and physically attached to the product - so you can attempt to attach other equipment, or in the case of software, attempt to run it on another machine. The specifications must be physically attached so you can determine how to use it years later.
3) Licensing terms for "sold" products can not limit use of the product or modifications to it.
4) Licensing terms and implementation of "sold" product cannot include reporting requirements.
5) Licensing terms and implementation of "sold" product cannot require automatic update. This is new, and required to prevent modifications a person makes to a product from becoming worthless when the product is updated.
6) Software should be sold with a license to make one backup copy on another medium. This is also new, and is now common practice.
With respect to copyright, I think it would be better for us to confront that lobby directly and knock back copyright protection to 28 years (which was the original extent), allowing generous exemptions for derivative work. Laws with extra-territorial applicability should be barred, and access to technology should be considered at the same level as freedom of speech - as an individual right. Mere possession of anything or attempt to acquire possession should not be illegal or prosecutable.
REFERENCES
1) U.S. v. IBM Corp., Civil Action No. 72-344. Filed and entered January 25, 1956.
2) Timeline of Computer History - Computer History Museum
Wednesday, August 24, 2011
How to take over the Internet.
- Niket Patwardhan
Updated: 2012-01-16
This started as curiosity.
I was getting annoyed at how long my broker's website took to put up its webpages. Waiting 15 minutes for the trading page to come up when the market is diving along with your naked long position is not fun. So later on I decided to look at what they were doing to support their website. I found they were using Akamai for their public pages, and their own servers for users' private data. But I also noticed that connections would regularly be made to unrelated addresses. Reverse lookups on these addresses pointed me at other webfarm sites, Linode, and yes, there were also attempts to access nets 127.0.0.0 and 10.0.0.0. It was the web browser making them, but where in the huge mass of Javascript and HTML? So I blocked every IP except those in Akamai's ranges (that is tough - every day, and sometimes more than once a day, I discovered a new range of Akamai addresses!) and the broker's space, and watched to see what broke.
First, the website completely broke. For security my broker uses HTTPS for all connections, and the certification process needs to visit the certification chain to make sure no certificate has been revoked. Right there I have a huge source of my broker's speed problem - when the market dives, everybody is pinging the broker, Akamai, and the certificate providers. As is typical these days, the webpage has hundreds of components, and each is an HTTPS access with the necessary encryption and decryption steps. HTTPS access also shuts off caching, at least in the intermediate proxies, and sometimes even in your browser. Worse is the certificate revocation check. Of course I could tell my browser not to check for certificate revocation, but I am a little anal about that - why would you turn off an important element of security? Then I noticed another little problem - the IE setting to "Check for certificate address mismatch" is turned off! This is like a border guard using a lie detector on an immigrant on a flight from Great Britain and letting him go because he is telling the truth, but not bothering to ensure that what he is saying is not something like "I am a terrorist from Afghanistan and I am here to blow up the Pentagon"! The Firefox settings say to "Validate a certificate if it specifies an OCSP server" and also do not require the certificate to be treated as invalid if the connection to the OCSP server fails. Two problems here - the certificate could point to an unreliable or colluding OCSP server, and you get a pass if a reliable server is unreachable for some reason. The second is probably a worthwhile risk for most people, but the first is really a security hole that can be exploited fairly easily.
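You can see the first of those two problems for yourself: the OCSP server the browser is expected to consult is named inside the certificate the site hands you. A minimal sketch with Python's standard ssl module (example.com is just an illustrative host):

import socket, ssl

host = "example.com"
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as raw:
    with ctx.wrap_socket(raw, server_hostname=host) as s:
        cert = s.getpeercert()
        # 'OCSP' is present only when the certificate carries an Authority
        # Information Access extension pointing at a responder.
        print("OCSP responder(s) named by the certificate:", cert.get("OCSP"))
        print("issued by:", cert.get("issuer"))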
So what is the point of all this about HTTPS? The point is that even with HTTPS in use, with fairly common settings on the browser it is possible to pretend to be another server. The use of web farms and server sharing between websites makes it impossible to use reverse DNS lookups as a reliable guide, but using HTTPS does not work for most web clients either. Furthermore, on HTTPS connections Earthlink (and others) offer a certificate with themselves listed as the certificate authority. If you accept them as a certificate authority and trust signed controls, they can do whatever they want on your system. Even if you don't trust downloaded controls, this is particularly noxious, given that one of the benefits of using HTTPS is defense against man-in-the-middle attacks, which is completely destroyed by this tactic. The contretemps with DigiNotar shows how trust can be abused - they allowed Google certificates to be issued for sites not controlled by Google - and BEAST is an example of how the basic encryption mechanism can be compromised.
If your messages get sent to the wrong server, you get whatever pages the attacker wants to send you.
Back to the broker's system. I add back the certificate provider IPs (the ones I believe in, anyway!) and find that their system still breaks now and then. Images are sometimes missing, and pages sometimes don't load or are garbled. Also, I turn off Javascript except for trusted sites (my broker!), and my browser wants permission 5 times to enable Javascript. I track down the garbling (is that a word?) to missing CSS files that specify how the page is to be laid out and styled. All these files are on Akamai as far as I can tell. The script warnings are coming from some code from a chart provider, who also seems to provide an image of one set of tabs. Something else happens as I find and add back sites - every now and then a really weird image of a guy and a gal sitting on a bed crops up where a button or a chart ought to be [1]. Not always, but every once in a while. Typically the top half of the image is a real picture, and the bottom portion is colored snow. I pick a specific real-time chart (people tend to reload these a lot because they want to see up-to-date charts!) that has this problem and find that its image sometimes looks like what I see. And when I look at the IP address associated with the chart, it keeps changing, sometimes pointing into "Interactive Data Systems" space and sometimes into "7Ticks Consulting" space, and every once in a while to the 127.0.0.0 net, which is a loopback to my own computer! The host name is an alias owned by the chart provider. When I track down the actual name server, it usually tells me some possibly legitimate DNS service provider, but sometimes it is a Linode server, and that server is the one providing the loopback net address. So the chart provider is probably using a DNS service for dynamic load balancing, and one of the DNS servers between me and them has been attacked and corrupted, and winds up pointing me to a bogus server. As a final bonus, I find the DNS system often times out and does not actually provide my computer with any address, so I have to retry.
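The kind of repeated lookup described above is easy to reproduce. A rough sketch (Python standard library; the hostname is a placeholder for the chart provider's alias): resolve the same name every few seconds, note each new address, and do a reverse lookup on it.

import socket, time

NAME = "charts.example.com"          # placeholder for the chart provider's alias
seen = set()
for _ in range(10):
    try:
        addrs = socket.gethostbyname_ex(NAME)[2]
    except socket.gaierror as err:
        print("lookup failed:", err)  # the timeouts mentioned above show up here
        addrs = []
    for addr in addrs:
        if addr not in seen:
            seen.add(addr)
            try:
                rev = socket.gethostbyaddr(addr)[0]
            except (socket.herror, socket.gaierror):
                rev = "(no reverse record)"
            print(addr, "->", rev)
    time.sleep(5)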
How do you get traffic intended for one server to another? Two ways - you somehow bamboozle the routers into sending the packets to the wrong destination, or you bamboozle the domain name system into providing the client with the wrong address for a server. While attacking the routers has been done, it is a totally distributed system with segments often controlled by malicious people. The last forty years have seen so many attempts that the router and ISP industry have a lot of experience with attacks, and pretty much have this under control. Along the way we have lost some interesting capabilities, but so be it.
The Internet domain name system is a completely different matter. It is a semi-distributed system and was designed in a time when efficiency and the ability to route around failures was important and security was an afterthought. Malicious failures were not a consideration, equipment and network failures were. Until recently, the top level has remained in relatively trustworthy hands, so challenges have been few, and experience with maliciousness is low. Now that control over this system is being distributed more widely, we can expect to see a lot more successful attacks until the industry adapts.
The domain name system is NOT totally distributed. It is a hierarchical system, with multiple redundant root name servers (13 to be precise) providing the top level. The root servers look at the rightmost part of the domain (the .com or .us or .edu at the end of the domain name) and tell you which name servers have authoritative information about that domain. Hints about the IP addresses of the root servers are compiled into DNS clients, and can also be found through the domain system itself. Here is a table that shows information about them as of May 8, 2011 (columns: server letter, IP address, round-trip time in ms - or the failure observed - apparent location, and operator).
A 198.41.0.4 379 Hong Kong Verisign
B 192.228.71.201 86 Los Angeles ISC/isi.edu
C 192.33.4.12 83 LAX PSINet
D 128.8.10.90 147 College Park Univ. of Maryland
E 192.203.230.10 Unreach ???????????? NASA
F 192.5.5.241 179 Palo Alto ISC
G 192.112.36.4 Timeout Japan US DoD
H 128.63.2.53 Hop ???????????? US DoD
I 192.36.148.17 450 Hong Kong RIPE/Sweden
J 192.58.128.30 218 Taipei Verisign
K 193.0.14.129 239 Amsterdam RIPE/NCC
L 199.7.83.42 283 Los Angeles ICANN
M 202.12.27.33 178 Narita, Japan Univ. of Tokyo
Name servers for any domain can delegate authority for a subdomain to another set of name servers, and are then no longer the authority for names in that subdomain. For example, the name servers that handle the .com domain delegate authority for "blogspot.com" to ns1.google.com, ns2.google.com, ns3.google.com and ns4.google.com. If every computer followed this chain for every name lookup the root servers would get overloaded pretty fast, so name servers tell you how long a piece of information they provide is good for, and a DNS client can cache the information instead of retrieving it over and over. The protocol used to communicate between a DNS client and a name server is UDP, the User Datagram Protocol - unreliable, connectionless, and unauthenticated. To make things even simpler, most ISPs provide a "DNS server" which acts as the DNS client in the domain name system: you send a name to this DNS server, it follows all the steps necessary to figure out the IP address, and it provides the answer to you. When you connect to the ISP, the connection process automatically tells your computer about this DNS server. Buried in antiquity, but still implemented, is another shortcut - your computer will first try tacking your "network DNS suffix" onto the name you typed in whenever it attempts to get a translation for a hostname, so you can be lazy and leave that part out when operating within your own network.
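To make the UDP point concrete, here is a minimal sketch of the query a DNS client sends: a 12-byte header whose only protection is a 16-bit ID, followed by the encoded name and the record type. The resolver address below is an assumption (a public resolver); a PC or phone would normally use whichever "DNS server" its ISP handed it at connection time.

import socket, struct, random

def query_a(name, resolver="8.8.8.8"):
    qid = random.randint(0, 0xFFFF)
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)        # RD=1, one question
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)                      # QTYPE=A, QCLASS=IN
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5)
    sock.sendto(header + question, (resolver, 53))
    reply, _ = sock.recvfrom(512)
    # Nothing but the 16-bit ID (and, loosely, the source port) ties this reply
    # to the query; anyone who can race the real answer can impersonate it.
    rid, flags, qdcount, ancount = struct.unpack("!HHHH", reply[:8])
    return rid == qid, ancount       # (ID matched, number of answer records)

print(query_a("example.com"))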
So how can you subvert this?
1) If you have authority over one of the root name servers, you could replicate enough of the translation chain to provide fake addresses for any domain you choose. All you need is for a DNS client to decide once to use your root server for a top level translation. Up until a few years ago, the US government or a US organization controlled the root name servers and the .com, .edu, .us, and .org name servers. With the formation of ICANN the responsibility for and location of the root servers began to move. As of May 2011, a.root-servers.net was located in Hong Kong. Soon after a series of embarrassing attacks on US servers from an "unnamed country" it looks like this machine was moved to the US, although its IP address has remained the same. Interestingly, the very last router on the path to this nameserver is now reported to be in Romania - not much of an improvement! This move (if deliberate!) was probably accomplished by using a very old but generally inaccessible (for security reasons) routing mechanism called a host route. NASA, DOD and PSINET all have their own root server.
2) You can also use the BGP based routing system to direct packets headed for a root server through a router that you have control over, and mangle the return any way you desire. If you want to intercept only one root server the one you would pick would be a.root-servers.net, because most resolutions would start there. The ability to re-route DNS messages exists for all servers, and is available to whoever has control of the intermediate routers. They can be the legitimate, but malicious or colluding owners of the routers, such as an ISP subject to a government order[2]. See the comments on the path to a.root-servers.net above.
3) If you have control over a "DNS Server" then you can feed whatever you want to the computers that rely on that "DNS Server". This most certainly happens. Earthlink (specifically the nameservers ns1.mindspring.com and ns2.mindspring.com) and Go Daddy for example use name servers that provide their own webserver IP address when you specify a name that does not exist. This allows them to put up a webpage with their ads when you mistype a name.
4) The ambiguity created by the name processing can be exploited. You type in "mail.yahoo.com" and expect to be connected there, but instead your name server returns an address for "mail.yahoo.com.mshome.net" using the default suffix. If you use a company owned computer, your company's domain is probably the default suffix. Guess who is able to read your email and monitor anything you are doing on the web as a result!
5) It used to be possible to supply a fixed address for a name in a hosts file. Because your files can be compromised, this is marketed as a security risk. Also, dynamic reassignment of servers by webfarms makes it impossible to use predefined translations, and you are forced to use DNS. For whatever reason, this capability (static translation) no longer seems to work on many operating systems. However, if you really want to work without trusting any DNS service, this is the way to go (a sketch of doing it at the application level follows this list).
6) DNS uses UDP to communicate. This protocol has no security and no sense of order. Combined with caching of nameserver addresses, we can attack the system from the outside as follows: send a request for a non-existent address within a domain to the DNS server you are trying to attack, and immediately follow it with a fake DNS response that specifies the nameserver for that domain - a nameserver under your control instead of the real one. If you hit the timing window, this will cause the DNS server to cache your nameserver, and until the timeout expires your nameserver has control over that domain. This type of attack actually took place in late 2007 and early 2008. Originally this worked even for unrelated domains, and that window has been closed in more secure versions of DNS. They also now try to use TCP instead of UDP - unfortunately, because of the large number of existing nameservers and clients that have not been upgraded, DNS must continue to work with UDP.
7) You can attack the DNS code in the browser host itself. This is a variant of 4).
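Point 5's "way to go" can be done inside your own software even when the hosts file is ignored. A minimal sketch under assumptions (the names and addresses are illustrative; the pinned address would be obtained out of band): keep your own table of known hosts and only fall back to DNS for everything else.

import socket

PINNED = {
    "broker.example.com": "192.0.2.10",   # address obtained out of band, not via DNS
}

def connect(host, port=443):
    addr = PINNED.get(host)
    if addr is None:
        addr = socket.gethostbyname(host)  # ordinary DNS for anything not pinned
    return socket.create_connection((addr, port))

# For HTTPS you would still wrap the returned socket with server_hostname=host,
# so that certificate checks are done against the name rather than the address.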
So here are a number of ways to attack the current system.
What can we do to enhance the security of connections so that we can be assured of connecting to the host we are actually trying to connect to? I think governments have a role to play - making it a lot more expensive to game the system - as well as providing support so they themselves can exert more control. A challenge for any solution in the network space is updating and interoperating with the huge base of existing clients and name servers.
1 - that picture has changed, here is the last one I saw. Here is another that I found on nasdaq.com.
2 - With requirements on ISPs embedded in ProtectIP and SOPA every US ISP should be regarded as untrustworthy! While President Obama has decided to oppose parts of SOPA, that comes about because the proposed law would undermine faith in DNSSEC, leading to use of simpler alternatives which cannot be compromised by the US government. Remember, DNSSEC relies on public key encryption systems, which the US NSA can crack, and more importantly, trace use.
This started as curiosity.
I was getting annoyed at how long my broker's website took to put up their webpages. Waiting 15 minutes for the trading page to come up when the market is diving along with your naked long position is not fun. So later on I decided to look at what they were doing to support their website. I found they were using Akamai for their public pages, and their own servers for users private data. But I also noticed that regularly connections would be made to unrelated addresses. Reverse lookups on these addresses pointed me at other webfarm sites, Linode, and yes there were also attempts to access net 127.0.0.0 and 10.0.0.0. It was from the web browser, but where in the huge mass of Javascript and HTML? So I blocked every IP except in Akamai (that is tough - every day, and sometimes more than once a day I discovered a new range of Akamai addresses!) and the broker's space and watched to see what broke.
First, the website completely broke. For security my broker uses HTTPS for all connections, and the certification process needs to visit the certification chain to make sure no certification has been revoked. Right there I have a huge source of my broker's problem with speed - when the market dives everybody is pinging the broker, Akamai, and the certificate providers. As is typical these days, the webpage has hundreds of components, and each is an HTTPS access with the necessary encryption and decryption steps. Also HTTPS access shuts off caching at least in the intermediate proxies, and sometimes even in your browser. Worse is the certificate revocation check. Of course I could tell my browser not to check for certificate revocation, but I am a little anal about that - why would you turn off an important element of security? Then I noticed another little problem - the IE settings to "Check for certificate address mismatch" is turned off! This is like a border guard using a lie detector on an immigrant on a flight from Great Britain and letting him go by because he is telling the truth, but not bothering to ensure that what he is saying is not something like "I am a terrorist from Afghanistan and I am here to blow up the Pentagon"! The Firefox settings say to "Validate a certificate if it specifies an OCSP server" and also do not require the certificate to be treated as invalid if the connection to the OCSP server fails. Two problems here - the certificate could point to an unreliable or colluding OCSP server, and you get a pass if a reliable server is unreachable for some reason. The second is probably a worthwhile risk for most people, but the first is really a security hole that can be exploited fairly easily.
So what is the point of all this about HTTPS? The point is that even with HTTPS in use, with fairly common settings on the browser it is possible to pretend to be another server. The use of web farms and server sharing between websites makes it impossible to use reverse DNS lookups as a reliable guide, but using HTTPS does not work for most web clients either. Furthermore, on HTTPS connections Earthlink (and others) offer a certificate with themselves listed as the certificate authority. If you accept them as a certificate authority and trust signed controls, they can do whatever they want on your system. Even if you dont trust downloaded controls, this is particularly noxious, given that one of the benefits of using HTTPS is defense against man-in-the-middle attacks, which is completely destroyed by this tactic. The contretemps with Diginotar show you how trust can be abused - they allowed issuance of Google certificates to sites not controlled by Google, and BEAST is an example of how the basic encryption mechanism can be compromised.
If your messages get sent to the wrong server, you get whatever pages the attacker wants to send you.
Back to the broker's system. I add back the certificate provider IPs (the ones I believe in anyway!) and find that their system still breaks now and then. Images are sometimes missing, and pages sometimes dont load or are garbled. Also, I turn off Javascript except for trusted sites (my broker!), and my browser wants permission 5 times to enable Javascript. I track down the garbling (is that a word?) to missing CSS files that specify how the page is to be laid out and styled. All these files are on Akamai as far as I can tell. The script warnings are coming from some code from a chart provider, who also seems to provide an image of one set of tabs. Something else happens as I find and add back sites - every now and then a really weird image of a guy and a gal sitting on a bed crops up where a button or a chart ought to be [1]. Not always, but every once in a while. Typically the image has the top half a real picture, and the bottom portion has colored snow. I pick a specific realtime chart (people tend to reload these a lot because they want to see up-to-date charts!) that has this problem and find the image sometimes looks like what I see. And when I look at the IP address associated with the chart it keeps changing, sometimes pointing to "Interactive Data Systems" space and sometimes to "7Ticks Consulting" space, and every once in a while to the 127.0.0.0 net which is a loopback to my own computer! The host name is an alias owned by the chart provider. When I track down the actual name server, it usually tells me some possibly legitimate DNS service provider, but sometimes it is a Linode server, and that server is the one providing the loopback net address. So, the chart provider is probably using a DNS service for dynamic load balancing and one of the DNS servers between me and him has been attacked and corrupted, and winds up pointing me to a bogus server. As a final bonus, I find the DNS system often times out, and does not actually provide my computer with any address, so I have to retry.
How do you get traffic intended for one server to go to another? Two ways - you somehow bamboozle the routers into sending the packets to the wrong destination, or you bamboozle the domain name system into giving the client the wrong address for the server. While attacking the routers has been done, routing is a totally distributed system with segments often controlled by malicious people, and the last forty years have seen so many attempts that the router and ISP industry has a lot of experience with attacks and pretty much has this under control. Along the way we have lost some interesting capabilities, but so be it.
The Internet domain name system is a completely different matter. It is a semi-distributed system, designed at a time when efficiency and the ability to route around failures were important and security was an afterthought. Equipment and network failures were a consideration; malicious failures were not. Until recently the top level has remained in relatively trustworthy hands, so challenges have been few and experience with maliciousness is low. Now that control over this system is being distributed more widely, we can expect to see a lot more successful attacks until the industry adapts.
The domain name system is NOT totally distributed. It is a hierarchical system, with multiple redundant root name servers (13, to be precise) providing the top level. The root servers look at the rightmost part of the domain (the .com or .us or .edu at the end of the domain name) and tell you which name servers have authoritative information about that domain. Hints about the IP addresses of the root servers are compiled into DNS clients, and can also be found through the domain system itself. Here is a table that shows information about them as of May 8, 2011.
Root  IP address       Response (ms)  Apparent location  Operator
A     198.41.0.4       379            Hong Kong          Verisign
B     192.228.71.201   86             Los Angeles        ISC/isi.edu
C     192.33.4.12      83             LAX                PSINet
D     128.8.10.90      147            College Park       Univ. of Maryland
E     192.203.230.10   Unreach        ????????????       NASA
F     192.5.5.241      179            Palo Alto          ISC
G     192.112.36.4     Timeout        Japan              US DoD
H     128.63.2.53      Hop            ????????????       US DoD
I     192.36.148.17    450            Hong Kong          RIPE/Sweden
J     192.58.128.30    218            Taipei             Verisign
K     193.0.14.129     239            Amsterdam          RIPE/NCC
L     199.7.83.42      283            Los Angeles        ICANN
M     202.12.27.33     178            Narita, Japan      Univ. of Tokyo
Name servers for any domain can delegate authority for a subdomain to another set of name servers, and they are then no longer the authority for names in that subdomain. For example, the name servers that handle the .com domain delegate authority for "blogspot.com" to ns1.google.com, ns2.google.com, ns3.google.com and ns4.google.com. If every computer followed this chain for every name lookup the root servers would get overloaded pretty fast, so name servers tell you how long each piece of information they provide is good for (the TTL), and a DNS client can cache the information instead of retrieving it over and over.

The protocol used to communicate between a DNS client and a name server is UDP, the User Datagram Protocol - which is indeed unreliable, unordered, and unauthenticated. To make things even simpler, most ISPs provide a "DNS server" which acts as the DNS client in the domain name system: you send a name to this DNS server, it follows all the steps necessary to figure out the IP address, and it hands the answer back to you. When you connect to the ISP, the connection process automatically tells your computer about this DNS server. Buried in antiquity, but still implemented, is another shortcut - your computer will first try tacking your "network DNS suffix" onto whatever name you typed in, so you can be lazy and leave the suffix out when operating within your own network.
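The delegation and caching behaviour is easy to watch in action. The sketch below assumes the third-party dnspython package is installed; it asks a root server (a.root-servers.net, 198.41.0.4 from the table above) about blogspot.com, gets referred to the .com servers, and then asks one of those, which refers it on to Google's name servers. The TTL printed with each record is the "how long this is good for" value just described.

    # Sketch: follow one step of the DNS delegation chain by hand.
    # Assumes the third-party "dnspython" package (pip install dnspython).
    import dns.message
    import dns.query
    import dns.rdatatype

    ROOT = "198.41.0.4"  # a.root-servers.net, from the table above

    # Ask a root server about blogspot.com; it will not answer directly but
    # refers us to the .com servers (a delegation). EDNS lets the whole
    # referral fit in one UDP reply.
    query = dns.message.make_query("blogspot.com.", dns.rdatatype.A,
                                   use_edns=0, payload=1232)
    referral = dns.query.udp(query, ROOT, timeout=5)
    print("Referral from the root (authority section):")
    for rrset in referral.authority:
        print(" ", rrset)  # NS records for com., each carrying its TTL

    # Take one .com server's glue address from the additional section and ask it;
    # it in turn delegates blogspot.com to ns1.google.com .. ns4.google.com.
    glue = [r for r in referral.additional if r.rdtype == dns.rdatatype.A]
    if glue:
        com_server = glue[0][0].address
        step2 = dns.query.udp(query, com_server, timeout=5)
        print("Referral from", com_server, ":")
        for rrset in step2.authority:
            print(" ", rrset)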
So how can you subvert this?
1) If you have authority over one of the root name servers, you could replicate enough of the translation chain to provide fake addresses for any domain you choose. All you need is for a DNS client to decide, once, to use your root server for a top-level translation. Up until a few years ago, the US government or a US organization controlled the root name servers and the .com, .edu, .us, and .org name servers. With the formation of ICANN, the responsibility for and location of the root servers began to move. As of May 2011, a.root-servers.net was located in Hong Kong. Soon after a series of embarrassing attacks on US servers from an "unnamed country", it looks like this machine was moved to the US, although its IP address has remained the same. Interestingly, the very last router on the path to this nameserver is now reported to be in Romania - not much of an improvement! This move (if deliberate!) was probably accomplished by using a very old but generally inaccessible (for security reasons) routing mechanism called a host route. NASA, the US DoD and PSINet all operate root servers of their own.
2) You can also use the BGP-based routing system to direct packets headed for a root server through a router that you control, and mangle the responses any way you desire. If you wanted to intercept just one root server, the one to pick would be a.root-servers.net, because most resolutions start there. The ability to re-route DNS messages exists for all servers, and is available to whoever has control of the intermediate routers. That "whoever" can be the legitimate but malicious or colluding owners of the routers, such as an ISP subject to a government order[2]. See the comments on the path to a.root-servers.net above.
3) If you have control over a "DNS Server", then you can feed whatever you want to the computers that rely on it. This most certainly happens. Earthlink (specifically the nameservers ns1.mindspring.com and ns2.mindspring.com) and Go Daddy, for example, use name servers that return their own webserver's IP address when you ask for a name that does not exist, which lets them put up a webpage full of their ads when you mistype a name. (A quick way to check whether your resolver does this is sketched just after this list.)
4) The ambiguity created by the name processing can be exploited. You type in "mail.yahoo.com" and expect to be connected there, but instead your name server returns an address for "mail.yahoo.com.mshome.net" because the default suffix got appended. If you use a company-owned computer, your company's domain is probably the default suffix. Guess who is able to read your email and monitor anything you are doing on the web as a result!
5) It used to be possible to supply a fixed address for a name in a hosts file. Because malware can tamper with your files, this is now marketed as a security risk. Also, dynamic reassignment of servers by webfarms makes predefined translations impractical, so you are pushed onto DNS. For whatever reason, this capability (static translation) no longer seems to work on many operating systems. However, if you really want to work without trusting any DNS service, this is the way to go.
6) DNS uses UDP to communicate, a protocol with no security and no sense of order. Combined with the caching of nameserver addresses, we can attack the system from the outside as follows: send a request for a non-existent name within the domain you are targeting to the DNS server you are trying to attack, then immediately follow it with a fake DNS response that specifies a nameserver for that domain - one under your control instead of the real one. If you hit the timing window, the DNS server caches your nameserver, and until that cache entry expires your nameserver has control over the whole domain. This type of attack actually took place in late 2007 / early 2008. Originally this even worked across unrelated domains; that window has been closed in more secure versions of DNS. Resolvers also now try to use TCP instead of UDP - unfortunately, because of the large number of nameservers and clients out there that have not been upgraded, DNS must continue to work over UDP. (A sketch of just how little a forged reply has to match is shown after this list.)
7) You can attack the DNS code in the browser host itself. This is a variant of 4).
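Here is the quick check promised in 3): a minimal sketch, standard-library Python only, that looks up a name that almost certainly does not exist. An honest resolver returns a lookup error (NXDOMAIN); a resolver that rewrites NXDOMAIN hands back an address pointing at somebody's ad server.

    # Sketch: detect NXDOMAIN rewriting by your configured "DNS server".
    # A name this random should not exist anywhere; getting an address back
    # means the resolver is substituting its own answer for the error.
    import random
    import socket
    import string

    label = "".join(random.choices(string.ascii_lowercase, k=24))
    bogus = label + ".com"

    try:
        addr = socket.gethostbyname(bogus)
    except socket.gaierror:
        print("Got a lookup error (NXDOMAIN), as expected - no rewriting detected.")
    else:
        print("Non-existent name resolved to", addr, "- the resolver is substituting answers!")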
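And here is the sketch promised in 6). It builds - but never sends - a forged reply for a query, assuming the third-party dnspython package; the victim and attacker names and the 203.0.113.66 address are made-up placeholders. The point is how little a classic resolver checks before accepting a UDP reply: the 16-bit transaction ID, the source port, and the question.

    # Sketch: what a forged DNS reply has to match (built locally, never sent).
    # Assumes the third-party "dnspython" package; all names/addresses are placeholders.
    import dns.message
    import dns.rdatatype
    import dns.rrset

    # The resolver's outstanding query.
    query = dns.message.make_query("random12345.victim-domain.example.", dns.rdatatype.A)

    # make_response copies the transaction ID and question section for us; a real
    # off-path attacker would have to guess the 16-bit ID and the UDP source port.
    forged = dns.message.make_response(query)

    # Claim the whole parent domain is delegated to the attacker's name server...
    forged.authority.append(dns.rrset.from_text(
        "victim-domain.example.", 86400, "IN", "NS", "ns.attacker.example."))
    # ...and helpfully supply a glue address for it (203.0.113.0/24 is a
    # reserved documentation range, so this points nowhere real).
    forged.additional.append(dns.rrset.from_text(
        "ns.attacker.example.", 86400, "IN", "A", "203.0.113.66"))

    print("transaction ID the attacker must guess:", query.id)
    print(forged)  # built locally only - nothing is ever sent

Randomized source ports and DNSSEC validation are what make this guesswork expensive today.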
So there are plenty of ways to attack the current system.
What can we do to enhance the security of connections so that we can be assured of connecting to the host we are actually trying to connect to? I think governments have a role to play - making it a lot more expensive to game the system - as well as providing support so they themselves can exert more control. A challenge for any solution in the network space is updating and interoperating with the huge base of existing clients and name servers.
1 - That picture has changed over time; here is the last one I saw. Here is another that I found on nasdaq.com.
2 - Given the requirements on ISPs embedded in PROTECT IP and SOPA, every US ISP should be regarded as untrustworthy! While President Obama has decided to oppose parts of SOPA, that is because the proposed law would undermine faith in DNSSEC, leading to the use of simpler alternatives that cannot be compromised by the US government. Remember, DNSSEC relies on public key encryption systems, which the US NSA can crack and, more importantly, whose use it can trace.
Sunday, August 14, 2011
NETFLIX meets the Grim Reaper
- Niket Patwardhan
We all know and love NETFLIX. Well - at least most of us! Nearly instant access to movies was the fundamental value proposition, and since 2008 they have gotten a lot of love, and their stock shows it.
But now they have a problem - a big problem. It has always been there, even way back in 2002. How do you convince the media companies to let you have their programming? I should know - I gave up on a digital movie streaming model based on a monthly fee that very year. NETFLIX did not wait for permission; they gave up on "instant access" instead. They used the existing rental agreement framework that companies like Blockbuster relied on, and came up with a much more efficient method of finding the DVD you wanted and getting it to your home through the regular mail. So it was not really instant - but it was much easier than driving to the rental store and, most importantly, much easier to return! Whereas Blockbuster made its money off people who were done watching but just could not get around to returning the disc (possibly paying $30+ for a single view), NETFLIX made $8 a month, every month, on movies that sat in your place still waiting to be viewed. You could theoretically watch movies for under a buck apiece if you tried hard enough, but of course you were too busy for that. In 2008, with the economy tanking and a lot of people with time on their hands needing entertainment, this was real value.
Then came REDBOX. They took that $1 price and made it a true per-view price. Back to the Blockbuster model, but now the price is much lower and the kiosks are in places you would be most days anyway, so access is even faster than NETFLIX, though the selection is narrower. NETFLIX isn't looking so good anymore, so to compete they went for digital streaming and real "instant access".
There are two parties that hate this. The media companies look at it as pay per view, and with digital streaming on a flat monthly charge the per-view price can get really low. Instead of the $8 being spread across 8-10 movies a month at most, it is now possible to watch 300+ movies a month. NETFLIX would be making under 3c per movie at that rate, and that is not enough to make the media company happy. I know this happened - I spent a couple of weekends watching back-to-back episodes of some TV shows. The other party that hated this was the ISPs: their bandwidth requirement at the peering points and their business model just completely blow up. NETFLIX is unhappy too, because 3c is probably not enough to cover the cost of transmitting a movie from their site to the peering point.
The reaction was swift and extreme. ISPs instituted expensive data caps, and some repeatedly break the connection, which forces the stream to be resent. Upshot - I wound up paying $80 to watch one movie, with repeated hangs! That was the end of NETFLIX for me. But NETFLIX also has a problem with the media companies.
There are technical solutions for the bandwidth problem, and for the business problem as well. I sure hope Reed Hastings will find some that work, and make it possible for that 2002 vision to be achieved.