What Will Happen If Entire Internet Goes Down


What would happen to us if the Internet went down…?

The DDoS attack that brought down the Internet in much of the United States of America and parts of Europe left users without Twitter, Spotify, Reddit, Netflix and many other websites. The attack also left PlayStation Network users high and dry throughout the day. The DDoS attack was fuelled by ordinary Internet of Things devices like yours and mine: smart cameras, CCTVs, smart refrigerators and the like.

For those out of the loop, on Friday morning a distributed denial of service attack against the DNS provider Dyn brought down websites and apps across the internet, temporarily barring access to Twitter, Pinterest, WhatsApp, and more for millions of users. While Dyn was able to stabilize the situation within a few hours, a second DDoS attack began in the early afternoon, again disrupting services across the web.

Dyn provides domain name system services, translating common internet addresses into machine-legible information that ensures you get to where you’re trying to go on the web. So every request you make for a website has to go through a DNS server. Citing Flashpoint, a security intelligence firm, Forbes reports that the attackers appear to have used a Mirai botnet against Dyn.

Mirai botnets exploit Internet of Things devices, taking advantage of their low security to employ them in DDoS attacks. In late September, someone going by the handle Anna-senpai released Mirai’s source code, and since then, the DDoS attacks using Mirai botnets have increased.

The 1.2Tbps DDoS attack using such Internet of Things devices caused a widespread breakdown of the Internet in the United States and parts of Europe. What would happen if the Internet went down for the entire world? Such a scenario has actually been predicted by security researcher Bruce Schneier. In fact, Schneier says that some unknown entity is already working on bringing down the whole Internet. In an essay last month, Schneier revealed that companies responsible for the basic infrastructure of the Internet are experiencing an escalating series of coordinated attacks that appear designed to test the defenses of its most critical elements. It seems likely that the attack against Dyn, which brought down the Internet on 21st and 22nd October, was part of this grand scheme.

So what happens if Schneier is right and the Internet is actually shut down? We look at the impact of such a situation arising in the near future.

A situation like this brings E.M. Forster’s disturbing tale “The Machine Stops” to mind. Written in 1909, it describes the downfall of a civilization that is totally subservient to an automated life-support system. Forster’s story has its citizens thinking of the Machine as an infallible deity; they live in their individual mechanical wombs, communicating and doing business only through the Machine. They worship it in their fashion until, in the words of the author:

There came a day when, without the slightest warning, without any previous hint of feebleness, the entire communication system broke down, all over the world, and the world, as they understood it, ended.

Though we have not yet arrived at a situation where the Internet is a god, there is no doubt that our lives depend totally on being online today. Here are some things that would happen if there were no Internet:

1. No social networking = no online friends. When the Internet goes down, the first casualties are social networks like Facebook, WhatsApp, Twitter and Snapchat. As soon as this happens, users who normally contact their friends and colleagues through social media try to reach them through telephone lines. Due to overload, telecommunication services go down. With the Internet unavailable, LTE networks go down too, leaving users at the mercy of landlines.

2. No Internet = no news/information. Without the Internet you wouldn’t get round-the-clock information like news or weather forecasts. Air traffic can’t function, and the high seas become dangerous. Metros come to a complete standstill. Suburban train networks run slowly, bringing big cities around the world to a creeping halt.

3. No Internet = no online banking/ATMs. Online banking goes down. ATM failures cause huge queues at banks. Banks face a huge clearing pile-up as cheque clearing and money transfers have to be done manually. This continues until there is a breakdown in services. There would be no PayPal or Visa purchases. Credit and debit cards would become redundant as point-of-sale terminals would not work. The NYSE, FTSE, Nifty etc. would stop trading shares and stocks. WTI trades for crude oil would stop, as would the Chicago Mercantile Exchange, halting commodities trade.

4. No Internet = no satellites. The GPS network would break down completely, as would normal satellite communications. Everything connected to satellites would have to be shut down.

The above are some examples that come to mind easily. The Internet is much more dominant in our lives today than it was a decade ago. Millions of IT workers who depend on the Internet would be out of work, while big companies like Google, Facebook and Uber would shut down.

It seems like a grim scenario! It looks like Forster foresaw the situation in 1909 when he wrote “The Machine Stops”. There is, however, a counterview: the Internet is self-resilient. It can’t simply be “shut down” – it’s not built that way. Turning off the Internet is like shutting off religion. You just can’t do it. Come to think of it, it’d be easier to turn off religion than to turn off the Internet.

However, we seem to be getting closer to making the Internet our god. Not in an abstract way: the Internet dominates our way of life with a god-like presence. Without it we would surely be hurled back to the Stone Age, as switching back to manual ways of doing things would be too difficult.

Time to remember that the Internet can’t just be turned off… but it could be stopped, of course.


Why Localhost’s IP Address Is, Its Meaning And Use


Have you ever wondered what localhost’s IP address is and what it actually means?

If you are on the Internet 24×7, you have surely heard of or seen an IP address during your time on the net. The geekier among you might know that points to localhost. But do you know why localhost’s IP address is and not something else?

The basic logic behind this address is that it is used to establish a connection to the same computer that the end-user is on. Below is a detailed answer from a user on the Super User forum.

How does work? Why is it called so?

Here is a geeky answer from a techie:

127 is the last network number in a class A network with a subnet mask of is the first assignable address in the subnet. cannot be used because that would be the wire number. But using any other numbers for the host portion should work fine and revert to using You can try it yourself by pinging if you’d like. Why they waited until the last network number to implement this? I don’t think it’s documented.

Most developers use localhost to test their applications before actually deploying them. When you establish a network connection to the loopback address, it works in the same manner as making a connection with any remote device, except that it avoids connecting to the local network interface hardware.

But why does the localhost IP address start with 127? Well, 127 is the last network number in a class A network. It has a subnet mask of, so the first assignable address in the subnet is

However, if you use any other numbers for the host portion, it should work fine and revert to So you can ping if you like.
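The loopback behaviour described above can be verified without ping, using a short Python sketch. The port is picked by the OS and the message is arbitrary; the second call assumes a Linux host, where the kernel routes the entire 127.0.0.0/8 block to the loopback interface.

```python
import socket

# Demonstrate that loopback addresses connect back to the same machine.
# On Linux every 127.x.x.x address is routed to the loopback interface,
# so and the like behave just like
def loopback_roundtrip(connect_addr=""):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("", 0))               # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    client = socket.create_connection((connect_addr, port))
    conn, _ = server.accept()
    client.sendall(b"hello loopback")
    data = conn.recv(32)

    client.close(); conn.close(); server.close()
    return data.decode()

print(loopback_roundtrip(""))   # the canonical localhost address
print(loopback_roundtrip(""))  # another 127.x host number (Linux)
```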

You might also ask why the last network number was chosen to implement this. Well, the earliest mention of 127 as loopback dates back to RFC 990 of November 1986:

The class A network number 127 is assigned the “loopback” function, that is, a datagram sent by a higher level protocol to a network 127 address should loop back inside the host. No datagram “sent” to a network 127 address should ever appear on any network anywhere.

Even as early as September 1981, RFC 790 had already reserved 0 and 127:
000.rrr.rrr.rrr                 Reserved                     [JBP]
127.rrr.rrr.rrr                 Reserved                     [JBP]

By 1981, 0 and 127 were the only reserved class A networks. Of the two, 0 was used for pointing to a specific host, and that left 127 for loopback. So all developers testing their apps and websites use 127 for loopback. Some would also call it more sensible to have chosen network 1 for loopback, but that had already been given to the BBN Packet Radio Network.

You can find more information and discussion about localhost from Stack Exchange users here.


Record Breaking 65Tbps Data Transmission Rate Via Cable Set By Nokia

Nokia sets new 65 terabit-per-second record for cable transmission capacity

Nokia demonstrates a whopping 65 terabit-per-second transmission, equivalent to streaming 10 million high-definition TV channels simultaneously

That the yesteryear mobile phone king is doing quite well in the telecom sector can be seen from its latest achievement. Nokia’s Alcatel Submarine Networks (ASN) unit said on Wednesday that it has set a new record for cable transmission capacity for communications traffic. It managed transmission speeds double its previous levels, opening up new opportunities for Internet transmission.

Nokia Bell Labs demonstrated in lab tests a whopping 65 terabit-per-second transmission using dual-band fiber amplifiers. The speed is equivalent of streaming 10 million high-definition TV channels simultaneously.
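The equivalence is easy to sanity-check with back-of-the-envelope arithmetic; the per-stream bitrate below is our inference, not a figure from Nokia.

```python
# Sanity-check the "10 million HD channels" equivalence quoted above.
total_bps = 65e12            # 65 terabits per second
channels = 10_000_000        # 10 million simultaneous HD streams
per_channel_mbps = total_bps / channels / 1e6
print(f"{per_channel_mbps:.1f} Mbps per stream")   # 6.5 Mbps, a typical HD bitrate
```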

Nokia’s demonstration is going to help tech giants like Google, Microsoft, and Facebook, as well as ISPs and telecom companies, meet increased bandwidth demand as the Internet reaches more remote places. Research firm TeleGeography estimates the market is poised to see an explosion of new cable deployments worth more than $8.1 billion over the next three years.

ASN Chief Technology Officer Olivier Gautheron said that the new transmission lines will be commercially ready to be deployed over the next two to three years. He also added that the new technique promises to help reduce transmission costs, increase network resilience and allow networks to dynamically adapt to changing traffic conditions.

Nokia used the new “Probabilistic Constellation Shaping” modulation technique developed by Bell Labs, which allows customers to maximize the capacity of high-speed optical networks, or to trade off capacity by extending the distance needed between the amplifiers that boost signals, in order to reduce network costs.


Arduino’s new ESLOV IoT Invention Kit makes hardware hacking easy

Arduino reveals a serious Internet of Things foil for hardware hackers


A few years ago, it was hard to imagine that hardware inventions could be recreated. As a result, hardware manufacturers felt protected and believed that no one could clone their hardware.

However, this has changed, thanks to the smart yet crafty folks at Arduino, who have come up with a new concept that allows your device to accumulate data from around the globe and send out its own analytics. Called the ESLOV IoT Invention Kit, it is a central Arduino product that adds Internet of Things (IoT) capabilities to your hardware. In other words, you can build any hardware you want and connect it to the internet using the ESLOV IoT Invention Kit.

The ESLOV kit will let you add IoT capabilities to almost any hardware you can imagine. For instance, whether it is a washing machine, or a refrigerator, or a home-theatre system, Arduino’s new kit will make it come alive.

This self-described “plug-and-play toolkit” lets you connect mixed sensors and outputs together to develop several systems. Since all these modules are controlled via Arduino’s online IDE, the chance of failure is much lower.

The basic system includes several ATmega328P processors, the same ones found on the Arduino Uno, a series of sensors, and a Wi-Fi hub. Each sensor is run by its own ATmega. The kits are available at three levels. The entry-level $99 pack includes a Wi-Fi and motion hub, a button, a buzzer, and an LED. Next, the $249 intermediate kit includes all the important units to get started, while the Pro edition includes a Hall sensor, OLED display, and GPS (22 modules in total) for $499.

The toolkit is expected to be available for purchase next July.


U.S. has just given away the control over internet


U.S. surrenders key role for internet

The U.S. government has officially handed over the “address book” of the internet to the Internet Corporation for Assigned Names and Numbers (ICANN), effective October 1, 2016. In other words, ICANN has become a self-regulating non-profit international organization managing the Internet Assigned Numbers Authority, the system behind online “domains” such as .com. As of that date, ICANN is no longer under the watch of the US National Telecommunications and Information Administration.

While the U.S. and ICANN officials say that the change is part of a longstanding plan to “privatize” those functions, some critics complain about a “giveaway” that could threaten the internet’s integrity.

ICANN will take its input from academics, companies, governments and the public. While the American government didn’t really exercise its influence, it no longer has that option.

“This transition was envisioned 18 years ago, yet it was the tireless work of the global Internet community, which drafted the final proposal, that made this a reality,” said ICANN Board Chair Stephen D. Crocker. “This community validated the multistakeholder model of Internet governance. It has shown that a governance model defined by the inclusion of all voices, including business, academics, technical experts, civil society, governments and many others is the best way to assure that the Internet of tomorrow remains as free, open and accessible as the Internet of today.”

Christopher Mondini, ICANN’s vice president for global business engagement, said the change will have no impact on day-to-day internet use, and will assure the global community that the system is free from government regulation and interference.

“This is a new kind of governance model,” he added.

The handover follows an unsuccessful last-minute attempt by four states’ Republican attorneys general to block the transition arguing that it would allow authoritarian regimes to have greater control over the internet.

However, their temporary injunction request, which centered on the idea that the U.S. was “giving away government property” and required Congressional approval to give up ICANN, was rejected by a federal judge. The attorneys echoed their party’s concern that reducing U.S. control would open the internet to greater censorship by countries like China and Russia, which don’t value freedom of speech. They were also worried that the shift could threaten U.S. government domains like .gov and .mil (for government and military-related websites, respectively), which could be tampered with.

On the other hand, supporters of the transition are of the opinion that the move is not only harmless, but might prevent a far worse outcome. Since ICANN will still operate out of Los Angeles, they say that censorship-heavy countries don’t have any more power over the internet than they did before. If anything, a privately-managed domain system reduces the pressure to hand over control to the United Nations, where China and Russia would have some influence. There was also a fear that keeping ICANN under US oversight would encourage countries to set up their own domain systems and fragment the internet.

To sum up, while the U.S. no longer has the keys to the internet kingdom, the important thing to remember is that neither does anyone else.


Nokia brings 1Tbps Data Transfer Speed, 1,000 times faster than Google Fiber


Nokia to show off optical technology that’s 1,000 times faster

In an optical technology field trial of a new modulation technique carried out by Nokia Bell Labs, Deutsche Telekom T-Labs and the Technical University of Munich, the researchers were able to reach 1 terabit per second (Tbps) data transmission speed over optical cable, according to a statement published on Friday. The scientific breakthrough could extend the capability of optical networks to meet the ever-increasing data traffic demands of consumers and businesses.

1 Tbps is close to the theoretical maximum transfer rate of a fibre channel, referred to as the Shannon Limit, explained Nokia.
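For a sense of what the Shannon Limit means, the classic formula C = B · log2(1 + SNR) bounds the error-free rate of any channel. The sketch below uses illustrative round numbers (a 50 GHz channel at 20 dB SNR), not parameters from Nokia's trial.

```python
import math

# Shannon capacity: the maximum error-free data rate of a channel with
# bandwidth B (Hz) and linear signal-to-noise ratio SNR.
def shannon_capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

# An illustrative 50 GHz optical channel at 20 dB SNR (linear factor 100):
c = shannon_capacity_bps(50e9, 100)
print(f"{c / 1e12:.3f} Tbps")
```

Approaching this bound in practice is what shaping techniques like PCS are for.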

“To guarantee a high customer experience for future services we need optical transmissions with increased capacities, reach and flexibility over deployed fibre infrastructures,” said Deutsche Telekom CTO Bruno Jacobfeuerborn, in a statement.

The companies used a new modulation technique called Probabilistic Constellation Shaping to reach the high data rate.

“The trial of the novel modulation approach, known as Probabilistic Constellation Shaping (PCS), uses quadrature amplitude modulation (QAM) formats to achieve higher transmission capacity over a given channel to significantly improve the spectral efficiency of optical communications,” Nokia explains.

“PCS modifies the probability with which constellation points, the alphabet of the transmission, are used. Traditionally, all constellation points are used with the same frequency. PCS cleverly uses constellation points with high amplitude less frequently than those with lesser amplitude to transmit signals that, on average, are more resilient to noise and other impairments. This allows the transmission rate to be tailored to ideally fit the transmission channel, delivering up to 30 percent greater reach.”
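The shaping idea in the quoted passage can be sketched in a few lines: low-amplitude constellation points are drawn more often, here via a Maxwell-Boltzmann-style weighting. The amplitude levels and shaping factor are illustrative choices, not values from Nokia's system.

```python
import math

# Toy sketch of probabilistic shaping: weight constellation amplitudes by
# p(a) ~ exp(-v * a^2), so low-energy points are transmitted more often.
amplitudes = [1, 3, 5, 7]            # amplitude levels in one 64-QAM quadrant
v = 0.05                             # shaping factor (illustrative)
weights = [math.exp(-v * a * a) for a in amplitudes]
probs = [w / sum(weights) for w in weights]
for a, p in zip(amplitudes, probs):
    print(f"amplitude {a}: probability {p:.3f}")
# Lower amplitudes come out more probable, cutting average transmit power
# for the same symbol rate (or buying rate/reach at the same power).
```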

According to the researchers, the speed was fast enough to download the entire Game of Thrones series in high definition within one second.

Marcus Weldon, president of Nokia Bell Labs and Nokia CTO, said: “Future optical networks not only need to support orders of magnitude higher capacity, but also the ability to dynamically adapt to channel conditions and traffic demand. PCS offers great benefits to service providers and enterprises by enabling optical networks to operate closer to the Shannon Limit to support massive data centre interconnectivity and provide the flexibility and performance required for modern networking in the digital era.”

Nokia Bell Labs will present the results at an industry conference in Dusseldorf, Germany on September 19.

“Information theory is the mathematics of digital technology, and during the Claude Shannon centenary year 2016 it is thrilling to see his ideas continue to transform industries and society,” added Professor Gerhard Kramer of the Technical University of Munich. “PCS, an idea that won a Bell Labs Prize, directly applies Shannon’s principles and lets fibre optic systems transmit data faster, further, and with unparalleled flexibility.”

Earlier this year, researchers at University College London achieved speeds of 1.25 Tbps in their own optical breakthrough.


Reliance Jio storms India, offers unlimited Broadband Internet at 60Mbps for just $7 per month


Reliance Jio Fiber Broadband Service plans unlimited Internet at 60Mbps for just $7 a month for Indians

While the world surfs the Internet at around 50 Mbps on average, Indians fare poorly at 3 to 5 Mbps download speeds. In fact, India ranks at the bottom of the ladder in the gigabit race compared to Western countries, South Korea, and Japan. Not anymore! After launching its free-voice-call mobile service, Reliance Jio has now announced a Fiber Broadband Service for Indians.

Reliance Jio, which is owned by India’s richest man, Mukesh Ambani, has unleashed a brutal price war in India’s Internet market to take on the existing players. The service from Jio is set to open to all in the coming days as the company finishes its testing phase. The company will announce the tariff plans for its broadband service just as it did with its 4G services.


The Fiber plans would be of three types: Silver, Gold and Platinum. While the base price starts at just $7 per month, there would be a range of plans based on speed and data-cap volume. Do note that the company would offer 100GB of free data under the base plan. For the first 90 days, free data and access to Jio premium apps such as JioTV, JioCinema, JioBeats, etc. would be provided.


You can see the plans for Jio Fiber Broadband in the table. We recommend that you take this information with a grain of salt, as there is no official confirmation of the plans from Reliance Jio itself. Still, you can get an idea of how the plans would be priced from the table.

Reliance will start its Fiber Broadband service on a trial basis in the commercial capital of Mumbai in a month’s time. After testing it in Mumbai, the company plans to expand the service to other cities.

Reliance has said that customers interested in the Fiber plans would be able to sign up for Jio Fiber online or at the nearest Jio Store. First-time users would be provided a Jio Fiber-compatible router priced below $90. Subscribers would pay that as a one-time charge for the router, and no additional charge would be levied on users under the Jio Fiber Preview Offer for the first thirty days.

If Reliance’s foray into the Internet goes well, India could soon break into the ranks of countries offering the fastest data speeds to their citizens. Whether Reliance will succeed in its disruptive practice of offering high-quality service nearly for free remains to be seen.


1.9 Gigabit Per Second: Fastest Mobile 4G Network Speed Record Broken

'Record Breaking' Internet Speed of 1.9 Gbps over 4G Mobile Network achieved

Finnish company tests fastest mobile internet speed

In this age of the internet, where we have moved from 3G to 4G networks, a new record has been set for the world’s fastest 4G mobile internet speed, according to a network operator.

The firm Elisa, a mobile carrier in Finland, in collaboration with Chinese telecommunications giant Huawei, has broken the record for the fastest speeds reached over a 4G network. The two companies were able to achieve a speed of 1.9 gigabits per second (Gbps) on a test network, said to be the fastest ever recorded on a commercial device.

“This new speed record is a step towards the 5G network and also an excellent indication of all the opportunities the 4G network still has to offer. The speeds that the 4G network offers are continuously increasing and, possibly in the next few years, we will even be able to offer mobile data connections of several gigabits per second to our customers,” said Sami Komulainen, Vice President of mobile network services at Elisa.

According to the BBC, hypothetically, this 1.9Gbps speed could download a Blu-ray film in just 44 seconds. By comparison, Elisa’s fastest commercial network is 300 megabits per second, not even a sixth as fast.
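The BBC's figure checks out with simple arithmetic. The film size below is our assumption (a typical HD Blu-ray movie runs to roughly 10 gigabytes), not a number from the report.

```python
# Check the 44-second Blu-ray download figure quoted above.
speed_bps = 1.9e9                    # 1.9 gigabits per second
film_bytes = 10e9                    # assumed ~10 GB HD film
seconds = film_bytes * 8 / speed_bps # bytes -> bits, then divide by rate
print(f"{seconds:.0f} seconds")      # ~42 s, in line with the quoted 44
```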

Even though the record may have been set on a special Elisa-only network, technology commentators say the only true speed test is on a publicly available network.

Komulainen explained why the speed test was important: “A speed of almost two gigabits may seem unheard-of, and many people are wondering if such speeds are even needed in everyday use. However, there will be more and more demand for high-speed connections in the future as, for example, 360-degree videos and virtual reality applications become more popular.”

While speeds like that could revolutionize the tech industry, they have little significance in real-world applications: “Deploying a network that can support 1.9Gbps doesn’t mean customers will get 1.9Gbps mobile broadband,” said Nick Wood, an assistant editor at Total Telecom.

And why is that? “Because that network capacity has to be shared among customers. In reality, customers are likely to experience a modest improvement in overall speed and reliability, which is great, but doesn’t make for exciting headlines the same way that 1.9 Gbps does,” he added.

Elisa has stated that it plans to bring a premium 1Gbps network to Finland within the next two years. Vodafone Germany, in contrast, plans to get there more quickly: the operator has said it will offer 1 Gbps on its 4G network by the end of 2016.


MIT’s New MegaMIMO 2.0 Wi-Fi Technology Is Three Times Faster Than Normal Wi-Fi


MIT’s MegaMIMO 2.0 provides 3x faster Wi-Fi and longer range to users

A team of researchers from the MIT Computer Science and Artificial Intelligence Lab (CSAIL) has created MegaMIMO 2.0, a novel Wi-Fi technology with double the range of existing Wi-Fi and three times the speed.

We’ve all experienced the frustration of trying to load a web page on our phone at a busy conference with an overloaded network. MegaMIMO 2.0 (MIMO stands for Multiple-Input, Multiple-Output) fixes the big problem at the heart of the Wi-Fi spectrum: congestion.

Researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) have addressed the congestion issues with MegaMIMO 2.0, which can transfer data over Wi-Fi more than three times faster than current options, with double the range.

Researchers explained the findings in a paper released this week. The key to the MegaMIMO 2.0 system is coordinating multiple access points at the same time, on the same frequency, without creating interference. The new system allows users to bounce from one router to another without interruption, expanding network capacity for enterprises, sports stadiums, and conference centers.

“In today’s wireless world, you can’t solve spectrum crunch by throwing more transmitters at the problem, because they will all still be interfering with one another,” Ezzeldin Hamed, PhD student and lead author of the paper, told MIT News. “The answer is to have all those access points work with each other simultaneously to efficiently use the available spectrum.”

According to the MIT News report, “MegaMIMO 2.0’s hardware is the size of a standard router, and consists of a processor, a real-time baseband processing system, and a transceiver board.” The software is a signal-processing algorithm that allows multiple independent access points to transmit data on the same piece of spectrum to multiple, independent mobile devices, without interfering with each other, the report stated.
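Why this coordination matters can be seen with a toy signal model: two transmitters on the same frequency reinforce each other only when their carrier phases are aligned, and cancel when they are opposed. This sketch is purely illustrative and is not from the MegaMIMO paper.

```python
import math

# Peak amplitude received from two equal-power transmitters on the same
# carrier frequency, as a function of the phase offset between them.
def peak_received_amplitude(phase_offset_rad, samples=1000):
    peak = 0.0
    for i in range(samples):
        t = 2 * math.pi * i / samples
        peak = max(peak, abs(math.sin(t) + math.sin(t + phase_offset_rad)))
    return peak

print(f"phases aligned: {peak_received_amplitude(0.0):.2f}")      # ~2.0, constructive
print(f"phases opposed: {peak_received_amplitude(math.pi):.2f}")  # ~0.0, destructive
```

Keeping many independent access points phase-synchronized tightly enough to stay in the "aligned" regime at each receiver is the hard signal-processing problem the system solves.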

The researchers tested MegaMIMO’s performance by building a mock conference room and strapping four laptops to Roomba robots that wandered around the space. They found that the system could increase the devices’ data-transfer speed by 330%.

The advent of MegaMIMO 2.0 is especially important given the rise of the Internet of Things, as well as AR and VR, MIT’s Sharony said. More connected devices means a need for more bandwidth, which distributed MIMO could provide.

“This will increase the total capacity of the network,” he added. “The user would not even know about it. The magic happens in the software and hardware, coordinating multiple routers and devices.”

This is an early iteration of the MegaMIMO 2.0 system. According to MIT News, in the future the team is hoping to “coordinate dozens of routers at once,” providing even more speed.

MIT’s work will likely lead to further improvements in hardware and software to coordinate routers together, Sharony said. “It will be a good push for the industry in increasing the capacity even further, especially in the enterprise, where you have many devices sharing the whole spectrum,” he added.

MIT hopes to bring MegaMIMO 2.0 to market in a couple of years.


Internet Is Finally Free : United States To Handover DNS System From October 1, 2016


After years of demurral, the United States is to hand over control of DNS to ICANN effective this October

We had been hearing for years that the DNS would be handed over to ICANN, but it never came to pass. However, after years of refusal, the United States of America is finally handing over the reins of the Internet to ICANN.

In an announcement made on August 16, 2016, the U.S. National Telecommunications & Information Administration (NTIA) said it was ready to surrender control over the Internet domain name system (DNS) infrastructure to the Internet Corporation for Assigned Names and Numbers (ICANN), a non-profit organization effective October 1, 2016. The DNS is basically a directory for internet-connected devices that helps translate domain names to numerical IP addresses. In other words, it is responsible for holding and pairing web addresses or its URL to its respective servers.

DNS is a foundational piece of the internet, which has been under the control of the U.S. government for the last 20 years. Even though the terms of the transition of DNS to ICANN’s supervision had been agreed upon in 2014, it was only recently that the NTIA finally signed off on the agreement. The transition still requires both parties to take the necessary steps: the U.S. government cannot make the transfer until ICANN is ready.

“The IANA (Internet Assigned Numbers Authority) stewardship transition represents the final step in the US government’s long-standing commitment, supported by three Administrations, to privatize the Internet’s domain name system,” NTIA chief Lawrence Strickling said this week.

“For the last 18 years, the United States has been working … to establish a stable and secure multi stakeholder model of Internet governance that ensures that the private sector, not governments, take the lead in setting the future direction of the Internet’s domain name system,” he added.

“To help achieve this goal, NTIA in 1998 partnered with ICANN, a California-based non-profit, to transition technical DNS coordination and management functions to the private sector. NTIA’s current stewardship role was intended to be temporary.” However, it does have implications for how DNS is perceived internationally.

The handover won’t change anything for the 3.5 billion people connected to the internet. That’s because U.S. control has been largely administrative: it doesn’t get involved on a day-to-day basis.

“This is important because there has always been a bit of nervousness from the rest of the global community of one entity having considerable power. The United States has been very fair with that power and responsibility, with the exception of the U.S. blocking .xxx at ICANN based on prudish American values and political maneuvering,” said Joseph Lorenzo Hall, chief technologist for the Center for Democracy & Technology.

So, after 18 years, it is not a huge surprise that the supervision of the DNS is moving this way. While some politicians, like U.S. President Barack Obama, have shown support for the IANA transition to ICANN, senators like Ted Cruz and Mike Lee, as well as Rep. Sean Duffy, are known to have raised their concerns with the administration in a letter. The Republicans deride the transfer as a “planned Internet giveaway.”

The shift will likely go unnoticed by everyday users and businesses despite its political complications. However, businesses should remember the differences between the NTIA and ICANN, Hall said.

“For businesses, it should be business as usual; they need to understand that ICANN is (at least for the near term) a California corporation and under US legal jurisdiction,” Hall said.

Additionally, as the U.S. steps away from DNS supervision, the shift to ICANN could likely win the U.S. some support in the international community.

“I do think everyone will get benefits from the ICANN/IANA transition to a global stakeholder community, including the business community, as it is a solid sign that the US is serious about globalization of the internet and not trying to maintain what we might call digital colonialism,” Hall said.
