NVIDIA unveils the Quadro RTX 4000 GPU with 2,304 CUDA Cores, 8GB GDDR6 memory

NVIDIA announces mid-range Quadro RTX 4000 with Turing GPU, 2304 Cores and 8 GB VRAM

NVIDIA announced its new Quadro RTX 4000 graphics card for workstation professionals at the annual Autodesk University conference in Las Vegas yesterday. It is the company’s first mid-range professional GPU in the Quadro RTX family powered by the NVIDIA Turing architecture and the NVIDIA RTX platform.

The Quadro RTX family already consists of the Quadro RTX 8000, RTX 6000, and RTX 5000. The single-slot design of the new Quadro RTX 4000 allows the GPU to fit in a variety of workstation chassis.

“Meet today’s demanding professional workflows with GPU accelerated ray tracing, deep learning, and advanced shading. The NVIDIA Quadro RTX 4000, powered by the NVIDIA Turing architecture and the NVIDIA RTX platform, delivers best-in-class performance and features in a single-slot PCI-e form factor. Design and create like never before with faster time to insight and faster time to solution,” NVIDIA said.

According to specifications provided by NVIDIA, the graphics card has 36 RT cores to enable real-time ray tracing of objects and environments with physically accurate shadows, reflections, refractions, and global illumination.

Further, it packs 2,304 CUDA cores, 288 Turing Tensor Cores for 57 TFLOPS of deep learning performance, and comes with 8GB of ultra-fast GDDR6 memory that provides over 40 percent more memory bandwidth than the previous generation Quadro P4000. It also supports video creation and playback for multiple video streams with resolutions up to 8K.

In addition, it provides hardware support for VirtualLink, improved performance with VR applications, including Variable Rate Shading, Multi-View Rendering and VRWorks Audio, and more.

In terms of performance, the Quadro RTX 4000 offers up to 43T RTX-OPS and 6 Giga Rays/s. FP32 compute performance is estimated at 7.1 TFLOPS, which translates into a boost clock of around 1540 MHz. The TU106-based Quadro has a 160W rated TDP and uses a single-slot cooling solution, which should fit easily in any workstation case. It draws power from a single 8-pin PCIe power connector.
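
The quoted FP32 figure lines up with the usual peak-throughput formula for a GPU (CUDA cores × 2 FLOPs per fused multiply-add × clock). A quick sanity check, assuming the ~1540 MHz boost clock estimated above:

```python
cuda_cores = 2304
ops_per_core_per_clock = 2     # one fused multiply-add counts as 2 floating-point ops
boost_clock_hz = 1540e6        # ~1540 MHz boost clock (estimate from the article)

peak_fp32_tflops = cuda_cores * ops_per_core_per_clock * boost_clock_hz / 1e12
print(f"{peak_fp32_tflops:.1f} TFLOPS")  # ≈ 7.1 TFLOPS, matching the spec
```

The same formula applied the other way is presumably how the ~1540 MHz clock was inferred from the published 7.1 TFLOPS number.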

Ernesto Pacheco, Director of Visualization at the global architectural firm, CannonDesign, which is one of the early users of the Quadro RTX 4000 said, “Our designers need tools that unleash their creative freedom to design amazing buildings. Real-time rendering with the new Quadro RTX 4000 is unbelievably fast and smooth right out of the gate — no latency and the quality and accuracy of the lighting is outstanding. It will enable us to accelerate our workflow and let our designers focus on the design process without the technology slowing them down.”

The Quadro RTX 4000 will be available starting in December at an estimated price of $900. It will also be available from leading workstation manufacturers, such as Dell, HPE, and Lenovo, as well as NVIDIA’s authorized distribution partners in North America, Europe, and Asia-Pacific.

Indian researchers develop country’s first microprocessor ‘Shakti’

‘Shakti’ is India’s first indigenous microprocessor

Researchers at the Indian Institute of Technology Madras (IIT-M) have designed India’s first indigenous microprocessor, which will help reduce the country’s dependence on imported systems in the communications and defense sectors.

The processor, called Shakti, has been developed at an outlay of about Rs 11 crore and is said to be on par with its international counterparts. It can be used in various consumer electronic devices, mobile computing devices, low-power wireless systems, and networking systems.

“The microprocessor can be used by others as it is on par with International Standards”, a statement from IIT-M said. “The microprocessor will not get outdated as it is one of the few ‘RISC V Microprocessors’ in the world now.”

The ‘SHAKTI’ family of microprocessors was fabricated at the Semi-Conductor Laboratory of the Indian Space Research Organisation (ISRO) in Chandigarh, making it the first ‘RISC-V Microprocessor’ to be designed and completely made in India, IIT-M said.

The microprocessor’s design is based on an open Instruction Set Architecture (ISA) named RISC-V, noted Professor Kamakoti Veezhinathan, Lead Researcher, Reconfigurable Intelligent Systems Engineering (RISE) Laboratory, Department of Computer Science and Engineering at IIT-M. This means anyone can design, manufacture, and sell chips based on this architecture.

The most critical aspect of such an indigenous design, development, and fabrication approach is that it reduces the risk of deploying systems that may be affected by backdoors and hardware Trojans.

This achievement will have huge implications once systems based on Shakti processors are adopted by strategic sectors such as defense, nuclear power installations, and government agencies and departments.

“With the advent of Digital India, there are several applications that require customizable processor cores. The 180nm fabrication facility at SCL Chandigarh is crucial in getting these cores manufactured within our country,” Veezhinathan said.

The project originally began in 2011 and was later granted funding of Rs 11 crore ($1.5 million) by the Government of India in 2017.

“We have proved that a microprocessor can be designed, developed and fabricated in India. This is important for the country. All the countries would like to own the design part. Even from the security point of view, indigenous design gains importance,” noted Veezhinathan.

“The impact of this completely indigenous fabrication is that India has now attained independence in designing, developing and fabricating end-to-end systems within the country, leading to self-sufficiency.”

Veezhinathan and his team are also working on a microprocessor for supercomputers, named ‘Parashakti,’ which is likely to be released by the end of the year.

IBM to buy Red Hat in a deal valued at $34 billion

IBM purchases software company Red Hat in a $34 billion deal

IBM has agreed to purchase U.S. open-source software company Red Hat for about $33.4 billion, the largest acquisition in IBM’s history and the third-biggest in the history of U.S. tech. The deal is aimed at expanding the company’s subscription-based software offerings in the fast-growing and profitable cloud market.

In an agreement expected to be finalized in the second half of 2019, IBM will purchase all of the issued and outstanding common shares of Red Hat for $190 per share in cash, a 63% premium over the company’s Friday closing share price.
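
As a quick check of those figures, the stated premium implies where Red Hat’s stock closed on that Friday (a back-of-the-envelope calculation, not an official quote):

```python
offer_per_share = 190.00   # IBM's all-cash offer per common share
premium = 0.63             # stated 63% premium over Friday's close

# offer = close * (1 + premium), so the implied prior close is:
implied_friday_close = offer_per_share / (1 + premium)
print(f"${implied_friday_close:.2f}")  # ≈ $116.56
```

That implied close of roughly $116–117 per share, multiplied across Red Hat’s outstanding shares, is what produces the ~$33.4 billion equity value of the deal.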

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” said Ginni Rometty, Chairman and Chief Executive Officer of IBM, in a press release announcing the deal.

“IBM will become the world’s number one hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.

“Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next 80 percent is about unlocking real business value and driving growth. This is the next chapter of the cloud. It requires shifting business applications to the hybrid cloud, extracting more data and optimizing every part of the business, from supply chains to sales.”

Paul Cormier, Red Hat’s vice president and president of products and technologies said, “Today is a banner day for open source. The largest software transaction in history and it’s an open source company. Let that sink in for a minute. We just made history.”

IBM said Red Hat will continue to build and improve its current partnerships, including those with major cloud providers such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud, Alibaba, and more, in addition to the IBM Cloud. At the same time, Red Hat will benefit from IBM’s hybrid cloud and enterprise IT scale as it expands its open-source technology portfolio to businesses globally.

“IBM is committed to being an authentic multi-cloud provider, and we will prioritize the use of Red Hat technology across multiple clouds,” said Arvind Krishna, Senior Vice President, IBM Hybrid Cloud. “In doing so, IBM will support open source technology wherever it runs, allowing it to scale significantly within commercial settings around the world.”

The combination of IBM and Red Hat should quicken cloud adoption by large customers who still must connect old technology with new, Krishna added.

Once the acquisition deal is closed, Red Hat will become a unit of IBM’s Hybrid Cloud division, with the goal of maintaining the “independence and neutrality” of Red Hat’s open-source development heritage.

Jim Whitehurst, President and CEO of Red Hat, will continue to lead the company along with its current management team. Whitehurst will also join IBM’s senior management team and report to Rometty.

“Open source is the default choice for modern IT solutions, and I’m incredibly proud of the role Red Hat has played in making that a reality in the enterprise,” said Whitehurst. “Joining forces with IBM will provide us with a greater level of scale, resources, and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience – all while preserving our unique culture and unwavering commitment to open source innovation.”

IBM has plans to maintain Red Hat’s headquarters, facilities, brands, and practices.

“IBM’s commitment to keeping the things that have made Red Hat successful – always thinking about the customer and the open source community first – make this a tremendous opportunity for not only Red Hat but also open source more broadly,” said Cormier.

“Since the day we decided to bring open source to the enterprise, our mission has remained unchanged. And now, one of the biggest enterprise technology companies on the planet has agreed to partner with us to scale and accelerate our efforts, bringing open source innovation to an even greater swath of the enterprise.”

More top-performing CEOs now have engineering degrees than MBAs

Most of the top-performing CEOs of 2018 are ones with engineering degrees and not MBA degrees

An annual ranking released by Harvard Business Review (HBR) on Monday showed that more of the top-performing CEOs around the globe hold an engineering degree than the finance- and strategy-focused MBA, the Washington Post reports. This is the second year in a row that the ranking has found the world’s best-performing CEOs are more likely to hold engineering degrees.

For those unaware, HBR’s ranking examines the change in market capitalization and total shareholder return, adjusted by both country and industry, over the entire tenure of CEOs in the global S&P 1200, and adds in data that rates the company’s performance on environmental, social and governance issues during the CEO’s time in charge.

According to the HBR report, 34 of the top 100 CEOs in 2018 had an engineering degree, in comparison to 32 who had an MBA degree, while 8 of the top CEOs had both degrees.

HBR started tracking the degree question in 2014, and 2017 was the first year in which engineers outnumbered MBAs: 32 CEOs held engineering degrees versus 29 with MBAs.

While the top position does not belong to an engineer, 10 of this year’s top 20 ranked CEOs hold an engineering degree, including No. 2-ranked Nvidia CEO Jensen Huang, compared with four who hold MBAs.

So, why is there a sudden increase in the number of technology CEOs? The most likely explanation is that the technology industry has seen tremendous growth in recent years. For instance, the list had only 8 technology CEOs in 2014, but this number increased to 22 in 2018. The other reason could also be the methodology used by HBR.

According to Dan McGinn, a senior editor at HBR, the company had made a change to its methodology in 2015, when it added the ESG (environmental, social and governance) component to its analysis and the list became somewhat more populated by European companies.

The data shows that European CEOs hold more engineering degrees, while a majority of U.S. CEOs hold MBA degrees, so these regional differences in educational background may contribute to the shift.

Holding an engineering degree doesn’t mean its skills cannot be applied in management roles. When HBR first started asking about educational degrees, McGinn said, it spoke with management experts who noted that engineering builds problem-solving, analytical skills, and structured methods of thinking.

“That has obvious advantages if you’re running an I.T. company, but it probably also has advantages if you’re trying to problem-solve in everyday business situations,” McGinn said.

Jeffrey Sprecher, the CEO of Intercontinental Exchange, which owns the New York Stock Exchange, and one of the CEOs on this year’s list, holds both an MBA and an engineering degree. In a video posted on Facebook by his alma mater, the University of Wisconsin, he said that he has never worked in a job related to his chemical engineering degree.

Still, he said, it “taught me about problem-solving, and complex systems and the way things relate to each other, and business is really just that.”

Others believe we could one day see more CEOs with that background, and more people coming out of school with those degrees, considering that technology and digitization have become increasingly important for IT and non-IT companies alike.

Robert Sutton, a professor of management science and engineering at Stanford University, said that it’s becoming more important for executives to have some knowledge of computer science.

“This is not a surprise given that technology, especially computer science, is so important,” he said. “Every organization I talk to says they’re doing ‘digitization.’ ”

While HBR’s ranking looks at the educational backgrounds of only the 100 top-performing CEOs around the world, educational data for CEOs in the S&P 500 and Fortune 500 show a slightly different view.

An annual report generated by executive search firm Crist Kolder Associates shows that 26.4 percent of those CEOs had engineering degrees in 2018, down slightly from 27.4 percent in 2017 and 28.4 percent in 2016.

Peter Crist, the firm’s chairman, said “it’s a terrible generalization, but I think boards will look at people with engineering degrees and basically make the assumption that they’re smart. I constantly caution them on ‘has that lent to their accomplishments?’ Have they grown a business? Don’t get hung up on the pedigrees.”

Steve Mader, a retired vice chairman of the search firm Korn Ferry, said he hasn’t noticed a big change in boards’ sensitivity toward CEOs with engineering backgrounds, though such backgrounds are openly accepted.

According to him, the relationships that are built during an MBA program are still more valuable. However, once the discussion turns to a CEO job, what’s learned in either degree really isn’t important.

“By the time you’re selecting a chief executive officer, there are far more relevant issues to compare and contrast,” he said. “The degree virtually never comes up.”

Real-time Google Translate available on all Google Assistant headphones

Real-time translation is coming to all Google Assistant-optimized headphones and Android phones

When Google launched the Google Assistant-enabled Pixel Buds last year, one of the headphones’ highlighted features was the ability to translate in real time using Google Translate. Until now, however, the feature was exclusive to Pixel Buds paired with a Google Pixel smartphone.

Google is now opening up the feature to more users and bringing real-time translation capabilities to all Google Assistant-powered headphones, according to a report from Droid-Life.

Google has updated the support page for the Pixel Buds that reads “Google Translate is available on all Assistant-optimised headphones and Android phones. The Google Assistant on Google Pixel Buds is only available on Android and requires an Assistant-optimised Android device and data connection.”

A video demonstration from the 2017 Made by Google event shows how Google Translate works on the Pixel Buds.

Headphones equipped with the Google Assistant include the Pixel Buds, Bose QuietComfort 35 II, Sony WI-1000X, Sony WH-1000XM2, Sony WH-1000XM3, JBL Everest 710GA, JBL Everest 110GA, OnePlus Bullets, and a few more.

If you are interested in trying out the feature, all you need to do is say “Hey Google, help me speak (name of the language)” to the Google Assistant on your earphone or device.

World’s fastest camera captures images at 10 trillion frames per second

‘World’s fastest camera’ that freezes images at 10 trillion frames a second is unveiled

Researchers from the Institut national de la recherche scientifique (INRS), part of the Université du Québec, and the California Institute of Technology (Caltech) have developed what they claim is the world’s fastest camera, capable of capturing 10 trillion (10^13) frames per second. By comparison, an average smartphone camera manages around 30 frames per second. The project was led by Caltech’s Lihong Wang along with INRS professor and ultrafast-imaging specialist Jinyang Liang and their colleagues.
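
To put that rate in perspective, simple arithmetic (a back-of-the-envelope sketch, not a figure from the paper) converts it into the time step between consecutive frames and the speedup over a phone camera:

```python
tcup_fps = 10e12        # 10 trillion frames per second
phone_fps = 30          # typical smartphone video frame rate

# Time between consecutive frames, converted from seconds to femtoseconds.
frame_period_fs = 1e15 / tcup_fps
print(frame_period_fs)  # 100.0 -> one frame every 100 femtoseconds

# How many times faster T-CUP is than an ordinary phone camera.
speedup = tcup_fps / phone_fps
print(f"{speedup:.1e}")  # ~3.3e+11, i.e. hundreds of billions of times faster
```

A 100-femtosecond frame period is short enough that light itself travels only about 30 micrometers between frames, which is why the camera can resolve a propagating light pulse.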

Dubbed T-CUP, the new device is so quick that it literally makes it possible to freeze time, revealing phenomena such as light itself in extremely slow motion.

To build their camera, the researchers looked at compressed ultra-fast photography (CUP), a technique that can capture images at a speed of around 100 billion frames per second. T-CUP’s system is based on a femtosecond (one quadrillionth of a second) streak camera that also involves a data acquisition type used in applications such as tomography.

“We knew that by using only a femtosecond streak camera, the image quality would be limited,” said Professor Lihong Wang, the Director of Caltech Optical Imaging Laboratory (COIL) in a statement. “So to improve this, we added another camera that acquires a static image. Combined with the image acquired by the femtosecond streak camera, we can use what is called a Radon transformation to obtain high-quality images while recording ten trillion frames per second.”

According to the team, T-CUP has set a world record for real-time imaging speed and could be used to power a new generation of microscopes for biomedical, materials science, and other applications.

This camera represents a fundamental shift, making it possible to analyze interactions between light and matter at an unparalleled temporal resolution.

The very first time the ultrafast camera was used, it broke new ground by capturing the temporal focusing of a single femtosecond laser pulse in real time.

This process was recorded in 25 frames taken at an interval of 400 femtoseconds and detailed the light pulse’s shape, intensity, and angle of inclination.

“It’s an achievement in itself. But we already see possibilities for increasing the speed to up to one quadrillion (10^15) frames per second. Speeds like that are sure to offer insight into as-yet undetectable secrets of the interactions between light and matter,” says Jinyang Liang, the lead author of this work, who was an engineer in COIL when the research was conducted.

The findings of the research were published in the journal Light: Science & Applications.

Wi-Fi 6: The next generation of Wi-Fi connectivity to come next year

Wi-Fi Alliance rebrands 802.11 Wi-Fi standards for easy understanding

The Wi-Fi Alliance, the group that oversees the implementation of Wi-Fi, has announced a rebranding of the “802.11” Wi-Fi standards to make it easier to understand which Wi-Fi technology a device supports and which is used for a given connection.

With the newest Wi-Fi standard, 802.11ax, launching next year, the Wi-Fi Alliance will adapt its certification program to the new naming: 802.11ax will be rebranded as “Wi-Fi 6”. The earlier versions of the wireless data protocol will likewise be renamed.

“For nearly two decades, Wi-Fi users have had to sort through technical naming conventions to determine if their devices support the latest Wi-Fi,” said Edgar Figueroa, president and CEO of Wi-Fi Alliance. “Wi-Fi Alliance is excited to introduce Wi-Fi 6, and present a new naming scheme to help industry and Wi-Fi users easily understand the Wi-Fi generation supported by their device or connection.”

For those unaware, the current version is 802.11ac, and the versions prior to this were 802.11n, 802.11g, 802.11a, and 802.11b. So, if we start with the first version, which is 802.11b, the naming of the wireless data protocol would be as follows:

Wi-Fi 1 – 802.11b (1999)
Wi-Fi 2 – 802.11a (1999)
Wi-Fi 3 – 802.11g (2003)
Wi-Fi 4 – 802.11n (2009)
Wi-Fi 5 – 802.11ac (2014)
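
The mapping above amounts to a simple lookup table. A minimal Python sketch (the names and structure here are illustrative, not an official Wi-Fi Alliance API):

```python
# Generation names for the 802.11 wireless standards, as listed above.
# The Alliance officially names generations from Wi-Fi 4 onward;
# Wi-Fi 1-3 are informal, retroactive back-namings.
WIFI_GENERATIONS = {
    "802.11b":  ("Wi-Fi 1", 1999),
    "802.11a":  ("Wi-Fi 2", 1999),
    "802.11g":  ("Wi-Fi 3", 2003),
    "802.11n":  ("Wi-Fi 4", 2009),
    "802.11ac": ("Wi-Fi 5", 2014),
    "802.11ax": ("Wi-Fi 6", None),  # launch expected next year
}

def generation_name(standard: str) -> str:
    """Return the consumer-facing generation name for an 802.11 standard."""
    name, _year = WIFI_GENERATIONS[standard]
    return name

print(generation_name("802.11ax"))  # Wi-Fi 6
print(generation_name("802.11n"))   # Wi-Fi 4
```

Device makers are expected to surface these generation numbers in UI signal indicators, so a user sees “Wi-Fi 6” rather than “802.11ax”.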

The rebranding of the 802.11 Wi-Fi standards (above) makes it clear which wireless data protocol offers faster data rates and improved efficiency. For instance, it helps Wi-Fi users understand the difference between 802.11ax, 802.11ac, 802.11n, and so forth.

Wi-Fi 6 will introduce higher data rates, increased capacity, better performance in dense environments, and improved power efficiency. It is also said to deliver up to 11 Gbps of speed across three or more devices. Devices with next-gen Wi-Fi 6 support are expected to be released next year.

Source: Wi-Fi Alliance (1), (2)

Spotify cracks down on friends who share family plans

Spotify sends emails to Premium for Family subscribers asking for GPS verification

Music streaming giant Spotify has sent emails to some users of its “Premium for Family” plan in the US and Germany asking them to verify their home address via GPS location. Apparently, the move is an effort to reduce the number of friends sharing discounted plans intended for families rather than signing up for individual memberships.

According to Spotify’s Premium for Family Terms And Conditions, “all account holders must reside at the same address to be eligible for the Premium for Family Plan.” However, several Family subscriptions have reportedly been used by groups of friends.

For those unaware, Spotify’s Family Plan in the US lets customers pay $14.99 per month for up to five premium accounts. According to the company’s website, two to five people on each family plan must live at the same address. In fact, Spotify’s small print does mention that the family plan is available for “you and up to five people who reside at your same address.”

In other words, as per the rules, everyone should be living together in the same home as a family, and everyone’s location should be the same.

Users who don’t confirm their home addresses through GPS data risk losing access to the plan. “If you don’t confirm, you may lose access to the plan,” Spotify’s email stated.

Some Spotify users took to Twitter to complain that many modern families do not live at the same address, while others raised privacy concerns, questioning whether Spotify has the right to track individual listeners’ locations.

When reached for comment, a Spotify spokesperson said the verification request was a test in four markets, including the US. “Spotify is currently testing improvements to the user experience of Premium for Family with small user groups in select markets. We are always testing new products and experiences at Spotify, but have no further news to share regarding this particular feature test at this time,” the spokesperson said. Apparently, Spotify has now concluded this test.

Demonoid Goes Down Due To Technical Problems, While The Owner Is Missing

Demonoid is a popular BitTorrent tracker and website that includes file-sharing-related discussion forums. The semi-private tracker has now been down for many days. Moreover, the people working on the website haven’t heard from its owner for around two months.

Demonoid Goes Down; Major Issues

More and more countries are classifying the use and sharing of pirated content as illegal online activity. Demonoid is an old torrent community that has witnessed many problems, including media-industry pressure, blocking orders, hosting issues, lawsuits, and police investigations.

It is worth noting that Demonoid has been down before, for weeks and even months, but the website always came back as if everything were normal.

During the past few weeks, the torrent community has gone through some rough patches again. The website is facing many technical problems that are taking a long time to fix, and Demonoid’s tracker sub-site has also gone offline, making the situation even worse.

Lastly, the site’s owner, Deimos, is missing as well. In fact, staff members haven’t heard from him for around two months, though people working at Demonoid suggest he is caught up in personal circumstances.

Demonoid Goes Down; Will It Come Back? 

Demonoid is a well-regarded torrent community whose user base grows every year. Consequently, the platform is relatively difficult to maintain, and there is always one technical problem or another. The current downtime is a major setback, but the site will most likely return. That said, there is still no official announcement as to when the website will be back to normal.

Do share your thoughts and opinions on the technical problems that websites like Demonoid and Pirate Bay face.

Google admits third-party developers can access users’ Gmail inbox

Google still allows third-party developers to scan your Gmail accounts

In July this year, we reported how Google allows third-party app developers access to users’ private messages in Gmail.

Now, the search giant has officially admitted in a letter to US lawmakers that it allows third-party apps to access and share data from Gmail accounts, even though Google itself stopped scanning Gmail for ad-targeting purposes last year.

“Developers may share data with third parties so long as they are transparent with the users about how they are using the data,” wrote Susan Molinari, Vice President of Public Policy and Government Affairs for the Americas at Google in the letter sent to the US Senators in July, which was made public on Tuesday.

Molinari also reiterated that Google employees can read Gmail users’ email content only in cases where a user has given consent, or where the content is required to be inspected by the company for security purposes, such as investigating a bug or abuse.

She wrote that the company ensures that the relevant privacy policy is “easily accessible to users to review before deciding whether to grant access.”

In the letter, the company said that it thoroughly vets any third parties that are granted access, manually reviews their privacy policies, and uses computer tools to detect any significant changes in the apps’ behavior.

Suzanne Frey, Google’s director of security, trust, and privacy explained in a blog post in July that Google grants certain permissions to third-party apps and services in order to enhance the experience for Gmail users.

“We make it possible for applications from other developers to integrate with Gmail – like email clients, trip planners and customer relationship management (CRM) systems – so that you have options around how you access and use your email,” Ms Frey wrote.

Any non-Google app first goes through a “multi-step review process” before accessing a person’s Gmail messages that includes assessing the app’s privacy policy to ensure that it’s a legitimate app, Ms Frey said.

“We strongly encourage you to review the permissions screen before granting access to any non-Google application,” she added.

If you do not want third-party apps scanning your emails, it is suggested that you uninstall extensions you don’t trust, use apps only from reputable developers, or choose not to install such apps at all.

Source: WSJ
