
Hardware & Machines

Nvidia GeForce RTX 2080 Ti Release Postponed To September 27th


Nvidia delays the launch date of GeForce RTX 2080 Ti by a week

Nvidia has decided to postpone the release date of its upcoming graphics card, the GeForce RTX 2080 Ti, to September 27th. The company originally planned to release the card on September 20th alongside the GeForce RTX 2080.

The delay in availability of the GeForce RTX 2080 Ti by a week was confirmed by a moderator on the official Nvidia GeForce forums. This means PC gamers will need to wait for an additional week to get their hands on the GeForce RTX 2080 Ti.

“Hi Everyone, Wanted to give you an update on the GeForce RTX 2080 Ti availability. GeForce RTX 2080 Ti general availability has shifted to September 27th, a one week delay. We expect pre-orders to arrive between September 20th and September 27th,” wrote the moderator on the forum post.

The week-long delay of the RTX 2080 Ti could possibly be due to high demand and short supply, since the graphics card has been available for pre-order from the moment it was announced last month, although Nvidia has not confirmed this yet.

However, there is no change to the general availability of the GeForce RTX 2080, confirmed the moderator. “There is no change to GeForce RTX 2080 general availability, which is September 20th. We’re eager for you to enjoy the new GeForce RTX family! Thanks for your patience,” the moderator added.

In other words, the GeForce RTX 2080 is still on schedule and will be available on September 20th. This means those who have pre-ordered the RTX 2080 can expect to get their card at its initial launch date.

The GeForce RTX 2080 Ti Founders Edition is priced at $1,199, while non-Founders Edition cards will be available for $999.


Nvidia RTX 2080 Ti is 35% faster than GTX 1080 Ti, reveals leaked 3DMark score


Nvidia’s RTX 2080 Ti outperforms the GTX 1080 Ti in a leaked benchmark score

The leaked 3DMark Time Spy benchmark score of Nvidia’s upcoming RTX 2080 Ti reveals that the RTX 2080 Ti offers a 35% increase in performance compared to the GTX 1080 Ti Founders Edition, according to a tweet posted by Videocardz.

The RTX 2080 Ti scored 12,825 in 3DMark Time Spy, a demanding benchmark that tests DirectX 12 performance, compared with the GTX 1080 Ti, which scored about 9,500. If the leaked image is genuine, the 12,825 score suggests a performance increase of roughly 35%. If these numbers hold true, it will be an impressive showing from Nvidia's new generation of graphics cards.
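
As a quick sanity check, the quoted uplift can be recomputed from the two scores above; this is just arithmetic on the leaked figures, not an independent benchmark.

```python
# Recompute the uplift from the leaked Time Spy scores quoted above.
rtx_2080_ti_score = 12_825   # leaked RTX 2080 Ti score
gtx_1080_ti_score = 9_500    # approximate GTX 1080 Ti Founders Edition score

uplift = (rtx_2080_ti_score - gtx_1080_ti_score) / gtx_1080_ti_score
print(f"RTX 2080 Ti vs GTX 1080 Ti: +{uplift:.0%}")  # -> +35%
```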

In comparison, the GTX 1080 Ti outperformed the GTX 980 Ti by 79% in the same test when it came out last year. This also puts the RTX 2080 just under the TITAN XP.

Since this is just an early benchmark leak, we recommend taking this information with a pinch of salt. Nvidia's upcoming RTX 2080 Ti releases on September 20. Keep watching this space for more updates as this is a developing story!


Nvidia launches GeForce RTX 2000 Series With 6x Performance Boost


Nvidia RTX 2000 Series is 6 times more powerful than the GTX 10 series

The much-awaited Nvidia RTX 2000 series is finally here. At Gamescom yesterday, Nvidia launched the GeForce RTX 2080, a GPU it claims is up to six times more powerful than its predecessor.

The card is built on Nvidia's all-new Turing architecture, which comfortably outperforms its popular predecessor, the GTX 1080. The RTX branding refers to real-time ray tracing, which promises to bring cinema-quality lighting, shadows and reflections to games.

The secret behind the new cards is an on-board deep learning processor, which essentially gives developers a switch to control ray-tracing effects. It looks like Nvidia has become the first to crack photo-realistic lighting in real time.

Nvidia has also announced two more GPUs, the RTX 2070 and the RTX 2080 Ti. Pre-orders have already opened, but the cards won't actually be available until September 20. The RTX 2080 is designed with overclocking in mind, and even the RTX 2070 will offer more ray-tracing performance than Nvidia's Titan Xp card.

According to Nvidia CEO Jensen Huang, the company has introduced a new computing model that is a monster in terms of performance. Huang showed off a number of demonstrations comparing existing cards with the new rendering techniques of the RTX series, which seemed to win over the audience.

Real-world performance remains to be seen, however. Nvidia is promising real-time ray tracing in games such as Shadow of the Tomb Raider, as well as gains in lighting and other effects in Battlefield V and Metro Exodus, and says that even more games will get ray-tracing support.

The company has also revealed 21 games with RTX support, including PUBG, Hitman 2, Fortnite, Final Fantasy XV and more.

Specs of the RTX 2080, RTX 2080 Ti and RTX 2070

Of course, the spec sheet doesn't tell the whole story, and Nvidia's Turing presentation contained some other interesting nuggets as well. RTX graphics cards will also come with a USB Type-C VirtualLink port, which allows VR headsets to connect using a single cable rather than separate USB and HDMI cables.

Nvidia is also working with Microsoft to push ray tracing, with Microsoft's new DirectX Raytracing (DXR) API in Windows 10 complementing Nvidia's RTX work.

RTX 2070 cards will start at $499, with the RTX 2080 at $699 and the RTX 2080 Ti starting at $999. Nvidia is also offering Founders Edition versions of these graphics cards.

The GeForce RTX 2070 Founders Edition will be priced at $599, with the RTX 2080 Founders Edition at $799, and the RTX 2080 Ti Founders Edition at $1,199.

Those are the prices for the newly launched Nvidia graphics cards. For further information, stay tuned.


Leaks reveal GeForce RTX 2080 Ti and GeForce RTX 2070 before launch


Nvidia: the name alone is enough for gamers. The company has dominated the graphics card industry for the past couple of decades on the strength of its high-quality graphics cards. It has also been quite a while since its last major graphics card launch.

But it looks like the wait is nearly over. We're less than 48 hours away from NVIDIA's big GeForce 20 series launch event at Gamescom, where the company will unveil its next-generation graphics cards.

Just before the official launch, the folks at VideoCardz have revealed some information about the upcoming RTX 2080 and RTX 2080 Ti.

One very interesting piece of gossip is that the RTX 2070 will in fact feature 8GB of GDDR6 memory, rather than 7GB as was previously believed. According to VideoCardz, this was claimed by two separate sources.

The leaked slide from Reddit

In terms of specs, the GeForce RTX 2080 Ti will have 11GB of memory on a 352-bit bus, similar to its predecessor, the GTX 1080 Ti.

So what makes the RTX 2080 Ti different? It will use GDDR6 memory clocked at 14Gbps, which should increase the maximum memory bandwidth. The upcoming card will also have 4,352 CUDA cores.
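
To put a rough number on that bandwidth claim, peak memory bandwidth follows from the per-pin data rate and the bus width. The sketch below compares the leaked RTX 2080 Ti figures with the GTX 1080 Ti's 11 Gbps GDDR5X on the same 352-bit bus; the 1080 Ti numbers are our own reference point, not part of the leak.

```python
# Peak memory bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8
def memory_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

rtx_2080_ti = memory_bandwidth_gb_s(14, 352)  # leaked: 14 Gbps GDDR6, 352-bit bus
gtx_1080_ti = memory_bandwidth_gb_s(11, 352)  # reference: 11 Gbps GDDR5X, 352-bit bus

print(rtx_2080_ti, gtx_1080_ti)                        # 616.0 484.0 (GB/s)
print(f"uplift: {rtx_2080_ti / gtx_1080_ti - 1:.0%}")  # uplift: ~27%
```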

GeForce 20 series Nvidia graphics card lineup, according to AdoredTV

NVIDIA GeForce RTX 2080 – $500-$700 US (50% Faster Than 1080)

NVIDIA GeForce RTX 2070 – $300-$500 US (40% Faster Than 1070)

NVIDIA GeForce GTX 2060  – $200-$300 US (27% Faster Than 1060)

NVIDIA GeForce GTX 2050  – $100-$200 US (50% Faster Than 1050 Ti)

NVIDIA GeForce RTX 2070 – Performance, Price & Launch Date

The GeForce RTX 2070 is set to launch sometime in September, following the debut of the RTX 2080 Ti and RTX 2080 next Monday, at an estimated suggested price of around $400. The card is said to outperform NVIDIA's current GeForce GTX 1080 by roughly 8%, but not quite match the GTX 1080 Ti.

The first custom model will reportedly be called the ‘GeForce RTX 2080 Ti GAMING X TRIO’, while the second is simply named the ‘GeForce RTX DUKE’. Naturally, more information will come as we get closer to the official release date. Until then, stay tuned.


Intel’s First Ever Dedicated Graphics Card Expected To Release in 2020


Intel teases its own gaming GPU ahead of Nvidia’s GeForce RTX 2080 launch

Intel is one of the biggest chipmakers in the world. Its Core-series processors, such as the i3, i5, i7 and i9, have delivered big gains for consumers in terms of performance and power efficiency.

The company makes some of the most advanced and powerful chips for laptops, desktop PCs and even MacBooks. But it has always lagged behind when it comes to graphics chips. Big players like NVIDIA and AMD have dominated the gaming market with their dedicated graphics cards.

Intel, by contrast, has only offered integrated graphics built into its processors, which in real-world terms come nowhere near NVIDIA or AMD. But now it looks like Intel has decided to step up its game by introducing its own dedicated graphics card to challenge the other manufacturers.

According to rumors, Intel is set to enter the market with its own dedicated graphics card, probably in 2020.

The company has already released its first teaser for the card, just a few days ahead of Nvidia's GeForce RTX 2080 launch.

The dramatic teaser video clearly signals Intel's commitment to the power and efficiency of the product. Based on the video, the card appears to be a single-slot design with a PCIe connector and a metal-fin blower fan as its cooling solution.

Intel is also working on its new Ice Lake line of processors, which will be based on a 10nm process, but that process has been delayed until 2019. That makes 2020 the earliest you are likely to get this graphics card in your hands. Intel hasn't revealed much for now, but we're hoping for more information in the future, so stay tuned.


128TB SD cards may replace HDD and SSD in Laptops


128TB SD cards with 985 MB/s transfer speeds could very soon be a reality

SD Association (SDA), a non-profit organization responsible for setting SD card standards, has announced the latest iteration of SD cards at MWC (Mobile World Congress) Shanghai on 26th June.

The announcement covers a new capacity class called SD Ultra Capacity (SDUC), which raises the maximum card size from the earlier 2TB limit to a whopping 128TB, along with a new SD Express bus that adds PCI Express and NVMe interfaces to SD cards for high-bandwidth, low-latency storage. SD Express aims to provide a staggering 985MB/s transfer speed, which is three times faster than today's best-performing SD cards and 30x faster than what's required to record 4K video.
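
For context, the multipliers quoted above can be turned back into rough baseline figures; the short sketch below simply divides the 985MB/s headline speed by the stated 3x and 30x factors, so these are implied numbers, not SDA specifications.

```python
# Implied baselines behind the "3x" and "30x" claims for SD Express.
sd_express_mb_s = 985                      # headline SD Express transfer speed

best_current_card = sd_express_mb_s / 3    # ~328 MB/s for today's fastest cards
min_for_4k_video = sd_express_mb_s / 30    # ~33 MB/s needed to record 4K video

print(f"implied fastest current card: {best_current_card:.0f} MB/s")
print(f"implied 4K recording requirement: {min_for_4k_video:.0f} MB/s")
```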

The new SD Express format will initially be found on SDUC (128TB), SDXC (2TB) and SDHC (32GB) memory cards, and will be available in SD and microSD sizes. Both new standards will be added to the SD 7.0 specification and will be backward compatible with existing devices, just like every other SD card.

According to the SD Association, SSD and HDD storage in laptops could one day be effectively replaced by SD Express cards. “With SD Express we’re offering an entirely new level of a memory card with faster protocols turning cards into a removable SSD,” said Hiroyuki Sakamoto, SDA president. “SD 7.0 [the latest specification] delivers revolutionary innovations to anticipate the needs of forthcoming devices and content rich and speed hungry applications.”

“SD Express’ use of popular PCIe and NVMe interfaces to deliver faster transfer speeds are a savvy choice since both protocols are widely used in the industry today and create a compelling choice for devices of all types,” said Mats Larsson, Senior Market Analyst at Futuresource in a statement. “The SD Association has a robust ecosystem with a strong history of integrating SD innovations and has earned the trust of consumers around the world.”

SD Express cards can be used in UHS-I, UHS-II, UHS-III and SD Express host devices, but faster performance levels will only be achieved when matching the card to the host device, SDA said.

Since progress towards 2TB cards has been slow, it's difficult to say how long it will take for an ultra-high-capacity 128TB card to actually hit the market.

Check out the video released by the SD Association to learn more about SD Express.


NVIDIA’s next-gen GPU to feature 12GB of GDDR6 RAM


NVIDIA’s next-generation prototype graphics card with GDDR6 memory spotted in a leaked photo

A new leaked image of a prototype version of NVIDIA's next-generation graphics card has found its way to Reddit (via VideoCardz). The image was posted on Reddit by user ‘dustinbrooks’, who claimed his “buddy works for a company that is testing Nvidia boards and this is apparently their new line up”.

The very first picture of NVIDIA's GDDR6-based graphics card shows that the GPU in the center of the board is missing, which suggests that the board in question is designed to support a next-gen Nvidia GPU. Although the prototype lacks a GPU, it does feature 12GB of GDDR6 VRAM from Micron, a 384-bit memory bus, three 8-pin PCIe power connectors, a second-generation NVLINK connector and a huge VRM setup with four fans. At the memory's 14 Gbps data rate, the 384-bit bus works out to a total memory bandwidth of 672 GB/s, which is far more than any consumer-grade GeForce graphics card available right now. There is also a mini-DP connector on the bottom left of the PCB.
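
The 672 GB/s figure follows directly from the numbers in the photo: 14 Gbps per pin across a 384-bit bus. A minimal check:

```python
# Total memory bandwidth: 14 Gbps per pin across a 384-bit bus.
data_rate_gbps, bus_width_bits = 14, 384
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(bandwidth_gb_s)  # 672.0 GB/s, matching the figure quoted above
```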

Given that NVIDIA recently tweeted about Alan Turing's birthday, this graphics card may very well be designed for a GPU based on the Turing architecture. However, it's worth noting that this is a prototype board and not a finished product, which means the specifications and hardware could change before the official product is ready. Keep watching this space for more updates!

Source: Reddit


Scientists create world’s smallest computer ‘Michigan Micro Mote’


Meet the world’s smallest computer that is tinier than a grain of rice

Researchers at the University of Michigan (UM) in the United States have created the world’s smallest computer that measures just 0.3 mm x 0.3 mm in size and is completely dwarfed by a grain of rice.

Previously, the smallest computer from the University of Michigan was the 2x2x4mm Michigan Micro Mote, which could retain its programming and data even when not externally powered. That is not the case with the new minicomputer, which loses all of its programming and data the moment it loses power.

“We are not sure if they should be called computers or not. It’s more of a matter of opinion whether they have the minimum functionality required,” David Blaauw, a professor of electrical and computer engineering at the University of Michigan, said in a press release.

The new device consists of a processor, system memory and wireless transmitters and receivers, and it sends and receives data with visible light. It uses photovoltaics, a method of converting light into electricity, for power. A base station provides light for power and programming, and it also receives the data.

“We basically had to invent new ways of approaching circuit design that would be equally low power but could also tolerate light,” Blaauw said.

The light from the base station, and from the device's own transmission LED, can induce currents in its tiny circuits. The device was designed around a precision sensor that converts temperatures into time intervals, defined with electronic pulses.

The computer can sense temperatures in minuscule regions, such as a cluster of cells, and report them with an accuracy of about 0.1 degrees Celsius.
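
To illustrate what "converting temperatures into time intervals" means in practice, here is a purely hypothetical sketch: the calibration constants are invented for the example (the article does not describe the real sensor's transfer function), and the reading is simply quantized to the 0.1 degree Celsius resolution mentioned above.

```python
# Hypothetical time-interval-to-temperature readout. The calibration values
# below are made up for illustration; the real device's circuit details and
# transfer function are not described in the article.
BASE_INTERVAL_NS = 1000.0   # assumed pulse interval at the reference temperature
NS_PER_DEG_C = 25.0         # assumed change in interval per degree Celsius
REFERENCE_TEMP_C = 25.0     # assumed reference temperature

def interval_to_temperature(interval_ns: float) -> float:
    """Map a measured pulse interval to a temperature, rounded to 0.1 C."""
    temp_c = REFERENCE_TEMP_C + (interval_ns - BASE_INTERVAL_NS) / NS_PER_DEG_C
    return round(temp_c, 1)

print(interval_to_temperature(1300.0))  # 37.0 (degrees C) for a 1300 ns interval
```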

The system is very flexible and the success of tiny computing means that it could be used for a variety of purposes, such as pressure-sensing inside the eye for glaucoma diagnosis, cancer studies, oil reservoir monitoring, biochemical process monitoring, audio and visual surveillance, and tiny snail studies.

“Since the temperature sensor is small and biocompatible, we can implant it into a mouse and cancer cells grow around it,” said Gary Luker, a professor at the University of Michigan.

“We’re using the temperature sensor to investigate variations in temperature within a tumor versus normal tissue and if we can use changes in temperature to determine success or failure of therapy,” added Luker.

“When we first made our millimeter system, we actually didn’t know exactly all the things it would be useful for. But once we published it, we started receiving dozens and dozens and dozens of inquiries,” Blaauw said.

The new microdevice was invented by Blaauw along with Dennis Sylvester, also a professor of electrical and computer engineering at the university, and Jamie Phillips, an Arthur F. Thurnau professor. The device was developed in collaboration with Mie Fujitsu Semiconductor Ltd. of Japan and Fujitsu Electronics America Inc., and was presented at the 2018 Symposia on VLSI Technology and Circuits on June 21.

In March this year, IBM revealed its 1mm x 1mm computer, smaller than a grain of fancy salt, at its Think 2018 conference. With the invention of this new device, however, the University of Michigan has taken back the title of world's smallest computer from IBM.

Source: Dailymail.co.uk


The U.S. now has the most powerful supercomputer on Earth


IBM’s Summit helps the U.S. overtake China in supercomputing race

Sunway TaihuLight and Tianhe-2, the two supercomputers from China, have been helping the East Asian country maintain its top position in the TOP500 supercomputer listings for the last five years. However, on June 8, the U.S. Department of Energy showcased a new IBM-built supercomputer that has dethroned China from its top position.

“We know we’re in competition, and it matters who gets there first,” Secretary of Energy Rick Perry told several hundred people at the Friday afternoon ceremony at Oak Ridge National Laboratory (ORNL), a U.S. Department of Energy laboratory. “We reached a pinnacle today.”

The new supercomputer, dubbed Summit, has been developed by IBM (International Business Machines) Corp. with help from Nvidia Corp. for the U.S. Department of Energy's ORNL in Tennessee.

Summit can perform 200 quadrillion floating-point operations per second (FLOPS), making it the “most powerful and smartest scientific supercomputer,” said Dave Turek, vice president of high-performance computing and cognitive systems at IBM. It is 60 percent faster than China’s Sunway TaihuLight and almost eight times more powerful than the supercomputer called Titan, which was the lab’s previous top performer.
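
Working backwards from the ratios quoted above, Summit's 200 petaflops imply rough figures for the other two systems; the sketch below just applies those ratios and is not an independent measurement.

```python
# Implied performance of the comparison systems, derived only from the
# ratios quoted above (200 PFLOPS, "60 percent faster", "almost eight times").
summit_pflops = 200

taihulight_pflops = summit_pflops / 1.6   # "60 percent faster" -> ~125 PFLOPS
titan_pflops = summit_pflops / 8          # "almost eight times" -> ~25 PFLOPS

print(f"Sunway TaihuLight: ~{taihulight_pflops:.0f} PFLOPS")
print(f"Titan:             ~{titan_pflops:.0f} PFLOPS")
```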

The Summit supercomputer has a staggering 9,216 22-core IBM Power9 CPUs as its compute base. The CPUs are paired with 27,648 Volta Tensor Core GPUs, providing even more computational power. It weighs nearly as much as a commercial jetliner and is connected by 185 miles of fiber optic cables.

It takes 4,000 gallons of water a minute to cool the system, and Summit uses enough power to run 8,100 homes. The node-to-node interconnect, powered by Mellanox InfiniBand, delivers 25 gigabytes per second between nodes. The system is attached to 250 petabytes of storage, and the operating system running on top of all that power is Red Hat Enterprise Linux (RHEL).

Summit is meant to be used for machine learning and deep learning applications. Nvidia CEO Jensen Huang described Summit as the world's largest AI (artificial intelligence) supercomputer, a machine that learns. “Its software will write software, amazing software that no human can write,” Huang said.

Further, Summit’s AI optimized hardware “gives researchers an incredible platform for analyzing massive datasets and creating intelligent software to accelerate the pace of discovery,” added Jeff Nichols, ORNL associate laboratory director for computing and computational sciences. It will be used in energy, biology, human health, astrophysics, genetics, advanced materials, and artificial intelligence, among other areas.

“Today’s launch of the Summit supercomputer demonstrates the strength of American leadership in scientific innovation and technology development,” Perry stated. “It’s going to have a profound impact in energy research, scientific discovery, economic competitiveness, and national security.”

“I am truly excited by the potential of Summit, as it moves the nation one step closer to the goal of delivering an exascale supercomputing system by 2021,” Perry added. “Summit will empower scientists to address a wide range of new challenges, accelerate discovery, spur innovation, and above all, benefit the American people.”

It will be interesting to see how long the U.S. can maintain its top position in the supercomputing race!


ASUS’s new motherboard for crypto-mining can hold 20 GPUs


ASUS’ H370 crypto-mining motherboard supports up to 20 GPUs over USB

Banking on the popularity of cryptocurrency mining, Asus, the Taiwan-based electronics manufacturer, has unveiled a new monster motherboard built specifically for cryptocurrency miners that can support up to 20 GPUs. In other words, with this motherboard, ASUS is looking to simplify the process of connecting large numbers of GPUs to a single system.

Called the ASUS H370 Mining Master, the board lets users effectively power an entire mining farm from a single board. A follow-up to the B250 Mining Expert launched in September last year, the H370 also streamlines connectivity by allowing USB riser cables to plug directly into the PCB (printed circuit board).

According to the company, this makes it easier to identify problems with the motherboard, reduces downtime, and results in fewer PCIe (Peripheral Component Interconnect Express) disconnects.

The H370 is so focused on crypto-mining that ASUS has made mining-specific tweaks, one of which is GPU state detection before the board boots: it identifies the location and status of each riser port and assigns each one an alphanumeric code for easy identification.


Here are the full specifications of the motherboard at a glance:

Size: ATX, 12″x9.1″

Socket: LGA 1151 for Intel 8th Gen Core / Pentium / Celeron processors

Memory: 2 x DIMMs (max. 32GB), DDR4 2666 / 2400 / 2133 MHz, Non-ECC, unbuffered memory

PCIe: 1 x PCIe x16 slot

Storage: 2 x Serial ATA 6.0 Gb/s connectors

Networking: 1 x Intel Gigabit LAN

USB GPU Riser Ports: 20 x Vertical USB ports over PCIe

USB Ports: 6 x USB 3.1 Gen 1, 4 x USB 2.0 / 1.1 ports

Other Ports: 1 x COM header

The ASUS H370 Mining Master motherboard is expected to be available initially in North America between July and September this year. However, ASUS has not yet announced pricing for the H370.
