BadBoyTazz4Ever

The Power Behind Google and Facebook AI!


Here’s what powers Google and Facebook’s AI

 

Quote

Google and Facebook have open sourced the designs for the computing hardware that powers the artificial intelligence logic used in their products.

 


These intelligent algorithms power Google’s search and recommendation functions, Facebook’s Messenger digital assistant M, and of course both firms’ use of targeted advertising.

Facebook’s bespoke computer servers, codenamed Big Sur, are packed with graphics processing units (GPU) – the graphics cards used in PCs to play the latest videogames with 3D graphics.

So too is the hardware that powers Google’s TensorFlow AI. So why is artificial intelligence computing built from graphics processors instead of mainstream computer processors?

Originally GPUs were designed as co-processors that operated alongside a computer’s main central processing unit (CPU) in order to off-load demanding computational graphics tasks.

Rendering 3D graphics scenes is what is known as an embarrassingly parallel task.

With no connection or interdependence between one area of an image and another, the job can be easily broken down into separate tasks which can be processed concurrently in parallel – that is, at the same time, so completing the job far more quickly.
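The split-and-process-concurrently idea can be sketched in a few lines. This is a hypothetical illustration, not rendering code: the tiles, the `shade()` function, and the pixel arithmetic are all invented for the example, but the structure is the point — no tile depends on any other, so each one can go to a separate worker.

```python
# A minimal sketch of an "embarrassingly parallel" job: each tile of an
# image can be shaded independently, so the work splits cleanly across
# worker processes. The tiles and shade() function are invented here
# purely for illustration.
from multiprocessing import Pool

def shade(tile):
    # Hypothetical per-tile work: brighten every pixel independently,
    # clamping at the 8-bit maximum of 255.
    return [min(255, p + 40) for p in tile]

def render(tiles, workers=4):
    with Pool(workers) as pool:
        # Each tile is processed concurrently; no tile reads another
        # tile's data, so no coordination between workers is needed.
        return pool.map(shade, tiles)

if __name__ == "__main__":
    tiles = [[0, 100, 200], [50, 215, 255]]
    print(render(tiles))
```

Because the tiles never interact, doubling the worker count (up to the core count) roughly halves the wall-clock time — which is exactly the property GPUs exploit at a much larger scale.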

It’s this parallelism that has led GPU manufacturers to put their hardware to a radically different use.

By optimising them so that they can achieve maximum computational throughput only on massively parallel tasks, GPUs can be turned into specialised processors that can run any parallelised code, not just graphical tasks.

CPUs on the other hand are optimised to be faster at handling single-threaded (non-parallel) tasks, because most general purpose software is still single-threaded.

In contrast to CPUs with one, two, four or eight processing cores, modern GPUs have thousands: the NVIDIA Tesla M40 used in Facebook’s servers has 3,072 so-called CUDA cores, for example.

However, this massive parallelism comes at a price: software has to be specifically written to take advantage of it, and GPUs are hard to program.
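What "specifically written" means here is the GPU's kernel-per-element programming model: you write one small function, and it is conceptually launched once for every data element, with an index telling each "thread" which element to touch. A toy imitation in plain Python (the function names and sequential launcher are invented; on a real GPU the launches run concurrently):

```python
# A toy illustration of the GPU kernel model: one function is launched
# once per data element, and the index i tells each "thread" which
# element to work on. All names here are invented for the sketch.
def saxpy_kernel(i, a, x, y, out):
    # Each "thread" computes exactly one element: out[i] = a*x[i] + y[i]
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # On a real GPU these n invocations would run concurrently across
    # thousands of cores; here they simply run one after another.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch(saxpy_kernel, 3, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0]
```

The difficulty the article mentions comes from everything this sketch hides: real kernels must manage memory transfers between CPU and GPU, avoid divergent branches between threads, and coordinate shared memory — none of which ordinary single-threaded code has to think about.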

Nvidia Tesla GPUs

 

Interesting how much stronger GFX processing has become than normal processors for calculations. All I'm thinking is how many bitcoins I could generate running a miner on those servers :p

 

Pyro

It's still limited to specific calculations. One method is to crunch a bunch of possible permutations and get the set of outputs for the CPU to process.

As with bitcoin mining: they basically process a BUNCH of numbers to get a checksum. If that checksum matches the one you're expecting it gets accepted; if not, it tries another.
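The trial-and-error hashing Pyro describes can be sketched as a toy proof-of-work loop. This is a deliberate simplification: real Bitcoin mining uses double SHA-256 over a binary block header against a far harder target, and the data string and difficulty here are made up for the example.

```python
# A toy version of mining's guess-and-check loop: try nonces until the
# hash ("checksum") of data + nonce starts with enough zero hex digits.
# Real Bitcoin uses double SHA-256 and a vastly harder target.
import hashlib

def mine(data, difficulty=3):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            # The "checksum" matches the target: this nonce is accepted.
            return nonce, digest
        nonce += 1  # otherwise, try the next candidate

nonce, digest = mine("block-header", difficulty=3)
print(nonce, digest[:12])
```

Each hash attempt is independent of every other, which is why this workload maps so well onto thousands of GPU cores: every core can simply test its own slice of the nonce space.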

Even things like physics can be accelerated by a GPU, but it sometimes runs into bottlenecks when a lot of parts in the system all interact; then they can't be split into separate pieces and solved in parallel. You need to structure the problem in a way that allows you to compute parts separately.
