
charlesarthur : gpu   11

Nvidia grapples with cryptocurrency miners’ exit • WSJ
Sarah Needleman:
<p>At the height of the cryptocurrency boom, when even moms in British Columbia were stockpiling videogame graphics cards to generate digital currency, average gamers couldn’t get their hands on their favored hardware. Prices ballooned and inventory vanished.

Those days are over. But inflated prices have taken longer than expected to come down, says Nvidia Corp., particularly for its moderately powerful chips built on an architecture it calls Pascal.

Nvidia misjudged how quickly prices for the graphics cards that those chips go into would normalize now that cryptocurrency mining isn’t as hot, and the company is now dealing with months of expensive inventory that price-conscious gamers won’t touch.

The company’s message to Wall Street: Videogaming is fine, and the crypto hangover is lasting longer than expected. Still, some analysts don’t see a quick fix.

“The real recovery won’t take place until the second, third and fourth quarters of fiscal 2020,” said Gary Mobley, analyst at Benchmark. “It’s 12 weeks of inventory out there we’re dealing with.”</p>


The cryptocurrency crash - presently underway, because we're just past the anniversary of the big runup in bitcoin's "value" - is going to ripple out in all sorts of interesting directions. Nvidia is just a first-order one.
economics  gpu  bitcoin  mining  cryptocurrency 
november 2018 by charlesarthur
SMT solving on an iPhone • James Bornholt
<p>Cross-compiling <a href="https://github.com/z3prover/z3">Z3</a> [a theorem prover from Microsoft Research] turns out to be remarkably simple, with just a few lines of code changes necessary; I open sourced the code to <a href="https://github.com/jamesbornholt/z3-ios">run Z3 on your own iOS device</a>. For benchmarks, I drew a few queries from my recent work on <a href="https://unsat.cs.washington.edu/projects/sympro">profiling symbolic evaluation</a>, extracting the SMT generated by Rosette in each case.

As a first test, I compared my iPhone XS to one of my desktop machines, which uses an Intel Core i7-7700K—the best consumer desktop chip Intel was selling when we built the machine 18 months ago. I expected the Intel chip to win quite handily here, but that’s not how things turned out.

The iPhone XS was about 11% <em>faster</em> on this 23-second benchmark! This is the result I tweeted about, but Twitter doesn’t leave much room for nuance, so I’ll add some here.

• This benchmark is in the QF_BV fragment of SMT, so Z3 discharges it using bit-blasting and SAT solving.
• This result holds up pretty well even if the benchmark runs in a loop 10 times—the iPhone can sustain this performance and doesn’t seem thermally limited. That said, the benchmark is still pretty short.
• Several folks asked me if this is down to non-determinism—perhaps the solver takes different paths on the different platforms, due to use of random numbers or otherwise—but I checked fairly thoroughly using Z3’s verbose output and that doesn’t seem to be the case.
• Both systems ran Z3 4.8.1, compiled by me using Clang with the same optimization settings. I also tested on the i7-7700K using Z3’s prebuilt binaries (which use GCC), but those were actually slower.</p>


OK, that's quite a niche application - a classic LOB (line of business, ie application-specific) app. It's what people used to love Windows for. The iPhone's A12 chip turns out to be terrific for this particular LOB app, beating the Intel desktop part.
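For a sense of what a QF_BV query looks like, here is a minimal, illustrative sketch using Z3's Python bindings (the `z3-solver` package) - a toy bit-vector constraint of my own, not one of Bornholt's Rosette-generated benchmarks, which are far larger:

```python
# Toy QF_BV (quantifier-free bit-vector) query via Z3's Python API.
# Illustrative only: the benchmarks in the post were generated by Rosette
# and are far bigger than this.
from z3 import BitVec, Solver, ULT, sat

x = BitVec('x', 32)            # 32-bit bit-vector variables
y = BitVec('y', 32)

s = Solver()
s.add(x + y == 0x12345678)     # modular (wrap-around) 32-bit addition
s.add(x & 0xFF == 0x42)        # the low byte of x is fixed
s.add(ULT(x, y))               # unsigned comparison: x < y

if s.check() == sat:           # Z3 bit-blasts this down to SAT internally
    m = s.model()
    print('x =', hex(m[x].as_long()), 'y =', hex(m[y].as_long()))
else:
    print('unsatisfiable')
```

Constraints like these get bit-blasted down to SAT, which is exactly the branchy, integer-heavy workload the A12 apparently handles so well.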
iphone  gpu  intel 
november 2018 by charlesarthur
Are external GPUs for Macs viable in macOS 10.13.4? We tested to find out • Ars Technica
Samuel Axon:
<p>When software support is complete and everything works as intended, the performance gains we've seen here paint a rosy picture for the future of this technology as a way to augment laptops for games and creative applications. We recorded more playable frame rates in games and significantly improved benchmark scores over what we got with the internal GPU—and that's with one of the fastest discrete GPUs in Apple's laptops.

But even though the potential is vividly clear, the implementation is not yet complete. The experience is hit-and-miss depending on which software you're using. Further, we experienced several crashes and unexpected behaviors, and while Metal performance is greatly improved, the performance gap isn't as big for apps built for OpenGL—and unfortunately, many consumer Mac applications still are.

eGPUs might be publicly supported now, but they're still not ready for primetime. The experience is too unstable, support isn't robust enough, there are too many caveats and limitations, and Boot Camp support will be necessary for eGPUs to be attractive to many consumers.

That said, I see where Apple is going with this, and I'm convinced that it could be viable if the company expands support in the right ways. Apple clearly intends this to be the upgrade and expansion path for its iMac Pro and MacBook Pro computers, and if the software support falls into place, I believe that can work out as the company and its users hope. After all, video editors are already accustomed to connecting their machines to various other equipment in their edit bays.</p>


Once developers (including Apple, it seems: Final Cut Pro doesn't yet support eGPUs) update their software, it should get there. But it's asking a lot to expect that you can just seamlessly plug in an external GPU. (There's a good discussion, if you have a couple of hours, about this <a href="https://daringfireball.net/thetalkshow/2018/04/11/ep-219">when Matthew Panzarino appeared on John Gruber's The Talk Show recently</a>.)
gpu  macos 
april 2018 by charlesarthur
Cryptocurrency mining is fueling a GPU shortage • Motherboard
Daniel Oberhaus:
<p>until the Ether price explosion last month, mining on the Ethereum network cost more in electricity than it generated in revenue. Following the meteoric rise of the world's second favorite cryptocurrency, however, I decided it was finally time to become a miner. So I strapped on my hardhat and hit the internet in search of the graphics cards that are the workhorses in most Ethereum mining rigs.

Yet as I found on site after site, GPUs were SOLD OUT and wouldn't be shipping for several weeks. As PC Gamer recently reported, it appears as though the altcoin mining boom had created a global GPU shortage. The question, however, is whether this drought has just begun, or if gamers and would-be miners will be out of luck for the foreseeable future.

As their name implies, GPUs are logic chips specifically designed for rendering pictures and videos on a computer screen. They're mostly used for gaming to render 3D graphics in realtime. Unlike a Central Processing Unit (CPU), which is responsible for coordinating and executing commands from a computer's hardware and software, GPUs were designed so that they would be really efficient at repeatedly performing the same operation very quickly.

GPUs work well for rendering 3D games but they work great for mining Ethereum.</p>


*Narrator's voice* <em>Now, in 2100, we can understand how the seeds of the Gamer-Miner Wars were sown.</em>
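To make the "same operation, repeated very quickly" point concrete, here's a deliberately simplified proof-of-work loop in Python - SHA-256 over an incrementing nonce, not Ethereum's actual memory-hard Ethash algorithm, and the header and target values are invented for illustration:

```python
# Deliberately simplified proof-of-work sketch -- a hypothetical example, not
# Ethereum's Ethash. The point is the shape of the workload: one cheap,
# identical operation repeated until a hash falls below a difficulty target,
# an embarrassingly parallel job suited to a GPU's thousands of cores.
import hashlib

def mine(block_header: bytes, target: int) -> int:
    """Try nonces sequentially until the hash falls below the target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, 'little')).digest()
        if int.from_bytes(digest, 'big') < target:
            return nonce        # valid proof of work found
        nonce += 1              # a GPU would test huge batches of nonces at once

if __name__ == '__main__':
    easy_target = 1 << 244      # ~12 leading zero bits, so a CPU finds it quickly
    print('winning nonce:', mine(b'example block header', easy_target))
```

A CPU grinds through nonces a few at a time; a GPU can test thousands of candidates in parallel across its cores, which is why miners cleared the shelves.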
gpu  bitcoin  mining  gaming 
june 2017 by charlesarthur
Apple To Develop Own GPU, Drop Imagination’s GPUs From SoCs • AnandTech
Ryan Smith:
<p>Apple’s trajectory on the GPU side very closely follows their trajectory on the CPU side. In the case of Apple’s CPUs, they first used more-or-less stock ARM CPU cores, started tweaking the layout with the A-series SoCs, began developing their own CPU core with Swift (A6), and then dropped the hammer with Cyclone (A7). On the GPU side the path is much the same; after tweaking Imagination’s designs, Apple is now to the Swift portion of the program, developing their own GPU.

What this could amount to for Apple and their products could be immense, or it could be little more than a footnote in the history of Apple’s SoC designs. Will Apple develop a conventional GPU design? Will they try for something more radical? Will they build bigger discrete GPUs for their Mac products? On all of this, only time will tell.

However, and these are words I may end up eating in 2018/2019, I would be very surprised if an Apple-developed GPU has the same market-shattering impact that their Cyclone CPU did. In the GPU space some designs are stronger than others, but there is A) no “common” GPU design like there was with ARM Cortex CPUs, and B) there isn’t an immediate and obvious problem with current GPUs that needs to be solved. What spurred the development of Cyclone and other Apple high-performance CPUs was that no one was making what Apple really wanted: an Intel Core-like CPU design for SoCs. Apple needed something bigger and more powerful than anyone else could offer, and they wanted to go in a direction that ARM was not by pursuing deep out-of-order execution and a wide issue width.

On the GPU side, however, GPUs are far more scalable. If Apple needs a more powerful GPU, Imagination’s IP can scale from a single cluster up to 16, and the forthcoming Furian can go even higher. And to be clear, unlike CPUs, adding more cores/clusters does help across the board, which is why NVIDIA is able to put the Pascal architecture in everything from a 250-watt card to an SoC. So whatever is driving Apple’s decision, it’s not just about raw performance.

What is still left on the table is efficiency – both area and power – and cost. Apple may be going this route because they believe they can develop a more efficient GPU internally than they can following Imagination’s GPU architectures, which would be interesting to see as, to date, Imagination’s Rogue designs have done very well inside of Apple’s SoCs.</p>


There isn't an immediate and obvious problem with current GPUs? Except that they're not powerful enough for the next set of problems such as augmented reality and virtual reality.
apple  imagination  gpu 
april 2017 by charlesarthur
October 2016: Apple poaching GPU designer Imagination Technologies' talent • Apple Insider
Mike Wuerthele:
<p>Among the departees now confirmed to be working at Apple from LinkedIn postings, notable high-level staff members are the ex-chief operating officer of Imagination Technologies John Metcalfe, Senior Design Manager Dave Roberts, Vice President of Hardware Engineering Johnathan Redshaw, and 17-year veteran of the company and Senior Software Engineering Manager Benjamin Bowman.

Metcalfe is now a senior director at Apple. Roberts is an engineering manager at Apple's iOS GPU software group, and Bowman is a GPU architect for the company. Redshaw is listed as a director at Apple, with no specific branch of the company declared.

Imagination Technologies has licensed high-performance GPU designs, known as PowerVR graphics series, for use in Apple's A-series system on a chip (SoC) dating back to the original iPhone in 2007. The hires may herald an internal project to develop an Apple-designed GPU for use in future iOS projects, rather than rely on third parties for the technology.

Apple issued a statement in March admitting it had "some discussions" with Imagination involving an Apple buyout, but that it did not "plan to make an offer for the company at this time." Apple owns a 10% stake in the company.</p>


You can see why Imagination might be a bit grumpy about the idea that Apple has developed this GPU tech without any reference to Imagination's intellectual property.
imagination  gpu  apple 
april 2017 by charlesarthur
How the SoC [system on a chip] is displacing the CPU • Medium
Pushkar Ranade:
<p>The present decade represents a period of strategic inflection in the evolution of the semiconductor industry — the next five years are likely to see a confluence of several technology and market forces which will collectively have a profound impact on the course of the industry. These trajectories are discussed below.

…Trajectory #2: A Central Role for the GPU

Usage models of the tablet and the smartphone indicate that the GPU is the most heavily used block within SoCs like the Tegra, Snapdragon and the A8X. Since the GPU is the largest block and also consumes most of the power on the chip, it is instructive that the silicon transistor be designed to optimize the performance and power of the GPU. It is likely that design houses and foundries will make the GPU the centerpiece for transistor design and manufacturing — historically all the blocks including the GPU had to adapt a transistor that had primarily been designed for the CPU. The rapid evolution of the SoC and the increasing role of the GPU are evident in successive generations of Apple A*x family processors. The GPU on the A8X processor occupies almost a third of the die area.</p>


The Intel-style CISC CPU has almost reached the end of its evolution.
gpu  arm  risc  soc 
november 2016 by charlesarthur
Apple poaches Imagination Technologies COO • Business Insider
Kif Leswing:
<p>The biggest hire is John Metcalfe, whose LinkedIn profile says he's been working as a senior director at Apple since July. He was Imagination Technology's COO for a decade before that, and was nearly a 20-year veteran of the company. Last October, Apple hired Imagination's VP of Hardware Engineering to be a director based in the United Kingdom.

The moves are notable as Apple is reportedly the third-largest shareholder in Imagination Technologies.  

Other recent hires from Imagination Technologies now work for Apple in London in positions like GPU Architect, Engineering Manager, FE Hardware Design, and Design Manager. The hires worked on Imagination Technology's PowerVR product, which is what is included in the iPhone. 

Six technical employees from Imagination Technologies have joined Apple since September. The hires may be working on GPU technology in Greater London. Apple has long been rumored to be working on its own GPU design but it has never been confirmed. The company has a GPU-focused office in Orlando, Florida as well.</p>


Neil Cybart (of Above Avalon) reckons this is Apple bringing GPU design expertise in-house for future designs. Pretty hard to read it any other way, after Apple declined to buy Imagination Technologies in March. It's basically picking people off now.
apple  gpu  design 
october 2016 by charlesarthur
Apple hiring in Orlando amid rumors company designing own GPU » Business Insider
Kif Leswing:
<p>Graphics experts who want to work for Apple might not need to move to California. The company is currently hiring several graphics-chip specialists in its Orlando, Florida, offices.

In the past week, Apple has posted seven new job listings for graphics-processing unit (GPU) engineers in Orlando. Four of the positions are listed as graphics-verification engineers and one listing is for a graphics-software engineer, and Apple is also looking for a graphics RTL (register-transfer level) designer.</p>


Virtual reality and augmented reality systems both need high-quality GPUs. Just sayin'.
apple  gpu 
may 2016 by charlesarthur
AMD Radeon 400 series 'Polaris' GPUs land major Apple design wins » WCCF Tech
Khalid Moammer:
<p>From what we’ve been hearing Polaris is no exception. In fact our sources have confirmed that the major OEM design win that we had reported on last year is indeed for Apple.

The Sunnyvale, California-based chip maker secured wins for both of its upcoming Radeon 400 series 14nm FinFET graphics chips, Polaris 10 and Polaris 11. The chips, previously known as “Ellesmere” and “Baffin” (both Arctic islands), have since been renamed Polaris 10 and 11 respectively, in line with AMD’s newly adopted astronomy-based architectural code naming scheme, which Koduri instituted after the Radeon Technologies Group was established last year.

The Polaris 10 and 11 chips will go into new desktops and notebooks from Apple, which the company plans to bring to market later this year. And although these Apple design wins may not be significant volume contributors they are very profitable.</p>


That's going to make for an interesting WWDC in June, then. These Radeon GPUs would be capable of VR work, apparently.
apple  gpu  amd  radeon  polaris 
april 2016 by charlesarthur
Apple A8X’s GPU - GXA6850, even better than I thought » AnandTech
Ryan Smith:
<p>Working on analyzing various Apple SoCs over the years has become a process of delightful frustration. Apple’s SoC development is consistently on the cutting edge, so it’s always great to see something new, but Apple has also developed a love for curveballs. Coupled with their infamous secrecy and general lack of willingness to talk about the fine technical details of some of their products, it’s easy to see how well Apple’s SoCs perform but it is a lot harder to figure out why this is…

…as we have theorized and since checked with other sources, GFXBench 3.0’s fillrate test is not bandwidth limited in the same way, at least not on Apple’s most recent SoCs. Quite possibly due to the 4MB of SRAM that is A7/A8/A8X’s L3 cache, this is a relatively “pure” test of pixel fillrate, meaning we can safely rule out any other effects.

With this in mind, normally Apple has a strong preference for wide-and-slow architectures in their GPUs. High clockspeeds require higher voltages, so going wide and staying with lower clockspeeds allows Apple to conserve power at the cost of some die space. This is the basic principle behind Cyclone and it has been the principle in Apple’s GPU choices as well. Given this, one could reasonably argue that A8X was using an 8 cluster design, but even with this data we were not entirely sure.</p>

GPUs are the new frontier for computing improvement. Equally, the niche-ness of this article is amazing.
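The wide-and-slow argument is easy to put rough numbers on using the standard dynamic power relation P ≈ C·V²·f. The figures in this sketch are invented purely for illustration - they are not Apple's actual cluster counts, clocks or voltages - but they show why doubling the width and halving the clock wins:

```python
# Back-of-the-envelope illustration of the wide-and-slow argument using the
# standard dynamic power relation P ~ C * V^2 * f. Every number here is
# invented for illustration; none are Apple's actual design parameters.

def dynamic_power(capacitance, voltage, frequency):
    """Switching power of a CMOS block, in arbitrary units."""
    return capacitance * voltage ** 2 * frequency

# Design A: narrow and fast -- 4 GPU clusters at a high clock and voltage.
narrow = dynamic_power(capacitance=4, voltage=1.0, frequency=650e6)

# Design B: wide and slow -- 8 clusters at half the clock, which permits a
# lower supply voltage. Nominal throughput (clusters x frequency) is equal.
wide = dynamic_power(capacitance=8, voltage=0.8, frequency=325e6)

print(f'narrow-and-fast: {narrow:.2e}   wide-and-slow: {wide:.2e}')
print(f'power saved by going wide: {100 * (1 - wide / narrow):.0f}%')
```

Same nominal throughput, roughly a third less switching power thanks to the V² term, paid for in die area - a trade Apple can evidently afford.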
apple  gpu 
november 2014 by charlesarthur
