Putting a private cloud inside the smartphone

First of all, let's officially state that the new Apple iPhone being announced does NOT have the Epiphany multicore architecture IP inside. :-) However, just for fun, here is some analysis of the performance that could be available to end users if one of the large device vendors decided to integrate a modest number of Epiphany cores into their next application processor.

The illustration above shows Apple's A5X, implemented in 40nm, which includes two ARM A9 cores running at 1GHz and a large quad-core GPU. As the picture shows, adding 16 Epiphany cores to the already large A5X die would have a minimal effect on silicon area and cost. The result would be a significant boost in on-chip CPU performance, from 2 x 1GHz = 2GHz to 18 x 1GHz = 18GHz. Imagine the applications you could run on your smartphone with immediate access to an order of magnitude more performance! As a comparison, the latest high-end Intel i7 processors offer 4 cores running at 3GHz, for an equivalent performance of only 4 x 3GHz = 12GHz. Sure, we understand that real application performance depends on a LOT more factors than just GHz, but raw aggregate GHz is still a useful first-order metric for estimating performance across a broad set of applications.

What about the GPU, with its massive datapaths? Well, there is no doubt that the GPU can do amazing things crunching parallel data sets, but it's not a general purpose computer. It can't run arbitrary C/C++ code and it can't really execute independent programs in parallel. Compare this to a cluster of microprocessors, which can easily execute thousands of independent programs simultaneously without the developers of those programs ever having to worry about the challenges of parallel programming.

An alternative solution used today to overcome the lack of performance inside smartphones is to perform the majority of heavy-duty processing in the cloud, but this is problematic for a variety of reasons:

  • Communication with the remote server is a serious battery drain
  • Latency of working with a server is too high for many real time applications
  • Network bandwidth is not sufficient for many media oriented applications
  • Privacy and security are serious concerns when sending sensitive data to the cloud
  • High subscriber cost for anyone without an unlimited data plan


The reason the cloud has become a fundamental extension of most of today's smartphone platforms is that current embedded processor solutions are too inefficient to deliver massive performance inside the device within a fixed power budget. In recent years we have clearly seen increasing battery sizes and decreasing battery lives, and it's obvious that the current application processor architecture approach will eventually hit the wall.

The Epiphany architecture offers an order of magnitude better energy efficiency than current processor solutions, enabling up to 64 RISC CPU cores to be easily and immediately integrated into standard application processor SoCs. With incremental design improvements and further CMOS scaling, the number of cores within mobile devices could grow to as many as 1,000 in a few years. We envision the Epiphany becoming a small but scalable offloading cloud within the smartphone. We think that's pretty exciting.

Andreas Olofsson is the founder of Adapteva and the creator of the Epiphany architecture and the Parallella Kickstarter open source computing project. Follow Andreas on Twitter.


Posted in Andreas' Blog.
