You want to choose a phone, but you’re not sure what system-on-chip (SoC) to buy. SoCs typically combine the central processing unit (CPU), the graphics processing unit (GPU), and other cool and funky processing units to enable your phone to function. When you start searching for a phone, you realise the number of different phone manufacturers out there vying for your attention. But then you realise something… The SoCs are all kinda the same.
You see, there are a few major SoC players when it comes to mobile phones. The most commonly known brands are Qualcomm’s Snapdragon series and MediaTek’s Dimensity series chips. They power the majority of Android mobile phones for sale currently. Now you might be thinking, aren’t we missing a few? Indeed, we are! There are currently four manufacturers offering in-house chip designs. The most commonly known one is Apple Silicon’s A series SoC powering the iPhone lineup. The others are Samsung’s Exynos processors, Google’s Tensor processors (currently based on Exynos designs), and Huawei’s Kirin processors (still a thing despite the US ban).
We all like to compare things, it’s in our nature. There’s always a heated war over who makes the best mobile SoC every year. Apple, MediaTek, and Qualcomm fight it out with each iteration. Apple is renowned for its lineup of chips, combining performance and efficiency with each design. MediaTek is known for delivering amazing processing power at a fraction of the cost of its competitors. It is currently the largest mobile SoC producer because of its price-to-performance benefits in the low to mid-range mobile phone segments. Qualcomm has historically been the go-to for raw performance and power, offering a solid combination of computing power and graphics in its flagship products. As always, each design is a meticulous tradeoff: efficiency, heat, power constraints, and new features change the balance of each flagship processor. Whilst there are other processors such as Exynos, Kirin or Tensor available, they don’t offer flagship specs and instead compete in other, more future-focused categories and target markets.
To appreciate the complexity and design of all these processors, we must also come to a realisation. All the SoCs are based on ARM designs. ARM is a Reduced Instruction Set Computer (RISC) Instruction Set Architecture (ISA). Traditionally, RISC ISAs are designed for efficiency, trading some performance for a lower power draw. Whilst this has changed significantly in recent years, it is the reason ARM became the preferred mobile processor design when smartphones were first conceived. ARM licenses the designs of its cores to manufacturers to build processors. The business model can sound a bit confusing at times, but that’s essentially how ARM earns its revenue: licensing fees.
So in the end, you really could say it’s just the battle of the best ARM design for its respective price! The fabrication technology used to build the chips is pretty much a constant, coming from either Samsung or TSMC. It really comes down to which SoC has the best combination of efficiency cores, performance cores and graphics within the physical limitations of heat dissipation, power draw and price constraints.
But that isn’t quite the case nowadays. With the evolution of generative AI, companies are shipping more AI products than ever before. As AI adoption spreads among consumers, the current hardware on our phones may not cope as effectively, placing more strain on the GPU and battery. GPUs are general-purpose parallel processors. This makes them excellent for tasks such as AI, cryptocurrency and gaming, but not especially power-efficient. This is where Neural Processing Units (NPUs) come into play. NPUs are accelerators designed specifically for AI processing. They are similar to GPUs in terms of parallel processing, but are optimised for workloads such as Large Language Models (LLMs) that lean heavily on matrix multiplications. They handle these with ease and with a lower power draw than GPUs performing the same task.
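To give a sense of what that workload actually looks like, here’s a toy pure-Python sketch of matrix multiplication, the core operation that NPUs are built to accelerate. This is purely illustrative: real LLM inference multiplies matrices with thousands of rows and columns, billions of times, and no real NPU is programmed this way.

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must match"
    # Each output cell is a sum of multiply-accumulate steps; an NPU
    # performs huge numbers of these in parallel at low power.
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

# A 2x3 matrix times a 3x2 matrix gives a 2x2 result.
a = [[1, 2, 3],
     [4, 5, 6]]
b = [[7, 8],
     [9, 10],
     [11, 12]]
print(matmul(a, b))  # [[58, 64], [139, 154]]
```

Scale those three loops up by a few orders of magnitude and it becomes clear why dedicated silicon beats a general-purpose core on both speed and battery life.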
On a side note, AI tasks consume more memory. This has had a bonus side effect: manufacturers are increasing their RAM to sizeable amounts, including on iPhones. Apple has traditionally offered lower RAM configurations than its flagship competitors, a testament to its efficiency and integration of hardware and software. AI has bumped all models of the iPhone 16 up to 8GB of RAM! It’s a win for everyone, even if you don’t intend to use AI on your phone.
So whilst NPUs are poised to become an important metric in future AI benchmarks, it’s still important to measure the foundations. In the end, we are still comparing SoCs in terms of pure raw power and, fundamentally, we are comparing different ARM-based processors.