In the movies, the geek often suffers. How many times, in how many films, have we had to stay silent in the face of the improbable, as an authoritarian, omnipotent hero asks some poor technician to zoom in, and zoom in again, on an image until the smallest detail of a face becomes visible? Then, as cinema magic requires, the photo appears perfectly sharp, instead of the mush of unreadable pixels we should actually see.
Let the nerds rejoice: they will soon be able to enjoy their spy movies without cringing. Younger than cinema, machine learning, the famous branch of artificial intelligence, is nonetheless a bit magical too.
Bring in the AI
At the keynote introducing the new iPads, and particularly the iPad Air, Pixelmator Team got people talking again. Founded by two brothers, Saulius and Aidas Dailide, the Lithuanian company has been developing its brilliant eponymous image editing software for more than thirteen years. On September 15, it announced the arrival on Apple tablets of an image enhancement feature called ML Super Resolution, which uses machine learning to increase the resolution of an image in an instant.
How does the magic work?
“When a classic algorithm enlarges an image, it simply creates new pixels of the same color as the closest existing pixel(s),” explains Simonas Bastys, technical director of Pixelmator Team.
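The classic approach Bastys describes can be sketched in a few lines of Python. This is a toy illustration (not Pixelmator's code), assuming a grayscale image stored as a list of rows and an integer scaling factor:

```python
def nearest_neighbor_upscale(image, factor):
    """Enlarge an image by an integer factor: each new pixel simply
    copies the color of the closest original pixel."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in image
        for _ in range(factor)  # repeat each source row `factor` times
    ]

# Doubling a 1x2 image just duplicates each pixel in both directions:
# nearest_neighbor_upscale([[0, 255]], 2)
# → [[0, 0, 255, 255], [0, 0, 255, 255]]
```

Copying pixels this way is fast, but it is also why a zoomed image turns into visible blocks: no new information is created, existing pixels are merely repeated.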
A slightly more elaborate bilinear algorithm interpolates the colors of neighboring pixels and creates a new pixel of an intermediate color. This sometimes works, but it can also produce aberrations and unwanted deformations that make objects impossible to recognize.
ML Super Resolution takes another path to go further, and above all to do better. The tool “uses computer vision to identify objects, borders, shapes, colors, gradients and more within large areas of the image,” continues our interlocutor.
Then comes another model, trained with Core ML, Apple’s solution for all machine learning tasks. Without it, integrating artificial intelligence into an application would be far too complicated, or consume far too much time and energy, at least for a company of Pixelmator’s size, comments its representative. In any case, it is this model that “will create new pixels, not based on the values of existing pixels but based on the content of the image.”
Thus, pixels are added so that a person’s leg is not distorted and a strand of hair does not disappear into a cloud too close behind it; in short, so that the enlarged area is as faithful as possible to reality.
A featherweight for a heavyweight effect
And the beauty of this “smart” model is that it weighs only 9 MB once integrated into the Pixelmator Photo app for iPad. The model is the result of training on a dedicated server, where the algorithm learned to deal with images that include transparency effects, compression artifacts, the digital noise produced by a camera sensor, and so on. It can even work with RAW files, the digital equivalent of a photographic negative; ML Super Resolution is then able to preserve the wide dynamic range data so as not to alter the displayed colors.
It took about a year, and the attention of most of the 20 developers working for Pixelmator Team, to develop and test this new tool. While the algorithm embedded in the app is already its third version, the limiting factor for the moment lies not in the software but in the available computing power.
“As devices and graphics processors become more powerful, we can use more of that power and create more complex and intelligent algorithms,” explains Simonas Bastys. “This particular algorithm can take several minutes to process a photo on an old computer.”
That’s why Pixelmator Team waited for the arrival of the Apple A14. Though they cannot, or will not, say so, it is obvious that Pixelmator’s developers have forged close ties with Apple and had access to the chip long before its announcement.
In this case, the graphics processor wasn’t the only thing that caught their eye. “The A14’s Neural Engine is about twice as fast at running machine learning models,” notes the technical director of Pixelmator Team. This hardware advance is matched on the software side: the latest versions of Apple’s mobile operating systems, iPadOS and iOS 14, process ML queries three times faster. And that is no luxury, because this machine learning approach requires 8 to 62 times more computing power than the methods used until now…
In other words, on an older iPad with a previous version of iPadOS, running ML Super Resolution takes much longer, and the maximum magnification is smaller: the side effect of reduced computing power. We can therefore understand why “it would not have been very relevant to launch such a function when the experience offered to many users would have been disappointing.”
The best spy movie effects on your iPhone?
There is always a balance to strike between the desire to exploit the best of machine learning and the capabilities of the mainstream platforms that run the algorithms. “Right now we could create more powerful algorithms, but they would be too slow to be used in consumer software,” Simonas Bastys tells us. We will therefore have to wait for the next generation of Apple Silicon chips to see Pixelmator become even smarter and more impressive.
But until then, the feature can already be enjoyed on recent iPads and Macs, in the Pixelmator Pro app, where it has been available since the start of the year. “But when it comes to performance, if you need the tool to run fast, you’ll need a fairly powerful Mac,” concedes our interlocutor.
Indeed, on a 2020 13-inch MacBook Pro, you have time to wait several minutes, fans whirring, before the tool finishes its work on a single photo. By contrast, this is also where the power and optimization of Apple’s chips become apparent.
What’s the next project for Pixelmator Team? “Currently, we are very excited to bring Pixelmator Photo, with the ML Super Resolution feature, to the iPhone,” we are told. As if by chance, the iPhone 12s will carry the A14 and its Neural Engine…
It will then be possible to enlarge photos taken by the iPhone, directly on the smartphone, without much apparent loss of quality. ML Super Resolution will thus become even more mobile, and therefore even more impressive.
It is quite possible that we will hear about these Lithuanians again during Apple’s keynote on October 13.