Inside the Alice Camera
The large asterisk here is that this ‘AI camera’, which combines a Micro Four Thirds camera with your phone, doesn’t actually exist yet in final production form. But if all goes to plan, the Indiegogo project (387% funded and counting) will ship in October 2021. Which means we’ll find out soon if it lives up to its lofty promises.
These include the claim that Alice Camera will offer “the experience of a phone, the quality of a DSLR” and that it is “the camera of the future”. Industrial-strength hyperbole, or does this mysterious project genuinely have a chance of succeeding where the lumbering camera giants have failed?
We chatted to the makers of Alice Camera, from the computational photography startup Photogram AI, to find out exactly how this plucky project plans to provide a modern alternative to existing cameras that are, in its words, “not fit for purpose”. Our early feeling? Cautious optimism…
Out with the old Alice Camera
So what problem is the Alice Camera trying to solve exactly? It’s not trying to replace the best phone cameras – the nose-diving sales of traditional cameras since 2012 show that phones are now point-and-shoot perfection for most people.
Instead, the issue it sees is that when vloggers, YouTubers, TikTokkers and Twitch streamers want to step up to a better camera, their only options are built by the traditional photography giants – whose control systems, interfaces and connectivity are still largely stuck in the 90s.
It’s a fair point. New Sony Alpha cameras are still launching without touchscreen menu systems, which is pretty incredible when you consider the iPhone arrived over 14 years ago. But the big question is whether or not Alice Camera can offer a genuinely compelling alternative.
The Alice Camera’s main idea is combining the large sensors and optical powers of the best mirrorless cameras with the slick menus and connectivity of smartphones. We’ve seen variations on this before – see the Sony QX1 and DxO One – but the Alice Camera differs from those by providing a complete, standalone Micro Four Thirds camera that has a mount for your phone and connects to it via Wi-Fi.
The secret sauce is the promise of deep learning trickery, powered by Google’s Edge TPU chip, which could apply phone-like computational photography smarts (think multi-image stacking or night modes) to a camera with a large Four Thirds sensor.
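To make the ‘multi-image stacking’ idea concrete, here’s a deliberately simplified sketch of the core trick: averaging several noisy exposures of the same scene cancels out random sensor noise. This is purely illustrative — Alice Camera’s actual pipeline hasn’t been published, and real stacking also handles alignment and motion.

```python
def stack_frames(frames):
    """Average a list of equal-length pixel rows, one 'row' per frame.

    Random noise averages toward zero, so the stacked result sits
    closer to the true signal than any single noisy capture.
    """
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Three noisy captures of the same 4-pixel row
# (hypothetical true values: 100, 150, 200, 250)
frames = [
    [104, 147, 203, 249],
    [98, 152, 196, 252],
    [101, 151, 201, 248],
]
stacked = stack_frames(frames)
```

The same principle underpins phone features like night mode, where many short exposures are merged into one cleaner image.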
Sounds good in theory, but why not just build a standalone camera around Android OS and leave out the potentially fiddly smartphone integration? “When we started out we wanted to just build a camera,” said Liam Donovan, Photogram AI’s CTO and Co-founder. “But there are some significant technical barriers to doing that. There are a whole bunch of things that a small company like us cannot do nearly as well as a company like Apple. Those are things like building really nice touchscreens and a really compact, well-connected communications device. So we really didn’t want to reinvent that wheel,” he added.
But the decision to build smartphones into the design was also about usability. “There’s a whole generation of people who have grown up with smartphones in their pockets and if you gave them a mirrorless camera they’d have no idea what to do with it. They’d put it down, and pick up their smartphones instead. So we wanted to bring a high-quality optical system to those people so they can actually benefit from it,” he said.
This decision to make your smartphone the Alice Camera’s ‘mission control’ certainly makes sense. But even though your phone provides the camera’s viewfinder, connectivity and interface, Alice Camera’s final recommended price (£750, around $1,035/AU$1,335) is still pretty high compared to rival vlogging cameras. The Sony ZV-1, for example, launched for $749 / £699 / AU$1,299, while the DJI Pocket 2 arrived last year for $349 / £339 (around AU$623).
So why should creators buy the Alice Camera over those tried-and-tested alternatives? “First of all, it uses the Micro Four Thirds system, so it’ll have a larger sensor than those sorts of cameras, particularly the DJI one. And it’ll have the interchangeable lens mount, which is a huge advantage over many of those cameras. The Micro Four Thirds mount in particular is extremely flexible and you can adapt a whole bunch of other systems to it, including vintage lenses, which is really fun,” said Liam Donovan.
But there have been Micro Four Thirds vlogging cameras before too, so what else is new on the Alice Camera? “The optical system is one major advantage – the other is computational photography. We’re doing the sort of computational photography that you will find in a high-end smartphone, but we’re doing it on our professional quality optical system with a very large sensor,” he added.
“And we think this is going to offer some quite significant advantages to filmmakers in terms of the quality of the image and video they will get straight out of camera. Wrapping that up will be the kind of improvements in workflow that you get from the integration with a smartphone, it will provide a different experience to other cameras,” he claimed.
The theory is certainly good, but how will a startup manage to compete with computational photography giants like Google, Apple and Samsung? After all, Alice Camera doesn’t tap into your phone camera’s processing at all – instead, the images and video are cooked using a combination of Alice’s 10.7MP Four Thirds sensor, Google Edge TPU (Tensor Processing Unit) and deep learning pipeline, then served to the iOS or Android app on your phone for social media back-patting.
Well, the good thing about being a computational photography startup in 2021 is you can stand on the shoulders of trailblazing giants like Google. One of the main reasons why Google was the driving force in the field, on phones like the original Pixel in 2016, was because it published its research as it went along – and that helped it attract the brightest academic minds to work on what was then a new photographic frontier.
This left a trail of academic papers for both direct rivals, and smaller startups like Photogram AI, to take in new directions – like the Alice Camera. “We’re adapting techniques that have been published in academic literature – the Google Pixel, in particular, was very good at publishing high level overviews of their techniques. We’re constantly reading new papers about these new techniques and adapting them, because they were developed for smartphones – so they have different objectives to what we’re trying to do,” said Liam Donovan.
Another challenge for a small startup is getting enough ‘training data’ (the info needed to help train a machine learning model) to make powerful photographic algorithms. But the team behind Alice Camera is again being resourceful here: “We’re training these algorithms in the cloud in big GPUs (Graphics Processing Units) using these datasets – some are publicly available datasets, but we’re also collecting our own,” explains Donovan. “Once those algorithms are trained, we put them on the camera. There’s no need for them to be still connected to the cloud, they’re completely independent,” he added.
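Donovan’s ‘train in the cloud, then run frozen on the camera’ workflow can be sketched in miniature. The model below is a toy single-parameter fit, not anything resembling Photogram AI’s actual networks — the point is just the pattern: training happens once with the full dataset, and inference afterwards needs no cloud connection at all.

```python
def train(samples):
    """The 'cloud' step: fit y = a*x by least squares on (x, y) pairs."""
    num = sum(x * y for x, y in samples)
    den = sum(x * x for x, _ in samples)
    return num / den  # the learned parameter, shipped with the camera

def make_on_device_model(a):
    """The 'camera' step: freeze the parameter into a standalone function."""
    return lambda x: a * x

# Training runs once, on big hardware, with the collected dataset...
a = train([(1, 2.1), (2, 3.9), (3, 6.0)])

# ...then the frozen model runs completely offline from here on.
model = make_on_device_model(a)
```

In the real workflow, the ‘frozen model’ would be a compiled neural network deployed to the camera’s Edge TPU rather than a lambda, but the independence from the cloud is the same.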
So that’s the fuel inside the Alice Camera, but what exactly can its photographic engine do? Somewhat ambitiously, Alice Camera is aiming to use computational techniques to drive almost every aspect of the camera, including photos, video, color science – and even autofocus.
The latter is particularly interesting because it’s a different approach to traditional cameras. Vlogging cameras like the Sony ZV-1 use Hybrid AF systems (a blend of phase-detect and contrast-based autofocus), which means their focus adjustments are determined by how photons hit the sensor. But the Alice Camera is promising to instead use AI techniques – in other words, training the camera to tell it what parts of a scene, like eyes, should be in focus – to help it decide what to focus on, and how to adjust the lens to get there.
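In schematic terms, the AI approach turns ‘where to focus’ into a recognition problem: a learned detector scores regions of the frame, and the camera focuses on the most important one. The sketch below hard-codes hypothetical confidence scores — in a real system they would come from a trained neural network, and a second stage would work out how to drive the lens.

```python
def pick_focus_region(regions):
    """Choose the focus target from detector confidences.

    regions: dict mapping a region name to a (hypothetical) learned
    score for how likely it is to contain a subject worth focusing on.
    """
    return max(regions, key=regions.get)

# Made-up detector output for one frame of a vlog
scene = {
    "background": 0.05,
    "torso": 0.30,
    "left_eye": 0.92,
    "right_eye": 0.88,
}
target = pick_focus_region(scene)
```

The contrast with phase/contrast-detect AF is that nothing here measures light directly — the decision is driven entirely by what the model has learned matters in a scene.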
While the makers of the Alice Camera think this focusing technique has the potential to beat traditional AF systems, it won’t happen right away. “Because of the power of these AI algorithms and the way they are trained and learned, as opposed to dictated by programmers, we think they do have the potential to outperform phase and contrast-detection autofocus,” says Liam Donovan.
‘Potential’ is the key word here, though. Early backers shouldn’t expect a fully mature AI autofocus system out of the box. “I’m not guaranteeing that the Alice Camera will immediately be able to outperform a very high-end mirrorless camera in terms of autofocusing as soon as it’s released. But we do think that the techniques have the potential to and we are going to be using those techniques to do autofocusing”.
This is bold stuff, considering that even the flag-bearers for computational photography – like the latest Google Pixel 5 – still use traditional focusing techniques like phase-detection AF. How well will it work in practice? The proof will be in the photographic pudding later this year.
But aside from AI-based autofocus, one of Alice Camera’s other interesting features is that it’s going to be open source. In theory, this means that developers will be able to tinker with most aspects of the camera, from the interface to its algorithms, to help owners make it more customizable than traditional cameras. Think about the difference between the best Android phone launchers and your average camera menu system, and you can start to imagine the possible benefits.
The caveat here is that it’s not yet clear when the Alice Camera’s code will be open for tinkering business. Photogram AI told us that “we will have released our source code when the camera ships, so it will be open source when it ships”. Which means there may not be much time, if any, for custom features to flourish before the camera lands. Still, the concept of an open source, AI-powered Micro Four Thirds camera is certainly a fascinating one.
For example, the Alice Camera team think it’ll ultimately be possible to train your camera to learn new styles of taking photos and share these with other people. “These AI algorithms can be retrained for a specific individual’s use case. It won’t be like it’s constantly learning all the time, and it won’t just automatically pick up your preferences as you’re using it,” Liam Donovan explained.
“But if you encountered a situation that Alice Camera hadn’t been specifically trained for, you could collect some data and retrain those algorithms so the camera would get better in that specific situation. And equally, we could say we’re going to train a specific wedding photography algorithm that performs much better at a wedding than the general image enhancement and autofocus algorithms.”
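The retraining idea Donovan describes is essentially fine-tuning: start from a general-purpose model and nudge it with a handful of scenario-specific samples. Here’s a toy version using the same single-parameter model style as before — purely illustrative, with made-up ‘wedding’ data standing in for real training examples.

```python
def fine_tune(a, samples, lr=0.01, epochs=50):
    """Refine a model y = a*x with gradient descent on new (x, y) pairs.

    a: the parameter from the general model (the starting point).
    samples: scenario-specific data, e.g. shots from weddings.
    """
    for _ in range(epochs):
        for x, y in samples:
            a -= lr * 2 * (a * x - y) * x  # gradient of squared error
    return a

general = 1.0  # hypothetical parameter from the general model
# A couple of 'wedding' samples pull the model toward a slope of 1.5
wedding = fine_tune(general, [(1.0, 1.5), (2.0, 3.0)])
```

A specialized ‘wedding photography algorithm’ would be this pattern at scale: the general network’s weights as the starting point, plus a targeted dataset to adapt them.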
Of course, these possibilities fall very much into the realm of future-gazing. But it again sparks interesting discussions about what happens when you take a smartphone-like approach to building a camera with a larger sensor.
Naturally, this approach can bring downsides too. The makers of Alice Camera told us that “there’ll be no subscription service for using the core features of the camera”, but that does leave open the possibility that some aspects of the camera might eventually come with another small addition to your monthly outgoings.
Snap to the future?
So is the Alice Camera the future of mirrorless cameras? It certainly has some very interesting ideas. While the traditional camera giants focus on either professional full-frame systems (see Canon, Nikon and Sony) or retreat into retro heritage (like Fujifilm and Olympus), there is a gap for a new camera that marries old-fashioned optics with computational smarts – and a 21st-century interface.
We simply don’t know enough about the Alice Camera to be able to say whether it can fill that space – or even if there is enough demand for that kind of camera. After all, previous attempts to meld the two devices (like the DxO One and Sony QX1) have not taken off.
But this bold attempt at an AI camera certainly takes a sensible approach. By building on some strong existing foundations (the Micro Four Thirds system, Google’s Edge TPU and computational research from the likes of the Google Pixel), it can concentrate on making the software and algorithms needed to glue all those parts together.
Does that mean Super Early Bird backers can expect to get a camera as polished as the best vlogging cameras? Probably not. But could it give you the chance to test-drive some of the most interesting new ideas in camera software and hardware? Almost certainly. Only last month we argued that camera innovation is now mainly for pros, but we’ll be very happy if the Alice Camera fulfills some of its promises and proves us wrong.