IDC TECHNOLOGY SPOTLIGHT

How Computational Photography Can Drive Profits in the Mobile Device Market

February 2015

Adapted from 2014 U.S. Mobile Imaging Survey: Camera and Mobile Device Usage Trends by Christopher Chute, IDC #250348

Sponsored by Pelican Imaging

Over the past decade, mobility has had a profound impact on the camera market. Today, seven out of ten images are captured with mobile devices, leading consumers to view photography as something that originates on the mobile device. While the market for array sensors and computational photography is still in its early stages, IDC predicts that premium device users, who often support a photography hobby through additional spending on related goods and services, will seek to carry their camera usage model into a mobile context, including the use of high-quality aftermarket lenses and other photography tools. By developing cutting-edge imaging capabilities in traditional form factors, mobile device OEMs can meet the requirements of these premium users, thereby driving growth in the most profitable piece of their portfolios.

This Technology Spotlight examines the enablers of computational photography and how it will drive new usage models in mobile imaging that can generate competitive advantage and sales of premium devices. This paper also discusses how array solutions such as the Pelican depth-sensing array can give premium users a seamless transition to a high-end imaging experience on the mobile platform, without the learning curve that may be associated with a stereographic camera system.

Introduction

IDC predicts that over the next five to ten years, consumer imaging will revolve not simply around the ability to capture and share video and images but around the ability to do so in a much more creative context.
For instance, by using new software and hardware capabilities such as array sensors, computational photography will allow mobile devices to capture the depth data of a given scene, resulting in three-dimensional (3D) images that can be used in a variety of creative contexts. IDC predicts that this consumer mindset and behavior around creative image capture and sharing will continue to grow, relegating the traditional camera market to one centered on professionals and enthusiasts. At the same time, these enthusiast photographers will seek to use premium mobile devices in the same contexts as they have traditional cameras.

Mobile photography users have been keen to adopt ancillary hardware and software that deepens their creativity. Camera modes such as high dynamic range (HDR) and slow shutter are driving an increase in camera engagement: those who dive into the camera menu capture more images and tend to purchase accessories that enhance their photography. In the past two years, both established and start-up vendors have responded to this trend by creating a new set of aftermarket imaging hardware designed specifically for mobile phone or tablet use, including lenses, lens systems, mounts and cases, and other accessories. This mobilized hardware provides the same enhancements that aftermarket products once delivered for standalone cameras.

IDC anticipates that the aftermarket imaging hardware opportunity will grow over the next five years. In 2013, $248.5 million of aftermarket lens, case and mount, and other related hardware revenue was generated worldwide, split evenly between the nascent lens market and the imaging-specific cases and mounts market. IDC predicts that over time, lens revenue will outstrip cases and mounts revenue because of the much larger average sales price (ASP) in the lens segment.
We predict that declines in segment ASPs will drive growth, as enthusiast photographers, and eventually mainstream consumers, embrace aftermarket lenses that add value to their image/video capture experience. Total revenue will reach slightly under $1 billion in 2017 and surpass $1.2 billion in 2018.

According to IDC's latest consumer survey, mobile device camera users who use the creative modes in the camera application, along with those who own and use digital single-lens reflex (DSLR) cameras to capture high-end images, are much more likely than the average smartphone owner to plan to purchase a variety of cutting-edge electronics and accessories. For instance, DSLR users are 52% more likely than the average smartphone user to plan to purchase a high-end phablet smartphone in the next year, and smartphone camera mode users are 64% more likely than the average consumer to plan to purchase high-quality aftermarket glass lenses for their smartphone cameras. Clearly, the market for a premium mobile device photography experience can mean sales of higher-margin devices and accessories.

Computational Photography Market Trends

Computational photography involves the ability to capture image depth with software in a way that was never possible before. Because mobile devices have increasingly powerful processors, larger onboard storage capacities, and larger, higher-quality screens, manufacturers can now offer computational photography features that were not feasible in devices with less processing power, such as traditional cameras. For instance, many mobile device OEMs use software to capture an image and then refocus it in playback, allowing the user to place the focal point anywhere inside the picture. This brings new value to every image and allows the user to be much more creative. Computational photography has the potential not only to shift the photography market but also to bring new creativity to video-centric markets.
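The refocus-in-playback idea can be illustrated with a per-pixel depth map: pixels near a chosen focal plane stay sharp, while pixels farther away are blended toward a defocused copy of the image. The sketch below is a simplified illustration of that principle only, not any vendor's actual pipeline; the 3x3 blur, the tolerance parameter, and the toy scene are all assumptions made for demonstration.

```python
import numpy as np

def blur3(img):
    """Mean over a 3x3 neighborhood with edge padding (a crude defocus blur)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def refocus(image, depth, focal_depth, tolerance=0.5):
    """Blend each pixel between the sharp image and a blurred copy, weighted
    by how far its depth lies from the chosen focal plane."""
    blurred = blur3(blur3(image))  # two passes for a stronger blur
    w = np.clip(np.abs(depth - focal_depth) / tolerance, 0.0, 1.0)  # 0 = sharp
    return (1.0 - w) * image + w * blurred

# Toy scene: bright foreground plane (depth 1 m) beside a dark background (depth 5 m).
image = np.zeros((8, 8))
image[:, :4] = 1.0
depth = np.full((8, 8), 5.0)
depth[:, :4] = 1.0

# "Tap" the foreground: foreground pixels stay sharp, background is defocused.
out = refocus(image, depth, focal_depth=1.0)
```

Because the depth map is stored alongside the image, the same capture can be refocused any number of times after the fact simply by re-running the blend with a different focal depth.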
IDC predicts that directors of photography, video production designers, and videographers will start to look at computational photography as a new way to be creative. Experimental techniques are already being used to fuse reality with virtual reality sets. Computational photography could drive sales of next-generation television sets, monitors, and devices, as well as other types of services, in a way that was not possible with the 3D initiatives rolled out several years ago. With computational photography, premium users will be able to capture still pictures more creatively and also have an immersive video experience. The easiest and most cost-effective way for device OEMs to offer these innovative capabilities is by utilizing array sensor technology.

Benefits of Array Sensor Technology

Array sensor technology offers users the promise of a high-end photo experience on the mobile platform without the challenges that may be associated with building a stereographic camera system. A stereo camera system is easy to design and build because it can be manufactured as identical camera pairs. However, this approach has several drawbacks. Stereo cameras yield poor depth resolution for near-field objects unless a more expensive wide-angle lens is used, which may introduce significant distortion and loss of image quality. ISP pipelines can process separate streams but are generally not designed for disparity estimation. A stereo camera system that relies on autofocus actuators in one or more cameras is not as robust as a traditional single-camera system when exposed to thermal or shock impacts. Most importantly, autofocus, the key value driving the premium photography experience, presents significant challenges in the following areas:

- Calibration: It is difficult to get an absolute depth value, there is no anchor point for depth computation, and there may be noise in the depth map due to focus change.
- Depth in video mode: Autofocus complicates video processing.
- Cost: The solution required for synchronizing focus change across cameras is more complex, which can result in higher costs.

Array sensor technology fully addresses each of these challenges and provides the opportunity to use higher-quality depth maps for new, more meaningful applications. The technology provides excellent depth resolution for near-field objects and a robust framework for generating 3D point clouds (the sets of data points used to represent and measure an object's external surface). A mobile device with an array sensor provides depth independent of the primary camera, and the array construction ensures stability in calibration and in everyday situations that involve bumps and shocks. The problems associated with implementing stereo pairs are solved:

- Calibration: No actuator is required in the array sensor.
- Autofocus assist: The array sensor can significantly improve autofocus in the primary camera and reduce shutter lag.
- Depth in video mode: The array sensor can provide high-confidence depth in video mode.
- Depth accuracy: The array sensor provides multidirectional disparity cues, which results in less noise and increases overall depth accuracy.
- Occlusions: Independent stereo pairs that lack autofocus produce occlusion areas; the array sensor has minimal occlusion zones.
- Cost: The array sensor can enable a robust autofocus system that is cost effective.

In many ways, array sensor technology represents a superior solution for incorporating imaging depth innovation into mobile devices and other product form factors.

Consider Pelican Imaging

Pelican Imaging, founded in 2008 and headquartered in Santa Clara, California, is the inventor of an innovative array sensor technology for mobile devices. In 2013, the start-up secured $22 million in Series C funding from Qualcomm through its venture investment group, Qualcomm Ventures, from Nokia Growth Partners, and from Panasonic Venture Group.
Other venture investors include Globespan Capital Partners, Granite Ventures, InterWest Partners, and IQT. Pelican is using this capital to invest in product innovation and in its ability to execute in the mobile photography space. Pelican's depth array sensor augments an OEM's primary camera by acquiring the 3D scene information and colocating it with the primary camera's image and video, giving users the freedom to refocus after the fact, focus on multiple subjects, segment objects, take linear depth measurements, apply filters, change backgrounds, and create 3D models — from any device.

Challenges

Pelican's technology clearly breaks new ground, and the company's investors believe it is positioned to lead the next wave in video and image capture. Yet despite the buzz around the potential benefits of the solution, it has been a long time coming to market. Pelican's move from a primary camera array solution to a depth array sensor solution should address the core OEM concerns, namely image and video quality, bill-of-materials (BOM) cost, and computational load. According to published reports, Pelican is now in discussion with OEMs in multiple markets about this new approach.

However, the nascent market for array cameras and computational photography is a competitive and rapidly evolving space. The imaging market is at a point where vendors can freely innovate by applying new technology to create completely new ways to interact with a camera and the content it produces. Other vendors besides Pelican already see the value in light field photography. Lytro markets a standalone light field camera targeted at high-end enthusiast, prosumer, and professional photographers. And Mobile World Congress 2014 saw smartphone vendors enter the field, with new models from Samsung, Microsoft, and LG offering a "refocus/living image" feature similar to what is found in Pelican's technology.
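Capabilities such as linear depth measurements and 3D models rest on turning a per-pixel depth map into a point cloud. The sketch below shows the standard pinhole back-projection that underlies this step; the intrinsics (fx, fy, cx, cy) and the toy 4x4 scene are assumed values for illustration, not Pelican's actual calibration or pipeline.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) through pinhole intrinsics into
    an N x 3 array of camera-space (X, Y, Z) points, one per pixel."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx                       # X = (u - cx) * Z / fx
    y = (v - cy) * depth / fy                       # Y = (v - cy) * Z / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy example: a flat wall 2 m from a 4x4-pixel camera, 100 px focal length.
cloud = depth_to_point_cloud(np.full((4, 4), 2.0), fx=100, fy=100, cx=1.5, cy=1.5)

# A "linear depth measurement": metric distance between the two end points
# of the wall's top row of pixels.
span = np.linalg.norm(cloud[0] - cloud[3])
```

Once the scene exists as metric (X, Y, Z) points, measuring between any two taps, segmenting an object by depth, or exporting a 3D model are all operations on the same point cloud.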
All this activity suggests that the race is on to develop computational photography–led capabilities that will create demand for premium devices and brand loyalty.

Conclusion

IDC predicts substantial near-term developments in computational photography that will allow consumers to capture "living images": images that can be continually manipulated to form new impressions. The mobile device market will be the foundation for offering this innovative capability. Furthermore, computational photography can allow for next-generation capabilities such as 3D image and video capture without the need for two lens assemblies. Pelican Imaging provides this functionality in a high-resolution light field depth sensor array suitable for integration into a mobile form factor.

The ability to use software-driven computational photography allows device OEMs to improve the photography user experience while maintaining the slim, attractive device form factor that consumers have come to expect. Furthermore, array sensor–driven computational photography allows manufacturers to improve the camera's low-light sensitivity without having to increase the size (and cost) of the sensor and the device itself. Improved low-light sensitivity also gives users a much higher level of satisfaction, resulting in higher rates of device usage and image sharing, which in turn can drive average revenue per user (ARPU). To the extent that Pelican Imaging can address the challenges described in this paper, the company has a significant opportunity for success.

ABOUT THIS PUBLICATION

This publication was produced by IDC Custom Solutions. The opinion, analysis, and research results presented herein are drawn from more detailed research and analysis independently conducted and published by IDC, unless specific vendor sponsorship is noted. IDC Custom Solutions makes IDC content available in a wide range of formats for distribution by various companies.
A license to distribute IDC content does not imply endorsement of or opinion about the licensee.

COPYRIGHT AND RESTRICTIONS

Any IDC information or reference to IDC that is to be used in advertising, press releases, or promotional materials requires prior written approval from IDC. For permission requests, contact the IDC Custom Solutions information line at 508-988-7610 or gms@idc.com. Translation and/or localization of this document require an additional license from IDC. For more information on IDC, visit www.idc.com. For more information on IDC Custom Solutions, visit http://www.idc.com/prodserv/custom_solutions/index.jsp.

Global Headquarters: 5 Speen Street, Framingham, MA 01701 USA  P.508.872.8200  F.508.935.4015  www.idc.com

©2015 IDC