Wireless Photography™ and CMOS Sensors: The Next Adoption Enablers for Consumer Imaging

Belmont, CA, May 10, 2001 -- For the past two years Imerge Consulting Group has been tracking and forecasting the rise of Wireless Photography™ as one of the next big enablers for mass adoption of consumer digital imaging. Many pundits (Imerge included) have had difficulty predicting just how the adoption of Wireless Photography™ embedded in devices will unfold. A year ago our questions were broader and centered on usage: Will wireless digital cameras lead the charge of adoption, or will digital camera users want to connect their cameras to cellular phones and use the phone as the transmission device? How long before cellular phones become image-enabled in the U.S.? What about Wireless Photography™-enabled handhelds?
Suddenly, within the past six months, we have gained a clearer understanding of how Wireless Photography™ will most likely unfold, thanks to DoCoMo, J-Phone, i-mode and Japanese carriers like KDDI. Japan has provided us a model to emulate for further development and adoption. Clearly Wireless Photography™ is being driven by the wireless phone industry, by virtue of the sheer volume of its worldwide installed base. It is also clear that Nokia, Ericsson, Motorola and Sony are right now developing imaging-enabled wireless phones. Some will be developing on a platform of second-generation WAP, some will be based on i-mode, using GSM and GPRS (General Packet Radio Service), and some will be utilizing 3G or 4G networks, depending on what part of the world you live in.
For all the hype surrounding development, there are still some major hurdles to overcome, and currently, for every problem solved, there are more questions than answers. Foremost for the U.S. is the question of bandwidth: is the Wireless Photography™ phenomenon in wireless phones exclusive to Japan because of the broader bandwidth provided by carriers such as KDDI? How long will it take before the U.S. has this broader bandwidth? What about digital copyright management when sharing images from device to device, device to data center and data center to device? How long will it take for CMOS sensors to be affordable in cell phones? And the bigger question: will U.S. users see the value proposition in a low-resolution CMOS capture device built into their phone, at least enough to pay an added subscription for this added service?
What about the problem of monetizing low-res images for print fulfillment? Is compression/decompression the answer? And will users pay, on a monthly or yearly subscription basis, for an enhanced service they will most likely use only 2% of the time? Clearly these hurdles provide areas of opportunity for smaller technology developers who can turn on a dime.
Two such developers are LightSurf and Pixim, both located in the San Francisco Bay Area. LightSurf's focus is on developing what it calls micro-transactions for Wireless Photography™-enabled cellular phone systems. Micro-transactions are small, singular consumer-to-business or business-to-business transactions that would take the place of a monthly or yearly subscription.
Philippe Kahn, CEO of LightSurf, believes that the value proposition for transmitting images over wireless devices will not be perceived as high enough to support a full yearly or monthly subscription, and that users have no idea how much they will want to transmit images from device to device or device to data center. Rather than miss out on this opportunity altogether, Kahn has taken his business-to-business software expertise, including database work as the founder of Borland, and found a way to marry it to the wireless phone industry.
From a user perspective, you would then pay only for what you transmit, at $0.15 a pop. If you don't think transmitting images adds enough value to warrant locking yourself into a monthly subscription, you could jump in and try it on a pay-as-you-go basis. Great concept, right?
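To see where pay-as-you-go stops making sense for the user, here is a rough break-even sketch. Only the $0.15 per-transmission price comes from the discussion above; the flat monthly fee is a hypothetical figure chosen purely for illustration.

```python
# Back-of-the-envelope break-even: pay-per-use at $0.15 per transmitted image
# versus a flat monthly subscription. The $4.99 monthly fee is a hypothetical
# assumption; only the $0.15 per-transmission price comes from the article.

PER_IMAGE_FEE = 0.15          # dollars per transmitted image (from the article)
MONTHLY_SUBSCRIPTION = 4.99   # hypothetical flat monthly fee (assumption)

break_even = MONTHLY_SUBSCRIPTION / PER_IMAGE_FEE
print(f"Break-even: about {break_even:.0f} images per month")
# Below roughly 33 images a month, pay-as-you-go is cheaper for the user;
# above it, a subscription wins -- which is exactly the adoption question here.
```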
When I first heard about this during the PMA time frame, I thought the notion had merit. What a great way for users to get a taste of wirelessly transmitting images. But the more I thought about it, the less convinced I became that it would be a profitable revenue stream for all parties in the chain.
Philippe explained that his firm was working with "the big carriers" to solve the micro-transaction billing methodology (tracking each micro-transaction is a daunting task), and it occurred to me that the big carriers are the only entities that will really see large revenues from this model.
With the device vendors getting only a small incremental percentage of the overall revenue stream, the carriers getting the lion's share, and LightSurf taking its own cut of each transaction, what's left of the micro-transaction for device vendors is micro-revenue. Massive numbers of images would have to be exchanged for this to be a profit center for vendors. And if that were the scenario, the micro-transaction model would be replaced by monthly subscriptions and LightSurf would be out of the picture, so to speak.
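To make the "micro-revenue" point concrete, here is an illustrative split of a single $0.15 transaction. The percentages are hypothetical assumptions; the article does not disclose any actual revenue-sharing terms.

```python
# Illustrative split of a single $0.15 micro-transaction. All percentages are
# hypothetical assumptions for illustration; actual terms are not disclosed.

TRANSACTION = 0.15
carrier_share   = 0.70  # assumed: the carrier takes the lion's share
lightsurf_share = 0.20  # assumed: platform/billing cut
vendor_share    = 1.0 - carrier_share - lightsurf_share  # remainder for the device vendor

vendor_revenue = TRANSACTION * vendor_share
print(f"Device vendor keeps about ${vendor_revenue:.3f} per transaction")
# At roughly a penny and a half per image, a vendor needs enormous transaction
# volume before this becomes a meaningful profit center.
```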
A micro-transaction model of sorts is already seen in Japan, where the most popular cell phone transaction is users downloading ring tones to their phones, which can be used to designate the type of call coming in: one for each family member, the boss, the stockbroker, and so on.
I am also a firm believer that IP and new technology cannot be "pushed" across platforms and embraced by competing vendors unless it is all-inclusive and does not impose too big a "hit" on a product's cost, speed, size or development time relative to the value it brings. We have only to look at the recent failure of FlashPoint and FlashPix to understand this obstacle. The micro-transaction will have to be more than a micro profit center to be adopted by all parties.
The second developer of note is Pixim, located in Silicon Valley. The technology behind Pixim dates back to 1996, when an unlikely triad of vendors, Kodak, HP and Canon, came together to jointly invest in a Stanford University project to develop 0.18 micron CMOS imaging sensor technology, spearheaded by Dr. David Yang, Dr. Brian Wandell and Dr. Abbas El Gamal. The result was a unique approach to the design of a CMOS imaging sensor.
Today, Stanford University, Pixim, HP, Kodak and Canon share the patents for the technology developed from that early investment at Stanford, with Pixim driving the licensing agreements and pushing the technology further. In return for their early joint investment, Kodak, HP and Canon get to use the technology that Pixim continues to develop.
Today, the remaining 30 global CMOS sensor developers face a dilemma: introduce their CMOS sensors into the low-end consumer digital camera space (such as Conexant, Kodak/Motorola, Intel, Photobit, OmniVision), stick to the high end (such as National Semiconductor), or go after the embedded device market, which includes cell phones, laptops, handhelds, medical (camera-on-a-pill), automotive (rear-view sensors) and desktop PCs. As I mentioned, the cell phone and laptop PC approach currently seems the most popular, simply because of the sheer volume of these opportunities.
Cahners In-Stat forecast that the imaging sensor market for all categories, including automotive, industrial and consumer applications, would reach 200M sensor units worldwide by 2004. That forecast was made in 1999, before the recent economic downturn. An Imerge Consulting Group forecast completed in 1999 puts current (2001) shipments at 10.4M units worldwide for all categories. Still, most vendors are aggressively going after the cell phone and laptop markets (OmniVision has or will have design wins with Kyocera, Intel with Dell, Conexant with Nokia).
Currently, mobile PCs are the only PCs selling well in the PC channel (Imerge/CompUSA). By Q4 of this year or Q1 2002, laptop PCs from Dell, Compaq, Apple and Gateway are likely to incorporate tiny CMOS image sensors, in either CIF resolution (352 x 288) or VGA (640 x 480), as dual cam sensors. Cell phones and handhelds will follow shortly after, in 2002. Dell alone intends to ship 2M units of CMOS-sensor laptop PCs in 2001. We also believe the 200M unit forecast will more than double by 2007.
Most CMOS sensors have an obstacle that developers have not been able to overcome: because the readout time on a typical CMOS sensor is so slow, artifacts become visible. That is one reason most low-end CMOS sensor cameras cannot provide fill flash. The APS reset, exposure and readout travel serially through the frame, which causes lines and other artifacts in the image. Most CMOS sensors cannot capture moving objects without distortion, and this greatly limits their viability as a point-and-shoot camera technology. Fixed pattern noise (FPN) caused by digital I/O and internal logic also becomes visible in the image with conventional CMOS technology. The result: objects or people look like they are leaning when captured in motion, lines appear in the image, and the red channel typically has trouble rendering proper colors, which is why current CMOS sensor images tend to look magenta-biased.
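The "leaning" artifact follows directly from the serial readout described above: each row is sampled at a slightly later instant, so a moving object has shifted by the time the lower rows are read. The toy simulation below illustrates that effect; the timing and speed values are arbitrary illustration parameters, not real sensor figures.

```python
import numpy as np

# Toy simulation of serial row-by-row readout: a vertical bar moving
# horizontally is sampled one row at a time, so later rows see the bar
# further along and the captured object appears to "lean".

HEIGHT, WIDTH = 48, 64
ROW_READ_TIME = 1.0   # time to read one row (arbitrary units)
OBJECT_SPEED = 0.3    # columns the object moves per unit time (arbitrary)

def scene_at(t):
    """Scene containing a 4-pixel-wide vertical bar whose left edge moves with time."""
    frame = np.zeros((HEIGHT, WIDTH))
    left = int(t * OBJECT_SPEED) % WIDTH
    frame[:, left:left + 4] = 1.0
    return frame

# Serial readout: each row is captured at a slightly later time.
captured = np.zeros((HEIGHT, WIDTH))
for row in range(HEIGHT):
    captured[row] = scene_at(row * ROW_READ_TIME)[row]

# The bar's left edge now drifts with row index -- the "leaning" artifact.
top_edge = np.argmax(captured[0])
bottom_edge = np.argmax(captured[-1])
print(f"Bar position: row 0 -> col {top_edge}, row {HEIGHT - 1} -> col {bottom_edge}")
```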
What differentiates Pixim's technology from other 0.18 micron CMOS sensor technologies is that, along with a microlens on each pixel, there is also an A/D (analog-to-digital) converter just behind the photodiode on each pixel. Pixim calls this DPS, or digital pixel sensor. It allows a massively parallel design with digital data readout from each pixel, providing a digital signal throughout the read process.
It means that the signal noise normally picked up along the analog path on the way to a shared A/D converter is non-existent. The result is dramatic. A VGA image (relatively low resolution), which would otherwise show fringing on edges and in high-contrast transition areas, looks as smooth as a 10- or 12-bit megapixel image. Details become apparent in shadow areas, as in a 12-bit image. Highlight areas do not immediately blow out but retain detail. It also means speed, because of the massively parallel nature of the design.
Recent tests at Stanford have shown a capture-rate capability for this DPS sensor of 14,000 captures per second. This means a camera built around the sensor can multisample the image with no "hit" to the camera's click-to-click speed. Because of its capture speed, none of the artifacts you normally see with a CMOS sensor are visible. Fill flash becomes possible. Objects don't lean.
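A brief sketch of what per-pixel multisampling can buy you, assuming the sensor can take many short digital exposures in the time of one conventional frame (the 14,000 captures per second quoted above). The combination rule, accumulating short non-saturating samples so highlights keep detail, is a generic illustration of the idea, not Pixim's actual algorithm.

```python
import numpy as np

# Illustration of multisampling for dynamic range: many short exposures are
# read out digitally and accumulated, so bright areas that would clip a
# single long exposure keep their detail. Parameters are illustrative only.

rng = np.random.default_rng(0)
true_scene = rng.uniform(0.0, 8.0, size=(4, 4))  # radiance; >1.0 clips a single exposure
FULL_WELL = 1.0                                   # saturation level per sample
N_SAMPLES = 8                                     # short exposures per final frame

# Each short exposure collects 1/N of the signal, clipped at full well.
sample = np.minimum(true_scene / N_SAMPLES, FULL_WELL)

# Single long exposure: bright areas clip ("blow out").
single_exposure = np.minimum(true_scene, FULL_WELL)

# Multisampled result: accumulate the digital samples, preserving highlights.
multisampled = sample * N_SAMPLES

print("clipped pixels, single exposure:", int(np.sum(single_exposure >= FULL_WELL)))
print("clipped pixels, multisampled:   ", int(np.sum(sample >= FULL_WELL)))
```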
Another byproduct of the 0.18 micron design is lower power consumption. The entire camera or Wireless Photography™ system can run at 1.8 volts instead of the current 3.3 volts. The benefit is that one, or possibly two, AA batteries will be required instead of four, and power currently dedicated to the microprocessor for post-capture image processing and buffering can be dedicated to the LCD for viewing. The result is longer battery life and enhanced usability for consumers.
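As a first-order sanity check on the voltage claim: CMOS dynamic power scales roughly with the square of the supply voltage (P ≈ C·V²·f), so the drop from 3.3 V to 1.8 V alone cuts dynamic power to about 30% at the same clock and capacitance. The voltages come from the paragraph above; the quadratic scaling is a standard approximation, not a Pixim-supplied figure.

```python
# First-order CMOS dynamic power scaling: P ~ C * V^2 * f.
# Dropping the supply from 3.3 V to 1.8 V at the same clock and capacitance
# cuts dynamic power to roughly 30% of its former value, which is consistent
# with running on one or two AA cells instead of four.

V_OLD, V_NEW = 3.3, 1.8
ratio = (V_NEW / V_OLD) ** 2
print(f"Dynamic power at 1.8 V is roughly {ratio:.0%} of the 3.3 V figure")
```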
Currently, development of this technology targets VGA resolutions, with a timeline for product viability in late 2001. By early 2002, 1.3 MP resolutions will be available; by late 2002, 2.1 MP; and by 2003, 3.3 MP resolutions are expected to be market viable.
According to Pixim, a VGA image using its technology will have image quality capable of producing a 4" x 6" print. This goes a long way toward proving out a print revenue stream from these shared images without tremendous compression/decompression. The time is now for vendors and developers to start evaluating these opportunities, because every barrier presents a potential opportunity. Already we're seeing CMOS and flat-panel automotive applications introduced in high-end Japanese cars (provided by Gentex and Johnson Controls).
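The arithmetic behind that print claim is simple: a 640 x 480 frame spread over a 4" x 6" print lands at roughly 107 x 120 pixels per inch. Whether that density yields an acceptable print is Pixim's claim to prove; the sketch below only shows the numbers involved.

```python
# Pixel density of a VGA frame printed at 4" x 6".
WIDTH_PX, HEIGHT_PX = 640, 480
PRINT_LONG_IN, PRINT_SHORT_IN = 6.0, 4.0

ppi_long = WIDTH_PX / PRINT_LONG_IN     # ~107 ppi along the 6" edge
ppi_short = HEIGHT_PX / PRINT_SHORT_IN  # 120 ppi along the 4" edge
print(f"{ppi_long:.0f} x {ppi_short:.0f} ppi on a 4 x 6 print")
```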
www.imergeconsulting.com
Ron Tussy, Principal Analyst
© 2001 Imerge Consulting Group LLC