Tuesday, June 27, 2017

Sony Integrates ToF-based Face Recognition in Xperia Smartphone

Techcrunch reports that Sony is to demo an integrated SoftKinetic ToF camera inside an Xperia smartphone, paired with KeyLemon software, as a "3D face recognition system that could let users authenticate themselves with one photo." A SoftKinetic PR partially confirms this.

Imatest on Challenges in Automotive Image Quality

Imatest publishes Norman Koren's presentation "Challenges in Automotive Image Quality," given at AutoSens Detroit in May:

Thrown Camera Imaging

The Ishikawa Watanabe Laboratory at the University of Tokyo shows how throwing a high-speed camera into the air can be used to create images from unique points of view:

e2v 2.8um GS Pixel & Sensor Presentation

e2v publishes a presentation of its new Emerald image sensor featuring 2.8um global shutter pixels. A few slides from the presentation:

OmniVision Announces Dual-Camera and Single-Camera Sensors

PRNewswire: OmniVision's OV13A10 and OV13A1Q are a pair of 13MP stacked-die sensors with second-generation 1.0um PureCel Plus pixels. The new sensors are intended to bring 2x optical zoom to front- and rear-facing cameras in mainstream smartphones. The OV13A10 is a telephoto sensor designed specifically for dual-camera applications with a module z-height of less than 6mm, and features zig-zag HDR (zHDR) and PDAF. A customized chief ray angle (CRA) enables the OV13A10 to be used as the tele sensor in a 2x optical zoom camera configuration.


PRNewswire: OmniVision also announces the OV16B10, a 16MP stacked image sensor designed for the next generation of flagship smartphones. It is built on the second-generation 1.12um PureCel Plus-S pixel and also includes PDAF and zHDR. zHDR uses a long and a short exposure within a single frame and is said to increase DR with minimal ghosting artifacts compared with traditional frame-based HDR techniques.
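
For illustration, here is a minimal Python sketch of how a single-frame long/short exposure merge of the kind zHDR describes might work. The exposure ratio, saturation threshold, and blending rule are assumptions for the example, not OmniVision's actual pipeline.

```python
import numpy as np

def merge_long_short(long_exp, short_exp, exposure_ratio=8.0, sat_level=0.9):
    """Merge co-sited long and short exposures captured in one frame.

    long_exp, short_exp: float arrays normalized to [0, 1].
    exposure_ratio:      long/short exposure-time ratio (illustrative).
    sat_level:           level above which the long exposure is distrusted.
    """
    # Bring the short exposure into the long-exposure radiance domain.
    short_scaled = short_exp * exposure_ratio
    # Trust the long exposure until it approaches saturation, then fade
    # over to the scaled short exposure.
    w = np.clip((sat_level - long_exp) / 0.1, 0.0, 1.0)
    return w * long_exp + (1.0 - w) * short_scaled
```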

The OV16B10 has a built-in feature that synchronizes the frames and supports context switching when it is used in dual-camera configurations, supporting image fusion while simplifying camera system architecture. Additionally, the OV16B10 features a gyro interface that reads and synchronizes the motion data from an external gyroscope to enable precise image stabilization for video and still capture.
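
As a rough sketch of what synchronizing external gyro data to frames involves (the function and field names below are assumptions, not OmniVision's interface), gyroscope samples arriving at their own rate can be interpolated onto the frame's row-exposure timestamps before being used to compute a stabilizing warp:

```python
import numpy as np

def gyro_rates_at_rows(gyro_t, gyro_rates, row_t):
    """Interpolate gyroscope angular-rate samples onto per-row exposure
    timestamps of a rolling-shutter frame (illustrative sketch).

    gyro_t:     1-D array of gyro sample times, seconds
    gyro_rates: (N, 3) array of angular rates about x/y/z, rad/s
    row_t:      1-D array of row readout times, seconds
    """
    return np.stack(
        [np.interp(row_t, gyro_t, gyro_rates[:, axis]) for axis in range(3)],
        axis=1,
    )
```

Integrating the interpolated rates over each row's exposure interval gives the per-row rotation that the stabilization stage compensates for.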

The new sensors are currently available for sampling and are expected to enter volume production in Q4 2017.

Monday, June 26, 2017

Multi-Resolution Image Sensor Paper

MDPI Sensors starts publishing a Special Issue on Image Sensors. The first paper in this issue is "A Multi-Resolution Mode CMOS Image Sensor with a Novel Two-Step Single-Slope ADC for Intelligent Surveillance Systems" by Daehyeok Kim, Minkyu Song, Byeongseong Choe, and Soo Youn Kim from Dongguk University-Seoul, Korea. Here is an explanation of the paper's main idea:

Sunday, June 25, 2017

Two Recent Theses

The University of Michigan publishes the PhD thesis "CMOS Sensors for Time-Resolved Active Imaging" by Jihyun Cho. After a quick overview of image sensor architectures and noise, the thesis describes a single-shot FLIM imager and a ToF imager with background light suppression:


Lund and Linköping Universities, Sweden, publish the MSc thesis "Demosaicing using a Convolutional Neural Network approach" by Karin Dammer and Ronja Grosz. The main intention of the CNN approach is the reduction of artifacts:


The results are somewhat mixed: "Using convolutional neural networks is a valid method for demosaicing images with good results and it could replace a method using linear interpolation. Our CNN method outperforms the multilayer perceptron by a difference of 7.14 dB in the peak signal to noise ratio. The convolutional neural network performs well when using L2 and PSNR as loss functions when training the network, however SSIM does not perform as well. Despite the relatively good result the network would benefit from using an error metric that is better at indicating the presence of image artifacts and color errors. The network did not significantly benefit from the residual layer nor a deconvolution layer."
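
For reference, the PSNR figures quoted above compare the demosaiced output against the ground-truth image; below is a minimal Python version of the metric (implementation details such as the peak value are assumptions for the example):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a ground-truth image and a
    demosaiced result, both given as float arrays scaled to [0, peak]."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```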

Saturday, June 24, 2017

Light Field Sensor on a Chip

Light Field Forum notices a recently published SK Hynix patent application, US20170179180 "Light Field Imaging Device and Method for Fabricating the Same" by Jong Eun Kim, proposing integration of the whole light field system into the microlens stack:

New Sony Products

Sony presents the 4/3-inch 10.71MP IMX294CJK image sensor for 4K 120fps video and security applications. Thanks to its large 4.63 µm pixels, the sensor achieves an SNR1s of 0.14 lx, and the use of a Quad Bayer pixel structure is claimed to realize HDR with no time-difference artifacts.
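
In a Quad Bayer array each 2x2 group of pixels shares one color filter, so the raw frame can either be binned back to a conventional Bayer mosaic at half resolution or the pixels within each group can be given different exposures and merged for single-frame HDR. A rough Python illustration of the binning step, assuming the group boundaries fall on even pixel indices:

```python
import numpy as np

def quad_bayer_bin(raw):
    """Average each 2x2 same-color group of a Quad Bayer raw frame,
    producing a half-resolution conventional Bayer mosaic."""
    h, w = raw.shape
    h, w = h - h % 2, w - w % 2  # crop to an even size
    groups = raw[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return groups.mean(axis=(1, 3))
```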


Sony also unveils the 1-inch 20.48MP IMX183CLK-J / CQJ-J monochrome/color sensor pair. The sensors feature a 2.4 µm BSI pixel with a proven record in products for security camera and industrial applications.


Sony also announces 2MP, 6MP, and 12MP monochrome sensors for industrial camera applications based on BSI pixels: IMX290LLR, IMX178LLJ, and IMX226CLJ.

Friday, June 23, 2017

Light Talks about L16 Camera Complexity while Gearing up for Mass Production

Light Co. is gearing up for mass production of the L16 multi-camera: "There are more than 3,000 different parts in the L16 camera, which are supplied from at least six different countries around the world. Once they arrive at our factory in China, it takes 60 sets of hands to assemble each camera. Our manufacturing team follows an extensive 79-step process—and that doesn’t even include the 13 times we check each camera’s functionality."

The company is preparing to start shipping cameras to pre-order customers on July 14th, 2017, less than a month from now.


The company's video explains the camera design complexity:

Tesla Autopilot Management Changes

Techcrunch: Tesla parts ways with Chris Lattner, the former Apple Swift language lead who was brought to the company six months ago to head the autonomous driving division. The company says:

"Chris just wasn’t the right fit for Tesla, and we’ve decided to make a change. We wish him the best.

Andrej Karpathy, one of the world’s leading experts in computer vision and deep learning, is joining Tesla as Director of AI and Autopilot Vision, reporting directly to Elon Musk. Andrej has worked to give computers vision through his work on ImageNet, as well as imagination through the development of generative models, and the ability to navigate the internet with reinforcement learning. He was most recently a Research Scientist at OpenAI.

Andrej will work closely with Jim Keller, who now has overall responsibility for Autopilot hardware and software.
"

Dual Camera Patent Wars Looming?

Digitimes reports that Taiwan-based camera module maker Altek is suing Shenzhen O-film Tech, as well as Beijing Jingdong Century Information Technology, a sales agent in the China market, in the Beijing Intellectual Property Court for infringement of its dual camera module patent.

Altek alleges that the dual-lens module supplied by Shenzhen O-film and used in the Hong-mi Pro smartphone launched by China-based Xiaomi infringes its patent. Beijing Jingdong acts as the China sales agent for the smartphone.

Altek was the first to unveil a smartphone dual-lens camera module in 2014 and is said to supply its dual camera modules for more than 30 smartphone models from HTC, Huawei, ZTE, Coolpad, Nubia, GiONEE, and Smartisan.

InstantFlashNews reports that the patent depicts a camera module with its optical axes adjusted to provide different photography effects. As this dual camera module is manufactured by O-Film, and JD.com is selling the handset, Xiaomi is not being sued for now.

Analog Devices Demos HDR & Contextual Awareness

Analog Devices posts another demo video showing the capabilities of its SNAP sensor:

Thursday, June 22, 2017

Thesis on Open-Source Eye Tracking Database

Presenting a PhD thesis for public discussion a week before the defense seems to be becoming a new trend in Finland. The University of Eastern Finland presents the PhD thesis "Spectral video: application in human eye analysis and tracking" by Ana Gebejes, to be defended on June 30, 2017.

Using a 450nm-950nm spectral video device in eye-tracking, the researchers from the University of Eastern Finland have created a novel – first of its kind – combined spectral video/spectral image database: the SPectral Eye vidEo Database, SPEED.


A YouTube video explains the database creation work:

Caltech Presents OPA Camera

Caltech Professor Ali Hajimiri and his graduate students present an optical phased array (OPA)-based camera: "What the camera does is similar to looking through a thin straw and scanning it across the field of view. We can form an image at an incredibly fast speed by manipulating the light instead of moving a mechanical object." Their OSA paper "An 8x8 Heterodyne Lens-less OPA Camera" explains the principle:

The OPA chip placed on a penny for scale.
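
As background on the underlying principle (the standard phased-array relation, not a result taken from the OSA paper), the direction an OPA looks in is set electronically by the phase increment applied between adjacent elements:

```latex
% Receive/steering angle of a 1-D optical phased array with element pitch d,
% wavelength \lambda, and a linear phase increment \Delta\varphi between
% adjacent elements; E(\theta) is the far-field array factor.
\[
  \sin\theta \;=\; \frac{\lambda\,\Delta\varphi}{2\pi d},
  \qquad
  E(\theta) \;\propto\; \sum_{m=0}^{N-1}
    e^{\,j m \left(\frac{2\pi d}{\lambda}\sin\theta \;-\; \Delta\varphi\right)} .
\]
```

Sweeping the phase gradient therefore scans the "thin straw" across the field of view without any moving parts.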

2016 Gartner Image Sensor Report

InstantFlashNews publishes an August 2016 Gartner report on the image sensor market. A few quotes from the report:

Key Findings:
  • The growth in the biggest market for CMOS image sensors — smartphones — is ebbing. Leading suppliers are focusing on new markets, such as automotive, industrial and wearables, to drive revenue growth.
  • Capturing share in new markets requires new camera technologies, such as 3D cameras, dual cameras, software technologies, and algorithms for image processing and recognition.
  • New safety regulations for automobiles are emerging across key markets, such as North America, Europe, Japan and China. Compliance with these regulations will demand increased use of image sensors in a variety of automobile electronic equipment over the next three years.

CMOS image sensor product managers must:
  • Focus on developing specific products targeting new markets, such as industrial, automotive and wearable devices, and allocate resources to gain share quickly in these markets.
  • Develop computational camera software technologies and algorithms, such as high-speed processing, image recognition (face or object) and AI technologies, which will be adopted into, for example, the smartphone, automotive and industrial markets. And they should integrate them into one chip or a single package.
  • Consider acquiring packaging, lens and camera module suppliers and IP companies (image signal processors, connectivity, memory and artificial intelligence) to provide a complete product to the end customer.

Wednesday, June 21, 2017

TechInsights Survey of Stacked Image Sensors

TechInsights publishes a second blog post based on Ray Fontaine's presentation at IISW 2017. This part covers stacked image sensor technology, from TSV and oxide bonding to the Cu-to-Cu direct bonding used in recent products:

IISW 2017 Awards

IISW 2017, held on May 30 - June 2 in Hiroshima, Japan, has announced a number of awards:

The 2017 Walter Kosonocky Award for the best paper published in 2015-16 goes to:

A 0.13 μm CMOS System-on-Chip for a 512 × 424 Time-of-Flight Image Sensor With Multi-Frequency Photo-Demodulation up to 130 MHz and 2 GS/s ADC
Cyrus S. Bamji, Patrick O’Connor, Tamer Elkhatib, Swati Mehta, Barry Thompson, Lawrence A. Prather, Dane Snow, Onur Can Akkaya, Andy Daniel, Andrew D. Payne, Travis Perry, Mike Fenton, and Vei-Han Chan
Microsoft, USA, IEEE JSSC, Vol. 50, No. 1, pp. 303-318, January 2015


The Pioneering Achievement Award for contributions to the R&D and commercialization of high-performance and high-resolution CCD image sensors is presented to Tetsuo Yamada:


The Exceptional Lifetime Achievement Award for significant contributions to the advancement of solid-state image sensors, including the development of on-chip microlens technology and VOD structures for anti-blooming, is presented to Yasuo Ishihara:


The Best Poster Award goes to:

Fully Depleted, Monolithic Pinned Photodiode CMOS Image Sensor Using Reverse Substrate Bias
Konstantin D. Stefanov, Andrew S. Clarke, James Ivory and Andrew D. Holland. The Open University, UK


Thanks to JN for the docs!

Tuesday, June 20, 2017

Sony Enhances IR Sensitivity by 80% with Pyramidal Structure

Sony publishes an open-access paper in a Nature journal, "IR sensitivity enhancement of CMOS Image Sensor with diffractive light trapping pixels" by Sozo Yokogawa, Itaru Oshiyama, Harumi Ikeda, Yoshiki Ebiko, Tomoyuki Hirano, Suguru Saito, Takashi Oinoue, Yoshiya Hagimoto, and Hayato Iwamoto. From the abstract:

"We report on the IR sensitivity enhancement of back-illuminated CMOS Image Sensor (BI-CIS) with 2-dimensional diffractive inverted pyramid array structure (IPA) on crystalline silicon (c-Si) and deep trench isolation (DTI)... A prototype BI-CIS sample with pixel size of 1.2 μm square containing 400 nm pitch IPAs shows 80% sensitivity enhancement at λ = 850 nm compared to the reference sample with flat surface. This is due to diffraction with the IPA and total reflection at the pixel boundary."


The paper's conclusion:

"A novel BI-CIS with IPA on c-Si surface for light trapping pixel technology is proposed and the prototyping results are demonstrated. Both spectroscopic measurements and demo images show considerable NIR sensitivity enhancement with small spatial resolution degradation. BI-CIS with 400 nm pitch IPA surface and DTI shows 80% improvement in sensitivity, which corresponds to QE of more than 30% at 850 nm for a 3 μm thick c-Si photodetector. Furthermore, it is worth noting that there is still a lot of room for improvement toward the fundamental limit of 4n^2. Additionally, it is important to control surface passivation to minimize the degradation of thermal noise and also further improve pixel isolation to reduce lateral color crosstalk as small as possible."

Monday, June 19, 2017

Intel RealSense CTO Presentation

Augmented World Expo (AWE) publishes a presentation by Intel RealSense group CTO Anders Grunnet-Jepsen:



One of the new RealSense depth cameras

Thanks to ZR for the link!

Samsung CIS Business Update

A Samsung investor presentation gives interesting details on the company's CIS business progress.

The market share data shows the business expansion:


Samsung uses a 28nm CIS process, while Sony extends the life of its 65nm process:


Automotive and mobile are high-priority applications: