Sony tends to lead the way when it comes to image sensors. Two of its more recent developments are new InGaAs sensors fabricated using its 3D stacking technology and, through a collaboration with French firm Prophesee, an event-based sensor.
The InGaAs sensor was presented at the IEEE International Electron Devices Meeting in December 2019. Sony has been able to shrink the pixel pitch of the sensor down to 5µm by using an architecture in which each pixel in the InGaAs/InP photodiode array is connected to the readout circuit using copper-to-copper bonding, rather than traditional microbumps.
Eric Fox, director of business development at Teledyne Dalsa, told Imaging and Machine Vision Europe that InGaAs pixel pitches are currently restricted to around 10µm and above using conventional indium bumping.
Indium gallium arsenide (InGaAs) is used to build shortwave infrared (SWIR) sensors because the material absorbs light in the 1µm to 2µm wavelength range, where silicon cannot. The photodiode array of a conventional back-illuminated InGaAs sensor is connected to a readout circuit on a silicon wafer using microbumps. These bumps are difficult to scale, so the pixel pitch is relatively large, which limits imaging performance. Sony's copper-to-copper bonding method effectively halves the pixel pitch; the smaller pixels mean greater image definition can be achieved.
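To see why the pitch matters, here is a back-of-the-envelope sketch in Python. The two pitches are the figures quoted above; holding the die area fixed is our assumption. Halving the pitch quadruples the number of pixels that fit in the same sensor area.

```python
def pixels_per_mm2(pitch_um: float) -> float:
    """Number of pixels that fit in one square millimetre at a given pitch."""
    return (1000.0 / pitch_um) ** 2

conventional = pixels_per_mm2(10.0)  # ~10µm pitch, the indium bump limit
stacked = pixels_per_mm2(5.0)        # 5µm pitch with copper-to-copper bonding

print(f"10µm pitch: {conventional:,.0f} pixels per mm^2")
print(f"5µm pitch: {stacked:,.0f} pixels per mm^2 ({stacked / conventional:.0f}x)")
```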
The prototype sensor is a 1,280 x 1,024-pixel array with a 5µm pitch. Thinning the InP layer and optimising the process also yielded high sensitivity and low dark current. The researchers say this work paves the way for high-definition SWIR imaging.
Fox at Teledyne Dalsa said the firm has had customers asking for SWIR imaging for some time. Among the applications he listed were those in the healthcare market, such as optical coherence tomography (OCT) for ophthalmology, along with solar cell inspection. He also noted that SWIR imaging has advantages in general machine vision inspection, for sorting recycled plastic for example, and in food inspection.
Teledyne Dalsa has recently released its first SWIR camera, the Linea SWIR. The line scan camera is based on an InGaAs sensor, offering a 40kHz line rate and a horizontal resolution of 1,024 pixels on a 12.5µm pitch.
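Those line scan figures translate directly into inspection throughput. As an indicative calculation (the 1:1 optical magnification is our assumption, not a Teledyne Dalsa figure), the camera can keep square pixels on an object moving at half a metre per second:

```python
line_rate_hz = 40_000     # 40kHz line rate
pixel_pitch_m = 12.5e-6   # 12.5µm pixel pitch
pixels_per_line = 1024

# For square pixels at 1:1 magnification, the object may move one pixel
# pitch per line period, which sets the maximum transport speed.
max_object_speed = line_rate_hz * pixel_pitch_m  # metres per second
pixel_rate = line_rate_hz * pixels_per_line      # pixels per second

print(f"Max object speed at 1:1: {max_object_speed:.2f} m/s")  # 0.50 m/s
print(f"Pixel rate: {pixel_rate / 1e6:.2f} Mpixel/s")          # 40.96 Mpixel/s
```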
The Linea SWIR camera uses a third-party sensor, which is the starting point of Teledyne Dalsa’s SWIR strategy. The company plans to introduce its own SWIR sensors – Fox said potentially next year – for speciality applications where existing sensors are not suitable.
‘A whole host of different applications have been asking for that functionality and we’ve finally got to a point where we can invest in SWIR,’ Fox said. ‘Maybe more importantly, we think we can provide some differentiated value that others are not able to provide, mostly by virtue of our CMOS capabilities.’
The InGaAs photosensing material in a conventional InGaAs sensor is bonded to a CMOS readout IC. This means much of the sensor design is the same as for a visible sensor; all that is replaced is the front-end, where photon absorption and pre-amplification take place.
Some of the imaging performance is determined by the InGaAs material, but much of it is determined by the design of the CMOS, according to Fox. He said that read noise, dynamic range, linearity, image uniformity and speed are all determined by the CMOS IC. It's this area where Fox believes Teledyne Dalsa can provide some differentiation in SWIR sensors, through its expertise in CMOS.
A potential disruptor of InGaAs is quantum dot material (Matthew Dale wrote about quantum dot SWIR sensors in the Dec19/Jan20 issue). The allure of quantum dots is that they should be able to drive the cost of the sensors down by close to an order of magnitude, according to Fox. The penalty at the moment, he said, is that the quantum efficiencies are very low, five to 10 times lower than InGaAs, and dark currents are higher.
‘So far the imaging community has been unwilling to suffer those penalties in return for lower costs,’ Fox continued. ‘That might change in the next year or two. My expectation is that there will be some applications where, with enough illumination, the penalty in quantum efficiency can be overcome, and users will gladly take the 10 times lower cost. But it will initially be a subset of the applications that can do that.’
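Fox's trade-off can be made concrete with a shot-noise-limited signal-to-noise calculation. In the sketch below the quantum efficiency values are indicative assumptions (the only stated figure is that quantum dots sit five to 10 times below InGaAs), and the higher quantum dot dark current is ignored; the point is that a 10x efficiency deficit costs roughly a factor of three in SNR, which extra illumination can buy back.

```python
import math

def shot_limited_snr(photons: float, qe: float) -> float:
    """Shot-noise-limited SNR: signal is qe*N electrons, noise is sqrt(qe*N)."""
    return math.sqrt(qe * photons)

photons = 10_000  # photons reaching a pixel per exposure (assumed)
print(f"InGaAs (QE 0.8): SNR {shot_limited_snr(photons, 0.8):.0f}")          # ~89
print(f"Quantum dot (QE 0.08): SNR {shot_limited_snr(photons, 0.08):.0f}")   # ~28
# Ten times the illumination restores the InGaAs-level SNR:
print(f"Quantum dot, 10x light: SNR {shot_limited_snr(10 * photons, 0.08):.0f}")
```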
Belgian research institute Imec has developed a prototype thin-film monolithic infrared image sensor based on quantum dot materials, while US-based SWIR Vision Systems is now selling its Acuros CQD VIS-SWIR cameras that use a colloidal quantum dot sensor.
The other technology waiting in the wings is the type-II superlattice, based on III-V detectors with engineered bandgaps grown by molecular beam epitaxy (MBE). Sumitomo Electric introduced such a device a year ago: a SWIR sensor using InGaAs/GaAsSb type-II quantum well structures.
The main event
‘In machine vision, we ride on the coattails of what is happening in other parts of the industry,’ Fox remarked. ‘Historically cell phones are what drove the transition from CCD to CMOS. Automotive is the same sort of thing. A lot of the 3D technologies that are being developed are being pioneered because of automotive and cell phone - and then machine vision is jumping on board.’
He noted that the event-based sensor announced by Sony and Prophesee is another case in point – this was developed with auto makers in mind to give cars better sensing capabilities, but machine vision could benefit from the work.
Sony and Prophesee presented the stacked event-based sensor at the International Solid-State Circuits Conference in San Francisco in February.
The new sensor detects changes in the luminance of each pixel asynchronously and outputs data, including coordinates and time, only for the pixels where a change is detected. The device offers a 4.86μm pixel size and 124dB HDR performance.
The work combines the technical features of Sony’s stacked CMOS image sensor – the small pixel size and excellent low light performance are achieved using copper-to-copper connections – with Prophesee’s Metavision event-based vision sensing technologies, which give fast pixel response, high temporal resolution and high throughput data readout.
Machine vision applications where it could be used include detecting fast-moving objects in a wide range of environments and conditions.
Prophesee released its Metavision VGA event-based sensor in October 2019. The firm has been working with camera maker Imago Technologies, which has integrated Prophesee's sensor inside its VisionCam smart camera. The new Sony-Prophesee sensor has a resolution of 1,280 x 720 pixels, a fill factor of 77 per cent, and a power consumption of 73mW for the 300MEPS version.
While a frame-based sensor outputs entire images at fixed intervals according to the frame rate, an event-based sensor selects pixel data asynchronously using a row selection arbiter circuit. By adding time information, at 1μs precision, to the pixel address where a change in luminance has occurred, event data can be read out with high time resolution.
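The pixel-level behaviour can be sketched in a few lines of Python. This is a toy, frame-driven approximation of what is, in the real device, a continuous-time circuit; the threshold value and the frame-based input are assumptions made for illustration. Each pixel remembers the log luminance at its last event and fires a signed event when the level moves past a threshold:

```python
import numpy as np

def events_from_frames(frames, timestamps_us, threshold=0.15):
    """Toy delta-modulation model of an event-based pixel array.

    frames: sequence of 2D intensity arrays; timestamps_us: per-frame times.
    Returns (t_us, x, y, polarity) tuples only for pixels whose log intensity
    has moved more than `threshold` since that pixel's last event.
    """
    ref = np.log(frames[0] + 1e-6)  # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps_us[1:]):
        logi = np.log(frame + 1e-6)
        delta = logi - ref
        ys, xs = np.nonzero(np.abs(delta) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if delta[y, x] > 0 else -1))
            ref[y, x] = logi[y, x]  # reset the reference at this pixel only
    return events
```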
A high output event rate of 1.066Geps has been achieved by compressing the event data, i.e. the luminance change polarity, time, and x/y coordinate information for each event. The sensor's pixel chip (top) and logic chip (bottom) incorporate signal processing circuits that detect changes in luminance using an asynchronous delta modulation method; each pixel of the two chips is electrically connected by a copper-to-copper connection in a stacked configuration.
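The paper does not disclose the on-chip encoding, but the compression idea can be illustrated with a hypothetical packing of one event into a 32-bit word. The field widths below are our invention, chosen only so that the 1,280 x 720 coordinates, the polarity, and a short timestamp delta all fit:

```python
def pack_event(x: int, y: int, polarity: int, dt_us: int) -> int:
    """Hypothetical 32-bit event word: 11-bit x, 10-bit y, 1-bit polarity,
    10-bit timestamp delta in microseconds (relative to a shared time base)."""
    assert 0 <= x < 1280 and 0 <= y < 720 and 0 <= dt_us < 1024
    return (x << 21) | (y << 11) | ((1 if polarity > 0 else 0) << 10) | dt_us

def unpack_event(word: int):
    x = (word >> 21) & 0x7FF
    y = (word >> 11) & 0x3FF
    polarity = 1 if (word >> 10) & 1 else -1
    dt_us = word & 0x3FF
    return x, y, polarity, dt_us
```

Even at four bytes per event, 1.066Geps would be more than 4GB/s of raw data, which suggests why real readouts typically compress further, for instance by sharing a timestamp or row address across groups of events.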
The 124dB HDR performance is made possible by placing only the back-illuminated pixels and part of the N-type MOS transistors on the pixel chip (top), thereby allowing an aperture ratio of up to 77 per cent. High-sensitivity and low-noise technologies from Sony enable event detection in low-light conditions (40mlx).
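To put the 124dB figure in context, decibels of dynamic range convert to a contrast ratio via dB = 20·log10(ratio):

```python
import math

ratio = 10 ** (124 / 20)                 # dB = 20*log10(ratio)
print(f"{ratio:,.0f}:1")                 # ~1,585,000:1
print(f"~{math.log2(ratio):.1f} stops")  # ~20.6 photographic stops
```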
‘Event-based sensors have been talked about since the early days of CMOS,’ Fox said. ‘This technology has been demonstrated for 15 years, but these sensors haven’t made a commercial impact yet – if Sony and Prophesee bring those devices to market this will change. I expect that the initial focus will be on automotive.’
Driving development
The push for cars with greater autonomy has led to a tremendous amount of investment in different sensing modalities, not least the light ranging technique lidar. At the Image Sensors Europe conference in London from 11 to 12 March, lidar was one of the session tracks, and Smithers, the event organiser, also now runs dedicated automotive sensing conferences because of the amount of work being done in this area.
Lidar devices need sensitive detectors to give them the range and accuracy required to guide a vehicle, and one type of sensor that shows promise for lidar is the single-photon avalanche diode (SPAD). At the 2018 Image Sensors Europe conference, a member of the audience asked when we would see a megapixel SPAD sensor. At the time, SensL, now part of On Semiconductor, was finishing work on a 400 x 100-pixel SPAD array. Fast forward to this year's conference, and Professor Edoardo Charbon of the Swiss institute EPFL presented MegaX, the first reported megapixel SPAD image sensor.
The advantage of SPADs is that they are fast – MegaX has 3.8ns time gating and a 24kfps frame rate – which makes them ideal for 3D imaging and lidar. They are also natively digital, which keeps processing simple. Until now, however, large-format SPAD cameras have been difficult to produce because of challenges relating to pixel pitch, reading out information from large numbers of pixels, power dissipation, and handling the data.
The group that developed MegaX has shown that the sensor can capture 2D or 3D scenes over a 2m range with a depth least significant bit (LSB) of 5.4mm and a precision better than 7.8mm, according to the paper submitted to arXiv.org (arXiv:1912.12910). Extended dynamic range was demonstrated in a dual-exposure operation mode, and spatially overlapped multi-object detection was demonstrated experimentally in single-photon time-gated time of flight for the first time.
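A quick sanity check on those numbers uses the standard time-of-flight relation, depth = c·t/2 (the conversion below is ours, not the paper's): a 5.4mm depth step corresponds to a timing bin of roughly 36 picoseconds.

```python
C = 299_792_458.0  # speed of light in m/s

def depth_from_tof(round_trip_s: float) -> float:
    """Standard ToF relation: the pulse travels to the target and back."""
    return C * round_trip_s / 2.0

# The 5.4mm depth LSB implies a timing quantisation of ~36ps:
time_bin_s = 2 * 5.4e-3 / C
print(f"time bin: {time_bin_s * 1e12:.0f} ps")                      # ~36 ps
print(f"check: {depth_from_tof(time_bin_s) * 1e3:.1f} mm per bin")  # 5.4 mm
# A 2m scene depth corresponds to a ~13.3ns round trip:
print(f"round trip over 2m: {2 * 2.0 / C * 1e9:.1f} ns")
```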
Charbon said during his presentation at Image Sensors Europe that ‘3D stacking could multiply the impact of these detectors with parallelism and machine learning at the forefront’, and that ‘work will now focus on power reduction and miniaturisation’.
Charbon also pointed to new materials being investigated for producing SPAD arrays, including germanium-on-silicon and InGaAs to extend the detection range outside visible wavelengths. Professor Douglas Paul of the University of Glasgow spoke at the conference about Ge-on-Si SPADs for lidar that his group was developing.
It will probably be a while before SPAD sensors find their way into machine vision cameras, but all the work in the automotive space on lidar and 3D time of flight (ToF) in general could one day gain traction in industrial applications. Imaging in 3D is an important area for machine vision; new time-of-flight sensors, again from Sony, but also now from Teledyne e2v, are becoming available to industrial camera manufacturers. Teledyne e2v recently released its Bora ToF sensor, which gives 1.3-megapixel resolution at 30fps.
Machine vision is still benefiting hugely from the advances being made in image sensors designed for consumer electronics and, now to a much greater degree, automotive. And Sony still seems to have a big role to play.