A new computer vision method helps speed up the screening of electronic materials

Increasing the performance of solar cells, transistors, LEDs and batteries will require better electronic materials made from new compositions not yet discovered.

To accelerate the search for advanced functional materials, scientists use AI tools to identify promising candidates from among hundreds of millions of possible chemical formulations. In tandem, engineers are building machines that can print hundreds of material samples at once, based on chemical compositions flagged by the AI search algorithms.

To date, however, there has been no similarly quick way to confirm that these printed materials actually perform as expected. This last step of materials characterization has been a major bottleneck in the advanced materials screening pipeline.

Now, a new computer vision technique developed by MIT engineers is greatly accelerating the characterization of newly synthesized electronic materials. The technique automatically analyzes images of printed semiconductor samples and quickly estimates two key electronic properties for each sample: band gap (a measure of electron activation energy) and stability (a measure of longevity).

The new technique accurately characterizes electronic materials 85 times faster than the standard benchmark approach.

The researchers intend to use the technique to accelerate the search for promising solar cell materials. They also plan to incorporate it into a fully automated materials screening system.

“Ultimately, we envision incorporating this technique into the autonomous laboratory of the future,” says MIT graduate student Eunice Aissi. “The whole system would allow us to give a computer a materials problem, have it predict potential compounds, and then run 24/7 creating and characterizing those predicted materials until it arrives at the desired solution.”

“The application space for these techniques ranges from improving solar energy to transparent electronics and transistors,” adds MIT graduate student Alexander (Aleks) Siemenn. “It really covers the whole range of areas where semiconductor materials can benefit society.”

Aissi and Siemenn detail the new technique in a study that appears today in Nature Communications. Their MIT co-authors include graduate student Fang Sheng, postdoc Basita Das, and mechanical engineering professor Tonio Buonassisi, along with former visiting professor Hamid Kavak of Cukurova University and visiting postdoc Armi Tiihonen of Aalto University.

Power in optics

Once a new electronic material is synthesized, its properties are typically characterized by a “domain expert” who examines one sample at a time using a benchtop tool called a UV-Vis spectrophotometer, which scans different colors of light to determine where the semiconductor begins to absorb more strongly. This manual process is precise but time-consuming: a domain expert typically characterizes about 20 material samples per hour, a snail’s pace compared to some printing tools that can create 10,000 different material combinations per hour.

“The manual characterization process is very slow,” says Buonassisi. “It gives you a high degree of confidence in the measurement, but it isn’t matched to the speed at which you can deposit matter on a substrate today.”

To speed up the characterization process and remove one of the biggest hurdles in materials screening, Buonassisi and his colleagues turned to computer vision, a field that uses computer algorithms to quickly and automatically analyze the optical features in an image.

“There is power in optical characterization methods,” notes Buonassisi. “You can get information very quickly. There is a wealth of information in an image, across many pixels and wavelengths, that a human simply cannot process, but a machine learning program can.”

The team realized that certain electronic properties—namely, band gap and stability—can be estimated from visual information alone, if that information is captured in sufficient detail and interpreted correctly.

With this goal, the researchers developed two new computer vision algorithms to automatically interpret images of electronic materials: one to estimate the band gap and the other to determine stability.

The first algorithm is designed to process visual data from highly detailed hyperspectral images.

“Instead of a standard camera image with three channels (red, green, and blue, or RGB), a hyperspectral image has 300 channels,” explains Siemenn. “The algorithm takes that data, transforms it, and calculates the band gap. This process is extremely fast.”
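For readers who want a concrete sense of what such a calculation can look like, below is a minimal Python sketch of one common way to estimate a band gap from a hyperspectral absorbance spectrum: a Tauc-style linear extrapolation of the absorption edge. It is an illustration under simplified assumptions, not the team’s published algorithm, and the helper name estimate_band_gap and the synthetic spectrum are hypothetical.

```python
import numpy as np

def estimate_band_gap(wavelengths_nm: np.ndarray, absorbance: np.ndarray) -> float:
    """Estimate a direct band gap (in eV) from an absorbance spectrum via a Tauc-style fit."""
    energy_ev = 1239.84 / wavelengths_nm           # convert each channel's wavelength to photon energy
    tauc = (absorbance * energy_ev) ** 2           # (alpha * h * nu)^2 is roughly linear above a direct gap
    edge = tauc > 0.8 * tauc.max()                 # keep only the steep absorption edge
    slope, intercept = np.polyfit(energy_ev[edge], tauc[edge], 1)
    return -intercept / slope                      # x-intercept of the linear fit = band gap estimate

# Synthetic example: 300 channels between 400 and 1100 nm with an absorption edge placed at 1.6 eV.
wavelengths = np.linspace(400.0, 1100.0, 300)
energies = 1239.84 / wavelengths
absorbance = np.sqrt(np.clip(energies - 1.6, 0.0, None)) / energies
print(f"Estimated band gap: {estimate_band_gap(wavelengths, absorbance):.2f} eV")
```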

The second algorithm analyzes standard RGB images and assesses material stability based on visual changes in material color over time.

“We found that color change can be a good indicator of the rate of degradation in the material system we are studying,” says Aissi.

Material composition

The team used the two new algorithms to characterize the band gap and stability of about 70 printed semiconductor samples. They used a robotic printer to deposit the samples onto a single slide, like cookies onto a baking sheet, with each deposit made from a slightly different combination of semiconductor materials. In this case, the team printed different ratios of perovskites, a type of material expected to be a promising candidate for solar cells, though it is also known to degrade quickly.

“People are trying to change the composition—add a little bit of this, a little bit of that—to make (perovskites) more stable and more powerful,” Buonassisi says.

Once they had printed 70 different perovskite compositions onto a single slide, the team scanned the slide with a hyperspectral camera. They then applied an algorithm that visually “segments” the image, automatically isolating the samples from the background. They ran the new band gap algorithm on the isolated samples and automatically computed the band gap of each one. The entire band gap extraction process took about six minutes.
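As an illustration of this “segment, then characterize” step, here is a minimal sketch that labels droplets in a hyperspectral image by simple thresholding and averages each droplet’s spectrum. The real system’s segmentation is more sophisticated; the array layout, threshold value, and helper name segment_and_average are assumptions made for this example.

```python
import numpy as np
from scipy import ndimage

def segment_and_average(cube: np.ndarray, threshold: float = 0.1):
    """Label each printed droplet in a (height, width, channels) cube and return one mean spectrum per droplet."""
    intensity = cube.mean(axis=2)                 # collapse the spectral axis to a single intensity plane
    mask = intensity > threshold                  # droplets stand out against the bare slide background
    labels, n_samples = ndimage.label(mask)       # connected-component labeling of the droplets
    spectra = []
    for i in range(1, n_samples + 1):
        pixels = cube[labels == i]                # all pixels of droplet i, shape (n_pixels, channels)
        spectra.append(pixels.mean(axis=0))       # average spectrum of droplet i
    return labels, np.array(spectra)
```

Each averaged spectrum could then be passed to a band gap estimator, such as the estimate_band_gap sketch above, one call per printed composition.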

“Normally, it would take a domain expert several days to manually characterize the same number of samples,” says Siemenn.

To test stability, the team placed the same slide in a chamber in which they varied environmental conditions such as humidity, temperature and light exposure. They used a standard RGB camera to take a picture of the samples every 30 seconds for two hours. They then applied a second algorithm to images of each sample over time to estimate the degree to which each drop changed color or degraded under different environmental conditions. The algorithm eventually produced a “stability index,” or measure of the durability of each sample.
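To make the idea concrete, here is a simple proxy for such a stability score, not the paper’s exact formula: it accumulates each sample’s color drift away from its initial color over the time-lapse, so a rapidly degrading droplet earns a larger score. The array layout and the helper name stability_index are assumptions for this sketch.

```python
import numpy as np

def stability_index(colors: np.ndarray, dt_seconds: float = 30.0) -> np.ndarray:
    """Return one degradation score per sample (larger = more color change, i.e., less stable)."""
    drift = np.linalg.norm(colors - colors[0], axis=2)   # per-frame color distance from the first frame
    return (drift * dt_seconds).sum(axis=0)              # accumulate drift over the full time-lapse

# Example: 240 frames (two hours at 30-second intervals), 70 samples, 3 color channels.
rng = np.random.default_rng(0)
colors = rng.random((240, 70, 3))                        # hypothetical per-sample mean RGB values over time
print(stability_index(colors).shape)                     # -> (70,)
```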

As a check, the team compared their results with manual measurements of the same droplets made by a domain expert. Compared with the expert’s benchmark estimates, the team’s band gap results were 98.5 percent accurate, their stability results were 96.9 percent accurate, and both were obtained 85 times faster.

“We were constantly shocked by how these algorithms were able to not only increase the speed of characterization, but also get accurate results,” says Siemenn. “We envision incorporating this into the automated materials pipeline we’re developing in the lab, so it can run in a fully automated way, using machine learning to guide where we want to discover these new materials, print them, and then actually characterize them, all with very fast processing.”

This work was supported in part by First Solar.
