Photolithography involves manipulating light to etch precise features onto a surface. It is commonly used to make computer chips and optical devices such as lenses. But small deviations during the manufacturing process often result in these devices falling short of their designers’ intentions.

To bridge this gap between design and manufacturing, researchers at MIT and the Chinese University of Hong Kong used machine learning to develop a digital simulator that mimics a specific photolithographic manufacturing process. Their technique uses real data captured by the photolithography system, allowing it to more accurately model how the system would fabricate a design.

The researchers integrate this simulator, together with another digital simulator, into a design framework that emulates the performance of the manufactured device on downstream tasks, such as producing images with computational cameras. These connected simulators allow a user to make an optical device that better matches its design and achieves the best task performance.

This technique could help scientists and engineers develop more accurate and efficient optical devices for applications such as mobile cameras, augmented reality, medical imaging, entertainment, and telecommunications. And because the digital simulator’s learning pipeline uses real data, it can be applied to a wide range of photolithography systems.

“This idea sounds simple, but people have never tried it because real-world data can be expensive and there are no precedents for how to effectively coordinate software and hardware to build a high-fidelity data set,” says Cheng Zheng, a graduate student in mechanical engineering who is co-lead author of an open-access paper describing the work. “We took risks and conducted extensive research, such as developing and testing characterization tools and data exploration strategies, to establish a working scheme. The result is surprisingly good, showing that real data works much more efficiently and precisely than data generated by simulators composed of analytical equations. Although it can be expensive and you may feel clueless at first, it’s worth it.”

Zheng wrote the paper with co-lead author Guangyuan Zhao, a graduate student at the Chinese University of Hong Kong; and her advisor, Peter T. So, professor of mechanical and biological engineering at MIT. The research will be presented at the SIGGRAPH Asia Conference.

Printing with light

Photolithography involves projecting a pattern of light onto a surface, triggering a chemical reaction that etches features into the substrate. However, due to slight variations in light diffraction and in the chemical reaction, the finished device ends up with a slightly different pattern than intended.

Because photolithography is complex and difficult to model, many existing design approaches rely on equations derived from physics. These general equations capture the broad outlines of the manufacturing process, but cannot account for the variations specific to a particular photolithography system. This can cause devices to underperform in the real world.

For their technique, which they call neural lithography, the researchers build their photolithography simulator from physical equations and then integrate a neural network trained on real, experimental data from a user’s photolithography system. This neural network, a type of machine-learning model loosely based on the human brain, learns to compensate for many of the system’s specific deviations.

The researchers collect data for their method by creating many designs that cover a wide range of feature shapes and sizes, which they fabricate using the photolithography system. They measure the resulting structures, compare them with the design specifications, pair the two, and use this data to train a neural network for their digital simulator.
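The idea of pairing measurements with designs to learn the gap between an analytical model and the real tool can be sketched in a few lines. The snippet below is a hypothetical one-dimensional stand-in (feature widths in, measured widths out); the actual work images 2-D patterns and trains a neural network, for which a polynomial least-squares fit substitutes here. All functions and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def physics_model(design_width):
    # Idealized analytical prediction, e.g. a fixed diffraction broadening.
    return design_width + 0.05

def fabricate(design_width):
    # Stand-in for the real tool: a systematic, size-dependent bias that the
    # analytical model misses, plus measurement noise.
    return design_width + 0.05 + 0.1 * np.sin(3.0 * design_width) \
           + rng.normal(0, 0.005, size=np.shape(design_width))

# 1) Fabricate and measure many designs spanning a range of feature sizes.
designs = np.linspace(0.5, 2.0, 200)
measured = fabricate(designs)

# 2) Fit a correction model to the residual the physics model misses.
residual = measured - physics_model(designs)
coeffs = np.polyfit(designs, residual, deg=7)

def learned_simulator(design_width):
    # Physics prior + data-driven correction = learned lithography simulator.
    return physics_model(design_width) + np.polyval(coeffs, design_width)

# The learned simulator tracks the tool far better than the physics model alone.
test = np.linspace(0.6, 1.9, 50)
err_physics = np.mean(np.abs(fabricate(test) - physics_model(test)))
err_learned = np.mean(np.abs(fabricate(test) - learned_simulator(test)))
```

The key design choice mirrors the article: the data-driven part only has to learn the deviation from the physics prior, not the whole process.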

“The performance of learned simulators depends on the input data, and data artificially generated from equations cannot cover real-world deviations, which is why it is important to have real-world data,” says Zheng.

Dual simulators

The digital lithography simulator consists of two separate components: an optics model that captures how light is projected onto the surface of the device, and a resist model that shows how the photochemical reaction creates structures on the surface.
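A minimal version of this two-part forward model can be written down directly, assuming a scalar-intensity optics model (Gaussian blur of the mask) and a smooth-threshold resist model. The paper’s actual components are learned from data; every shape and constant below is illustrative.

```python
import numpy as np

def optics_model(mask, sigma=1.5):
    # Aerial image: the binary mask pattern blurred by the projection optics,
    # approximated here as convolution with a Gaussian point-spread function.
    size = int(6 * sigma) | 1                      # odd kernel width
    ax = np.arange(size) - size // 2
    psf = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    psf /= psf.sum()
    pad = size // 2
    padded = np.pad(mask, pad, mode="edge")
    out = np.zeros_like(mask, dtype=float)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * psf)
    return out

def resist_model(aerial, threshold=0.5, steepness=20.0):
    # Photoresist response: a smooth threshold on exposure dose, yielding the
    # printed structure (0 = unexposed, 1 = fully developed).
    return 1.0 / (1.0 + np.exp(-steepness * (aerial - threshold)))

mask = np.zeros((32, 32))
mask[10:22, 10:22] = 1.0                  # a square feature on the mask
printed = resist_model(optics_model(mask))
# The printed square comes out with rounded corners: blur lowers the dose
# at the corners below the resist threshold.
```

Chaining the two models like this is also what makes the simulator differentiable end to end, which matters for the design loop described next in the article.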

For a downstream task, they connect this learned photolithography simulator with a physics-based simulator that predicts how the manufactured device will perform on that task, such as how a diffractive lens will diffract incident light.

The user specifies the outcomes they want a device to achieve. Then the two simulators work together within a larger framework that shows the user how to create a design that meets these performance goals.
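Optimizing a design through both simulators at once can be sketched with a toy end-to-end loop: the litho simulator predicts what will actually be fabricated, the task simulator scores that fabricated device, and gradient descent adjusts the design so the *fabricated* device (not the nominal design) hits the target. The functions, constants, and the finite-difference gradient below are all illustrative stand-ins for the paper’s learned, differentiable models.

```python
import numpy as np

def litho_simulator(design):
    # Learned lithography simulator: predicts the fabricated feature size,
    # including a systematic bias the fab process introduces (toy model).
    return design + 0.08 * np.sin(3.0 * design)

def task_simulator(fabricated):
    # Physics-based downstream model, e.g. how far a lens feature of this
    # size lands from the size that best focuses light (toy quadratic loss).
    return (fabricated - 1.2) ** 2

def loss(design):
    # Task performance of the device as it would actually be manufactured.
    return task_simulator(litho_simulator(design))

# Gradient descent through the composed simulators (finite differences stand
# in for backpropagation through a learned network).
design, lr, eps = 0.8, 0.5, 1e-5
for _ in range(200):
    grad = (loss(design + eps) - loss(design - eps)) / (2 * eps)
    design -= lr * grad

fabricated = litho_simulator(design)
```

The optimized design ends up "pre-distorted": it deliberately differs from the nominal target so that, after the fabrication bias, the manufactured device meets the performance goal — the advantage over post-hoc calibration that Zhao describes below.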

“Our simulator allows the manufactured object to achieve the best possible performance on a downstream task, such as computational cameras, a promising technology for making future cameras miniaturized and more powerful. We show that even if you try to get a better result with post-calibration, it is still not as good as having our photolithography model in the loop,” adds Zhao.

They tested the technique by fabricating a holographic element that produces an image of a butterfly when illuminated. Compared with devices designed using other techniques, their holographic element produced a near-perfect butterfly that matched the design more closely. They also made a multilevel diffractive lens that had better image quality than the other devices.

In the future, the researchers want to further develop their algorithms to model more complex devices, and to test the system with consumer cameras. In addition, they want to expand their approach so that it can be used with different types of photolithography systems, such as those that use deep or extreme ultraviolet light.

This research is supported, in part, by the U.S. National Institutes of Health, Fujikura Limited, and the Hong Kong Innovation and Technology Fund.

The work was partially carried out using MIT.nano’s facilities.

This article was originally published at news.mit.edu