The chrome material that has blocked the light on binary masks for a generation may finally have outlived its usefulness, according to Franklin Kalk, CTO of Toppan Photomasks, in an exclusive interview with SST.
One key to 32nm-generation photomask technology is opaque-molybdenum-over-glass (OMOG) material, according to Kalk. The new absorber is actually a purer, more strongly attenuating version of the MoSi material widely used for attenuated phase-shift masks; at 193nm it is essentially opaque. It is also amorphous (not polycrystalline), low-stress, flatter, and easier to dry etch than chrome—just the properties needed for advanced binary photomasks. In hyper-NA immersion lithography, binary masks have been outperforming attenuated PSMs in many applications. The OMOG mask substrates (co-developed with Shin-Etsu) do, however, come with a thin semi-transparent chrome overcoat that serves as a hard mask during dry etching before being removed. The opaque MoSi itself is conductive enough to prevent charge build-up during e-beam writing, Kalk reports.
MoSi had previously been considered as a chrome replacement, but the required dry-etch technology had not been sufficiently developed for maskmaking. Wet-etched chrome was still adequate then, and the potential of the new material did not seem to outweigh the costs. Today, though, the maskmaking industry has become familiar with MoSi as a semi-transparent doped material for attenuated-PSMs. Dry etching became ubiquitous for Att-PSM production and the deficiencies of chrome became more and more glaring. So now, the time may have come for OMOG.
The other key to 32nm maskmaking is pattern-dependent modeling of e-beam exposure, according to Kalk. Such modeling had long been used for laser mask writers, but adjusting the geometrical CD and dose of an e-beam writer—based on historical experience with similar patterns—is new. “The simplest way to do that is to make a test mask with the layout you like, measure it, and then write a new mask with the inferred corrections,” he told SST, “but that cuts throughput and yield by a factor of two. We have a better way.”
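The measure-and-rewrite loop Kalk describes can be illustrated with a toy model. Everything below is an illustrative sketch, not Toppan's actual correction flow: a linear dose-to-CD response is fitted from hypothetical test-mask measurements, then inverted to infer the corrected dose for the rewrite.

```python
# Toy sketch of the measure-and-rewrite correction loop described above.
# All values are illustrative assumptions, not Toppan/IBM process data.

def fit_dose_sensitivity(doses, measured_cds):
    """Least-squares slope of CD vs. dose (nm per dose unit)."""
    n = len(doses)
    mean_d = sum(doses) / n
    mean_cd = sum(measured_cds) / n
    num = sum((d - mean_d) * (cd - mean_cd) for d, cd in zip(doses, measured_cds))
    den = sum((d - mean_d) ** 2 for d in doses)
    return num / den

def corrected_dose(nominal_dose, measured_cd, target_cd, sensitivity):
    """Invert the linear response to infer the dose that should print the target CD."""
    return nominal_dose + (target_cd - measured_cd) / sensitivity

# Hypothetical dose-matrix measurements from a test mask:
doses = [18.0, 20.0, 22.0]      # e-beam dose, arbitrary units
cds   = [118.0, 122.0, 126.0]   # measured CD, nm

s = fit_dose_sensitivity(doses, cds)              # 2.0 nm per dose unit
new_dose = corrected_dose(20.0, 122.0, 120.0, s)  # 19.0
print(s, new_dose)
```

The "better way" Kalk alludes to replaces the physical test mask with a model calibrated on historical writing-tool data, so the correction above would come from prediction rather than a sacrificial write.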
Kalk has long advocated improved modeling to capture seemingly random patterning distortions that actually correlate with writing-tool conditions. This now seems to have been accomplished at the joint IBM and Toppan sites in Burlington, VT, and Osaka, Japan. The top-of-the-line 45nm tool set at Burlington was replicated in Osaka, where the process development for 32nm took place. Now the process is being integrated at IBM Burlington, where Toppan hopes it will form the basis for future collaborations. Already, the Toppan-IBM program is looking ahead to 22nm optical masks, which will need placement accuracy sufficient for double patterning. With the new material and modeling, the always optimistic Kalk anticipates success. —M.D.L.
First issue for 450mm: Making 22nm-quality wafers is no slam dunk
While equipment makers and chipmakers with lower-volume runs are fussing loudly about the overall economics of moving to 450mm wafers, those actually working on wafer development note that just producing substrates that big, while also meeting the extreme levels of purity and smoothness demanded by 22nm-generation devices and beyond, is going to be neither easy nor cheap. And now that solar makers are buying more silicon than chipmakers, the effort has to compete for silicon research dollars with the development of lower-cost solar-grade production technology.
SUMCO is now in the process of investigating whether it can actually produce 450mm wafers, executives in charge of the development reported to SST partner Nikkei Microdevices, but the company will also have to determine whether commercial production is viable (i.e., an acceptable combination of high quality and low cost).
“For 450mm wafers to be viable, wafer makers have to resolve a number of problems beyond just growing larger-diameter ingots,” noted Naoki Ikeda, Toshiyuki Fujiwara, and Kazushige Takaishi, in charge of the company’s evaluation, R&D, and wafer technology divisions, respectively.
So far, there seem to be serious cost and quality issues to resolve in making bigger wafers. Pulling the larger-diameter ingots from larger crucibles of melt will leave much costly unused polysilicon behind, unless the ingots are grown long enough to use up most of the melt. SUMCO calculates that the 450mm ingots will each need to weigh about one ton to keep the materials waste to about the same level as with 300mm wafers. That means huge crystal-pulling equipment, and a new support system will need to be devised to supplement the usual 3mm neck from which the entire ingot hangs as it is pulled up during growth of the pure single crystal. Perhaps even more concerning, heating up and cooling down this larger mass of material will take much longer, which means lower throughput—and a higher density of defects, concludes SUMCO.
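SUMCO's one-ton figure can be sanity-checked with back-of-the-envelope arithmetic, treating the ingot as a uniform cylinder and ignoring the tapered crown and tail of a real Czochralski crystal:

```python
import math

# Back-of-the-envelope check of the one-ton figure: how long must a
# 450mm-diameter cylinder of silicon be to weigh a metric ton?
# (A simplification: real ingots have tapered crown and tail sections.)
SI_DENSITY = 2330.0          # kg/m^3, solid silicon
mass = 1000.0                # kg (one metric ton)
diameter = 0.450             # m

volume = mass / SI_DENSITY                 # ~0.43 m^3
area = math.pi * (diameter / 2) ** 2       # cross-section, m^2
length = volume / area
print(f"~{length:.1f} m of cylindrical body")   # ~2.7 m
```

A body nearly three meters long, plus neck and tail, gives a sense of how much larger the pullers and their support hardware must become.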
These nearly half-meter-wide wafers are also going to have to be thicker to maintain the same stability against bowing and sagging as current 300mm wafers. SUMCO figures they’ll need to be about twice as thick, or about 1.8mm. That drastically cuts the output of wafers per ingot, and means more substantial handling systems will be needed. It also complicates wafer thinning and 3D structures, which already demand fewer defects not just on the wafer surface, but all the way through the wafer as well.
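The roughly-double figure is consistent with simple plate mechanics (a rough scaling argument, not SUMCO's actual analysis): self-weight sag of a thin circular plate scales as D^4/t^2, since the load grows with thickness t while bending stiffness grows as t^3, so holding sag constant requires thickness to grow as the square of the diameter.

```python
# Rough plate-mechanics check of the ~1.8mm figure. Self-weight sag of a
# thin circular plate scales as D^4 / t^2 (load ~ t, stiffness ~ t^3),
# so equal sag requires t ~ D^2. Not SUMCO's actual analysis.
t_300 = 0.775                    # mm, standard 300mm wafer thickness
scale = (450.0 / 300.0) ** 2     # 1.5^2 = 2.25
t_450 = t_300 * scale
print(f"{t_450:.2f} mm")         # ~1.74 mm, close to SUMCO's ~1.8mm
```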
Moreover, new polishing or cleaning processes will need to be developed to achieve the more demanding degree of consistent nanoscale smoothness across these larger-diameter wafers if they are to be used for manufacturing 22nm devices. Current 300mm wafers aren’t smooth enough for these next-generation features, and producing a surface with even less roughness across a 50% larger substrate, with more edge roll-off, will be markedly more difficult. Smaller slurry particles allow polishing to smoother surfaces, but the smaller particles tend to clump together, and the clumps tend to leave deeper scratches, leaving an uneven surface topography. A better way needs to be found to keep the tiny particles evenly dispersed.
Finally, both inspection and production processes will have to be significantly improved to reduce the cost of processing larger surface areas. —P.D.
Dow Corning compound eyes Intel’s multi-chip apps
This week, Dow Corning unveiled a thermally conductive compound, called “TC-5688,” at the Intel Developer Forum (8/19-8/21, San Francisco, CA), touting it for use with Intel’s newest mobile microprocessor, the Intel Core2 Extreme mobile processor QX9300.
The significance of the new non-curing thermal interface material (TIM) is its resistance to the “pump out” seen with past materials under power cycling; that is, its thermal resistance does not increase as the device is cycled on and off. This makes it suitable for multi-chip packaging applications (see figure). The company says the material exhibits “extremely low thermal resistance” at 0.05°C-cm2/W and high thermal conductivity at 5.67 W/mK.
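The two quoted numbers together imply an effective bond-line thickness: for a uniform layer, thermal resistance is thickness divided by conductivity. This is only a rough consistency check, since it ignores contact resistance at the two interfaces:

```python
# Consistency check on the quoted figures: for a uniform layer,
# thermal resistance R = t / k, so R and k together imply an effective
# bond-line thickness t. (Ignores interface contact resistance, so this
# is a rough estimate, not a datasheet value.)
R = 0.05 * 1e-4      # 0.05 degC-cm^2/W converted to K*m^2/W
k = 5.67             # W/(m*K)
t = R * k            # m
print(f"~{t * 1e6:.0f} um effective bond line")   # ~28 um
```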
Andrew Lovell, industry marketing specialist at Dow Corning, explained to SST that during power cycling, microprocessor die can flex due to coefficient of thermal expansion (CTE) mismatch, placing thermo-mechanical stress on a TIM. “Multi-chip packages may enhance these stresses due to potential die height offset and other factors,” he said. Dow Corning benchmarked its TC-5688 against two competing materials on a multi-chip tester that simulates a mobile processor. The power cycling consisted of the device being on for six minutes and then off for six minutes; the junction temperature reached ~85°C during the testing. The cycle was repeated ~2000 times.
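The quoted figures put the total length of that stress test at around 400 hours of continuous cycling:

```python
# Total duration of the power-cycling test described above:
# ~2000 cycles of 6 min on + 6 min off.
cycles = 2000
minutes_per_cycle = 6 + 6
total_hours = cycles * minutes_per_cycle / 60
print(f"{total_hours:.0f} hours (~{total_hours / 24:.1f} days)")  # 400 hours (~16.7 days)
```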
Power cycling data on a multi-chip tester. (Source: Dow Corning)
“While the thermal grease and phase change material exhibit rapid and significant degradation of their thermal properties, TC-5688 shows almost no sign of change in performance,” said Lovell. In particular, the phase-change material that was tested showed breakdown after ~500 power cycles. —D.V.
KLA-Tencor pitches double-patterning with Prolith 11
In an interview in Milpitas, CA after SEMICON West, Edward Charrier, VP/GM of KLA-Tencor’s process control information division, described the latest improvements in Prolith, the venerable litho simulation tool.
Prolith 11 supports the most likely double-patterning option for the 32nm node, “litho-etch-litho-etch” (LELE), according to Charrier. “Prior to Prolith 11, computational lithography studies assumed that the two exposure steps could be considered independently,” he reported, “but embedded topography from the first pass can disrupt the second exposure.”
Prolith 11 calculates the electric fields inside the resist/hardmask stack using its extensive catalog of material parameters and the topography supplied by the user, preferably from scatterometry or CD-SEM measurements of the results of the first step. The exposure and development of the second resist film are simulated including the patterned hardmask, substrate topography, and reflectivity. For accuracy at 32nm, electromagnetic field (EMF) effects at the wafer due to the nonuniform film stack have to be included (see figure). According to Charrier, the results differ substantially from those obtained assuming planarity, producing shifts in the printed patterns.
A comparison of the electric field in the resist film caused by a normally incident plane wave at the second-pass exposure clearly highlights the complexity of introducing topography. (Source: KLA-Tencor)
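A one-dimensional toy model hints at why the underlying stack matters so much (this is a generic standing-wave estimate with assumed refractive indices, not Prolith's rigorous EMF solver): the incident wave interferes with the wave reflected from the material below the resist, so regions sitting over different first-pass topography see different exposure intensity.

```python
import cmath, math

# Toy 1D standing-wave model (NOT Prolith's rigorous EMF solver): the
# incident plane wave interferes with the wave reflected at the
# resist/substrate interface, so the mean intensity seen by the resist
# depends on what lies underneath. Refractive indices are illustrative
# assumptions (real 193nm indices are complex-valued).
WAVELENGTH = 193e-9            # m, ArF exposure
n_resist = 1.70                # assumed resist index at 193nm

def avg_intensity(n_below, thickness=100e-9, samples=200):
    """Mean |E|^2 through the resist over a layer of index n_below."""
    r = (n_resist - n_below) / (n_resist + n_below)   # Fresnel, normal incidence
    k = 2 * math.pi * n_resist / WAVELENGTH
    total = 0.0
    for i in range(samples):
        z = thickness * i / samples                   # depth above the interface
        E = cmath.exp(-1j * k * z) + r * cmath.exp(1j * k * z)
        total += abs(E) ** 2
    return total / samples

# Resist over a strongly reflecting region vs. a nearly index-matched one:
i_reflective = avg_intensity(2.50)
i_matched = avg_intensity(1.75)
print(i_reflective, i_matched)
```

Even this crude model shows the delivered dose shifting with the underlying index; a rigorous simulator must additionally handle lateral topography, absorption, and polarization, which is the point of the EMF machinery in Prolith 11.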
Prolith has long striven to provide portable resist models that can be plugged into new situations and predict CDs and profiles accurately. Charrier reported major progress in the last few years in resist characterization and modeling, making 1nm accuracy possible through focus and dose. Of course, the physical models used by Prolith are slower than the heuristics employed by OPC engines, but he insists they are now fast enough to use on the small clips of circuit patterns of interest to R&D and process engineers. LithoWare, the Linux version of Prolith intended for layout and OPC designers, runs the same models more quickly on clusters of up to 120 processors.
Double patterning undoubtedly leads to increases in complexity and cost, but now computational tools are emerging to help with the decision-making. One can only hope that they achieve the same “predictive accuracy” and ease of use that characterized Prolith in previous eras. —M.D.L.