Published: 2001
In future high-power-density fusion devices, preventing excessive local deposition of the plasma energy efflux on the first-wall surfaces is a critical design consideration for maintaining the integrity of those surfaces. This requirement must be met without significant impact on plasma purity or overall plasma confinement. For the International Thermonuclear Experimental Reactor (ITER), these constraints have led to the following design criteria [1]: P_rad/(P_input + P_α) = 83%, P_rad,core/(P_input + P_α) = 33%, P_target/P_loss = 17%, Z_eff ≤ 1.8, and τ_E/τ_E,ITER93H ≥ 0.85. Here, P_loss is the power flowing out of the core (i.e., P_input + P_α − P_rad,core) and P_target is the power conducted to the target plate. These criteria represent a compromise between obtaining sufficient radiation to reduce the target heat load to a tolerable level, minimizing core fuel dilution, and maintaining sufficient power flow through the edge plasma to sustain H-mode confinement. Past experiments have had difficulty achieving these conditions simultaneously with seeded impurities, raising concern about the viability of the ITER design. However, recent experiments in DIII-D using the puff and pump technique with argon as the seeded impurity have demonstrated the compatibility of these design constraints. In particular, steady-state plasma conditions have been achieved with P_rad/P_input = 72%, P_rad,core/P_input = 16%, P_target/P_loss = 17%, Z_eff = 1.85, and τ_E/τ_E,ITER93H = 1.05.
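The power-balance bookkeeping above can be sketched numerically. The snippet below is a minimal illustration, not part of the paper: it applies the abstract's definition P_loss = P_input + P_α − P_rad,core to the quoted DIII-D fractions, under the assumption (labeled in the code) that alpha heating is negligible in DIII-D, so P_α = 0 and the fractions quoted against P_input coincide with those against P_input + P_α.

```python
def power_balance(p_input, f_rad_core, f_target_of_loss, p_alpha=0.0):
    """Return (P_loss, P_target) from the definitions in the text.

    P_loss   = P_input + P_alpha - P_rad,core  (power flowing out of the core)
    P_target = (P_target/P_loss) * P_loss      (power conducted to the target plate)
    """
    p_total = p_input + p_alpha
    p_loss = p_total - f_rad_core * p_total
    p_target = f_target_of_loss * p_loss
    return p_loss, p_target


# DIII-D values quoted in the abstract: P_rad,core/P_input = 16%,
# P_target/P_loss = 17%. Alpha heating taken as zero (assumption for
# illustration; there is no significant alpha power in DIII-D).
p_loss, p_target = power_balance(p_input=1.0, f_rad_core=0.16,
                                 f_target_of_loss=0.17)
print(f"P_loss   = {p_loss:.3f} of input power")    # 0.840
print(f"P_target = {p_target:.3f} of input power")  # 0.143
```

Normalizing to P_input = 1 shows that only ~14% of the injected power reaches the target plate in the quoted discharge, the intended effect of the radiative (puff and pump) scenario.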