Synchrotron light sources are powerful facilities that produce light in a variety of “colours,” or wavelengths – from the infrared to X-rays – by accelerating electrons to emit light in controlled beams.

Synchrotrons like the Advanced Light Source at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) allow scientists to explore samples in a variety of ways using this light, in fields ranging from materials science, biology and chemistry to physics and environmental science.
Researchers have found ways to upgrade these machines to produce more intense, focused and consistent light beams that enable new, more complex and more detailed studies across a broad range of sample types.
But some light-beam properties still exhibit fluctuations in performance that present challenges for certain experiments.
Little tweaks to enhance light-beam properties can feed back into the overall light-beam performance across the entire facility. Synchrotron designers and operators have wrestled for decades with a variety of approaches to compensate for the most stubborn of these fluctuations.
Now, a large team of researchers at Berkeley Lab and UC Berkeley has successfully demonstrated how machine-learning tools can improve the stability of the light beams’ size for experiments via adjustments that largely cancel out these fluctuations – reducing them from a level of a few percent down to 0.4 percent, with submicron (below 1 millionth of a meter) precision. The machine-learning algorithms used at the ALS are referred to as a form of “neural network” because they are designed to recognise patterns in the data in a way that loosely resembles human brain functions.
In this study, researchers fed electron-beam data from the ALS, which included the positions of the magnetic devices used to produce light from the electron beam, into the neural network. The neural network recognised patterns in this data and identified how different device parameters affected the width of the electron beam. The machine-learning algorithm also recommended adjustments to the magnets to optimise the electron beam.
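The approach described above can be sketched as a simple feed-forward neural-network regression: given the settings of the magnetic devices, predict the resulting beam-size deviation so a compensating correction can be applied in advance. The sketch below is purely illustrative – the data is synthetic, and the network size, training procedure, and variable names are invented assumptions, not the actual ALS model.

```python
import numpy as np

# Illustrative sketch only: learn how hypothetical insertion-device (magnet)
# settings affect electron-beam size, then predict deviations feed-forward.
rng = np.random.default_rng(0)

# Synthetic training data: each row holds device gap positions (inputs);
# the target is the resulting beam-size deviation (output).
n_samples, n_devices = 500, 4
X = rng.uniform(-1.0, 1.0, size=(n_samples, n_devices))
true_w = np.array([0.5, -0.3, 0.8, 0.2])       # invented "true" response
y = np.tanh(X @ true_w) + 0.01 * rng.normal(size=n_samples)

# One hidden layer, trained by plain gradient descent on mean-squared error.
W1 = rng.normal(scale=0.5, size=(n_devices, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=16)
b2 = 0.0
lr = 0.1

for epoch in range(3000):
    h = np.tanh(X @ W1 + b1)        # hidden-layer activations
    pred = h @ W2 + b2              # predicted beam-size deviation
    err = pred - y
    # Backpropagate the mean-squared-error gradient.
    gW2 = h.T @ err / n_samples
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h**2)
    gW1 = X.T @ dh / n_samples
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def predict(x):
    """Predicted beam-size deviation for a given set of device settings."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Feed-forward use: for new device settings, predict the beam-size change
# so a compensating magnet adjustment can be made before it occurs.
x_new = np.array([0.3, -0.2, 0.5, 0.1])
print(f"predicted deviation: {predict(x_new):+.3f}")
```

In the feed-forward scheme the article describes, a prediction like this would drive compensating magnet adjustments proactively, rather than reacting to beam-size drift after it is measured.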
The successful demonstration at the ALS shows how the technique could also generally be applied to other light sources, and will be especially beneficial for specialised studies enabled by an upgrade of the ALS known as the ALS-U project. The machine-learning technique builds upon conventional solutions that have been improved over the decades since the ALS started up in 1993, and which rely on constant adjustments to magnets along the ALS ring that compensate in real time for adjustments at individual beamlines.
“That’s the beauty of this,” said Hiroshi Nishimura, a Berkeley Lab affiliate: “Whatever the accelerator is, and whatever the conventional solution is, this solution can be on top of that.”
Steve Kevan, ALS director, said, “This is a very important advance for the ALS and ALS-U. For several years we’ve had trouble with artifacts in the images from our X-ray microscopes. This study presents a new feed-forward approach based on machine learning, and it has largely solved the problem.”