Positron Emission Tomography (PET) is a medical imaging technique commonly used to map out metabolic activity in the body. Typically, a $\beta^+$-emitting (positron-emitting) radionuclide (such as ${}^{18}\textrm{F}$) attached to a glucose molecule is injected into the body, where it is taken up by tissues in proportion to their metabolic activity. Positrons produced by the decay of the radionuclide usually travel less than 1 mm in human tissue before they bind with an electron and quickly annihilate. Most positron-electron annihilations result in the emission of a back-to-back pair of 511 keV photons. These photon pairs leave the body and can be detected. Areas of the body with high metabolic activity, such as cancer cells or active regions in the brain, will be more intense emitters of these 511 keV photon pairs. In a PET scanner, arrays of scintillator+PMT detectors are used to measure the intensity of this radiation along well-defined planes (referred to as slices) passing through the patient's body. Tomography is the process of reconstructing a three-dimensional image of positron emission intensity from these slices.
In this experiment, you will use PET to create a two-dimensional image of a sample box containing several positron sources of unknown strength at unknown locations.
![]() | ![]() |
A 2D PET scan of a human brain, indicating blood flow via ${}^{15}$O concentration. (source: Wikipedia) | A reconstruction of a smiley face made with discrete ${}^{22}$Na sources in our lab. |
In this experiment you will use positron emission tomography to identify the positions and relative intensities of two unknown positron emitters inside a sealed box. Specifically, your goals for this experiment include the following:
The last goal here represents a typical use case for PET scanning: a chemical called Fluorodeoxyglucose that contains Fluorine-18 is administered to a patient, which the body treats as regular glucose. Cells with rapid metabolisms (e.g. cancer) take in glucose much more rapidly than other tissues, and thus will end up with higher concentrations of beta-emitting ${}^{18}$F.
PET scans and other radioimaging techniques are often performed or overseen by medical physicists; for information on this career path, look into the American Association of Physicists in Medicine or the University of Chicago Medical Physics Ph.D. Program. On another front, some relatively recent research has begun to look into PET imaging using ${}^{22}$Na-laden nanoparticles. These have the upside of combining the (relatively) long 2.6 yr half-life of sodium-22 with the low bio-availability of nanoparticles. This both ensures that the radioactive material doesn't accumulate in the body and eliminates the need for on-site generation of radioisotopes.
We're going to start out by thinking about a somewhat analogous situation in order to think about the basics of tomography.
Imagine you have a setup with a laser (the blue rectangle), a detector (the green rectangle), and a box that is covered from above.
If we turn on the laser, we measure some initial high intensity of light. As we sweep our laser and detector along, we find that the detector is blocked for some distance, and is then unblocked (see below).
We know that there's something in the shaded area that's blocking the laser, but we don't know anything about where it is on the $y$ axis. To fix that, we'll rotate our detector $90^\circ$ and measure the laser intensity again, shown below.
At this point, we can say that there's something in the center of our box. We can't quite tell if it's a square, a circle, or something else. We just know that it is bounded by a square outline so far.
One major difference between our setup and the example above is that we're not relying on the detection or non-detection of single photons. We will be taking advantage of the fact that the most common decay path for ${}^{22}$Na involves emitting a positron. The positron will move a negligible distance before it annihilates with an electron. When this happens, their rest mass energy is converted into a pair of 511 keV photons. To conserve momentum, these photons are emitted anti-parallel to one another.
Let's consider a few cases. We'll place a pair of detectors on opposite sides of our sample. The common line they lie on is called the Line of Response (LoR).
Let's start with some events that won't be picked up as coincidences.
And now, some situations where coincidences can be detected.
The first instance is exactly when we'd expect coincidence detections to happen.
As for the second situation, we'll probably still call it a coincidence. There are detectors that can capture Time of Flight (ToF) data; see figure 1 of this paper. However, you need both specialized scintillation crystals (to produce signals short enough to differentiate) and very fast electronics (to process the data quickly), which are quite expensive.
The third situation, a case of spurious coincidences, is an undesired one. That being said, we can mitigate the problem in a few ways. While using a weaker source would work, it would also make everything else take that much longer, so it isn't a great idea. We could move our detectors further from the sample to make them less likely to pick up stray events, but this also reduces the rate of data collection. Finally, we can tune the timing window for what counts as 'coincident'. The absolute minimum window we can use${}^{\dagger}$ is one clock cycle of our detection device, which in this case is 8 ns for a 125 MHz clock. In practice we'll want a more permissive window to account for delays along signal lines, slower response from the scintillators, and so on. This is not a parameter that we'll be adjusting in this experiment.
${}^\dagger$There are some ways to eke out better performance; see this paper for an example. To do this you take advantage of the physical propagation of electrical signals in the FPGA, which is not a trivial task. But you do get resolution in the 100s of ps!
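To make the timing-window idea concrete, here is a minimal sketch (not our FPGA's actual firmware; the timestamps below are made up) of how coincidences could be counted in software, given sorted lists of event times from each detector:

```python
import numpy as np

def count_coincidences(t_a, t_b, window_ns=50.0):
    """Count events in detector A that have a partner in detector B
    within `window_ns` nanoseconds. t_a and t_b are sorted arrays of
    event times in ns. A rough sketch, not production coincidence logic.
    """
    count = 0
    j = 0
    for t in t_a:
        # Skip detector-B events that are too early to ever match.
        while j < len(t_b) and t_b[j] < t - window_ns:
            j += 1
        # The next B event either falls in the window or is too late.
        if j < len(t_b) and abs(t_b[j] - t) <= window_ns:
            count += 1
    return count

# Example: three true pairs plus one stray event in detector B.
det_a = np.array([100.0, 5000.0, 9000.0])
det_b = np.array([103.0, 2500.0, 5004.0, 8996.0])
print(count_coincidences(det_a, det_b))  # 3 with a 50 ns window
```

Note how shrinking `window_ns` below the detector time spread would start discarding true pairs, while widening it admits more strays, which is exactly the trade-off described above.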
Depicted below is a pair of detectors, as well as five possible locations for positron emitters. Indicate which (if any) of the sources could cause coincident detection events in the detectors.
Here, we have one positron emitter depicted (the white circle in the center) as well as five sensor pairs. Indicate which (if any) of the detector pairs could detect coincident gamma emissions.
Medical imaging PET scanners utilize a large number of gamma-ray detector pairs (thousands), typically arrayed in an annulus, to produce high-resolution scans. (See Fig. 3.) The reconstruction of an image from these scanners can be somewhat computationally intensive, requiring the use of [sophisticated reconstruction algorithms](https://en.wikipedia.org/wiki/Tomographic_reconstruction#Reconstruction_algorithms). Furthermore, processing all of the signals from such a ring of detectors would be daunting to someone just learning how this technique works.
![]() |
Figure 3: PET scanner detector configuration. (Source: Wikipedia.) |
For the purposes of our experiment, we will use a single detector pair to create a two-dimensional image of a sample containing two sources. This simplified technique allows us to clearly illustrate the basic principles of PET while making use of a very simple tomographic reconstruction algorithm. The downside is that we are effectively collecting our individual scans in series (having to do one angle at a time) rather than parallel (using dozens of detectors). Nevertheless, a medical PET scanner operates on the same set of principles which apply to our scaled-down setup. A schematic of our PET scanner with the single pair of NaI+PMT detectors, a sample container and a sample guide is shown in Fig. 4.
A schematic of the apparatus is shown in Fig. 9. Individual components are described below.
The NaI(Tl)+PMT detectors are mounted in carriages. One detector is fixed in position (defined as $\theta = 0^\circ$) and the other is free to move around a circular track, but should be set to $180^\circ$, directly opposite the fixed PMT. Anode signals from the PMTs are connected to the Red Pitaya for signal processing.
The Red Pitaya is a Field Programmable Gate Array (FPGA) based instrument that serves to process signals from the PMTs [A], send signals to control the motors via the Logic Level Converter [C], and stop the motor at fixed positions determined by the Feedback Sensors [G]. A Python program lets us interface with the Red Pitaya without needing sophisticated knowledge of its inner workings for now.
The motor driver acts both as a power supply for the motors as well as an interpreter for the incoming control signals from the Red Pitaya [B]. There are inputs that control the motor direction as well as one that will rotate a motor by a small angle for every individual pulse received from the Red Pitaya.
The clear acrylic disk will hold the sample that you are scanning for the various parts of this experiment. There is a motor beneath it that can rotate the sample, as well as a small brass flag that is used to ensure that the Feedback Sensors [G] are able to reset the platform to a consistent position.
This stage consists of another motor connected to a long worm gear, which holds the entire Sample Platform [E]. It translates the sample between the two detectors with better than mm precision thanks to the fine control from the Motor Driver [D].
There are three separate feedback sensors here: one photodetector that senses when the brass flag attached to the Sample Platform [E] is aligned, and two switches that keep the linear stage from traveling too far in either direction. While the setup could operate without them, it would make reliably resetting the position much more difficult.
The positron emitter that we will use in this experiment is sodium-22, which decays to an excited state of neon-22 by either electron capture or by positron emission. The neon later decays to its ground state by the emission of a 1.27 MeV gamma. (See Fig. 1.)
![]() |
Figure 1: Nuclear decay scheme for sodium-22. (Source: C. Michael Lederer, Jack M. Hollander, and Isadore Perlman, Table of Isotopes, 6th Edition, John Wiley & Sons, 1967.) |
The emitted positrons are slowed down and are captured by electrons in the source to form an electron-positron bound state called [positronium](https://en.wikipedia.org/wiki/Positronium), a hydrogen-like “atom.” The ground state of positronium, which has a binding energy of 6.8 eV, has two possible configurations depending on the relative orientation of the electron and positron spins. The state with anti-parallel spins has net spin equal to 0, and is variously referred to as the singlet state, para-positronium, or, in spectroscopic notation, the state ${}^1S_0$. This state decays into an even number of photons, with the most likely result being two back-to-back photons with equal energy and oppositely directed momentum. The state with parallel spins has net spin equal to 1, and is referred to as the triplet state, ortho-positronium, or the state ${}^3S_1$. This state decays into odd numbers of photons, most commonly three.
For reasons having to do with the lifetimes of the two states and with the likelihood of triplet states flipping into singlet states, the two-photon decay is much more likely. Since the rest masses $m_0$ of the electron and positron are converted to energy in the annihilation process, each of the resulting two photons has energy $E = m_0c^2 = 511 \,\textrm{keV}$, and the two are created simultaneously.
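You can verify the 511 keV figure directly from the electron rest mass and the speed of light; here is a quick sanity check using CODATA constant values:

```python
# Sanity check: photon energy from e+/e- annihilation, E = m_0 c^2.
m_e = 9.1093837e-31   # electron rest mass, kg
c   = 2.99792458e8    # speed of light, m/s
e   = 1.60217663e-19  # J per eV

E_keV = m_e * c**2 / e / 1e3
print(round(E_keV, 1))  # 511.0
```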
The general process of doing a PET scan is the following:
A single scan (moving the sample between the detectors) gives you essentially a 1D plot of coincidence rate versus position. By rotating the sample between scans, you gather information about the distribution of sources that could be obscured at some angles (e.g. if some of the sources are collinear with the detectors, they will show up as a single stronger source). By combining data from multiple scans using an algorithm, we can reconstruct the spatial positions of the positron emitters without having to disturb the sample or have physical access to the interior. When the sample is, say, a person, they tend to appreciate not having their interior disturbed.
Before we even get to scanning, let's talk about how we're going to make this thing move. We have set up a motorized control stage and digital coincidence counting for this experiment using an instrument called a Red Pitaya. The Red Pitaya combines a Linux computer with a Field Programmable Gate Array (FPGA) and high-speed oscilloscope inputs in one package. One is currently monitoring the air quality on the International Space Station.
We'll start with a Python notebook that will let you send commands to the motors to move the platform around. The motors we're using are stepper motors, which require you to sequentially energize coils of wire in the motor to electromagnetically make it turn. Instead of worrying about the specific way to implement that, we've gone ahead and bought some driver modules that do most of the dirty work. We give it a signal to indicate the direction we want the motor to turn, and then a number of pulses that will turn the motor some fraction of a full rotation.
When moving your sample, you'll want to know how to convert between units of pulses and lateral displacement. Arbitrary units are fine for many things, but surgery is not one of them.
Open up the `PET Basic` notebook on your computer and run all the cells. You should have an interface that looks like the following:
You should tinker with the various commands some to get an idea of how the system behaves before you move on.
Next, let's take a look at how the detection portion of things works. The outputs from the PMTs connect to a circuit board on top of the Red Pitaya, and are processed through an amplifier circuit. These amplified signals are then passed to the Red Pitaya's Analog to Digital Converter (ADC). The digitized signals are then processed by the FPGA to determine pulse height and count coincidences.
The same notebook you just opened also monitors the number of coincidences that are detected each second between the two detectors.
Currently, running the motors can cause interference with the coincidence detection circuitry. While annoying, the setup is designed such that you'll never be counting coincidences while it is in motion.
Using the Python interface, your apparatus, and a ruler (or other measuring tool), determine the conversion factor between lateral motor pulse inputs and distance.
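The calibration itself amounts to a straight-line fit. Below is a minimal sketch; the pulse counts and ruler readings are made-up placeholders, so substitute your own measurements:

```python
import numpy as np

# Hypothetical calibration data: commanded pulse counts vs. ruler-measured
# platform displacement in mm (replace with your own measurements).
pulses = np.array([0, 500, 1000, 1500, 2000])
dist_mm = np.array([0.0, 4.9, 10.1, 15.0, 20.1])

# Least-squares line through the data; the slope is mm per pulse.
slope, intercept = np.polyfit(pulses, dist_mm, 1)
print(f"{slope:.4f} mm/pulse")

def pulses_for(distance_mm):
    """Number of pulses needed to move the platform a given distance."""
    return int(round(distance_mm / slope))
```

Taking several points across the full travel range, rather than one long move, lets you check for linearity as well as get the conversion factor.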
Next, use the coincidence counting portion of the interface to determine the following:
How many coincidences per second are typical for
This information should help you make sense of your results later. It should be noted, though, that while our sources have the same nominal starting activity, the actual activity can be up to 20% higher or lower. Thus, sources obtained at the same time might not register as identical.
To facilitate automation, we've developed a way for the Red Pitaya to take in a list of commands and run them in order (unless you'd really like to send commands one at a time; you do you). The commands are:
- `File`
    - `BaseFileName` defines the save file prefix; a time/date will be appended to the end of the name.
- `ResetLateral`
- `ResetRotation`
- `Move`
    - `Dir` is `0` for left and `1` (or anything else) for right
    - `Dist` is the # of motor steps
- `Rotate`
    - `Dir` is `0` for clockwise and `1` for counter-clockwise
    - `Dist` is the # of motor steps
- `Scan`
    - `Time` is the time in seconds
- `LOOP`
    - `Times` is the number of times to repeat
    - `List` is a list of commands to loop through

Don't worry, we don't expect you to be writing command lists from scratch. We've provided several to perform basic tasks, but it will be up to you to tweak the settings to do what you want.
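To illustrate how these commands might fit together, here is a sketch that builds and saves a small command list. The exact field names and JSON structure here are an assumption based on the command descriptions above; treat the provided example files (e.g. `linearscan.json`) as the authoritative format.

```python
import json

# Hypothetical command list: reset, then repeatedly scan and step right.
# Field names are guesses -- compare against the provided example files.
commands = [
    {"File": {"BaseFileName": "my_scan"}},
    {"ResetLateral": {}},
    {"ResetRotation": {}},
    {"LOOP": {
        "Times": 10,
        "List": [
            {"Scan": {"Time": 30}},          # count coincidences for 30 s
            {"Move": {"Dir": 1, "Dist": 200}},  # then step 200 pulses right
        ],
    }},
]

with open("my_scan.json", "w") as f:
    json.dump(commands, f, indent=2)
```

Building the list in Python and dumping it to JSON makes it easy to generate long, repetitive scans programmatically instead of hand-editing the file.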
To use these commands, you'll open up the `FPGA PET Command Interpreter` notebook bookmarked in Chrome.

Don't just run this notebook all at once; it needs you to input the file you'll use partway through. There's a link in the Jupyter notebook that skips to that cell; if you click the link, go to the `run` menu, and select the option `run all above selected cell`, you'll get set up properly.

After this, there should be an upload button that will let you choose your script. After that, you can continue running cells.
To get acquainted with the scripting language, you are tasked with doing the following:
The `Linear Scan` example is a good starting point to experiment with here.
In deciding how far apart to space one's measurements, several factors need to be taken into consideration. Spreading out the scans too much will result in poor resolution of the location of the sources. More closely spaced scans will improve resolution, but at the cost of requiring more time to acquire the data as well as increased computing requirements. However, an upper limit on how closely to space the scans can be set by measuring the spatial resolution of the detector system perpendicular to the LoR.
Your goal for this section is to use a single Na-22 button source translated orthogonally across the LoR as shown in Fig. 12, to determine the spatial resolution of the system.
Use the `linearscan.json` file as a template for this part of the lab.
We'll briefly go through the behavior of the command interpreter notebook here:
- `Counts` shows the number of coincidences that have been recorded in the active scan.
- `Scan Time` shows a progress bar for the current scan (not the entire run).

Your first task is to investigate the behavior of a single source passing through the detector.
From the plot, estimate the width of the distribution and what position corresponds to the center of the platform being directly between the detectors.
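If you'd like to go beyond eyeballing the plot, fitting a Gaussian gives both numbers at once. Below is a hedged sketch: the positions and counts are made-up stand-ins for your scan data, and the true answer is baked in, so the fit trivially recovers it; with real data you would also get uncertainty estimates from the covariance matrix.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, sigma, background):
    """Gaussian peak on a flat background."""
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2) + background

# Placeholder data: coincidence counts vs. lateral position (mm).
# Replace with your own scan output.
x = np.linspace(-30, 30, 31)
counts = gaussian(x, 120, 2.0, 6.0, 3.0)

popt, pcov = curve_fit(gaussian, x, counts, p0=[100, 0, 5, 0])
amp, center, sigma, bg = popt
fwhm = 2.355 * abs(sigma)  # FWHM = 2*sqrt(2 ln 2) * sigma
print(f"center = {center:.1f} mm, FWHM = {fwhm:.1f} mm")
```

The fitted `center` tells you which platform position puts the source directly between the detectors, and the FWHM is a reasonable single number for "width of the distribution."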
Reflect on how your prediction and results compare.
After you've established how wide the signal distribution is for a single source, you should also empirically determine how close a pair of identical ${}^{22}$Na sources can be before their signal becomes indistinguishable from that of a single source.
Again, the `linearscan.json` file will serve as a basis for this part of the lab.
Specifically, we want you to determine how close together two sources of similar strength can be before they can't be easily distinguished from a single stronger source.
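One way to think about this quantitatively before taking data: model the two sources as two Gaussian profiles of the width you measured above and ask at what separation a dip appears between the peaks. The sketch below assumes a 6 mm width purely for illustration; use your measured value.

```python
import numpy as np

def two_source_profile(x, separation, sigma=6.0):
    """Idealized coincidence profile from two equal sources
    `separation` apart, each producing a Gaussian of width sigma."""
    g = lambda c: np.exp(-0.5 * ((x - c) / sigma) ** 2)
    return g(-separation / 2) + g(separation / 2)

def has_dip(profile):
    """True if the midpoint sits noticeably below the profile maximum,
    i.e. two peaks are visible rather than one merged bump."""
    mid = profile[len(profile) // 2]
    return mid < profile.max() * 0.99

x = np.linspace(-40, 40, 801)
for sep in (5, 10, 12, 15, 20):
    # For equal Gaussians, a dip only appears once sep exceeds ~2*sigma.
    print(sep, has_dip(two_source_profile(x, sep)))
```

This suggests the resolvable separation scales with your single-source width; with real (noisy) data the practical limit will be somewhat larger than the ideal 2-sigma figure.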
Now that you've got a good idea of the limitations of the system, it's time to scan a known sample from multiple angles and use the data to try out our reconstruction techniques. Set up a configuration like the one shown below:
The specifics aren't critical, but you should measure out the positions of the sources beforehand so that you can calibrate your data reconstruction. The resulting plot of intensity versus position and angle is known as a sinogram because point sources tend to form sine curves in this format.
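To see why point sources trace sine curves, note that under idealized geometry a source at $(x_0, y_0)$ projects onto the lateral scan axis at $s(\theta) = x_0\cos\theta + y_0\sin\theta$, a sinusoid in the rotation angle. Here is a small sketch (made-up source positions and bin sizes) that builds such a sinogram:

```python
import numpy as np

def sinogram(sources, thetas, s_bins):
    """Idealized sinogram: each row is one rotation angle, each column
    one lateral position bin. `sources` is a list of (x0, y0, strength)."""
    image = np.zeros((len(thetas), len(s_bins) - 1))
    for x0, y0, strength in sources:
        # Lateral projection of the source at each rotation angle.
        s = x0 * np.cos(thetas) + y0 * np.sin(thetas)
        for row, si in enumerate(s):
            col = np.searchsorted(s_bins, si) - 1
            if 0 <= col < image.shape[1]:
                image[row, col] += strength
    return image

thetas = np.radians(np.arange(0, 180, 5))   # 36 angles
s_bins = np.linspace(-30, 30, 61)           # 1 mm lateral bins
sino = sinogram([(10.0, 0.0, 1.0), (-5.0, 8.0, 2.0)], thetas, s_bins)
print(sino.shape)  # (36, 60)
```

Plotting `sino` with `plt.imshow` shows one sine curve per source, with amplitude set by its distance from the rotation axis, which is exactly the pattern you should look for in your measured sinogram.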
Now you can use the `Three.json` file as a template for this part of the experiment.
You will want to run at least two scans for the known configuration:
You may use either one for the exercise, but be aware that you may want to work on the plotting between lab sessions. The notebook for reconstructing configurations can be found in the next section.
After you have a good idea of what parameters are needed to resolve our Na22 sources you'll move onto the core piece of the lab: scanning a sample with $\beta^+$ emitters in unknown locations. One such sample is shown below:
It is suggested that you take a quick, low-resolution scan first to have some idea of what you're dealing with, and then to set up a longer scan to run overnight to capture fine details in the setup.
We've done a fair bit of collecting data (coincidence counts) as a function of distance and angle, but how do we turn that into $x$ and $y$ position?
A good introduction to the basic techniques we'll use for processing data is this handout on Tomographic Image Reconstruction, from the 41st American Association of Physicists in Medicine conference. We will use what is essentially a discretized version of an unfiltered backprojection operation, but reading further on what other options exist may be interesting.
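As a concrete illustration of the idea, here is a minimal sketch of a discretized unfiltered backprojection. This is not the provided notebook's code; the grid size and the toy point-source sinogram are made up. Each 1D coincidence profile is smeared across the image plane, rotated to its scan angle, and the results are summed.

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, thetas_deg):
    """Unfiltered backprojection: smear each 1D profile across the
    image plane, rotate it to its scan angle, and sum the results."""
    n = sinogram.shape[1]
    image = np.zeros((n, n))
    for profile, theta in zip(sinogram, thetas_deg):
        smear = np.tile(profile, (n, 1))  # constant along each column
        image += rotate(smear, theta, reshape=False, order=1)
    return image / len(thetas_deg)

# Toy sinogram for a single point source 8 pixels right of center.
thetas = np.arange(0, 180, 10)
n = 41
sino = np.zeros((len(thetas), n))
for row, th in enumerate(thetas):
    s = 8 * np.cos(np.radians(th))          # lateral projection
    sino[row, int(round(s)) + n // 2] = 1.0

img = backproject(sino, thetas)
peak = np.unravel_index(img.argmax(), img.shape)
print(peak)  # near row n//2, column n//2 + 8
```

The backprojected lines all intersect at the source position, so the sum peaks there; the characteristic blur around the peak is why filtered variants of this algorithm exist.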
Minutes 5 through 11 of the following video also do a decent job of introducing our basic tomographic reconstruction technique.
To create a 2-dimensional image from your data, we've created a Python script that will do the basic reconstruction, which can be accessed here:
And if you want the example datafile you can grab it here.
The notebook walks through the process step-by-step, but you will need to edit some cells manually to set the unit conversion between motor steps and cm.
If you move the platform some distance before starting your scan, the reconstruction algorithm will do some odd things. One possible fix is to add a new cell before the rotation operation with the following:
```python
pad_amt = 7
new_size = len(shifted_data[0]) + pad_amt
padded_data = np.zeros((len(shifted_data), new_size, new_size))
for index, item in enumerate(shifted_data):
    padded_data[index] = np.pad(item, (pad_amt, 0))
fig, ax = plt.subplots()
ax.imshow(padded_data[0])
```
and then change the rotation loop to
```python
for index, item in enumerate(padded_data):
```
You'll have to tweak `pad_amt` manually until you get circles instead of rings.
There are instructions on how the code works and what it is doing at each step, and reading through it will be much more helpful than trying to run all of it at once.
You will be able to produce a few different sorts of plots with this code, and are free to modify it as needed.
Ultimately, you should identify the position of all sources within a sample as well as the relative intensities between them.
For the last day of the lab, we have an optimization task for you. We have another unknown sample prepared for you to scan today. This time, you have the following constraints:
You should write a short summary of the capabilities of the system used for this experiment. It should include:
You should include plots relevant to how you decided any of the above quantities.
Present your reconstruction & analysis of the unknown configuration. This should include:
Describe your performance in the challenge task. This should entail: