1 Introduction
This post is the second installment of the napari series. In the first blog post I introduced the napari-cosmx
plugin, how CosMx® Spatial Molecular Imager (SMI) data can be viewed as layers within napari, and a method for processing, or “stitching”, raw data that are exported from AtoMx™ Spatial Informatics Portal (SIP).
One of the things that I love about the napari-cosmx
plugin is its duality. It’s flexible enough to quickly explore SMI data in a Graphical User Interface (GUI) yet robust enough to script reproducible results and tap into the underlying python objects. In this post, I’ll walk through some of the basic ways in which we can use napari-cosmx
to view SMI data. I’ll make use of this duality by sharing a combination of GUI and programmatic tips.
This is not intended to be official documentation for the napari-cosmx
plugin. The tips herein are not an exhaustive list of features and methods.
- Section 2 shows how to preprocess the example dataset. If you are using your AtoMx-exported SMI data, this section is optional
- Section 3 shows basic GUI tips for interacting with SMI data
- Section 4 provides several examples of recapitulating the aesthetics seen with the GUI as well as advanced ways we can fine-tune images and more
2 The Example Dataset
The example dataset that I am using is the mouse coronal hemisphere FFPE dataset that is available to download from NanoString's website here. If you are following along with your own AtoMx-exported data, you can skip most of these pre-processing steps as they are not required (but see Section 2.1.2 if you would like to view metadata).
Large memory (RAM) might be required to work with raw images, and stitching the example data on a laptop might not work for everyone. The raw data size for this example is 183 GB. Not all raw data are needed to stitch, however, and users can exclude the CellStatsDir/Morphology3D
folder if downloading locally. Excluding this folder brings the raw data closer to 35 GB.
The computer I used to stitch was an M1 MacBook Pro. Processing this 130-FOV, mouse 1K dataset took about 10 minutes, ~700% CPU, and a peak memory usage of about 12 GB (swap space was also used). The napari files combined took an additional 22 GB of disk space.
2.1 Pre-processing example data
Once downloaded, unzip the HalfBrain.zip
file on your computer or external hard drive. The format for this dataset differs from the expected AtoMx SIP export so a preprocessing step is necessary.
When uncompressed, the raw data in the HalfBrain folder are actually nested like this:
Terminal
tree -L 4
├── AnalysisResult
│ └── HalfBrain_20230406_205644_S1
│ └── AnalysisResults <-- **Raw Data Folder**
│ └── cp7bjyp7pm
├── CellStatsDir
│ └── HalfBrain_20230406_205644_S1
│ └── CellStatsDir <-- **Raw Data Folder**
│ ├── CellComposite
│ ├── CellOverlay
│ ├── FOV001
│ ├── FOV002
│ ...
│ ├── Morphology2D
│ └── RnD
└── RunSummary
└── HalfBrain_20230406_205644_S1
└── RunSummary <-- **Raw Data Folder**
├── Beta12_Affine_Transform_20221103.csv
├── FovTracking
├── Morphology_ChannelID_Dictionary.txt
├── Run1000_20230406_205644_S1_Beta12_ExptConfig.txt
├── Run1000_20230406_205644_S1_Beta12_SpatialBC_Metrics4D.csv
├── Shading
├── c902.fovs.csv
└── latest.fovs.csv
2.1.1 Expected Raw Data Format
In order for napari-cosmx
to stitch this non-AtoMx example dataset, we’ll need to rearrange the folders so that the nested raw data are at the top level. After rearrangement, the proper file structure should look like this:
Terminal
tree -L 2
.
├── AnalysisResults
│ └── cp7bjyp7pm
├── CellStatsDir
│ ├── CellComposite
│ ├── CellOverlay
│ ├── FOV001
│ ├── FOV002
...
│ ├── Morphology2D
│ └── RnD
└── RunSummary
├── Beta12_Affine_Transform_20221103.csv
├── FovTracking
├── Morphology_ChannelID_Dictionary.txt
├── Run1000_20230406_205644_S1_Beta12_ExptConfig.txt
├── Run1000_20230406_205644_S1_Beta12_SpatialBC_Metrics4D.csv
├── Shading
├── c902.fovs.csv
└── latest.fovs.csv
There are a few ways to rearrange. The first method retains the original folder structure and simply makes symbolic links to the data in the expected format. Here's how to do it on macOS/Linux (Windows not shown).
Terminal
# Terminal in Mac/Linux
# cd to folder containing HalfBrain. Then,
mkdir -p RawFiles && cd $_
ln -s ../HalfBrain/AnalysisResult/HalfBrain_20230406_205644_S1/AnalysisResults .
ln -s ../HalfBrain/CellStatsDir/HalfBrain_20230406_205644_S1/CellStatsDir .
ln -s ../HalfBrain/RunSummary/HalfBrain_20230406_205644_S1/RunSummary .
Alternatively, we could manually move folders. Specifically, in your Finder window, create a folder named RawData. Then, move:
- HalfBrain/AnalysisResult/HalfBrain_20230406_205644_S1/AnalysisResults to RawData/AnalysisResults
- HalfBrain/CellStatsDir/HalfBrain_20230406_205644_S1/CellStatsDir to RawData/CellStatsDir
- HalfBrain/RunSummary/HalfBrain_20230406_205644_S1/RunSummary to RawData/RunSummary
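If you're on Windows or simply prefer scripting the rearrangement, the same linking can be done from Python. This is a hypothetical sketch of my own, not part of the plugin; adjust `base`, `run`, and `dest` to wherever your download lives:

```python
import os
from pathlib import Path

# Hypothetical sketch: recreate the symbolic links from Python.
# `base` and `run` must match where you unzipped HalfBrain.
base = Path("HalfBrain")
run = "HalfBrain_20230406_205644_S1"
dest = Path("RawFiles")
dest.mkdir(exist_ok=True)

# each top-level raw-data folder and its nested location in the download;
# note the export nests e.g. RunSummary/<run>/RunSummary
nested = {
    "AnalysisResults": base / "AnalysisResult" / run / "AnalysisResults",
    "CellStatsDir": base / "CellStatsDir" / run / "CellStatsDir",
    "RunSummary": base / "RunSummary" / run / "RunSummary",
}
for name, src in nested.items():
    link = dest / name
    if not link.is_symlink():
        os.symlink(src.resolve(), link, target_is_directory=True)
```

Note that symbolic links on Windows may require administrator privileges or Developer Mode; moving the folders outright works just as well.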
Once the file structure is properly formatted, use the stitching widget method from an earlier blog post to create the mouse brain napari files.
2.1.2 Adding metadata
We will also use the cell typing data from the Seurat file. Let's include the following metadata columns:
- RNA_nbclust_clusters: the cell typing results (with abbreviated names)
- RNA_nbclust_clusters_long: (optional) human-readable cell type names
- spatialClusteringAssignments: spatial niche assignments
Note that the Seurat file contains two sections of mouse brain samples; we need to filter the metadata to include only those cells from Run1000_S1_Half. Also note that when preparing the metadata for napari, the cell ID must be the first column (see the relocate
verb in the code below).
# This is R code
library(Seurat)
library(plyr)
library(dplyr)

# sem_path will be wherever you downloaded your Seurat object
sem_path <- "/path/to/your/muBrainRelease_seurat.RDS"
sem <- readRDS(sem_path)

meta <- sem@meta.data %>%
  filter(Run_Tissue_name == "Run1000_S1_Half") %>%
  select(RNA_nbclust_clusters,
         RNA_nbclust_clusters_long,
         spatialClusteringAssignments)

meta$cell_ID <- row.names(meta)     # adds cell_ID column
rownames(meta) <- NULL
meta <- meta %>% relocate(cell_ID)  # moves cell_ID to first column position

write.table(meta, file = "/path/to/inside/napari-ready-folder/_metadata.csv",
            sep = ",", col.names = TRUE, row.names = FALSE, quote = FALSE)
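If you'd rather prepare _metadata.csv in Python, the same filtering and column reordering can be sketched with pandas. The toy frame below is a hypothetical stand-in for the real Seurat metadata (whose cell IDs live in the row names):

```python
import pandas as pd

# toy stand-in for sem@meta.data; cell IDs are the row names
meta = pd.DataFrame(
    {"Run_Tissue_name": ["Run1000_S1_Half", "Run1000_S2_Half"],
     "RNA_nbclust_clusters": ["MOL", "GN"],
     "RNA_nbclust_clusters_long": ["Mature oligodendrocyte", "Granule neuron"],
     "spatialClusteringAssignments": ["niche_1", "niche_2"]},
    index=["c_1_100_1", "c_2_200_2"])

# keep only cells from the first tissue section
meta = meta[meta["Run_Tissue_name"] == "Run1000_S1_Half"]
meta = meta[["RNA_nbclust_clusters",
             "RNA_nbclust_clusters_long",
             "spatialClusteringAssignments"]]

# napari-cosmx expects the cell ID to be the *first* column
meta = meta.rename_axis("cell_ID").reset_index()
meta.to_csv("_metadata.csv", index=False)
```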
Now that the data are ready, drag and drop the slide folder into napari to launch the plugin.
3 Interacting with the GUI
This section focuses on features relevant to the napari-cosmx
plugin. Users new to napari may find napari’s general viewer tutorial helpful as well.
When we open a slide with napari-cosmx
, by default there will be a few napari layers visible (Initial View tab; Figure 2). These include FOV labels
and Segmentation
. Clicking the eye icon next to a layer will change its visibility. Let’s turn off those layers for a moment and visualize the cell types from the RNA_nbclust_clusters
column (Cell Types tab; Figure 3). We can also color cells by their spatialClusteringAssignments values (Niches tab; Figure 4). In the Color Cells
widget, we can control which cell types or niches we would like to view. When we activate the Metadata
layer, hovering over a given cell will display its associated metadata as a ribbon at the bottom of the application. To view the IF channels, use the Morphology Images
widget. In Figure 5 (IF Channels tab) I turned off the visibility of the cell types, added GFAP in red and DNA in cyan, and zoomed into the hippocampus. When I click on a layer, it becomes the active layer and I can use the layer controls
widget to adjust that layer's attributes such as contrast limits, gamma, layer blending, and more. Finally, we can view raw transcripts (or proteins). Simply select the target and the color and click Add layer
. In Figure 6 (Transcripts tab), I zoomed in on a section of the cortex and plotted Calb1.
Like other programs that use layers, napari allows the layers to be moved up/down and to blend (not shown below).
To capture screenshots, simply click File > Save Screenshot...
. The images above are captured “with viewer” but that is optional.
4 Scripting with napari-cosmx
This section is for advanced users who want finer control of the aesthetics.
Most of the items we've covered can also be accessed through various methods of the gem
object, which can be found preloaded in the ipython
interpreter (the >_
icon; yellow arrow in Figure 2). You may have noticed in the figures above that code was being used to take the screenshots. Here's the full script that can help reproduce the figures above. I use reproducible scripts often because I may want to make slight changes to a figure down the road. For example, if a reviewer likes an image overall but asks for different cell colors, I just need to change the colors in the code, and the script will pan and zoom where needed, set the IF channels and contrasts, and reproduce the other layers programmatically.
import imageio

output_path = 'path/to/store/figures'

## Initial
gem.show_widget()
gem.viewer.window.resize(1650, 1100)
gem.viewer.camera.center = (0.0, 0.6830708351616575, -57.16103351264418)
gem.viewer.camera.zoom = 128.0

fig_path = output_path + "/fig-initial.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=False)
    writer.append_data(screenshot)

## Cell types only
gem.viewer.layers['FOV labels'].visible = False
gem.viewer.layers['Segmentation'].visible = False

fig_path = output_path + "/fig-cell-types-short.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=False)
    writer.append_data(screenshot)

## Niches
gem.color_cells('spatialClusteringAssignments')

fig_path = output_path + "/fig-niches.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=False)
    writer.append_data(screenshot)

# IF only
gem.color_cells('RNA_nbclust_clusters')
gem.viewer.layers['RNA_nbclust_clusters'].visible = False

gem.add_channel('GFAP', colormap='red')
gfap = gem.viewer.layers['GFAP']

gem.add_channel('DNA', colormap='cyan')
dna = gem.viewer.layers['DNA']
dna.contrast_limits = [208.39669421487605, 1328.5289256198346]
dna.gamma = 1.1682758620689655

gem.viewer.camera.center = (0.0, -0.7181216373638928, -55.314605674992876)
gem.viewer.camera.zoom = 1095.856465340922
gem.viewer.layers['Segmentation'].visible = True
gem.viewer.layers['Segmentation'].opacity = 0.371900826446281

fig_path = output_path + "/fig-IF.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=False)
    writer.append_data(screenshot)

# Transcripts
cell_type_layer = gem.viewer.layers['RNA_nbclust_clusters']
cell_type_layer.opacity = 0.9
cell_type_layer.visible = True
gem.viewer.camera.center = (0.0, -0.009723512204714457, -59.25760232977486)
gem.viewer.camera.zoom = 1204.3755331686673
gem.viewer.layers['Segmentation'].visible = True
gem.viewer.layers['Segmentation'].opacity = 0.6
gem.plot_transcripts(gene="Calb1", color='white', point_size=50)

fig_path = output_path + "/fig-transcripts.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=False)
    writer.append_data(screenshot)
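The three screenshot lines repeat for every figure, so I find it handy to wrap them. save_view below is my own convenience function, not a napari-cosmx method:

```python
import imageio

def save_view(viewer, fig_path, canvas_only=False, dpi=800):
    # snapshot whatever the viewer currently shows and write it to disk
    with imageio.get_writer(fig_path, dpi=(dpi, dpi)) as writer:
        writer.append_data(viewer.screenshot(canvas_only=canvas_only))
```

With it, each figure becomes a one-liner, e.g. save_view(gem.viewer, output_path + "/fig-niches.png").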
In practice, I use the GUI to adjust the settings (e.g., zoom, opacity) and then "jot down" the results in my text editor. For example, when I zoom or pan to another location, that location can be found at:

gem.viewer.camera.zoom
gem.viewer.camera.center
Similarly, the contrast limits and gamma values for IF channels can be saved as well.
dna = gem.viewer.layers['DNA']
dna.contrast_limits = [208.39669421487605, 1328.5289256198346]
dna.gamma = 1.1682758620689655
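To make the jotting down less manual, the camera state can be dumped to a small JSON file and restored in a later session. jot_down is a hypothetical helper of my own, not a plugin method:

```python
import json

def jot_down(camera, path="view_state.json"):
    # record the current pan/zoom so a later session can restore it
    state = {"center": list(camera.center), "zoom": camera.zoom}
    with open(path, "w") as fh:
        json.dump(state, fh)
    return state

# restoring later would look like:
# state = json.load(open("view_state.json"))
# gem.viewer.camera.center = tuple(state["center"])
# gem.viewer.camera.zoom = state["zoom"]
```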
Screenshots can be done programmatically with napari's screenshot
method, and there are additional settings you can change (e.g., just the canvas, scale) that we won't cover here.
fig_path = output_path + "/fig-transcripts.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=False)
    writer.append_data(screenshot)
There are also methods available in napari-cosmx
that do not have the GUI equivalent. We won’t be able to touch on all of these methods in this post but I want to highlight a few.
4.1 Color cells with outlines
We can plot the cell colors as boundaries instead of filled in polygons (Figure 7).
# gem.viewer.camera.center = (0.0, -0.5375926704126319, -54.7415680001114)
# gem.viewer.camera.zoom = 1371.524539264374
gfap.visible = False
dna.visible = False
gem.viewer.layers['Calb1'].visible = False
gem.viewer.layers['Npy'].visible = False
gem.viewer.layers['Targets'].visible = False

gem.viewer.camera.center = (0.0, -0.6346878790298397, -54.95271110236874)
gem.viewer.camera.zoom = 2113.6387223301786
gem.color_cells('RNA_nbclust_clusters', contour=2)

fig_path = output_path + "/fig-contours.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=True)
    writer.append_data(screenshot)
4.2 Plot transcripts with an expanded color palette
The GUI offers a handful of colors to plot transcripts, but programmatically we can specify any color, by name or by hex code. For example:
gem.plot_transcripts(gene = "Calb1", color = 'pink', point_size=20)
which is the same as
gem.plot_transcripts(gene = "Calb1", color = '#FFC0CB', point_size=20)
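Named colors are just hex codes under the hood. If you're curious what a name maps to, matplotlib (installed alongside napari) can translate in both directions; this snippet is purely illustrative:

```python
from matplotlib.colors import to_hex, to_rgb

print(to_hex("pink"))     # '#ffc0cb'
print(to_rgb("#FFC0CB"))  # RGB floats in [0, 1]
```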
4.3 Plotting genes with list comprehensions
We can plot similar genes or targets with the same color. For example, the code that generated Figure 8 is here.
gem.viewer.camera.center = (0.0, -0.6346878790298397, -54.95271110236874)
gem.viewer.camera.zoom = 2113.6387223301786

df = gem.targets
filtered_df = df[df.target.str.contains("NegPrb")]

pandas_df = filtered_df.to_pandas_df()
negatives = pandas_df.target.unique().tolist()
[gem.plot_transcripts(gene=x, color="white", point_size=20) for x in negatives];

fig_path = output_path + "/fig-negatives.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=False)
    writer.append_data(screenshot)
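The string-matching step itself is ordinary pandas and can be tried on a toy frame (a stand-in here — in the plugin, gem.targets is not a plain pandas DataFrame, hence the to_pandas_df() conversion):

```python
import pandas as pd

# toy stand-in for the targets table
targets = pd.DataFrame({"target": ["Calb1", "Npy", "NegPrb1", "NegPrb2", "NegPrb1"]})

# unique targets whose names contain "NegPrb"
negatives = targets[targets.target.str.contains("NegPrb")].target.unique().tolist()
print(negatives)  # ['NegPrb1', 'NegPrb2']
```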
We can also supply a list of tuples where each tuple is a target and a color.
genes = [('Npy', "magenta"), ("Calb1", "white")]
[gem.plot_transcripts(gene=x[0], color=x[1], point_size=20) for x in genes];

for x in negatives:
    gem.viewer.layers[x].visible = False

gem.color_cells('RNA_nbclust_clusters') # reset to filled contours
cell_type_layer = gem.viewer.layers['RNA_nbclust_clusters']
cell_type_layer.opacity = 0.9
cell_type_layer.visible = True
gem.viewer.camera.center = (0.0, -0.026937869510583412, -59.20560304046731)
gem.viewer.camera.zoom = 3820.667999302201
gem.viewer.layers['Segmentation'].visible = True
gem.viewer.layers['Segmentation'].opacity = 0.6

fig_path = output_path + "/fig-crowded-tx.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=False)
    writer.append_data(screenshot)
4.4 Changing transcript transparency
Sometimes transcripts can be stacked on top of each other to the point that it’s difficult to qualitatively determine the number of transcripts. Adjusting the transcript opacity of the layer in the GUI only changes the transparency of a single point. But it’s possible to change all points using the ipython
interpreter.
gem.viewer.layers['Npy'].opacity = 0.5

fig_path = output_path + "/fig-tx-opacity.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=False)
    writer.append_data(screenshot)
4.5 Center to a particular FOV
While zooming (gem.viewer.camera.zoom) and panning (gem.viewer.camera.center) can control the exact location of the camera, you can programmatically go to a particular FOV with the center_fov
method.
# center to fov 123 and zoom in a little (i.e., buffer > 1).
gem.center_fov(fov=123, buffer=1.2)

fig_path = output_path + "/fig-center-to-fov.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=False)
    writer.append_data(screenshot)
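Combined with the screenshot recipe, center_fov lends itself to batch figure export. A hypothetical sketch — the loop, helper name, and file naming are my own, not plugin features:

```python
import imageio

def export_fovs(gem, fovs, output_path, buffer=1.2):
    # center on each FOV in turn and save what the canvas shows
    paths = []
    for fov in fovs:
        gem.center_fov(fov=fov, buffer=buffer)
        fig_path = f"{output_path}/fig-fov{fov:03d}.png"
        with imageio.get_writer(fig_path) as writer:
            writer.append_data(gem.viewer.screenshot(canvas_only=True))
        paths.append(fig_path)
    return paths

# e.g. export_fovs(gem, [120, 121, 122, 123], output_path)
```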
4.6 Plot all transcripts
The method add_points
plots all the points for a given FOV. If no FOV is specified, it plots every transcript on the slide, which can be taxing on resource-limited computers.
gem.add_points(fov=123)
gem.viewer.layers['Targets'].opacity = 0.4

fig_path = output_path + "/fig-tx-all.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=False)
    writer.append_data(screenshot)
4.7 Changing background color
For some publication styles (e.g., posters), turning the background a lighter color might be useful. However, when changing the background, some items might be more difficult to see (compare Figure 7 with Figure 13).
gem.viewer.window.qt_viewer.canvas.background_color_override = 'white'

fig_path = output_path + "/fig-white.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=True)
    writer.append_data(screenshot)
4.8 Scale Bar location
To reposition the scale bar to the bottom left:
gem.viewer.window.qt_viewer.canvas.background_color_override = 'black'
gem.viewer.scale_bar.position = 'bottom_left'

fig_path = output_path + "/fig-scale_bl.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=True)
    writer.append_data(screenshot)
4.9 Specify individual cell types
Here's my last tip for this post. Using the color_cells
method, one can choose which cell types to color, and with which colors, by supplying a dictionary. Cell types absent from the dictionary are not colored.
custom_colors = {
    "MOL": "#AA0DFE",
    "GN": "#85660D",
    "CHO_HB": "orange"  # need not be a hex code
}
gem.color_cells('RNA_nbclust_clusters', color=custom_colors)

fig_path = output_path + "/fig-color_three.png"
with imageio.get_writer(fig_path, dpi=(800, 800)) as writer:
    screenshot = gem.viewer.screenshot(canvas_only=True)
    writer.append_data(screenshot)
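When there are many cell types, hand-picking hex codes gets tedious. One option is to build the dictionary from a matplotlib colormap; a sketch, where the cell type names and palette choice are arbitrary:

```python
from matplotlib import colormaps
from matplotlib.colors import to_hex

cell_types = ["MOL", "GN", "CHO_HB", "AST"]  # hypothetical subset
tab10 = colormaps["tab10"]

# assign the i-th palette entry to the i-th cell type, as hex codes
custom_colors = {ct: to_hex(tab10(i)) for i, ct in enumerate(cell_types)}
# then: gem.color_cells('RNA_nbclust_clusters', color=custom_colors)
```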
5 Conclusion
In this post I showed you some of my go-to napari-cosmx
plugin features that I use when analyzing SMI data. In my workflow, I take advantage of the plugin’s interactivity as well as its underlying functions and methods. This comes in the form of “jotting down” settings for reproducibility or fine-tuning an image’s aesthetics ahead of publication. I couldn’t cover all the things this plugin can do but look for other tips in future posts.