Smoothing scalar data on the surface of a high-resolution sphere can be a slow process. If the filter is not truncated, the distances between all vertices or face barycentres need to be computed, which is very time-consuming. If the filter is truncated, the process is faster, but at the resolutions typically used in brain imaging it can still be quite slow, a problem that is amplified when data from many subjects are analysed.

However, if the data for each subject have already been interpolated to a common grid, such as an icosahedron recursively subdivided multiple times (details here), then the distances do not need to be calculated repeatedly for each of them: doing so just once suffices. Furthermore, the filter itself can be implemented such that the whole smoothing process reduces to a single matrix multiplication.

Consider the smoothing defined in Lombardi (2002), which we used in Winkler et al. (2012):

$$\tilde{Q}_n = \frac{\sum_{j=1}^{N} Q_j \, F\!\left(g(\mathbf{x}_n, \mathbf{x}_j)\right)}{\sum_{j=1}^{N} F\!\left(g(\mathbf{x}_n, \mathbf{x}_j)\right)}$$

where $\tilde{Q}_n$ is the smoothed quantity at the vertex or face $n$, $Q_j$ is the quantity assigned to the $j$-th vertex or face of a sphere containing $N$ vertices or faces, $g(\mathbf{x}_n, \mathbf{x}_j)$ is the geodesic distance between vertices or faces with coordinates $\mathbf{x}_n$ and $\mathbf{x}_j$, and $F$ is the Gaussian filter, defined as a function of the geodesic distance between points.
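As an illustration, the direct (per-vertex) computation can be sketched as follows. This is a minimal Python/NumPy version, not the actual implementation (which runs in Octave/MATLAB); the function names are hypothetical, and geodesic distances are taken as great-circle distances computed from 3-D coordinates:

```python
import numpy as np

def gaussian_filter(g, fwhm):
    """Gaussian weight as a function of geodesic distance g (same units as fwhm)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-g**2 / (2.0 * sigma**2))

def smooth_direct(Q, coords, radius, fwhm):
    """Smooth quantities Q defined at the N points `coords` (N x 3 array)
    on a sphere of the given radius, computing, for each point in turn,
    the geodesic distances to all other points."""
    N = len(Q)
    Qs = np.empty(N)
    for n in range(N):
        # Great-circle (geodesic) distance from point n to every point:
        cosang = np.clip(coords @ coords[n] / radius**2, -1.0, 1.0)
        g = radius * np.arccos(cosang)
        w = gaussian_filter(g, fwhm)
        Qs[n] = (w @ Q) / w.sum()  # weighted average; weights sum to unity
    return Qs
```

Because the weights are normalised per point, a constant field is left unchanged by the smoothing, which is a quick sanity check for any implementation.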

The above formula requires that all distances between the current vertex or face and the other points are calculated, and that this is repeated for each $n$, in a time-consuming process that needs to be repeated over again for every additional subject. If, however, the distances are known and organised into a distance matrix $\mathbf{G}$, the filter can take the form of a matrix of the same size, $\mathbf{F}$, with the values at each row scaled so as to add up to unity, and the same smoothing can proceed as a simple matrix multiplication:

$$\tilde{\mathbf{Q}} = \mathbf{F}\,\mathbf{Q}$$
If the grid is the same for all subjects, which is the typical case when comparisons across subjects are performed, $\mathbf{F}$ can be calculated just once, saved, and reused for each subject.
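A sketch of this idea in Python/SciPy, assuming the pairwise geodesic distances are already available as a dense matrix (the function name and the truncation argument are hypothetical; the document's own tools do this in Octave/MATLAB):

```python
import numpy as np
from scipy import sparse

def build_filter(G, fwhm, trunc):
    """Build a sparse filter matrix F from an N x N geodesic distance
    matrix G, zeroing weights beyond the truncation distance `trunc`
    (same units as G and fwhm), with each row scaled to sum to unity."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    W = np.where(G <= trunc, np.exp(-G**2 / (2.0 * sigma**2)), 0.0)
    W /= W.sum(axis=1, keepdims=True)  # rows add up to one
    return sparse.csr_matrix(W)        # explicit zeros are dropped

# Build F once for the common grid; smoothing each subject is then just:
#   Q_smoothed = F @ Q
# F can be stored with sparse.save_npz(...) and reloaded with
# sparse.load_npz(...) for the remaining subjects.
```

The diagonal of $\mathbf{G}$ is zero, so every row keeps at least one non-zero weight and the row normalisation is always well defined.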

It should be noted, however, that although running faster, the method requires much more memory. For a filter of full width at half maximum (FWHM) $w$, truncated at a distance $t \cdot w$ from the filter center, in a sphere of radius $r$, the number of non-zero elements in $\mathbf{F}$ is approximately:

$$\operatorname{nnz}(\mathbf{F}) \approx \frac{N^2}{2}\left(1 - \cos\frac{t\,w}{r}\right)$$

whereas the total number of elements is $N^2$. Even using sparse matrices, this may require a large amount of memory space. For practical purposes, a filter of width $w$ = 20 mm can be truncated at twice the width ($t$ = 2), for application on a sphere of radius $r$ = 100 mm made by 7 recursive subdivisions of an icosahedron, still fitting comfortably in a computer with 16 GB of RAM. Wider filters may require more memory to run.
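The non-zero count can be estimated from the area of a spherical cap: the fraction of the sphere's surface within geodesic distance $t\,w$ of a point is $(1-\cos(t\,w/r))/2$. A back-of-the-envelope check in Python (the vertex count for 7 subdivisions and the bytes per stored sparse element are assumptions for illustration):

```python
import numpy as np

def nnz_estimate(N, fwhm, trunc_mult, radius):
    """Approximate non-zero count in F: N^2 times the fraction of the
    sphere's surface within geodesic distance trunc_mult*fwhm of a point."""
    frac = (1.0 - np.cos(trunc_mult * fwhm / radius)) / 2.0
    return N**2 * frac

# An icosahedron subdivided 7 times has 10*4**7 + 2 vertices (20*4**7 faces).
N = 10 * 4**7 + 2                # 163842 vertices
nnz = nnz_estimate(N, fwhm=20.0, trunc_mult=2.0, radius=100.0)

# Each stored element of a sparse matrix takes very roughly 12-16 bytes
# (an 8-byte value plus index overhead), giving a ballpark memory figure:
gib = nnz * 16 / 2**30
```

Only about 4% of the $N^2$ elements survive the truncation in this configuration, which is what makes the sparse representation viable at all.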

The script `smoothdpx`, part of the areal interpolation tools, **available here**, can be used to do both things, that is, smooth the data for any given subject and save the filter so that it can be reused with other subjects. To apply a previously saved filter, the script `rpncalc` can be used. These commands require Octave or MATLAB, and if Octave is available, they can be executed directly from the command line.

## Figures

The figures above represent facewise data on the surface of a sphere of 100 mm radius, made by 4 recursive subdivisions of a regular icosahedron, constructed with the `platonic` command (details here), shown without smoothing and smoothed with filters of FWHM = 7, 14, 21, 28 and 35 mm.

## References

- Lombardi M. Interpolation and smoothing. *Astron. Astrophys.* 2002;395(2):733–745.
- Winkler AM, Sabuncu MR, Yeo BT, Fischl B, Greve DN, Kochunov P, Nichols TE, Blangero J, Glahn DC. Measuring and comparing brain cortical surface area and other areal quantities. *Neuroimage*. 2012;61(4):1428–1443.