Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python

Bibliographic Details
Published in: Frontiers in Neuroinformatics, Vol. 8, p. 39
Main Authors: Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Research Foundation (Frontiers Media S.A.), 29.04.2014
ISSN: 1662-5196
DOI: 10.3389/fninf.2014.00039


More Information
Summary: In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1 TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
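The abstract describes a staged pipeline (mosaic, pre-process, segment, extract features) driven by a logging, multi-threaded Python script. The following is a minimal sketch of such a driver, assuming hypothetical step names: the stage functions here only log their calls, whereas the real system would invoke the compiled C++ FARSIGHT modules. The size calculation uses the dimensions quoted in the abstract (5 channels of 6000 × 10,000 × 500 voxels at 16 bits/voxel), which works out to 300 GB — consistent with the "exceeding 250 GB" figure.

```python
"""Sketch of a server-side pipeline driver in the spirit of the
workflow described above. Stage names ('mosaic', 'preprocess', ...)
are illustrative placeholders, not the toolkit's actual API."""
import logging
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("pipeline")

# Dataset dimensions quoted in the abstract: 5 channels,
# 6000 x 10,000 x 500 voxels, 16 bits (2 bytes) per voxel.
CHANNELS, X, Y, Z, BYTES_PER_VOXEL = 5, 6000, 10_000, 500, 2

def dataset_size_gb():
    """Raw dataset size: 5 * 6000 * 10000 * 500 * 2 bytes = 300 GB."""
    return CHANNELS * X * Y * Z * BYTES_PER_VOXEL / 1e9

def run_step(step, channel):
    # A real driver would launch a compiled FARSIGHT module here
    # (e.g. via subprocess) and record its exit status; we only log.
    log.info("channel %d: %s", channel, step)
    return (step, channel)

def process_all():
    # Stages run in order; channels within a stage run in parallel
    # threads, mirroring the multi-threaded execution the abstract
    # describes for the server-based script.
    results = []
    with ThreadPoolExecutor(max_workers=4) as pool:
        for step in ("mosaic", "preprocess", "segment", "extract_features"):
            results += list(pool.map(lambda c: run_step(step, c),
                                     range(CHANNELS)))
    return results

print(f"raw dataset size: {dataset_size_gb():.0f} GB")  # 300 GB
```

Keeping the stages sequential while parallelizing only within a stage is one simple way to respect the data dependencies (segmentation needs the pre-processed mosaic) while still using the many cores of the servers described.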
Edited by: Fernando Perez, University of California at Berkeley, USA
Reviewed by: Eleftherios Garyfallidis, University of Sherbrooke, Canada; Stefan Johann Van Der Walt, Stellenbosch University, South Africa
This article was submitted to the journal Frontiers in Neuroinformatics.