Wilsey, Philip III:Small: Partitioning Big Data for High Performance Computation of Persistent Homology <div><div>Persistent Homology (PH) is computationally expensive and cannot be directly applied to more than a few thousand data points. This project aims to develop mechanisms that allow the computation of PH on large, high-dimensional data sets. The proposed method will substantially reduce the run-time and memory requirements for the computation of PH without significantly compromising the accuracy of the results. <br></div><div><br></div><div>This project explores techniques to map a large point cloud P to another point cloud P' with fewer total points such that the topological spaces characterized by P and P' are nearly equivalent. The mapping from P to P' will potentially hide some of the smaller topological features during the PH computation on P'. Accurate PH results are restored by (i) upscaling the data for the identified large topological features, and (ii) partitioning the data to run concurrent PH computations that locate the smaller topological features.</div></div> NSF-CSSI-2020-Talk;Topological data analysis;Persistent Homology;Data Reduction and Partitioning;Parallel and Distributed Computing;Computer Engineering 2020-01-31
    https://figshare.com/articles/presentation/III_Small_Partitioning_Big_Data_for_High_Performance_Computation_of_Persistent_Homology/11778285
10.6084/m9.figshare.11778285.v1
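The P to P' mapping described in the abstract could be sketched, purely for illustration, as a grid-based centroid reduction: points are bucketed into coarse cells and each occupied cell is replaced by its centroid. This is a hypothetical sketch of the general data-reduction idea, not the project's actual algorithm; the function name `reduce_point_cloud` and the `cell_size` parameter are assumptions introduced here.

```python
from collections import defaultdict

def reduce_point_cloud(points, cell_size):
    """Map a point cloud P to a smaller P' of per-cell centroids.

    Hypothetical reduction: nearby points (same grid cell) collapse to
    one representative, shrinking P while roughly preserving its shape.
    Very small topological features may vanish at coarse cell sizes,
    which is the effect the abstract's upscaling/partitioning steps
    are meant to compensate for.
    """
    cells = defaultdict(list)
    for p in points:
        # Assign each point to the grid cell containing it.
        key = tuple(int(coord // cell_size) for coord in p)
        cells[key].append(p)
    reduced = []
    for bucket in cells.values():
        dim = len(bucket[0])
        # Replace all points in the cell with their centroid.
        centroid = tuple(
            sum(p[i] for p in bucket) / len(bucket) for i in range(dim)
        )
        reduced.append(centroid)
    return reduced

# Two tight clusters collapse to two representative points.
P = [(0.1, 0.1), (0.2, 0.15), (0.9, 0.9), (0.95, 0.85),
     (5.0, 5.0), (5.1, 5.05)]
P_prime = reduce_point_cloud(P, cell_size=1.0)
```

A coarser `cell_size` yields a smaller P' (faster PH computation) at the cost of hiding finer-scale features, which matches the accuracy/performance trade-off the abstract describes.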