{ "cells": [ { "cell_type": "markdown", "id": "e52ccbb6", "metadata": {}, "source": [ "# TDA: TP 2" ] }, { "cell_type": "markdown", "id": "65a2575b", "metadata": { "heading_collapsed": true }, "source": [ "# Statistics, Unsupervised, and Supervised Machine Learning on 3D shapes with Topological Data Analysis" ] }, { "cell_type": "markdown", "id": "9a6f50aa", "metadata": { "hidden": true }, "source": [ "In this practical session, we will use the various TDA tools presented in class in order to run data science tasks (inference, clustering, classification) on a data set of 3D shapes. As in the first practical session, we will use [`Gudhi`](https://gudhi.inria.fr/) (see the first practical session for installation instructions). The different sections of this notebook can be run independently (except Section 0, which is mandatory), so feel free to start with the project that sounds the most interesting to you :-)" ] }, { "cell_type": "markdown", "id": "057ad8a6", "metadata": { "hidden": true }, "source": [ "Note also that if you choose to switch from one section to another, make sure to clear all variables first (and run Section 0 again) since some variable names are shared between sections." ] }, { "cell_type": "code", "execution_count": null, "id": "614ef656", "metadata": { "hidden": true }, "outputs": [], "source": [ "import gudhi as gd\n", "print(gd.__version__)" ] }, { "cell_type": "code", "execution_count": null, "id": "909c3698", "metadata": { "hidden": true }, "outputs": [], "source": [ "import gudhi.clustering.tomato as gdt\n", "import gudhi.representations as gdr" ] }, { "cell_type": "markdown", "id": "23ab63d7", "metadata": { "hidden": true }, "source": [ "The `TensorFlow` module of `Gudhi` is only required in Section 4." ] }, { "cell_type": "code", "execution_count": null, "id": "2c8b89a8", "metadata": { "hidden": true }, "outputs": [], "source": [ "import gudhi.tensorflow as gdtf" ] }, { "cell_type": "markdown", "id": "ae88a522", "metadata": { "hidden": true }, "source": [ "Other than that, you are free to use whatever other Python package you feel comfortable with :-) We make some suggestions below (these dependencies are also required to run our solutions to the exercises). " ] }, { "cell_type": "code", "execution_count": null, "id": "d827fe41", "metadata": { "hidden": true }, "outputs": [], "source": [ "import os\n", "import sys" ] }, { "cell_type": "markdown", "id": "13435a69", "metadata": { "hidden": true }, "source": [ "We will use three standard Python libraries: `NumPy`, `SciPy` and `Matplotlib`." ] }, { "cell_type": "code", "execution_count": null, "id": "2109902e", "metadata": { "hidden": true }, "outputs": [], "source": [ "import numpy as np\n", "import scipy as sp\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "id": "046d0c5e", "metadata": { "hidden": true }, "outputs": [], "source": [ "%matplotlib notebook" ] }, { "cell_type": "markdown", "id": "39b51480", "metadata": { "hidden": true }, "source": [ "In order to visualize 3D shapes, we will use [`meshplot`](https://skoch9.github.io/meshplot/tutorial/)." ] }, { "cell_type": "code", "execution_count": null, "id": "0d15badb", "metadata": { "hidden": true }, "outputs": [], "source": [ "import meshplot as mp" ] }, { "cell_type": "markdown", "id": "2dc263f7", "metadata": { "hidden": true }, "source": [ "Finally, some dependencies are section-specific: we list those below."
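] }, { "cell_type": "markdown", "id": "aa100001", "metadata": { "hidden": true }, "source": [ "If one of these optional dependencies is missing, it can usually be installed directly from the notebook. The cell below is only a hedged sketch: package names on PyPI sometimes differ from the import names (e.g., `robust-laplacian` vs `robust_laplacian`), and `meshplot` is often installed from conda-forge instead." ] }, { "cell_type": "code", "execution_count": null, "id": "aa100002", "metadata": { "hidden": true }, "outputs": [], "source": [ "# Uncomment and adapt to your environment; availability and names on PyPI may vary.\n", "# !pip install robust-laplacian networkx scikit-learn tensorflow"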
] }, { "cell_type": "markdown", "id": "522a5143", "metadata": { "hidden": true }, "source": [ "In Section 1, when running bootstrap tests on ToMATo, we will use the `statistics` Python module. These tests will be based on the Laplace-Beltrami operator, which can be computed with [`robust_laplacian`](https://pypi.org/project/robust-laplacian/)." ] }, { "cell_type": "code", "execution_count": null, "id": "4f307b5e", "metadata": { "hidden": true }, "outputs": [], "source": [ "import statistics\n", "import robust_laplacian as rlap" ] }, { "cell_type": "markdown", "id": "98fac184", "metadata": { "hidden": true }, "source": [ "In Section 2, we will use the [`networkx`](https://networkx.org/) package to visualize and run computations on Mapper graphs." ] }, { "cell_type": "code", "execution_count": null, "id": "938b04d8", "metadata": { "hidden": true }, "outputs": [], "source": [ "import networkx as nx" ] }, { "cell_type": "markdown", "id": "3ab3d316", "metadata": { "hidden": true }, "source": [ "In Sections 3 and 4, when computing vectorizations and performing supervised machine learning and deep learning tasks, we will use various modules of [`Scikit-Learn`](https://scikit-learn.org/stable/index.html) and [`TensorFlow`](https://www.tensorflow.org/). " ] }, { "cell_type": "code", "execution_count": null, "id": "f02f2d80", "metadata": { "hidden": true }, "outputs": [], "source": [ "import sklearn.preprocessing as skp\n", "import sklearn.neighbors as skn\n", "import sklearn.model_selection as skm\n", "import sklearn.decomposition as skd\n", "import sklearn.manifold as skf\n", "import sklearn.pipeline as skl\n", "import sklearn.svm as sks\n", "import sklearn.ensemble as ske" ] }, { "cell_type": "code", "execution_count": null, "id": "303cced1", "metadata": { "hidden": true }, "outputs": [], "source": [ "import itertools\n", "import tensorflow as tf" ] }, { "cell_type": "markdown", "id": "d2aeee6f", "metadata": { "heading_collapsed": true }, "source": [ "# Section 0: Data set manipulation" ] }, { "cell_type": "markdown", "id": "c462e833", "metadata": { "hidden": true }, "source": [ "We are good to go! First things first, we have to download the data set. It can be obtained [here](https://people.cs.umass.edu/~kalo/papers/LabelMeshes/labeledDb.7z). Extract it, and save its path in the `dataset_path` variable." ] }, { "cell_type": "code", "execution_count": null, "id": "1918d4a5", "metadata": { "hidden": true }, "outputs": [], "source": [ "dataset_path = './3dshapes/'" ] }, { "cell_type": "markdown", "id": "4e744fb7", "metadata": { "hidden": true }, "source": [ "As you can see, the data set is split into several categories (`Airplane`, `Human`, `Teddy`, etc.), each category having its own folder. Inside each folder, some 3D shapes (i.e., 3D triangulations) are provided in [`.off`](https://en.wikipedia.org/wiki/OFF_(file_format)) format, and face (i.e., triangle) labels are provided in text files (extension `.txt`). " ] }, { "cell_type": "markdown", "id": "dfb6ccc8", "metadata": { "hidden": true }, "source": [ "Every data science project begins with some preprocessing ;-) " ] }, { "cell_type": "markdown", "id": "fd9fc3e6", "metadata": { "hidden": true }, "source": [ "Write a function `off2numpy` that reads the information contained in an `.off` file and stores it in two `NumPy` arrays, called `vertices` (type float and shape number_of_vertices x 3---the 3D coordinates of the vertices) and `faces` (type integer and shape number_of_faces x 3---the IDs of the vertices that create faces). 
Write also a function `get_labels` that stores the face labels of a given 3D shape in a `NumPy` array (type string or integer and shape [number_of_faces])." ] }, { "cell_type": "code", "execution_count": null, "id": "0f582f79", "metadata": { "hidden": true }, "outputs": [], "source": [ "def off2numpy(shape_name):\n", "    # the first line is the 'OFF' header, the second one gives the numbers of vertices, faces and edges\n", "    with open(shape_name, 'r') as S:\n", "        S.readline()\n", "        num_vertices, num_faces, _ = [int(n) for n in S.readline().split(' ')]\n", "        info = S.readlines()\n", "    vertices = np.array([[float(coord) for coord in l.split(' ')] for l in info[0:num_vertices]])\n", "    faces = np.array([[int(coord) for coord in l.split(' ')[1:]] for l in info[num_vertices:]])\n", "    return vertices, faces" ] }, { "cell_type": "code", "execution_count": null, "id": "45e80ee8", "metadata": { "hidden": true }, "outputs": [], "source": [ "def get_labels(label_name, num_faces):\n", "    # label files alternate between a label name and the list of face IDs carrying that label\n", "    L = np.empty([num_faces], dtype='|S100')\n", "    with open(label_name, 'r') as S:\n", "        info = S.readlines()\n", "    labels, face_indices = info[0::2], info[1::2]\n", "    for ilab, lab in enumerate(labels):\n", "        indices = [int(f)-1 for f in face_indices[ilab].split(' ')[:-1]]\n", "        L[np.array(indices)] = lab[:-1]\n", "    return L" ] }, { "cell_type": "markdown", "id": "9022e9d4", "metadata": { "hidden": true }, "source": [ "You can now apply your code and use `meshplot` to visualize a given 3D shape, say `61.off` in `Airplane`, and the labels on its faces." ] }, { "cell_type": "code", "execution_count": null, "id": "36899aed", "metadata": { "hidden": true }, "outputs": [], "source": [ "vertices, faces = off2numpy(dataset_path + 'Airplane/61.off')\n", "label_faces = get_labels(dataset_path + 'Airplane/61_labels.txt', len(faces))" ] }, { "cell_type": "code", "execution_count": null, "id": "af8b3cdd", "metadata": { "hidden": true }, "outputs": [], "source": [ "mp.plot(vertices, faces, c=skp.LabelEncoder().fit_transform(label_faces))" ] }, { "cell_type": "markdown", "id": "ddd0d81c", "metadata": { "heading_collapsed": true }, "source": [ "# Section 1: 3D robust segmentation with ToMATo" ] }, { "cell_type": "markdown", "id": "085bc4e1", "metadata": { "hidden": true }, "source": [ "In this section, our goal is to use the ToMATo algorithm to compute segmentations of 3D shapes, i.e., to assign labels to 3D shape vertices in an unsupervised way, that is, without training on known labels. This task was initially explored in [this article](https://www.lix.polytechnique.fr/~maks/papers/pers_seg.pdf). " ] }, { "cell_type": "markdown", "id": "ffef8f62", "metadata": { "hidden": true }, "source": [ "Overall, the main idea is to run ToMATo on the neighborhood graph given by the triangulation, with the so-called Heat Kernel Signature (HKS) as the filter. This is motivated by the fact that the HKS function typically takes higher values on the parts of the 3D shape that are very curved (such as, e.g., the tips of fingers in human hand shapes). " ] }, { "cell_type": "markdown", "id": "d80ce77c", "metadata": { "hidden": true }, "source": [ "The HKS was defined in [this article](https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-8659.2009.01515.x). 
It is related to the heat equation on a given 3D shape $S$:\n", "\n", "$$\\Delta_S f = -\\frac{\\partial f}{\\partial t}.$$\n", "\n", "More formally, the HKS function with parameter $t >0$ on a vertex $v\\in S$, and denoted by ${\\rm HKS}_t(v)$, is computed as:\n", "\n", "$${\\rm HKS}_t(v) = \\sum_{i=0}^{+\\infty} {\\rm exp}(-\\lambda_i\\cdot t)\\cdot \\phi_i^2(v),$$\n", "\n", "where $\\{\\lambda_i, \\phi_i\\}_i$ are the eigenvalues and eigenvectors of $\\Delta_S$.\n", "Intuitively, ${\\rm HKS}_t(v)$ is the amount of heat remaining on $v$ at time $t$, after unit sources of heat have been placed on each vertex at time `t=0`." ] }, { "cell_type": "markdown", "id": "10388299", "metadata": { "hidden": true }, "source": [ "Let's first pick a 3D shape. For instance, use `Hand/181.off` (or any other one you would like to try)." ] }, { "cell_type": "code", "execution_count": null, "id": "a9ce02d4", "metadata": { "hidden": true }, "outputs": [], "source": [ "vertices, faces = off2numpy(dataset_path + 'Hand/181.off')" ] }, { "cell_type": "markdown", "id": "cee27d7e", "metadata": { "hidden": true }, "source": [ "Now, use `robust_laplacian` to compute the first 200 eigenvalues and eigenvectors of its Laplacian (you can use the `eigsh` function of `SciPy` for diagonalizing the Laplacian)." ] }, { "cell_type": "code", "execution_count": null, "id": "3e8542d5", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "a3523bf6", "metadata": { "hidden": true }, "source": [ "Write a function `HKS` that uses these eigenvalues and eigenvectors, as well as a time parameter, to compute the HKS value on a given vertex." ] }, { "cell_type": "code", "execution_count": null, "id": "9bc7a346", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "516ebe1f", "metadata": { "hidden": true }, "source": [ "Visualize the function values with `meshplot` for different time parameters." ] }, { "cell_type": "code", "execution_count": null, "id": "97e7f046", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "e53e6e84", "metadata": { "hidden": true }, "source": [ "Recall that ToMATo requires, in addition to the filter, a neighborhood graph built on top of the data. Fortunately, we can use the triangulations of our 3D shapes as input graphs! Write a function `get_neighborhood_graph_from_faces` that computes a neighborhood graph (in the format required by ToMATo) from the faces of a triangulation. " ] }, { "cell_type": "code", "execution_count": null, "id": "36ea2a33", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "a572d8f7", "metadata": { "hidden": true }, "source": [ "Finally, apply ToMATo (with no prior on the number of clusters or merging threshold) on the neighborhood graph and the HKS function associated to a given time parameter." ] }, { "cell_type": "code", "execution_count": null, "id": "477734e1", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "1fd27b30", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "caa50a38", "metadata": { "hidden": true }, "source": [ "Visualize the persistence diagram produced by ToMATo." 
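] }, { "cell_type": "markdown", "id": "aa100003", "metadata": { "hidden": true }, "source": [ "In case you want to check your pipeline end-to-end, here is one possible (and by no means unique) sketch of the steps above, assuming the `vertices` and `faces` arrays loaded previously; the number of eigenpairs, the time parameter and the helper names are illustrative choices rather than the official solution." ] }, { "cell_type": "code", "execution_count": null, "id": "aa100004", "metadata": { "hidden": true }, "outputs": [], "source": [ "from scipy.sparse.linalg import eigsh\n", "\n", "# One possible sketch (illustrative choices), assuming `vertices` and `faces` from above.\n", "L, M = rlap.mesh_laplacian(vertices, faces)              # sparse Laplacian and mass matrix\n", "eigenvalues, eigenvectors = eigsh(L, k=200, M=M, sigma=1e-8)\n", "\n", "def HKS(eigenvalues, eigenvectors, vertex_id, time):\n", "    # amount of heat remaining at the vertex at the given time\n", "    return np.sum(np.exp(-eigenvalues * time) * np.square(eigenvectors[vertex_id, :]))\n", "\n", "hks_values = np.array([HKS(eigenvalues, eigenvectors, v, 0.1) for v in range(len(vertices))])\n", "\n", "def get_neighborhood_graph_from_faces(faces):\n", "    # ToMATo with graph_type='manual' expects one list of neighbor indices per vertex\n", "    neighbors = [set() for _ in range(faces.max() + 1)]\n", "    for i, j, k in faces:\n", "        neighbors[i].update([j, k]); neighbors[j].update([i, k]); neighbors[k].update([i, j])\n", "    return [list(nb) for nb in neighbors]\n", "\n", "graph = get_neighborhood_graph_from_faces(faces)\n", "tomato = gdt.Tomato(graph_type='manual', density_type='manual')\n", "tomato.fit(graph, weights=hks_values)\n", "tomato.plot_diagram()"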
] }, { "cell_type": "code", "execution_count": null, "id": "7d4b98a2", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "2ae752a2", "metadata": { "hidden": true }, "source": [ "How many points do you see standing out from the diagonal? Use this number to re-cluster." ] }, { "cell_type": "code", "execution_count": null, "id": "9152c0b4", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "aa698e3c", "metadata": { "hidden": true }, "source": [ "Visualize the 3D shape with the ToMATo labels." ] }, { "cell_type": "code", "execution_count": null, "id": "88e353f0", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "9c52788a", "metadata": { "hidden": true }, "source": [ "Does our segmentation make sense? Can you interpret the boundaries between labels?" ] }, { "cell_type": "markdown", "id": "6a81fa8e", "metadata": { "hidden": true }, "source": [ "Since the boundaries are driven by the elder rule, they can look a bit ragged. In order to fix this, we can use bootstrap-like smoothing. The idea is to first save the current ToMATo clustering obtained with filter $f$ (let's call it the initial clustering), then perturb $f$ a little bit into another function $\\tilde f$, and finally recompute the clustering with ToMATo using $\\tilde f$. Since clusters are now created from the maxima of $\\tilde f$ (which will in general be different from those of $f$), we can use the initial clustering to relate the clusters of $\\tilde f$ to those of $f$, by simply looking at which (initial) clusters the maxima of $\\tilde f$ belong to. If we repeat this procedure $N$ times, we end up with a distribution (of size $N$) of candidate clusters for each vertex $v$. It then suffices to pick the most frequent one for each vertex to get a smooth segmentation of the 3D shape. See also Section 6 in [the article](https://www.lix.polytechnique.fr/~maks/papers/pers_seg.pdf)." ] }, { "cell_type": "markdown", "id": "076c8404", "metadata": { "hidden": true }, "source": [ "In order to implement this, write first a function `get_indices_of_maxima` which computes the indices of the maxima associated to a set of ToMATo clusters." ] }, { "cell_type": "code", "execution_count": null, "id": "e7bb0f76", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "bf3a10c1", "metadata": { "hidden": true }, "source": [ "Compute and plot these maxima on the 3D shape to make sure your code is working." ] }, { "cell_type": "code", "execution_count": null, "id": "2d944cca", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "d95c8c1a", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "5e2675b6", "metadata": { "hidden": true }, "source": [ "Now, use this function to write another function `bootstrap_tomato` that performs a bootstrap smoothing of a set of ToMATo labels. This function will also take as arguments a number $N$ of bootstrap iterations, and a parameter $\\epsilon$ controlling the amplitude of the uniform noise used to perturb the filter." ] }, { "cell_type": "code", "execution_count": null, "id": "4397b267", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "5ccb652f", "metadata": { "hidden": true }, "source": [ "Apply the bootstrap smoothing and visualize the segmentation."
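] }, { "cell_type": "markdown", "id": "aa100005", "metadata": { "hidden": true }, "source": [ "For reference, here is one possible hedged sketch of these two helpers. It assumes the names `graph`, `hks_values` and `tomato` from the previous cells, and illustrative default values for the number of iterations and the noise amplitude; your own implementation may differ." ] }, { "cell_type": "code", "execution_count": null, "id": "aa100006", "metadata": { "hidden": true }, "outputs": [], "source": [ "# One possible sketch (hedged), assuming `graph`, `hks_values` and the fitted `tomato`\n", "# clustering from the previous cells; N and epsilon below are illustrative defaults.\n", "def get_indices_of_maxima(filt, labels):\n", "    # for each cluster, index of the vertex with the largest filter value\n", "    maxima = []\n", "    for cl in np.unique(labels):\n", "        cluster_indices = np.where(labels == cl)[0]\n", "        maxima.append(cluster_indices[np.argmax(filt[cluster_indices])])\n", "    return np.array(maxima)\n", "\n", "def bootstrap_tomato(graph, filt, initial_labels, n_clusters, N=20, epsilon=0.05):\n", "    num_points, candidates = len(filt), []\n", "    for _ in range(N):\n", "        # perturb the filter with uniform noise of amplitude epsilon\n", "        noisy_filt = filt + epsilon * (2 * np.random.uniform(size=num_points) - 1)\n", "        t = gdt.Tomato(graph_type='manual', density_type='manual', n_clusters=n_clusters)\n", "        t.fit(graph, weights=noisy_filt)\n", "        # relate each new cluster to the initial cluster containing its maximum\n", "        maxima = get_indices_of_maxima(noisy_filt, t.labels_)\n", "        candidates.append(initial_labels[maxima][t.labels_])\n", "    candidates = np.array(candidates)\n", "    # keep, for every vertex, the most frequent candidate label\n", "    return np.array([statistics.mode(candidates[:, v]) for v in range(num_points)])\n", "\n", "smoothed_labels = bootstrap_tomato(graph, hks_values, tomato.labels_, tomato.n_clusters_)"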
] }, { "cell_type": "code", "execution_count": null, "id": "94729783", "metadata": { "hidden": true, "scrolled": false }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "50728c2b", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "e4b04216", "metadata": { "hidden": true }, "source": [ "Is the segmentation any better? How does the result depend on the noise amplitude?" ] }, { "cell_type": "markdown", "id": "918bedd8", "metadata": { "heading_collapsed": true }, "source": [ "# Section 2: 3D shape skeletonization with Mapper" ] }, { "cell_type": "markdown", "id": "3b3d6541", "metadata": { "hidden": true }, "source": [ "In this section, our goal is to use Mapper to produce 1-skeletons (i.e., graphs) of 3D shapes. We will also see how to partition this graph into different parts and run statistical tests to decide whether these parts should be considered as numerical artifacts or true signal." ] }, { "cell_type": "markdown", "id": "a13d5a56", "metadata": { "hidden": true }, "source": [ "Let's first pick a 3D shape. For instance, use `Human/4.off` (or any other one you would like to try)." ] }, { "cell_type": "code", "execution_count": null, "id": "a55f2b9d", "metadata": { "hidden": true }, "outputs": [], "source": [ "vertices, faces = off2numpy(dataset_path + 'Human/4.off')" ] }, { "cell_type": "code", "execution_count": null, "id": "fba08756", "metadata": { "hidden": true }, "outputs": [], "source": [ "mp.plot(vertices, faces, c=vertices[:,1])" ] }, { "cell_type": "markdown", "id": "03a228b5", "metadata": { "hidden": true }, "source": [ "In `Gudhi`, Mapper is implemented as a specific case of Graph Induced Complex (GIC), see Definition 2.1 in [this article](https://web.cse.ohio-state.edu/~dey.8/paper/GIC/GIC.pdf). Indeed, given a fixed vertex cover, Mapper computed with hierarchical clustering with parameter $\\delta > 0$ is (roughly) the same as the GIC computed with a neighborhood graph with parameter $\\delta$." ] }, { "cell_type": "markdown", "id": "67ee6b5e", "metadata": { "hidden": true }, "source": [ "Instantiate a `CoverComplex` from `Gudhi`, and set its type to `\"GIC\"`." ] }, { "cell_type": "code", "execution_count": null, "id": "9108ec04", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "37b72b5f", "metadata": { "hidden": true }, "source": [ "Define the point cloud on which Mapper is computed with the array `vertices`, and the filter function as the height coordinate." ] }, { "cell_type": "code", "execution_count": null, "id": "f1ebacc8", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "201cb356", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "a90d88dc", "metadata": { "hidden": true }, "source": [ "Define the node color function (used only for visualization) as the height coordinate as well." ] }, { "cell_type": "code", "execution_count": null, "id": "d21a5d18", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "cc2acf9c", "metadata": { "hidden": true }, "source": [ "Define the clustering algorithm by automatically tuning the $\\delta$ parameter. This can be done by setting the neighborhood graph automatically with the `set_graph_from_automatic_rips` function. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "cfffabc7", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "a53e25e7", "metadata": { "hidden": true }, "source": [ "Finally, define the cover parameters: 20 intervals for the range of the filter (this parameter is called resolution), and 30% overlap (this one is called gain). Then, compute the cover using preimages of the intervals." ] }, { "cell_type": "code", "execution_count": null, "id": "23f9e4e1", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "b0c18315", "metadata": { "hidden": true }, "source": [ "We can now compute Mapper!" ] }, { "cell_type": "code", "execution_count": null, "id": "04250d57", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "4e8afeb0", "metadata": { "hidden": true }, "source": [ "During computations, the pairwise distances are saved in a binary file `matrix_dist` in order to save time for further computations. Hence, if you want to apply Mapper again on a different shape, make sure to remove this file!" ] }, { "cell_type": "markdown", "id": "76ab6a37", "metadata": { "hidden": true }, "source": [ "The simplicial complex produced by Mapper can be obtained with the `create_simplex_tree` function. However, its vertices are given integer IDs associated to the cover used to compute Mapper. For convenience, rename the vertices from 0 to number_of_vertices in increasing order of the initial IDs. " ] }, { "cell_type": "code", "execution_count": null, "id": "4f3cb9b8", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "d85de2b5", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "15e673e4", "metadata": { "hidden": true }, "source": [ "`Gudhi` also computes the mean of the midpoint of the interval associated to each Mapper vertex, and store it as a filtration value. Check that you have correct filtration values in your simplex tree (at least by eye ;-))." ] }, { "cell_type": "code", "execution_count": null, "id": "79a5e100", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "bd8bd89e", "metadata": { "hidden": true }, "source": [ "With the `write_info` function, Gudhi can produce a `.txt` file containing information about the 1-skeleton of the Mapper, that can be processed by an utility function, available [here](https://github.com/GUDHI/gudhi-devel/blob/master/src/Nerve_GIC/utilities/KeplerMapperVisuFromTxtFile.py). Download and apply this utility function. This will produce an `.html` file that you can visualize in your browser." ] }, { "cell_type": "code", "execution_count": null, "id": "dba580d2", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "c4be9ec8", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "0860d1af", "metadata": { "hidden": true }, "source": [ "Another (more convenient) way to visualize our complex is to plot its 1-skeleton in a Python figure with `networkx`. Using `networkx` documentation, write a function `get_networkx` that turns a simplicial complex into a `networkx` graph corresponding to the 1-skeleton. 
Make it so that the `networkx` graph has two attributes, `\"color\"` and `\"size\"`, that contain the mean of the filter values of the points associated to the Mapper vertices, and the number of these points (i.e., the size of the preimages), respectively. For this, you can use the `subpopulation` method of Mapper, which returns the point IDs corresponding to every Mapper vertex." ] }, { "cell_type": "code", "execution_count": null, "id": "66842fd3", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "ba9e380a", "metadata": { "hidden": true }, "source": [ "Apply your function and plot your graph with `networkx.draw`. " ] }, { "cell_type": "code", "execution_count": null, "id": "467d3f69", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "23ad0d94", "metadata": { "hidden": true, "scrolled": false }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "838e7ffd", "metadata": { "hidden": true }, "source": [ "As seen in class, we can now compute a bag-of-features descriptor for Mapper, defined as the extended persistence diagram of the Mapper complex associated to the filter. Compute and visualize this descriptor with the `compute_PD` function." ] }, { "cell_type": "code", "execution_count": null, "id": "f5bd520b", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "4b87be7f", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "79935135", "metadata": { "hidden": true }, "source": [ "Can you guess the parts of the 3D shape that are associated to each persistence diagram point?" ] }, { "cell_type": "markdown", "id": "f638e92a", "metadata": { "hidden": true }, "source": [ "In order to understand the parts that are relevant, we can use the (empirical) bootstrap to generate a distribution of bottleneck distances (computed as the distances between our current persistence diagram and a distribution of persistence diagrams obtained from bootstrapped Mapper complexes), and use this distribution to derive confidence regions. Compute first such a distribution with the `compute_distribution` function." ] }, { "cell_type": "code", "execution_count": null, "id": "907e3479", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "e6fa87ee", "metadata": { "hidden": true }, "source": [ "Now, fix a confidence threshold, say 90%, and retrieve the bottleneck distance value $d_b^*$ such that 90% of distances are below this value. You can use the `compute_distance_from_confidence_level` function for that." ] }, { "cell_type": "code", "execution_count": null, "id": "a9153a73", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "d49f7f4f", "metadata": { "hidden": true }, "source": [ "Finally, retrieve the points of our current persistence diagram whose distance to the diagonal is larger than $d_b^*$." ] }, { "cell_type": "code", "execution_count": null, "id": "2b2b74fa", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "a490b986", "metadata": { "hidden": true }, "source": [ "Some points were assessed as non-robust; can you guess why?" ] }, { "cell_type": "markdown", "id": "bdc9821a", "metadata": { "hidden": true }, "source": [ "Finally, one might ask whether there is a direct map from the points of the persistence diagrams to the parts of the 3D shape. 
It is actually a non-trivial question for Mappers of dimension greater than 1, but for Mappers in dimension 1, it is easier. Indeed, connected components and loops (corresponding to persistence diagram points in ${\\rm Ext}_0$ and ${\\rm Ext}_1$ respectively---see class) are standard graph features. " ] }, { "cell_type": "markdown", "id": "be4b489a", "metadata": { "hidden": true }, "source": [ "Compute and visualize the connected components with the `connected_components` function of `networkx`." ] }, { "cell_type": "code", "execution_count": null, "id": "70897755", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "904815ee", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "32e66379", "metadata": { "hidden": true }, "source": [ "Compute and visualize the loops with the `cycle_basis` function of `networkx`." ] }, { "cell_type": "code", "execution_count": null, "id": "2561a35b", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "39b6d179", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "682de95e", "metadata": { "hidden": true }, "source": [ "Now, concerning branches, i.e., points in ${\\rm Ord}_0$ and ${\\rm Rel}_1$, the question is a bit more tricky, but fortunately one can use ToMATo as an approximate solution. This is because ToMATo keeps track of the points forming connected components that are merged later on, which correspond to branches! Hence, one can apply ToMATo with the filter (resp. the opposite of the filter) to obtain the upward (resp. downward) branches." ] }, { "cell_type": "markdown", "id": "3883c8a2", "metadata": { "hidden": true }, "source": [ "Since ToMATo requires neighborhood graphs as inputs, write a function `get_neighborhood_graph_from_simplex_tree` that computes the neighborhood graph associated to the 1-skeleton of a simplex tree, in a format that is acceptable for ToMATo." ] }, { "cell_type": "code", "execution_count": null, "id": "7732eb89", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "7ea35227", "metadata": { "hidden": true }, "source": [ "Now, compute this neighborhood graph and apply ToMATo using both the filter function and its opposite (with no prior on the number of clusters or merging threshold). " ] }, { "cell_type": "code", "execution_count": null, "id": "8c54e779", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "b77aab55", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "c60b502a", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "a2820e3b", "metadata": { "hidden": true }, "source": [ "Finally, visualize the ToMATo labels on the graph." ] }, { "cell_type": "code", "execution_count": null, "id": "027e27e1", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "1f081ce2", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "1608311a", "metadata": { "hidden": true }, "source": [ "The branches should be detected and highlighted with different colors!"
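] }, { "cell_type": "markdown", "id": "aa100007", "metadata": { "hidden": true }, "source": [ "For reference, one possible sketch of this last step is given below. It assumes that the Mapper simplex tree has been relabeled with vertices 0, ..., n-1 as done earlier; the names `mapper_stree` (the relabeled simplex tree) and `mapper_filt` (the per-vertex filter values) are hypothetical." ] }, { "cell_type": "code", "execution_count": null, "id": "aa100008", "metadata": { "hidden": true }, "outputs": [], "source": [ "# One possible sketch (hypothetical names): `mapper_stree` is the relabeled Mapper simplex\n", "# tree and `mapper_filt` is a NumPy array of filter values indexed by Mapper vertex.\n", "def get_neighborhood_graph_from_simplex_tree(stree):\n", "    neighbors = [[] for _ in range(stree.num_vertices())]\n", "    for splx, _ in stree.get_skeleton(1):\n", "        if len(splx) == 2:\n", "            i, j = splx\n", "            neighbors[i].append(j)\n", "            neighbors[j].append(i)\n", "    return neighbors\n", "\n", "mapper_graph = get_neighborhood_graph_from_simplex_tree(mapper_stree)\n", "tomato_up = gdt.Tomato(graph_type='manual', density_type='manual')\n", "tomato_up.fit(mapper_graph, weights=mapper_filt)      # upward branches\n", "tomato_down = gdt.Tomato(graph_type='manual', density_type='manual')\n", "tomato_down.fit(mapper_graph, weights=-mapper_filt)   # downward branches"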
] }, { "cell_type": "markdown", "id": "2171b721", "metadata": { "heading_collapsed": true }, "source": [ "# Section 3: 3D shape statistics with persistence diagrams" ] }, { "cell_type": "markdown", "id": "6bd1b5e0", "metadata": { "hidden": true }, "source": [ "In this section, our goal is to compute confidence regions associated to the persistence diagram of a given 3D shape. We will study such regions for both the persistence diagram and one of its representations, the persistence landscape. " ] }, { "cell_type": "markdown", "id": "9aec534d", "metadata": { "hidden": true }, "source": [ "Let's first pick a 3D shape. For instance, use `Vase/361.off` (or any other one you would like to try)." ] }, { "cell_type": "code", "execution_count": null, "id": "6e2f4759", "metadata": { "hidden": true }, "outputs": [], "source": [ "vertices, faces = off2numpy(dataset_path + 'Vase/361.off')" ] }, { "cell_type": "code", "execution_count": null, "id": "d35b14b3", "metadata": { "hidden": true }, "outputs": [], "source": [ "mp.plot(vertices, faces, c=vertices[:,1])" ] }, { "cell_type": "markdown", "id": "14927c11", "metadata": { "hidden": true }, "source": [ "The first standard way of obtaining confidence regions for (geometric) persistence diagrams is through the stability theorem (see class):\n", "\n", "$$\\mathbb{P}(d_b(D_{\\rm Rips}(X),D_{\\rm Rips}(\\hat X_n)) \\geq \\delta)\\leq \\mathbb{P}(d_H(X,\\hat X_n)\\geq \\delta/2),$$\n", "$$\\mathbb{P}(d_b(D_{\\rm Cech}(X),D_{\\rm Cech}(\\hat X_n)) \\geq \\delta)\\leq \\mathbb{P}(d_H(X,\\hat X_n)\\geq \\delta),$$\n", "\n", "where $d_H(\\cdot,\\cdot)$ is the Hausdorff distance, defined, for any two compact spaces $X,Y\\subset \\mathbb{R}^d$, as \n", "\n", "$$d_H(X,Y)={\\rm max}\\{{\\rm max}_{x\\in X}{\\rm min}_{y\\in Y}\\|x-y\\|, {\\rm max}_{y\\in Y}{\\rm min}_{x\\in X}\\|y-x\\|\\}.$$\n", "\n", "Hence, it suffices to estimate $\\mathbb{P}(d_H(X,\\hat X_n)\\geq \\delta)$ in order to derive confidence regions for persistence diagrams. There exists an upper bound for this probability when $\\hat X_n$ is drawn from an $(a,b)$-standard probability measure; however, this bound depends on $a$ and $b$. In the following, we will rather use the subsampling method, which allows us to estimate the probability solely by subsampling $\\hat X_n$ with $s(n) =o\\left(\\frac{n}{{\\rm log}(n)}\\right)$ points, and computing $d_H(\\hat X_n, \\hat X_{s(n)})$. The exact procedure is described in Section 4.1 of [this article](https://doi.org/10.1214/14-AOS1252)." ] }, { "cell_type": "markdown", "id": "6b587540", "metadata": { "hidden": true }, "source": [ "Write a function `hausdorff_distance` that computes the Hausdorff distance between the vertices of our 3D shape and a subset of these vertices." ] }, { "cell_type": "code", "execution_count": null, "id": "b6fd4083", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "5c234795", "metadata": { "hidden": true }, "source": [ "Now, write a function `hausdorff_interval` that computes this Hausdorff distance many times and uses the corresponding distribution of Hausdorff distances in order to output the bottleneck distance value associated to a given confidence level (by computing the quantile---corresponding to this confidence level---of the distribution)."
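] }, { "cell_type": "markdown", "id": "aa100009", "metadata": { "hidden": true }, "source": [ "If needed, here is one possible hedged sketch of these two functions, based on KD-tree nearest-neighbor queries; the subsample size, the number of repetitions and the default confidence level are arbitrary illustrative choices." ] }, { "cell_type": "code", "execution_count": null, "id": "aa100010", "metadata": { "hidden": true }, "outputs": [], "source": [ "from scipy.spatial import cKDTree\n", "\n", "# One possible sketch, assuming `vertices` from above.\n", "def hausdorff_distance(X, Y):\n", "    # max of the two directed Hausdorff distances, computed with KD-trees\n", "    dXY = cKDTree(Y).query(X)[0].max()\n", "    dYX = cKDTree(X).query(Y)[0].max()\n", "    return max(dXY, dYX)\n", "\n", "def hausdorff_interval(X, subsample_size, num_iterations=100, confidence=0.9):\n", "    distances = []\n", "    for _ in range(num_iterations):\n", "        idx = np.random.choice(len(X), size=subsample_size, replace=False)\n", "        distances.append(hausdorff_distance(X, X[idx]))\n", "    return np.quantile(distances, confidence)"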
] }, { "cell_type": "code", "execution_count": null, "id": "fe2d5f1d", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "d1e1e3dd", "metadata": { "hidden": true }, "source": [ "Apply your code to obtain a bottleneck distance associated to, say, 90% confidence." ] }, { "cell_type": "code", "execution_count": null, "id": "73d0b35c", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "a6b6ab7f", "metadata": { "hidden": true }, "source": [ "All right, now let's see which points of the persistence diagram we are going to label as non-significant and discard. Compute the Rips and Alpha persistence diagrams of the points. " ] }, { "cell_type": "code", "execution_count": null, "id": "59e284c7", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "70e013f6", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "bb3c3b3f", "metadata": { "hidden": true }, "source": [ "Now, visualize the persistence diagrams with a band whose size is the previously computed bottleneck distance times 2 (for the Alpha filtration) and 4 (for the Rips filtration)." ] }, { "cell_type": "code", "execution_count": null, "id": "0b56f441", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "cc52669d", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "b51cc64e", "metadata": { "hidden": true }, "source": [ "Are you discarding many points? If yes, this could be because the confidence region is computed only from the stability property of persistence diagrams: subsampling the Hausdorff distance can sometimes be quite conservative. It would be more efficient to bootstrap the persistence diagrams themselves---this is the approach advocated in Section 6 of [this article](https://www.jmlr.org/papers/volume18/15-484/15-484.pdf). However, this method was only proved for persistence diagrams obtained through the sublevel sets of kernel density estimators... But let's try it anyway! ;-)" ] }, { "cell_type": "markdown", "id": "721e5066", "metadata": { "hidden": true }, "source": [ "Similarly to before, write `bottleneck_distance_bootstrap` and `bottleneck_interval` functions that compute the bottleneck distances between our current persistence diagram (in homology dimension 1) and the persistence diagrams of many bootstrap iterates." ] }, { "cell_type": "code", "execution_count": null, "id": "3f59f364", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "2767a86f", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "8017c2ff", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "390ed7a6", "metadata": { "hidden": true }, "source": [ "Compute the bottleneck distance associated to a confidence level and visualize it." ] }, { "cell_type": "code", "execution_count": null, "id": "ecda39f6", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "2a5f477f", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "f8fdcedc", "metadata": { "hidden": true }, "source": [ "Are you discarding fewer points in the persistence diagram now?"
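] }, { "cell_type": "markdown", "id": "aa100011", "metadata": { "hidden": true }, "source": [ "As a reference, here is one possible hedged sketch of this bootstrap, assuming a clean H1 Alpha persistence diagram stored in a (hypothetical) variable `dgm_alpha1`; the number of iterations is an arbitrary choice." ] }, { "cell_type": "code", "execution_count": null, "id": "aa100012", "metadata": { "hidden": true }, "outputs": [], "source": [ "# One possible sketch (hypothetical names), assuming `vertices` and a reference H1 Alpha\n", "# persistence diagram `dgm_alpha1` computed in the cells above.\n", "def bottleneck_distance_bootstrap(X, dgm, num_iterations=30):\n", "    distances = []\n", "    for _ in range(num_iterations):\n", "        # bootstrap sample: draw n points with replacement\n", "        idx = np.random.choice(len(X), size=len(X), replace=True)\n", "        st = gd.AlphaComplex(points=X[idx]).create_simplex_tree()\n", "        st.persistence()\n", "        distances.append(gd.bottleneck_distance(dgm, st.persistence_intervals_in_dimension(1)))\n", "    return distances\n", "\n", "def bottleneck_interval(distances, confidence=0.9):\n", "    return np.quantile(distances, confidence)\n", "\n", "db_star = bottleneck_interval(bottleneck_distance_bootstrap(vertices, dgm_alpha1), 0.9)"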
] }, { "cell_type": "markdown", "id": "3f169cfa", "metadata": { "hidden": true }, "source": [ "Another approach with more theoretical guarantees is to use the persistence landscapes associated to the persistence diagram. Indeed, valid confidence regions can be easily obtained using, e.g., algorithm 1 in [this article](https://geometrica.saclay.inria.fr/team/Fred.Chazal/papers/cflrw-scpls-14/cflrw-scpls-14.pdf). In the following, we will fix a subsample size $s(n)$, and estimate $\\mathbb{E}[\\Lambda_{s(n)}]$, where $\\Lambda_{s(n)}$ is the landscape of a random subsample of size $s(n)$ (i.e., drawn from a probability measure $\\mu$ such as, e.g., the empirical measure). " ] }, { "cell_type": "markdown", "id": "221c7fe1", "metadata": { "hidden": true }, "source": [ "Let's first make sure that we can compute landscapes ;-) Use `Gudhi` to compute and plot the first six persistence landscapes associated to the persistence diagram computed above in homology dimension 1. Landscapes (and other vectorizations) are implemented with the API of `Scikit-Learn` estimators, which means that you have to call the `fit_transform` method on a list of persistence diagrams in order to get their landscapes. " ] }, { "cell_type": "code", "execution_count": null, "id": "872feb75", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "5851813e", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "3ec3759d", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "2579c026", "metadata": { "hidden": true }, "source": [ "Write a function `landscape_interval` that implements the landscape bootstrap procedure, that is, drawing many subsamples of size $s(n)$, computing their Alpha persistence diagrams and landscapes, computing the distribution of distances between each single landscape and their mean (multiplied by a random normal variable), and finally using the quantiles of this distribution in order to obtain confidence regions for the mean landscape." ] }, { "cell_type": "code", "execution_count": null, "id": "9b1e5e29", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "46fca9f7", "metadata": { "hidden": true }, "source": [ "Apply and visualize the confidence interval around the different landscapes." ] }, { "cell_type": "code", "execution_count": null, "id": "ce3cc3e4", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "a5c571a2", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "4dfc04e0", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "4ba89d3a", "metadata": { "hidden": true }, "source": [ "The confidence regions are much better now!" ] }, { "cell_type": "markdown", "id": "e7608552", "metadata": { "hidden": true }, "source": [ "Another interesting property of mean landscapes is their robustness to noise:\n", "\n", "$$\\|\\mathbb{E}[\\Lambda_{s(n)}^X]-\\mathbb{E}[\\Lambda_{s(n)}^Y]\\|_\\infty\\leq 2 \\cdot s(n) \\cdot d_{GW}(\\mu,\\nu),$$\n", "\n", "where $d_{GW}$ is the 1-Gromov-Wasserstein distance between probability measures. See Remark 6 in [this article](https://geometrica.saclay.inria.fr/team/Fred.Chazal/papers/cflmrw-smph-15/ICMLFinal.pdf). 
We will now confirm this by adding outlier noise to the 3D shape and looking at the resulting mean landscape. " ] }, { "cell_type": "markdown", "id": "656b86ff", "metadata": { "hidden": true }, "source": [ "Create a noisy version of `vertices` with some outlier noise." ] }, { "cell_type": "code", "execution_count": null, "id": "c55892c6", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "c0967abb", "metadata": { "hidden": true }, "source": [ "Let's first compare the persistence landscapes of the two sets of vertices. Compute and visualize these landscapes on the same plot." ] }, { "cell_type": "code", "execution_count": null, "id": "2decb061", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "af03a274", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "660bba75", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "32fe4158", "metadata": { "hidden": true }, "source": [ "As one can see, they are quite different. By contrast, computing the mean landscape with subsamples is much more robust, as we will now see." ] }, { "cell_type": "markdown", "id": "e5ffca87", "metadata": { "hidden": true }, "source": [ "Compute the mean persistence landscape of the noisy point cloud, and visualize it next to the mean persistence landscape of the clean point cloud." ] }, { "cell_type": "code", "execution_count": null, "id": "c25d18e8", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "1a64c8fb", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "c8febaca", "metadata": { "hidden": true }, "source": [ "Now, these landscapes look much more in agreement!" ] }, { "cell_type": "markdown", "id": "8432718c", "metadata": { "heading_collapsed": true }, "source": [ "# Section 4: 3D shape classification with persistence diagrams" ] }, { "cell_type": "markdown", "id": "e4f92d86", "metadata": { "hidden": true }, "source": [ "In this section, our goal is to use persistence diagrams for classifying and segmenting 3D shapes with supervised machine learning. " ] }, { "cell_type": "markdown", "id": "f5d815b2", "metadata": { "hidden": true }, "source": [ "Let's start with classification. We will compute persistence diagrams for all shapes in different categories, and train a classifier from `Scikit-Learn` to predict the category from the persistence diagrams. Since `Gudhi` requires simplex trees for its persistence diagram computations, write a `get_simplex_tree_from_faces` function that builds a simplex tree from the faces of a given 3D shape triangulation." ] }, { "cell_type": "code", "execution_count": null, "id": "a115a6f8", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "7c921034", "metadata": { "hidden": true }, "source": [ "Now, compute all the persistence diagrams (in homology dimension 0) associated to the sublevel sets of the third coordinate from a few categories, and retrieve their corresponding labels."
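] }, { "cell_type": "markdown", "id": "aa100013", "metadata": { "hidden": true }, "source": [ "For reference, one possible hedged sketch of these two steps is given below; the selected categories are an arbitrary illustrative subset, and the loop simply applies the lower-star filtration of the third coordinate to every shape." ] }, { "cell_type": "code", "execution_count": null, "id": "aa100014", "metadata": { "hidden": true }, "outputs": [], "source": [ "# One possible sketch (hypothetical names); the category list is an arbitrary choice.\n", "def get_simplex_tree_from_faces(faces):\n", "    st = gd.SimplexTree()\n", "    for face in faces:\n", "        st.insert([int(v) for v in face])   # inserts the triangle and all its subfaces\n", "    return st\n", "\n", "categories = ['Airplane', 'Hand', 'Human']  # illustrative subset of the data set\n", "diagrams, labels = [], []\n", "for category in categories:\n", "    for fname in os.listdir(dataset_path + category):\n", "        if not fname.endswith('.off'):\n", "            continue\n", "        verts, fcs = off2numpy(dataset_path + category + '/' + fname)\n", "        st = get_simplex_tree_from_faces(fcs)\n", "        # lower-star filtration given by the third coordinate of the vertices\n", "        for v in range(len(verts)):\n", "            st.assign_filtration([v], verts[v, 2])\n", "        st.make_filtration_non_decreasing()\n", "        st.persistence()\n", "        diagrams.append(st.persistence_intervals_in_dimension(0))\n", "        labels.append(category)"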
] }, { "cell_type": "code", "execution_count": null, "id": "0be8049f", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "91e6fa26", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "4bee73af", "metadata": { "hidden": true }, "source": [ "As discussed in class, it is not very convenient to use persistence diagrams directly for machine learning purposes (except for a few methods such as $K$-nearest neighbors). What we need is to define a vectorization, that is, a map $\\Phi:\\mathcal{D}\\rightarrow\\mathcal{H}$ sending persistence diagrams into a Hilbert space, or equivalently, a symmetric kernel function $k:\\mathcal{D}\\times \\mathcal{D} \\rightarrow \\mathbb{R}$ such that $k(D,D')=\\langle \\Phi(D),\\Phi(D')\\rangle$. Fortunately, there are already a bunch of such maps and kernels in `Gudhi` :-)" ] }, { "cell_type": "markdown", "id": "4a660a18", "metadata": { "hidden": true }, "source": [ "In the following, we will compute and visualize the most popular kernels on some persistence diagrams. Pick first a specific persistence diagram and use `DiagramSelector` to remove its points with infinite coordinates." ] }, { "cell_type": "code", "execution_count": null, "id": "833bf89e", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "ffda132f", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "b8a3b925", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "2432e101", "metadata": { "hidden": true }, "source": [ "Now, let's see what `Gudhi` has to offer to vectorize persistence diagrams with `Scikit-Learn` estimator-like classes, that is, with classes that have `fit`, `transform`, and `fit_transform` methods, see [this article](https://arxiv.org/pdf/1309.0238.pdf) for more details. For each vectorization mentioned below, we recommend playing with its parameters and inferring their influence on the output in order to get some intuition. " ] }, { "cell_type": "markdown", "id": "cb161d95", "metadata": { "hidden": true }, "source": [ "The first vectorization method that was introduced historically is the persistence landscape. A persistence landscape is basically obtained by rotating the persistence diagram by $-\\pi/4$\n", "(so that the diagonal becomes the $x$-axis), and then putting tent functions on each point. The $k$th landscape is then defined as the $k$th largest value among all these tent functions. It is eventually turned into a vector by evaluating it on a bunch of uniformly sampled points on the $x$-axis." ] }, { "cell_type": "markdown", "id": "e9fed064", "metadata": { "hidden": true }, "source": [ "Compute and visualize the first landscape of the persistence diagram for various parameters." ] }, { "cell_type": "code", "execution_count": null, "id": "7dc45615", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "01f58f94", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "85ffa273", "metadata": { "hidden": true }, "source": [ "A variation, called the silhouette, takes a weighted average of these tent functions instead. Here, we weight each tent function by the distance of the corresponding point to the diagonal."
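] }, { "cell_type": "markdown", "id": "aa100015", "metadata": { "hidden": true }, "source": [ "As a starting point, here is one possible sketch of both vectorizations, assuming a finite persistence diagram stored in a variable named `dgm` (obtained with `DiagramSelector` above); the resolutions and the persistence-based weight are illustrative choices." ] }, { "cell_type": "code", "execution_count": null, "id": "aa100016", "metadata": { "hidden": true }, "outputs": [], "source": [ "# One possible sketch, assuming `dgm` is the finite persistence diagram selected above.\n", "LS = gdr.Landscape(num_landscapes=1, resolution=300)\n", "landscape = LS.fit_transform([dgm])[0]\n", "plt.figure()\n", "plt.plot(landscape)\n", "plt.title('First persistence landscape')\n", "plt.show()\n", "\n", "# silhouette: tent functions averaged with a weight, here the persistence of each point\n", "SH = gdr.Silhouette(resolution=300, weight=lambda p: np.abs(p[1] - p[0]))\n", "silhouette = SH.fit_transform([dgm])[0]\n", "plt.figure()\n", "plt.plot(silhouette)\n", "plt.title('Silhouette')\n", "plt.show()"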
] }, { "cell_type": "code", "execution_count": null, "id": "37f65bb3", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "803d519c", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "eafd6239", "metadata": { "hidden": true }, "source": [ "The second method is the persistence image. A persistence image is obtained by rotating by $-\\pi/4$, centering Gaussian functions on all diagram points (usually weighted by a parameter function, such as, e.g., the squared distance to the diagonal) and summing all these Gaussians. This gives a 2D function, that is pixelized into an image." ] }, { "cell_type": "code", "execution_count": null, "id": "f004c82d", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "b157cc3f", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "43ae1572", "metadata": { "hidden": true }, "source": [ "`Gudhi` also has a variety of metrics and kernels, which sometimes perform better than explicit vectorizations such as the ones described above. Pick another persistence diagram, and get familiar with the bottleneck and the Wasserstein distances between them. Note that you can call them in different ways in `Gudhi`, there are `bottleneck_distance` and `wasserstein_distance` functions for instance, but there are also wrappers of these functions into estimator classes `BottleneckDistance` and `WassersteinDistance` (with `fit` and `transform` methods). These classes are especially useful when doing model selection with `Scikit-Learn`, see below." ] }, { "cell_type": "code", "execution_count": null, "id": "432ed077", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "b589659e", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "2ab52919", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "7b17700e", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "77a2b124", "metadata": { "hidden": true }, "source": [ "`Gudhi` also allows to use standard kernels such as, among others, the persistence scale space kernel, persistence Fisher kernel, sliced Wasserstein kernel, etc. Try computing the kernel values for your pair of diagrams." ] }, { "cell_type": "code", "execution_count": null, "id": "a61e3fe8", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "0a43d70d", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "0f954b5c", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "4e55a929", "metadata": { "hidden": true }, "source": [ "Before trying to classify the persistence diagrams, let's do a quick dimension reduction with PCA. Apply `PCA`, `KernelPCA` or `MDS` (available in `Scikit-Learn`) on the explicit maps (landscapes, images, etc), kernel matrices (Fisher, sliced Wasserstein, etc) and distance matrices (bottleneck, Wasserstein, etc) respectively." 
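] }, { "cell_type": "markdown", "id": "aa100017", "metadata": { "hidden": true }, "source": [ "As a starting point, here is one hedged sketch for two of these reductions (persistence images followed by `PCA`, and pairwise bottleneck distances followed by `MDS`); it assumes the `diagrams` and `labels` lists built earlier, and the parameter values are arbitrary." ] }, { "cell_type": "code", "execution_count": null, "id": "aa100018", "metadata": { "hidden": true }, "outputs": [], "source": [ "# One possible sketch, assuming the `diagrams` and `labels` lists built above.\n", "finite_dgms = gdr.DiagramSelector(use=True, point_type='finite').fit_transform(diagrams)\n", "\n", "# PCA on an explicit vectorization (persistence images here)\n", "PI = gdr.PersistenceImage(bandwidth=0.05, resolution=[20, 20])\n", "vectors = PI.fit_transform(finite_dgms)\n", "embedding_pca = skd.PCA(n_components=2).fit_transform(vectors)\n", "\n", "# MDS on a pairwise bottleneck-distance matrix (this can take a while)\n", "distance_matrix = gdr.BottleneckDistance().fit_transform(finite_dgms)\n", "embedding_mds = skf.MDS(n_components=2, dissimilarity='precomputed').fit_transform(distance_matrix)\n", "\n", "label_ids = skp.LabelEncoder().fit_transform(labels)\n", "plt.figure()\n", "plt.scatter(embedding_pca[:, 0], embedding_pca[:, 1], c=label_ids)\n", "plt.show()\n", "plt.figure()\n", "plt.scatter(embedding_mds[:, 0], embedding_mds[:, 1], c=label_ids)\n", "plt.show()"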
] }, { "cell_type": "code", "execution_count": null, "id": "36b5b25f", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "35ff782d", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "48799480", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "953bc171", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "0a3eec7d", "metadata": { "hidden": true, "scrolled": false }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "12210c60", "metadata": { "hidden": true }, "source": [ "Is there any method that looks better at separating the categories, at least by eye?" ] }, { "cell_type": "markdown", "id": "7466c984", "metadata": { "hidden": true }, "source": [ "All right, let's try classification now! Shuffle the data, and create a random 80/20 train/test split." ] }, { "cell_type": "code", "execution_count": null, "id": "4534ceb5", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "d436cd07", "metadata": { "hidden": true }, "source": [ "Here is the best thing about having estimator-like classes: they can be integrated flawlessly into a `Pipeline` of `Scikit-Learn` for model selection and cross-validation! A `Pipeline` is itself an estimator, and is initialized with a list of estimators. It will just sequentially apply the `fit_transform` methods of the estimators in the list." ] }, { "cell_type": "markdown", "id": "47a1ba83", "metadata": { "hidden": true }, "source": [ "Define a `Pipeline` with four estimators: one for selecting the finite persistence diagram points, one for scaling (or not) the persistence diagrams (with `DiagramScaler`), one for vectorizing persistence diagrams, and one for performing the final prediction. See the [documentation](https://scikit-learn.org/stable/modules/compose.html#combining-estimators)." ] }, { "cell_type": "code", "execution_count": null, "id": "83bd80b7", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "580de988", "metadata": { "hidden": true }, "source": [ "Now, define a grid of parameters that will be used in cross-validation." ] }, { "cell_type": "code", "execution_count": null, "id": "3b80a849", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "b5b9c623", "metadata": { "hidden": true }, "source": [ "Define and train the model." ] }, { "cell_type": "code", "execution_count": null, "id": "822ca794", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "7fd04ff3", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "55c223e9", "metadata": { "hidden": true }, "source": [ "Check the parameters that were chosen during model selection, and evaluate your model on the test set." ] }, { "cell_type": "code", "execution_count": null, "id": "acf7eb2b", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "05a9aba2", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "1ee167b5", "metadata": { "hidden": true }, "source": [ "How is your score? 
If it is bad, you can always expand the parameter and/or classifier search, but this can quickly become quite cumbersome. Moreover, a potential source of error comes from the fact that the third coordinate does not necessarily correspond to the height (i.e., the 3D shapes are not embedded in a consistent way). This is where persistence differentiation can come to the rescue! Indeed, instead of picking a specific coordinate, we can try to optimize a linear combination of the coordinates:\n", "\n", "$$f_\\alpha: x\\mapsto \\sum_{i=1}^d \\alpha_i x_i,$$\n", "\n", "such that the persistence diagrams of the same category are close, while persistence diagrams from different categories are far away from each other. This means minimizing a loss of the form:\n", "\n", "$$\\alpha^* = {\\rm argmin}_\\alpha \\sum_l \\frac{\\sum_{y_i=y_j=l}d(D_{f_\\alpha}(x_i),D_{f_\\alpha}(x_j))}{\\sum_{y_i=l,y_j}d(D_{f_\\alpha}(x_i),D_{f_\\alpha}(x_j))},$$\n", "\n", "where $d$ is any (pseudo-)distance between persistence diagrams that can be differentiated through a deep learning library (such as `TensorFlow` or `PyTorch`). For instance, the sliced Wasserstein distance is quite easy to compute with standard deep learning libraries since it only involves projecting points onto lines. See [this article](http://proceedings.mlr.press/v70/carriere17a/carriere17a.pdf)." ] }, { "cell_type": "markdown", "id": "e64d901f", "metadata": { "hidden": true }, "source": [ "Write a `deep_swd` function that computes the sliced Wasserstein distance between persistence diagrams with `TensorFlow` or `PyTorch` operations." ] }, { "cell_type": "code", "execution_count": null, "id": "25e05686", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "0876ad36", "metadata": { "hidden": true }, "source": [ "Now, just as before, split the data into train/test, but this time, collect the vertices, simplex trees and labels (it is useless to compute persistence diagrams now, since they will be recomputed after each gradient descent iteration and update of $\\alpha$)." ] }, { "cell_type": "code", "execution_count": null, "id": "e46489f5", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "fdc87a8f", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "c9f93611", "metadata": { "hidden": true }, "source": [ "Initialize the alpha values, as well as the angles used for computing the sliced Wasserstein distances (and make sure these angles are not optimized during training). Define also the iteration number, batch size, learning rate and optimizer." ] }, { "cell_type": "code", "execution_count": null, "id": "d3a8687f", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "6b2ccbb6", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "42ce6fa5", "metadata": { "hidden": true }, "source": [ "Run gradient descent! For this, you can use the `LowerStarSimplexTreeLayer` class from `Gudhi`, which computes persistence diagrams from simplex trees in a differentiable way with `TensorFlow` operations. Make sure to save the loss value at each iteration." ] }, { "cell_type": "code", "execution_count": null, "id": "c97e97ab", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "9f1aa014", "metadata": { "hidden": true }, "source": [ "Visualize the loss. Is it decreasing? 
What are the final alpha values?" ] }, { "cell_type": "code", "execution_count": null, "id": "cb441db4", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "9527f23f", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "3035f93f", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "0f3d0a2e", "metadata": { "hidden": true }, "source": [ "We can now use these values to train a model again with this new filtration, and check whether the accuracy is better now!" ] }, { "cell_type": "code", "execution_count": null, "id": "5fe943b1", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "738fa47e", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "48b8135c", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "3b52ddff", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "84413ae7", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "cd4d008a", "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "66bae1cd", "metadata": { "hidden": true }, "source": [ "Yay! That's definitely better!" ] }, { "cell_type": "markdown", "id": "22b90bc4", "metadata": { "hidden": true }, "source": [ "If you managed to go that far, congrats, you are basically a TDA expert now ;-) Do not hesitate to reuse these pieces of code for your own research, and let us know if you have any comment/question/suggestion!" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.5" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 5 }