Randomized Multi-Pulse Time-of-Flight Mass Spectrometry

This code simulates an idealized time-of-flight mass spectrometry (TOFMS) system with random pulse times. For further details, see the paper "Randomized Multi-Pulse Time-of-Flight Mass Spectrometry," by M. G. Moore, A. K. Massimino, and M. A. Davenport.

The code can be downloaded here; see the included readme file for a detailed description of the contents and for usage instructions.

1-Bit Matrix Completion Toolbox

This software package implements a penalized maximum-likelihood optimization approach to the problem of matrix completion for the extreme case of noisy 1-bit observations, where instead of observing a subset of the real-valued entries of a matrix M, we obtain a small number of binary (1-bit) measurements generated according to a probability distribution determined by the real-valued entries of M. For further details, see the paper "1-Bit Matrix Completion," by M.A. Davenport, Y. Plan, E. van den Berg, and M. Wootters.
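
As a rough illustration of this observation model (a hand-written sketch, not part of the toolbox; the dimensions, sampling rate, and logistic link below are arbitrary choices), the following Matlab snippet generates 1-bit observations of a low-rank matrix:

% Sketch of the 1-bit observation model: a low-rank matrix M is seen only
% through binary measurements on a random subset Omega of its entries,
% with P(Y_ij = +1) = f(M_ij) for a chosen link function f.
d1 = 100; d2 = 100; r = 3;              % illustrative dimensions and rank
M  = randn(d1, r) * randn(r, d2);       % underlying low-rank matrix
M  = M / max(abs(M(:)));                % normalize so entries are O(1)

f = @(x) 1 ./ (1 + exp(-x));            % logistic link (one common choice)

m     = round(0.3 * d1 * d2);           % number of observed entries
Omega = randperm(d1 * d2, m);           % entries sampled uniformly at random

Y        = zeros(d1, d2);
Y(Omega) = sign(f(M(Omega)) - rand(1, m));   % +1 w.p. f(M_ij), -1 otherwise
% The toolbox estimates M from (Y, Omega) via penalized maximum likelihood;
% see demo.m for how to call it.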

The toolbox can be downloaded here; the Matlab file entitled "demo.m" provides an example of how to use the software package. Please e-mail mdav-at-gatech-dot-edu if you find any bugs or have any questions.

Signal Space CoSaMP Toolbox

The bulk of the compressive sensing (CS) literature has focused on the case where the acquired signal has a sparse or compressible representation in an orthonormal basis. In practice, however, there are many signals that cannot be sparsely represented or approximated in an orthonormal basis but that do have sparse representations in a redundant dictionary. Standard results in CS can sometimes be extended to handle this case provided that the dictionary is sufficiently incoherent or well-conditioned, but these approaches fail to address the case of a truly redundant or overcomplete dictionary.

This software package implements a variant of the iterative reconstruction algorithm CoSaMP for this more challenging setting. In contrast to prior approaches, the method is "signal-focused"; that is, it is oriented around recovering the signal rather than its dictionary coefficients.
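
To make the structure of the algorithm concrete, here is a minimal Matlab sketch of one possible signal-space CoSaMP loop. It is not the toolbox implementation: the function name is made up, and the near-optimal signal-space projections analyzed in the paper are replaced by simple thresholding of dictionary coefficients, which is only a crude stand-in.

% Minimal sketch of a Signal Space CoSaMP-style iteration (illustration only).
% y: measurements (m x 1), Phi: measurement matrix (m x n),
% D: redundant dictionary (n x d), k: target sparsity level.
function x_hat = sscosamp_sketch(y, Phi, D, k, n_iters)
  x_hat = zeros(size(D, 1), 1);        % current signal estimate
  T     = [];                          % current dictionary support
  for iter = 1:n_iters
      r     = y - Phi * x_hat;                  % measurement residual
      proxy = Phi' * r;                         % signal proxy
      [~, idx] = sort(abs(D' * proxy), 'descend');
      T_merged = union(T, idx(1:2*k));          % merge new and old supports
      a = pinv(Phi * D(:, T_merged)) * y;       % least squares over merged support
      [~, keep] = sort(abs(a), 'descend');
      T     = T_merged(keep(1:k));              % prune to a k-atom support
      x_hat = D(:, T) * a(keep(1:k));           % estimate lives in the signal space
  end
end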

For further details, see the paper "Signal Space CoSaMP for Sparse Recovery with Redundant Dictionaries," by M.A. Davenport, D. Needell, and M.B. Wakin.

Signal Space CoSaMP can be downloaded here; the Matlab files entitled "demo_*.m" provide several examples of how to invoke Signal Space CoSaMP and compare the results to traditional CS algorithms. The Matlab data (.mat) files necessary to reproduce the figures presented in the paper can be downloaded here. Please e-mail mdav-at-gatech-dot-edu if you find any bugs or have any questions.

DPSS Approximation and Recovery Toolbox (DART)

Compressive sensing (CS) has recently emerged as a framework for efficiently capturing signals that are sparse or compressible in an appropriate basis. While often motivated as an alternative to Nyquist-rate sampling, there remains a gap between the discrete, finite-dimensional CS framework and the problem of acquiring a continuous-time signal.

This software package provides a set of tools for bridging this gap through the use of Discrete Prolate Spheroidal Sequences (DPSS's), a collection of functions that trace back to the seminal work by Slepian, Landau, and Pollak on the effects of time-limiting and bandlimiting operations. DPSS's form a highly efficient basis for sampled bandlimited functions; by modulating and merging DPSS bases, we obtain a dictionary that offers high-quality sparse approximations for most sampled multiband signals. This multiband modulated DPSS dictionary can be readily incorporated into the CS framework.
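
As a concrete example of the dictionary construction (a sketch only; it assumes the dpss function from the Signal Processing Toolbox, and the signal length, band centers, and bandwidths below are arbitrary), one can modulate a baseband DPSS basis to each occupied band and concatenate the results:

% Sketch: build a multiband modulated DPSS dictionary.
N  = 1024;                        % signal length in samples (illustrative)
W  = 1/64;                        % half-bandwidth of each band (normalized)
fc = [-0.3, 0.05, 0.25];          % band center frequencies, cycles/sample
K  = round(2 * N * W) + 1;        % roughly 2NW DPSS vectors per band

[E, ~] = dpss(N, N * W, K);       % baseband DPSS vectors (N x K)
n   = (0:N-1)';
Psi = [];
for f = fc
    Psi = [Psi, bsxfun(@times, exp(1j * 2 * pi * f * n), E)];  % modulate and merge
end
% Psi is an N x (K * numel(fc)) dictionary; sampled signals whose spectra are
% concentrated on these bands admit accurate sparse approximations in Psi.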

For further details, see the paper "Compressive Sensing of Analog Signals Using Discrete Prolate Spheroidal Sequences," by M.A. Davenport and M.B. Wakin.

DART contains all of the software necessary to reproduce the results presented in this paper. It can be downloaded here. Please e-mail mdav-at-gatech-dot-edu if you find any bugs or have any questions.

SVMs for minimax and Neyman-Pearson classification

In binary classification there are two types of errors we can make, one associated with each class. Traditional algorithms focus on minimizing the overall probability of error. In many practical settings, however, the two types of errors have very different costs. In tumor classification, for example, missing a tumor is far more serious than raising a false alarm, so the overall probability of error is not the most appropriate criterion. The Neyman-Pearson (NP) approach instead seeks to minimize the false-negative rate while constraining the false-positive rate to lie below a prescribed significance level. The theory of learning classifiers with respect to the NP criterion has only recently begun to emerge, and there is now a need for practical algorithms. We show that support vector machines (SVMs), one of the most powerful families of classification algorithms, can be adapted to this setting in a natural manner.

Cost-sensitive SVMs for NP Learning

There are two immediate options for adapting SVMs to the NP criterion. Since SVM-based classifiers can be interpreted as hyperplanes in an appropriate feature space, one option is simply to shift the hyperplane to achieve the desired trade-off between false positives and false negatives. The other is to use a "cost-sensitive" SVM, which introduces class-specific weights that penalize training errors from one class more heavily than the other. The key challenge in the cost-sensitive approach lies in appropriately setting the free parameters of the SVM, in particular those that determine the relative costs of the two error types. With this in mind, we have developed the theory of the 2nu-SVM, providing a characterization of its feasible parameter set. Further contributions include a novel heuristic for improved error estimation and a strategy for efficiently searching the parameter space. In the work below we show that the 2nu-SVM is much more effective for NP classification than simply shifting the decision boundary.
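
For illustration, the cost-sensitive idea can be tried with the standard (unmodified) LIBSVM Matlab interface through its per-class weight options. The sketch below is not the 2nu-SVM; it assumes svmtrain/svmpredict from LIBSVM are on the path, and the data and weight values are synthetic and arbitrary.

% Sketch: weighted C-SVM via LIBSVM's -wi options (not the 2nu-SVM itself).
rng(1);
n = 200;
X = [randn(n, 2) + 1.5; randn(n, 2) - 1.5];   % two synthetic Gaussian classes
y = [ones(n, 1); -ones(n, 1)];                % +1 = positive, -1 = negative class

w_pos = 1;                        % penalty on errors from the +1 class
w_neg = 10;                       % heavier penalty on -1 errors (false positives)
opts  = sprintf('-s 0 -t 2 -c 1 -w1 %g -w-1 %g', w_pos, w_neg);
model = svmtrain(y, X, opts);     % train a cost-sensitive (weighted) C-SVM

Xt   = [randn(n, 2) + 1.5; randn(n, 2) - 1.5];
yt   = [ones(n, 1); -ones(n, 1)];
pred = svmpredict(yt, Xt, model);
fp_rate = mean(pred(yt == -1) == 1);          % empirical false-positive rate
% Sweeping (w_pos, w_neg) traces out different error trade-offs; the 2nu-SVM
% parameterizes this trade-off directly and is what the modified LIBSVM package
% below implements.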

M.A. Davenport, R.G. Baraniuk, and C.D. Scott, "Tuning support vector machines for minimax and Neyman-Pearson classification," IEEE Trans. on Pattern Analysis and Machine Intelligence, 32(10) pp. 1888-1898, October 2010.

We have modified the LIBSVM package to implement the 2nu-SVM described in the work above. Our modifications can be downloaded here. The version posted is a Matlab interface for LIBSVM 2.8 (based on one written by Jun-Cheng Chen, Kuan-Jen Peng, Chih-Yuan Yang, and Chih-Huai Cheng from National Taiwan University). If you would prefer to use your own version, we have instructions for how to modify LIBSVM to implement the 2nu-SVM here. Please e-mail mdav-at-gatech-dot-edu if you find any bugs or have any questions.