Sertac Karaman

Associate Professor of Aeronautics and Astronautics,

Laboratory for Information and Decision Systems,

Institute for Data, Systems, and Society,

Massachusetts Institute of Technology

Featured Publication

Efficient High-Dimensional Stochastic Optimal Motion Control using Tensor-Train Decomposition

A. Gorodetsky, S. Karaman, Y. Marzouk

Abstract: Stochastic optimal control problems frequently arise as motion control problems in the context of robotics. Unfortunately, all existing approaches that guarantee arbitrary precision suffer from the curse of dimensionality: the computational effort invested by the algorithm grows exponentially with the dimensionality of the state space of the underlying dynamical system governing the robot. In this paper, we propose a novel algorithm that utilizes compressed representations to efficiently solve stochastic optimal control problems with arbitrary precision. The running time of the new algorithm scales linearly with the dimensionality of the state space. The running time also depends polynomially on the rank of the value function, a measure that quantifies the intrinsic geometric complexity of the value function due to the geometry and physics embedded in the problem instance at hand. The new algorithm is based on recent analysis and algorithms for tensor decomposition, which generalize matrix decomposition algorithms, e.g., the singular value decomposition, to three or more dimensions. In computational experiments, we show that the computational effort of the new algorithm also scales linearly with the discretization resolution of the state space. We also demonstrate the new algorithm on a problem involving the perching of an aircraft, represented by a nonlinear non-holonomic longitudinal model with a seven-dimensional state space, for which a full numerical solution had not previously been obtained. In this example, we estimate that the proposed algorithm runs more than seven orders of magnitude faster than naive value iteration.
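The tensor-train decomposition the abstract refers to generalizes the SVD to higher-order tensors by sweeping truncated SVDs across successive unfoldings of the tensor. A minimal sketch of that construction in NumPy (a basic TT-SVD with a fixed rank cap; the function names and truncation rule are illustrative, not the paper's implementation):

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Compress a d-way tensor into tensor-train cores via sequential
    truncated SVDs of its unfoldings (a basic TT-SVD sketch)."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank = 1
    unfolding = tensor.reshape(rank * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(unfolding, full_matrices=False)
        r = min(max_rank, len(s))
        # Left factor becomes the k-th TT core, shaped (rank_in, n_k, rank_out).
        cores.append(U[:, :r].reshape(rank, dims[k], r))
        # Fold the remainder for the next unfolding.
        unfolding = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        rank = r
    cores.append(unfolding.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into the full tensor."""
    result = cores[0]
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    return result.squeeze(axis=(0, -1))
```

Storing d cores of shape (r, n, r) costs O(d n r^2) numbers rather than n^d, which is the compression that lets value iteration touch a compressed value function instead of a full grid.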

Featured Preprint

Function-Train: A continuous analogue of the tensor-train decomposition

A. Gorodetsky, S. Karaman, Y. Marzouk

Abstract: We describe a new nonparametric function approximation framework based on a continuous extension of the tensor-train decomposition. The approximation, termed a function-train (FT), results in a tensor-train structure whose cores are univariate functions of each variable. An advantage of the FT over discrete approaches is that it produces an adaptive approximation of tensor fibers that is not tied to any tensorized discretization procedure. Furthermore, the representation of low-rank functions in FT format enables efficient continuous computation: we can add, multiply, integrate, and differentiate functions in polynomial time with respect to dimension. Our approach is in the spirit of other continuous computation packages such as Chebfun, and yields algorithms that require the computation of "continuous" matrix factorizations, such as the LU and QR decompositions of vector-valued functions. Our contributions include a maxvol algorithm for finding the maximum-volume submatrix of a matrix-valued function. We also introduce a maxvol cross-approximation algorithm for obtaining skeleton decompositions of vector-valued functions, a cross-approximation algorithm for converting black-box functions into FT format, and a continuous rounding algorithm that re-approximates an FT by an FT of lower rank. We demonstrate the benefits of our approach by integrating high-dimensional and discontinuous functions, and we show how the algorithms perform on a variety of approximation tasks.
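The polynomial-in-dimension cost of continuous computation comes from separability: once a function is held in a low-rank separated form with univariate cores, a d-dimensional operation reduces to d one-dimensional operations. A toy illustration for a rank-1 trivariate function (Gauss-Legendre quadrature stands in for the paper's adaptive univariate machinery; none of this is the authors' code):

```python
import numpy as np

def integrate_1d(f, a, b, n=50):
    """Gauss-Legendre quadrature of a univariate function on [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    xm = 0.5 * (b - a) * x + 0.5 * (b + a)  # map nodes from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(xm))

# A rank-1 trivariate function f(x, y, z) = sin(x) cos(y) exp(z) in
# separated form: each "core" is a univariate function of one variable.
cores = [np.sin, np.cos, np.exp]

# Separated integration over [0, 1]^3: three 1-D quadratures instead of
# one 3-D grid, so cost grows linearly (not exponentially) with dimension.
separated = np.prod([integrate_1d(g, 0.0, 1.0) for g in cores])

# Analytic value of the triple integral for comparison.
exact = (1.0 - np.cos(1.0)) * np.sin(1.0) * (np.e - 1.0)
```

For functions of rank r > 1, the same idea applies term by term across the r separated components, which is why integration and differentiation in FT format stay polynomial in both rank and dimension.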