David W. Romero

PhD Student in Efficient Deep Learning

Vrije Universiteit Amsterdam


I am a final-year PhD student at the Vrije Universiteit Amsterdam, supervised by Erik Bekkers (UvA), Jakub Tomczak, and Mark Hoogendoorn. I have spent time at Mitsubishi Electric Research Laboratories, Qualcomm AI Research, and Google Research.

My research interests span all aspects of efficiency in Deep Learning: data efficiency, computational efficiency, and parameter efficiency. My specific focus is on continuous relaxations and parameterizations of Deep Learning methods, in particular Continuous Kernel Convolutions: a new family of neural parameterizations with appealing efficiency properties, for which I received the Qualcomm Innovation Fellowship.
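As a rough illustration (my own sketch, not code from any of the papers below; the MLP sizes and the sine nonlinearity are assumptions made for brevity), the core idea of a continuous kernel convolution is to replace a fixed discrete weight tensor with a small network that maps kernel positions to weights, so the same set of parameters can be sampled into a kernel of any size:

```python
import numpy as np

# Hypothetical weights of a tiny kernel-generating MLP psi: position -> weight.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def kernel(size):
    # Sample the continuous kernel psi at `size` positions in [-1, 1].
    t = np.linspace(-1.0, 1.0, size)[:, None]  # (size, 1) relative positions
    h = np.sin(t @ W1 + b1)                    # sine nonlinearity, as in implicit neural representations
    return (h @ W2 + b2).ravel()               # (size,) sampled kernel weights

signal = rng.normal(size=100)
out_small = np.convolve(signal, kernel(7), mode="same")
out_large = np.convolve(signal, kernel(31), mode="same")  # same parameters, larger kernel
```

Because the kernel is a function of continuous positions rather than a fixed array, its size (and even its resolution) can be decoupled from the parameter count, which is what enables the efficiency properties mentioned above.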

In my free time, I enjoy learning new things, such as coffee making and carpentry, as well as working out and playing basketball.


  • Efficient Deep Learning
  • Continuous parameterizations
  • Symmetry equivariance


  • PhD in Efficient Deep Learning

    Vrije Universiteit Amsterdam

  • MSc in Computational Engineering, 2018

    Technische Universität Berlin

  • BSc in Mechatronic Engineering, 2016

    Universidad Nacional de Colombia


Towards a General Purpose CNN for Long Range Dependencies in ND

We demonstrate that continuous convolutional kernels make it possible to build a single neural architecture that achieves state-of-the-art performance on several different tasks in 1D and 2D.

Learning Partial Equivariances From Data

We provide a method with which equivariant networks can adapt to the equivariance levels exhibited in data via backpropagation.

FlexConv: Continuous Kernel Convolutions With Differentiable Kernel Sizes

We demonstrate that the kernel size of CNNs can be learned via backpropagation by using continuous kernel parameterizations.

CKConv: Continuous Kernel Convolution For Sequential Data

We provide a way to model long-term interactions without vanishing gradients, in parallel, and within a single layer. CKConvs naturally handle irregularly-sampled data and sampling-rate discrepancies.

Group Equivariant Stand-Alone Self-Attention For Vision

We provide a general self-attention formulation to impose group equivariance to arbitrary symmetry groups.

Wavelet Networks: Scale Equivariant Learning From Raw Waveforms

We utilize scale-translation equivariance for learning on raw time-series, e.g., audio.

Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring in Data

We utilize attention to focus learning on the set of co-occurring transformations in data.



  Honors and Awards



Fully Professional Proficiency (C1)


Fully Professional Proficiency (C1)


Fully Professional Proficiency (C1)


Native Proficiency (C2)