David W. Romero

Student Researcher, Google Brain

PhD Student in Deep Learning, Vrije Universiteit Amsterdam

Biography

I am a third-year PhD student at the Vrije Universiteit Amsterdam, supervised by Erik Bekkers (UvA), Jakub Tomczak, and Mark Hoogendoorn. Currently, I am a Student Researcher at Google Brain in Paris. Previously, I spent time at Qualcomm AI Research and Mitsubishi Electric Research Laboratories (MERL).

My research focuses on the data, computation, and parameter efficiency of deep learning models. I am particularly interested in neural architectures with extensive parameter sharing, such as continuous kernel CNNs, group equivariant networks, and self-attention networks. Continuous kernel CNNs are a new family of neural networks with appealing efficiency properties, for which I received the Qualcomm Innovation Fellowship Europe (2021).

My PhD is part of the Efficient Deep Learning (EDL) project funded by the NWO. I collaborate with Samotics to apply developments from my research to time-series analysis.

In my free time, I like learning about new things (e.g., coffee, wine, carpentry) and doing sports (e.g., fitness, basketball).

Interests

  • Representation Learning
  • Efficiency in Deep Learning

Education

  • PhD in Efficient Deep Learning

    Vrije Universiteit Amsterdam

  • MSc in Computational Engineering, 2018

    Technische Universität Berlin

  • BSc in Mechatronic Engineering, 2016

    Universidad Nacional de Colombia

Publications

Towards a General Purpose CNN for Long Range Dependencies in ND

We demonstrate that continuous convolutional kernels make it possible to build a single neural architecture that achieves state-of-the-art performance on several different tasks in 1D and 2D.

Learning Equivariances and Partial Equivariances From Data

We provide a method with which equivariant networks can adapt, via backpropagation, to the levels of equivariance exhibited in the data.

FlexConv: Continuous Kernel Convolutions With Differentiable Kernel Sizes

We demonstrate that the kernel size of CNNs can be learned via backpropagation by using continuous kernel parameterizations.
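
As a concrete illustration, here is a minimal sketch of the mechanism, assuming PyTorch; the names GaussianMaskedConv1d and log_sigma are made up for this example and are not the paper's actual code. A Gaussian mask with a learnable width is multiplied onto the convolutional kernel, and since the mask is differentiable in its width, backpropagation can grow or shrink the effective kernel size.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianMaskedConv1d(nn.Module):
    """1D convolution whose effective kernel size is learned via a Gaussian mask."""

    def __init__(self, in_ch, out_ch, max_kernel_size=33):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_ch, in_ch, max_kernel_size))
        self.log_sigma = nn.Parameter(torch.tensor(0.0))  # learnable mask width
        # Relative positions of the kernel taps, spread over [-1, 1].
        self.register_buffer("positions", torch.linspace(-1.0, 1.0, max_kernel_size))

    def forward(self, x):  # x: (batch, in_ch, length)
        sigma = self.log_sigma.exp()
        # Gaussian envelope: taps far from the center are smoothly suppressed,
        # so the gradient w.r.t. log_sigma controls the effective kernel size.
        mask = torch.exp(-0.5 * (self.positions / sigma) ** 2)
        return F.conv1d(x, self.weight * mask, padding="same")

conv = GaussianMaskedConv1d(in_ch=3, out_ch=8)
print(conv(torch.randn(2, 3, 64)).shape)  # torch.Size([2, 8, 64])
```

In the paper, the mask is applied to a continuous, MLP-parameterized kernel as in CKConv below; a plain discrete kernel is used here only to keep the sketch short.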

CKConv: Continuous Kernel Convolution For Sequential Data

We provide a way to model long-term interactions without vanishing gradients, in parallel, and within a single layer. The approach naturally handles irregularly sampled data and differences in sampling rate.
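
The core idea lends itself to a short sketch. Below is a minimal continuous kernel convolution, assuming PyTorch; the names KernelNet and CKConv1d are illustrative rather than the paper's actual code. Instead of storing a fixed tensor of weights, a small MLP maps relative positions to kernel values, so the kernel can be sampled at as many positions as the input is long, yielding a global receptive field within a single layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelNet(nn.Module):
    """MLP that maps a relative position in [-1, 1] to conv kernel values."""

    def __init__(self, in_ch, out_ch, hidden=32):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, out_ch * in_ch),
        )

    def forward(self, positions):  # positions: (kernel_size, 1)
        values = self.net(positions)  # (kernel_size, out_ch * in_ch)
        # Rearrange to the (out_ch, in_ch, kernel_size) layout F.conv1d expects.
        return values.t().reshape(self.out_ch, self.in_ch, -1)

class CKConv1d(nn.Module):
    """Causal 1D convolution with a kernel as long as the input itself."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.kernel_net = KernelNet(in_ch, out_ch)

    def forward(self, x):  # x: (batch, in_ch, length)
        length = x.shape[-1]
        # Sample the continuous kernel at one position per input step.
        positions = torch.linspace(-1.0, 1.0, length, device=x.device).unsqueeze(-1)
        weight = self.kernel_net(positions)
        # Left-pad and crop so each output only sees current and past inputs.
        return F.conv1d(x, weight, padding=length - 1)[..., :length]

x = torch.randn(2, 3, 64)  # batch of 3-channel sequences
print(CKConv1d(in_ch=3, out_ch=8)(x).shape)  # torch.Size([2, 8, 64])
```

Because the kernel is generated by a network rather than stored per tap, the same parameters serve sequences of any length, and sampling the MLP at non-uniform positions is one way to handle irregularly sampled data.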

Group Equivariant Stand-Alone Self-Attention For Vision

We provide a general self-attention formulation to impose group equivariance to arbitrary symmetry groups.

Wavelet Networks: Scale Equivariant Learning From Raw Waveforms

We utilize scale-translation equivariance for learning on raw time series, e.g., audio.

Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring in Data

We utilize attention to focus learning on the set of transformations that co-occur in the data.


Languages

Dutch

Full Professional Proficiency (C1)

German

Full Professional Proficiency (C1)

English

Full Professional Proficiency (C1)

Spanish

Native Proficiency (C2)
