David W. Romero

PhD Student in Deep Learning

Vrije Universiteit Amsterdam


I am a 3rd-year PhD student at the Vrije Universiteit Amsterdam, supervised by Erik Bekkers (UvA), Jakub Tomczak, and Mark Hoogendoorn. I am currently a Research Intern at Qualcomm AI Research, and earlier this year I was a Research Consultant at Mitsubishi Electric Research Laboratories (MERL).

My research focuses on the data, computation, and parameter efficiency of Deep Learning models. I am particularly interested in neural architectures with extensive parameter sharing, such as continuous kernel CNNs, group equivariant networks, and self-attention networks. Continuous kernel CNNs are a new family of neural networks with appealing efficiency properties, for which I recently received the Qualcomm Innovation Fellowship Europe (2021).

My PhD is part of the Efficient Deep Learning (EDL) project funded by the NWO. I collaborate with Samotics to apply results of my research to time-series analysis.

In my free time, I like learning about new things (e.g., coffee, wine, carpentry) and doing sports (e.g., fitness, basketball).


  • Representation Learning
  • Efficiency in Deep Learning


  • PhD in Efficient Deep Learning

    Vrije Universiteit Amsterdam

  • MSc in Computational Engineering, 2018

    Technische Universität Berlin

  • BSc in Mechatronic Engineering, 2016

    Universidad Nacional de Colombia


CKConv: Continuous Kernel Convolution For Sequential Data

We provide a way to model long-range interactions within a single layer, in parallel, and without vanishing gradients. CKConvs naturally handle irregularly sampled data and discrepancies in sampling rate.
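The core idea — parameterizing the convolutional kernel as a continuous function of position rather than as a discrete weight vector, so it can be evaluated at arbitrary real-valued offsets — can be sketched in a few lines. The tiny sine-activated network and random weights below are illustrative stand-ins, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-layer network mapping a continuous position t -> kernel value.
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def kernel(t):
    """Evaluate the continuous kernel psi(t) at a real-valued offset t."""
    h = np.sin(W1 @ np.atleast_1d(t) + b1)  # sine nonlinearity (illustrative choice)
    return float(W2 @ h + b2)

def ck_conv(x, positions):
    """Causal convolution of sequence x, sampled at (possibly irregular)
    timestamps `positions`, with the continuous kernel above."""
    out = np.empty(len(x))
    for i, ti in enumerate(positions):
        # Sum over past samples j <= i; the kernel is evaluated at the
        # real-valued offset ti - tj, so irregular spacing needs no special case.
        out[i] = sum(x[j] * kernel(ti - positions[j]) for j in range(i + 1))
    return out

# Irregularly spaced timestamps pose no problem:
pos = np.array([0.0, 0.1, 0.35, 0.5, 0.9])
x = np.array([1.0, 0.5, -0.2, 0.3, 0.8])
y = ck_conv(x, pos)
```

Because the kernel is a function rather than a weight array, the same layer can be applied to sequences of any length or resolution.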

Group Equivariant Stand-Alone Self-Attention For Vision

We provide a general self-attention formulation that imposes equivariance to arbitrary symmetry groups.

Wavelet Networks: Scale Equivariant Learning From Raw Waveforms

We utilize scale-translation equivariance for learning on raw time-series, e.g., audio.

Attentive Group Equivariant Convolutional Networks

A generalization of equivariant visual attention to arbitrary groups.

Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring in Data

We utilize attention to focus learning on the set of co-occurring transformations in data.



  Honors and Awards

  • Qualcomm Innovation Fellowship Europe (2021)


